Walmart cracks enterprise AI at scale: Thousands of use cases, one framework

Walmart continues to make strides in deploying agentic AI at enterprise scale. Its secret? Treating trust as an engineering requirement, not a compliance checkbox ticked at the end.

During the “Trust in the Algorithm: How Walmart’s Agentic AI Is Redefining Consumer Confidence and Retail Leadership” session at VB Transform 2025, Walmart VP of Emerging Technology Desirée Gosby explained how the retail giant operationalizes thousands of AI use cases. A primary objective for the retailer is maintaining and strengthening the confidence of its 255 million weekly shoppers.

“We see this as a pretty big inflection point, very similar to the internet,” Gosby told industry analyst Susan Etlinger during Tuesday’s morning session. “It’s as profound in terms of how we’re actually going to operate, how we actually do work.”

The session delivered valuable lessons from Walmart’s AI deployment experience. Implicit throughout the discussion was the retail giant’s continual search for new ways to apply distributed-systems architecture principles and avoid creating technical debt.

>>See all our Transform 2025 coverage here<<

Four-stakeholder framework structures AI deployment

Walmart’s AI architecture rejects horizontal platforms for targeted stakeholder solutions. Each group receives purpose-built tools that address specific operational frictions.

Customers engage Sparky for natural language shopping. Field associates get inventory and workflow optimization tools. Merchants access decision-support systems for category management. Sellers receive business integration capabilities. “And then, of course, we’ve got developers, and really, you know, giving them the superpowers and charging them up with, you know, the new agentic tools,” Gosby explained.

“We have hundreds, if not thousands, of different use cases across the company that we’re bringing to life,” Gosby revealed. The scale demands architectural discipline that most enterprises lack.

The segmentation acknowledges a fundamental reality: each team at Walmart needs purpose-built tools for its specific job. Store associates managing inventory need different tools from merchants analyzing regional trends. Generic platforms fail because they ignore operational reality. Walmart’s specificity drives adoption through relevance, not mandate.
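
That segmentation can be pictured as a thin routing layer in front of purpose-built agents. The sketch below is illustrative only; the stakeholder groups come from the session, while the handlers are stubs standing in for Walmart's actual tools:

```python
# Illustrative sketch: route each request to a purpose-built agent for its
# stakeholder group instead of one generic assistant. Handlers are stubs.
from typing import Callable

def customer_agent(query: str) -> str:
    return f"[Sparky] natural-language shopping help for: {query}"

def associate_agent(query: str) -> str:
    return f"[Associate tools] inventory and workflow support for: {query}"

def merchant_agent(query: str) -> str:
    return f"[Merchant tools] category decision support for: {query}"

AGENTS: dict[str, Callable[[str], str]] = {
    "customer": customer_agent,
    "associate": associate_agent,
    "merchant": merchant_agent,
}

def route(stakeholder: str, query: str) -> str:
    # Adoption follows relevance: each group gets a tool built for its job.
    agent = AGENTS.get(stakeholder)
    if agent is None:
        raise ValueError(f"no purpose-built agent for stakeholder '{stakeholder}'")
    return agent(query)

print(route("associate", "which aisles need restocking before 5 p.m.?"))
```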

Trust economics are driving AI adoption at Walmart

Walmart discovered that trust is built through value delivery, not through mandatory training programs whose value associates sometimes question.

Gosby’s example resonated: her mother’s shopping evolution from weekly store visits to COVID-era deliveries illustrates how natural adoption works. Each step provided an immediate, tangible benefit. There was no friction and no forced change management, yet the progression happened faster than anyone could have predicted.

“She’s been interacting with AI through that whole time,” Gosby explained. “The fact that she was able to go to the store and get what she wanted, it was on the shelf. AI was used to do that.”

Gosby’s mother’s experience also points to Walmart’s predictive commerce vision. “Instead of having to go weekly, figure out what groceries you need to have delivered, what if it just showed up for you automatically?” That is the essence of predictive commerce: delivering value at scale to every Walmart customer.

“If you’re adding value to their lives, helping them remove friction, helping them save money and live better, which is part of our mission, then the trust comes,” Gosby stated. Associates follow the same pattern. When AI actually improves their work, saves them time and helps them excel, adoption happens naturally and trust is earned.

Fashion cycles compress from months to weeks

Walmart’s Trend to Product system quantifies the operational value of AI. The platform synthesizes social media signals, customer behavior and regional patterns to slash product development from months to weeks.

“Trend to Product has gotten us down from months to weeks to getting the right products to our customers,” Gosby revealed. The system creates products in response to real-time demand rather than historical data.

The months-to-weeks compression transforms Walmart’s retail economics. Inventory turns accelerate. Markdown exposure shrinks. Capital efficiency multiplies. The company maintains price leadership while matching any competitor’s speed-to-market capabilities. Every high-velocity category can benefit from using AI to shrink time-to-market and deliver quantifiable gains.
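
The article does not detail Trend to Product's internals, but the general pattern it describes, blending several demand signals into a ranked list of product candidates, can be sketched in a few lines. The signal names, weights, and trends below are hypothetical, not Walmart's model:

```python
# Hedged sketch of a trend-to-product scoring step: combine demand signals
# into one score per candidate trend, then rank what to develop next.
TREND_SIGNALS = {
    "coastal stripe maxi": {"social_velocity": 0.82, "search_growth": 0.74, "regional_lift": 0.40},
    "utility cargo skirt": {"social_velocity": 0.55, "search_growth": 0.61, "regional_lift": 0.70},
    "quiet-luxury knit": {"social_velocity": 0.33, "search_growth": 0.48, "regional_lift": 0.25},
}
WEIGHTS = {"social_velocity": 0.5, "search_growth": 0.3, "regional_lift": 0.2}

def trend_score(signals: dict[str, float]) -> float:
    # Weighted blend of normalized signals; a real system would learn these weights.
    return sum(WEIGHTS[name] * value for name, value in signals.items())

for trend, signals in sorted(TREND_SIGNALS.items(), key=lambda kv: trend_score(kv[1]), reverse=True):
    print(f"{trend}: {trend_score(signals):.2f}")
```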

How Walmart uses the Model Context Protocol to create a scalable agent architecture

Walmart’s approach to agent orchestration draws directly from its hard-won experience with distributed systems. The company uses the Model Context Protocol (MCP) to standardize how agents interact with existing services.

“We break down our domains, really looking at how we wrap those things in the MCP protocol, and then exposing them so that we can start to orchestrate different agents,” Gosby explained. The strategy transforms existing infrastructure rather than replacing it.

The architectural philosophy runs deeper than protocols. “The change that we’re seeing today is very similar to what we’ve seen when we went from monoliths to distributed systems. We don’t want to repeat those mistakes,” Gosby stated.

Gosby outlined the execution requirements: “How do you decompose your domains? What MCP servers should you have? What sort of agent orchestration should you have?” At Walmart, these represent daily operational decisions, not theoretical exercises.

“We’re looking to take our existing infrastructure, break it down, and then recompose it into the agents that we want to be able to build,” Gosby explained. This standardization-first approach enables flexibility. Services built years ago now power agentic experiences through proper abstraction layers.
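
As a concrete illustration of that wrap-and-recompose pattern, the sketch below exposes a hypothetical inventory-lookup capability as a tool on an MCP server, using the open-source MCP Python SDK's FastMCP interface. The domain name, tool signature and stubbed data are assumptions for illustration, not Walmart's services:

```python
# Minimal sketch: wrap an existing domain capability as an MCP server so any
# agent orchestrator that speaks MCP can call it as a tool.
from mcp.server.fastmcp import FastMCP  # pip install mcp

mcp = FastMCP("inventory-domain")  # one server per decomposed domain

@mcp.tool()
def check_stock(store_id: str, sku: str) -> dict:
    """Return on-hand quantity for a SKU at a store (stubbed for illustration)."""
    # In practice this would delegate to the existing inventory service that
    # pre-dates the agent layer; the point is reuse, not rewrite.
    return {"store_id": store_id, "sku": sku, "on_hand": 42}

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio for an orchestrator to consume
```

Services stay where they are; the MCP layer is the abstraction that lets new agents be composed from them.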

Merchant expertise becomes enterprise intelligence

Walmart leverages decades of employee knowledge, making it a core component of its growing AI capabilities. The company systematically captures category expertise from thousands of merchants, creating a competitive advantage no digital-first retailer can match.

“We have thousands of merchants who are excellent at what they do. They are experts in the categories that they support,” Gosby explained. “We have a cheese merchant who knows exactly what wine goes or what cheese pairing, but that data isn’t necessarily captured in a structured way.”

AI operationalizes this knowledge. “With the tools that we have, we can capture that expertise that they have and really bring that to bear for our customers,” Gosby said. The application is specific: “When they’re trying to figure out, hey, I need to throw the party, what kind of appetizers should I have?”

The strategic advantage compounds. Decades of merchant expertise become accessible through natural language queries. Digital-first retailers lack this human knowledge foundation. Walmart’s 2.2 million associates represent proprietary intelligence that algorithms cannot synthesize independently.
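
One minimal way to picture that capture-and-retrieve loop is to store merchant know-how as structured records and let an agent pull the most relevant ones at question time. Everything below, the records, tags and keyword matching, is a hypothetical simplification; a production system would use embeddings and the live catalog:

```python
# Toy sketch: structured capture of merchant expertise plus a naive retriever.
import re
from dataclasses import dataclass

@dataclass
class ExpertiseRecord:
    merchant: str
    category: str
    advice: str
    tags: set[str]

KNOWLEDGE_BASE = [
    ExpertiseRecord("cheese merchant", "deli",
                    "Pair aged gouda with a medium-bodied red; add fig jam for a party board.",
                    {"cheese", "wine", "party", "appetizers"}),
    ExpertiseRecord("bakery merchant", "bakery",
                    "For a dozen guests, plan two baguettes plus one sweet item per four people.",
                    {"party", "bread", "appetizers"}),
]

def retrieve(question: str, top_k: int = 2) -> list[ExpertiseRecord]:
    words = set(re.findall(r"[a-z]+", question.lower()))
    # Rank records by tag overlap with the question; embeddings would replace this.
    return sorted(KNOWLEDGE_BASE, key=lambda r: len(words & r.tags), reverse=True)[:top_k]

for rec in retrieve("I need to throw a party, what kind of appetizers should I have?"):
    print(f"[{rec.merchant}] {rec.advice}")
```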

New metrics measure autonomous success

Walmart pioneers measurement systems designed for autonomous AI rather than human-driven processes. Traditional funnel metrics fail when agents handle end-to-end workflows.

“In an agentic world, we’re starting to work through this, and it’s going to change,” Gosby said. “The metrics around conversion and things like that, those are not going to change, but we’re going to be looking at goal completion.”

The shift reflects operational reality. “Did we actually achieve what is the ultimate goal that our associate, that our customers, are actually solving?” Gosby asked. The question reframes success measurement.

“At the end of the day, it’s a measure of, are we delivering the benefit? Are we delivering the value that we expect, and then working back from there to basically figure out the right metrics?” Gosby explained. Problem resolution matters more than process compliance, and whether AI helps customers achieve their goals takes priority over conversion funnels.
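
A toy example makes the measurement shift concrete: the same set of sessions can look strong on classic conversion while lagging on goal completion. The session schema and numbers here are invented for illustration:

```python
# Illustrative contrast between conversion rate and goal-completion rate.
from dataclasses import dataclass

@dataclass
class AgentSession:
    stated_goal: str       # e.g. "reorder weekly groceries"
    purchased: bool        # did any transaction occur (classic conversion)
    goal_completed: bool   # did the agent fully resolve the stated goal

def conversion_rate(sessions: list[AgentSession]) -> float:
    return sum(s.purchased for s in sessions) / len(sessions)

def goal_completion_rate(sessions: list[AgentSession]) -> float:
    # The question shifts from "did they buy?" to "did we solve their problem?"
    return sum(s.goal_completed for s in sessions) / len(sessions)

sessions = [
    AgentSession("plan a birthday party", purchased=True, goal_completed=True),
    AgentSession("find a cheaper detergent", purchased=True, goal_completed=False),
    AgentSession("fix a late delivery", purchased=True, goal_completed=False),
    AgentSession("reorder weekly groceries", purchased=False, goal_completed=True),
]
print(f"conversion: {conversion_rate(sessions):.0%}  goal completion: {goal_completion_rate(sessions):.0%}")
```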

Enterprise lessons from Walmart’s AI transformation

Walmart’s Transform 2025 session delivers actionable intelligence for enterprise AI deployment. The company’s operational approach provides a framework that has been validated at scale.

  • Apply architectural discipline from day one. The shift from monoliths to distributed systems gave Walmart the lessons it needed to succeed with AI deployments: build proper foundations before scaling, and define a systematic approach that prevents expensive rework.
  • Match solutions to specific user needs. One-size-fits-all AI fails every time. Store associates need different tools than merchants. Suppliers require different capabilities than developers. Walmart’s targeted approach drives adoption.
  • Build trust through proven value. Start with clear wins that deliver measurable results. Walmart moved from basic inventory management to predictive commerce step by step. Each success earns insights and knowledge for the next.
  • Turn employee knowledge into enterprise assets. Decades of specialist expertise exists within your organization. Walmart systematically captures merchant intelligence and operationalizes it across 255 million weekly transactions. This institutional knowledge creates competitive advantage no algorithm can replicate from scratch.
  • Measure what matters in autonomous systems. Conversion rates miss the point when AI handles entire workflows. Focus on problem resolution and value delivery. Walmart’s metrics evolved to match operational reality.
  • Standardize before complexity hits. Integration failures killed more projects than bad code ever did. Walmart’s protocol decisions prevent the chaos that derails most AI initiatives. Structure enables speed.

“It always comes back to basics,” Gosby advised. “Take a step back and first understand what problems do you really need to solve for your customers, for our associates. Where is there friction? Where is there manual work that you can now start to think differently about?”

Walmart’s blueprint scales beyond retail

Walmart demonstrates how enterprise AI succeeds through engineering discipline and systematic deployment. The company processes millions of daily transactions across 4,700 stores by treating each stakeholder group as a distinct challenge requiring tailored, real-time solutions.

“It’s permeating everything it is that we do,” Gosby explained. “But at the end of the day, the way that we look at it is we always start with our customers and our members and really understanding how it’s going to impact them.”

The framework applies across industries. Financial services organizations balancing customer needs with regulatory requirements, healthcare systems coordinating patient care across providers, and manufacturers managing complex supply chains all face similar multi-stakeholder challenges. Walmart’s approach provides a tested methodology for addressing this complexity.

“Our customers are trying to solve a problem for themselves. Same thing for our associates,” Gosby stated. “Did we actually solve that problem with these new tools?” This focus on problem resolution rather than technology deployment drives measurable outcomes. Walmart’s scale validates the approach for any enterprise ready to move beyond pilot programs.
