Walmart cracks enterprise AI at scale: Thousands of use cases, one framework



Walmart continues to make strides in cracking the code on deploying agentic AI at enterprise scale. Its secret? Treating trust as an engineering requirement, not a compliance checkbox ticked at the end.

During the “Trust in the Algorithm: How Walmart’s Agentic AI Is Redefining Consumer Confidence and Retail Leadership” session at VB Transform 2025, Walmart’s VP of emerging technology, Desirée Gosby, explained how the retail giant operationalizes thousands of AI use cases. A primary objective for the retailer is maintaining and strengthening confidence among its 255 million weekly shoppers.

“We see this as a pretty big inflection point, very similar to the internet,” Gosby told industry analyst Susan Etlinger during Tuesday’s morning session. “It’s as profound in terms of how we’re actually going to operate, how we actually do work.”

The session delivered valuable lessons from Walmart’s AI deployment experience. Implicit throughout the discussion was the retail giant’s continual search for new ways to apply distributed-systems architecture principles and avoid creating technical debt.


Four-stakeholder framework structures AI deployment

Walmart’s AI architecture rejects horizontal platforms for targeted stakeholder solutions. Each group receives purpose-built tools that address specific operational frictions.

Customers engage Sparky for natural language shopping. Field associates get inventory and workflow optimization tools. Merchants access decision-support systems for category management. Sellers receive business integration capabilities. “And then, of course, we’ve got developers, and really, you know, giving them the superpowers and charging them up with, you know, the new agent of tools,” Gosby explained.

“We have hundreds, if not thousands, of different use cases across the company that we’re bringing to life,” Gosby revealed. The scale demands architectural discipline that most enterprises lack.

The segmentation acknowledges that each team at Walmart needs purpose-built tools for its specific job. Store associates managing inventory need different tools than merchants analyzing regional trends. Generic platforms fail because they ignore operational reality. Walmart’s specificity drives adoption through relevance, not mandate.

Trust economics are driving AI adoption at Walmart

Walmart discovered that trust is built through value delivery, not through mandatory training programs whose value associates sometimes question.

Gosby’s example resonated as she explained her mother’s shopping evolution from weekly store visits to COVID-era deliveries, illustrating exactly how natural adoption works. Each step provided an immediate, tangible benefit. No friction, no forced change management, yet the progression happened faster than anyone could have predicted.

“She’s been interacting with AI through that whole time,” Gosby explained. “The fact that she was able to go to the store and get what she wanted, it was on the shelf. AI was used to do that.”

The benefits customers are getting from Walmart’s predictive commerce vision are further reflected in Gosby’s mother’s experiences. “Instead of having to go weekly, figure out what groceries you need to have delivered, what if it just showed up for you automatically?” That’s the essence of predictive commerce and how it delivers value at scale to every Walmart customer.

“If you’re adding value to their lives, helping them remove friction, helping them save money and live better, which is part of our mission, then the trust comes,” Gosby stated. Associates follow the same pattern. When AI actually improves their work, saves them time and helps them excel, adoption happens naturally and trust is earned.

Fashion cycles compress from months to weeks

Walmart’s Trend to Product system quantifies the operational value of AI. The platform synthesizes social media signals, customer behavior and regional patterns to slash product development from months to weeks.

“Trend to Product has gotten us down from months to weeks to getting the right products to our customers,” Gosby revealed. The system creates products in response to real-time demand rather than historical data.

The months-to-weeks compression transforms Walmart’s retail economics. Inventory turns accelerate. Markdown exposure shrinks. Capital efficiency multiplies. The company maintains price leadership while matching any competitor’s speed-to-market capabilities. Every high-velocity category can benefit from using AI to shrink time-to-market and deliver quantifiable gains.
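The article describes Trend to Product only at a high level, but the core idea of synthesizing social signals, customer behavior and regional patterns can be sketched as a ranking problem. The following is a minimal illustration, not Walmart's actual system; the signal names, weights and candidate products are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class TrendSignals:
    """Normalized inputs (0.0-1.0) for one candidate product. Fields are illustrative."""
    social_momentum: float   # growth in social-media mentions
    search_velocity: float   # growth in on-site search volume
    regional_lift: float     # demand concentration in target regions

def trend_score(s: TrendSignals, weights=(0.5, 0.3, 0.2)) -> float:
    """Weighted blend of demand signals; higher-scoring candidates get fast-tracked."""
    w_social, w_search, w_region = weights
    return (w_social * s.social_momentum
            + w_search * s.search_velocity
            + w_region * s.regional_lift)

# Hypothetical candidates: real-time demand signals, not historical sales, drive the ranking.
candidates = {
    "cropped denim jacket": TrendSignals(0.9, 0.7, 0.4),
    "classic khaki chino": TrendSignals(0.2, 0.3, 0.5),
}
ranked = sorted(candidates, key=lambda k: trend_score(candidates[k]), reverse=True)
```

Ranking candidates on live signals rather than last season's sell-through is what lets a pipeline like this compress the decision stage of product development.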

How Walmart uses the Model Context Protocol to create a scalable agent architecture

Walmart’s approach to agent orchestration draws directly from its hard-won experience with distributed systems. The company uses Model Context Protocol (MCP) to standardize how agents interact with existing services.

“We break down our domains and really looking at how do we wrap those things as MCP protocol, and then exposing those things that we can then start to orchestrate different agents,” Gosby explained. The strategy transforms existing infrastructure rather than replacing it.

The architectural philosophy runs deeper than protocols. “The change that we’re seeing today is very similar to what we’ve seen when we went from monoliths to distributed systems. We don’t want to repeat those mistakes,” Gosby stated.

Gosby outlined the execution requirements: “How do you decompose your domains? What MCP servers should you have? What sort of agent orchestration should you have?” At Walmart, these represent daily operational decisions, not theoretical exercises.

“We’re looking to take our existing infrastructure, break it down, and then recompose it into the agents that we want to be able to build,” Gosby explained. This standardization-first approach enables flexibility. Services built years ago now power agentic experiences through proper abstraction layers.
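The wrap-and-recompose pattern Gosby describes can be sketched in a few lines: an existing domain service is registered as a named tool behind a uniform interface (the role an MCP server plays), so agents invoke it by name rather than by implementation. This is an illustrative toy, not the real MCP SDK, and every service and tool name below is hypothetical.

```python
from typing import Any, Callable

def legacy_inventory_lookup(sku: str) -> dict:
    """Stand-in for an existing domain service built years ago."""
    stock = {"SKU-123": 42}
    return {"sku": sku, "on_hand": stock.get(sku, 0)}

class ToolServer:
    """Minimal tool registry: one server per decomposed domain."""
    def __init__(self, domain: str):
        self.domain = domain
        self._tools: dict[str, Callable[..., Any]] = {}

    def tool(self, name: str):
        """Register an existing function under a stable tool name."""
        def register(fn):
            self._tools[name] = fn
            return fn
        return register

    def call(self, name: str, **kwargs) -> Any:
        return self._tools[name](**kwargs)

# Decompose the domain, then wrap the legacy service as a tool.
inventory = ToolServer("inventory")
inventory.tool("check_stock")(legacy_inventory_lookup)

# An orchestrating agent calls the wrapped service by tool name,
# never by its internal implementation.
result = inventory.call("check_stock", sku="SKU-123")
```

The design choice mirrors the monolith-to-distributed lesson: because agents depend only on the tool's name and contract, the service behind it can be replaced without touching the orchestration layer.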

Merchant expertise becomes enterprise intelligence

Walmart leverages decades of employee knowledge, making it a core component of its growing AI capabilities. The company systematically captures category expertise from thousands of merchants, creating a competitive advantage no digital-first retailer can match.

“We have thousands of merchants who are excellent at what they do. They are experts in the categories that they support,” Gosby explained. “We have a cheese merchant who knows exactly what wine goes or what cheese pairing, but that data isn’t necessarily captured in a structured way.”

AI operationalizes this knowledge. “With the tools that we have, we can capture that expertise that they have and really bring that to bear for our customers,” Gosby said. The application is specific: “When they’re trying to figure out, hey, I need to throw the party, what kind of appetizers should I have?”

The strategic advantage compounds. Decades of merchant expertise become accessible through natural language queries. Digital-first retailers lack this human knowledge foundation. Walmart’s 2.2 million associates represent proprietary intelligence that algorithms cannot synthesize independently.
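Capturing a merchant's pairing expertise "in a structured way" might look something like the sketch below: expert knowledge recorded as structured records that a conversational agent can retrieve against a shopper's question. The cheese-and-party example comes from Gosby's talk; the records and keyword matching are illustrative assumptions, not Walmart's implementation.

```python
import re

# Structured records distilled from (hypothetical) merchant expertise.
expertise = [
    {"item": "aged gouda", "pairs_with": ["riesling", "fig jam"], "occasion": "party appetizer"},
    {"item": "brie", "pairs_with": ["chardonnay", "baguette"], "occasion": "party appetizer"},
    {"item": "sharp cheddar", "pairs_with": ["cabernet", "apple slices"], "occasion": "snack"},
]

def suggest(query: str) -> list[str]:
    """Return items whose occasion overlaps with words in a shopper's question."""
    words = set(re.findall(r"[a-z]+", query.lower()))
    return [rec["item"] for rec in expertise
            if set(rec["occasion"].split()) & words]

picks = suggest("I need to throw a party, what appetizers should I have?")
```

In practice retrieval like this would run over embeddings rather than keywords, but the point stands: once the expertise is structured, it becomes queryable in natural language.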

New metrics measure autonomous success

Walmart pioneers measurement systems designed for autonomous AI rather than human-driven processes. Traditional funnel metrics fail when agents handle end-to-end workflows.

“In an agentic world, we’re starting to work through this, and it’s going to change,” Gosby said. “The metrics around conversion and things like that, those are not going to change, but we’re going to be looking at goal completion.”

The shift reflects operational reality. “Did we actually achieve what is the ultimate goal that our associate, that our customers, are actually solving?” Gosby asked. The question reframes success measurement.

“At the end of the day, it’s a measure of, are we delivering the benefit? Are we delivering the value that we expect, and then working back from there to basically figure out the right metrics?” Gosby explained. Problem resolution matters more than process compliance. How AI is helping customers achieve their goals is prioritized over conversion funnels.
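The metric shift Gosby describes can be made concrete with a small sketch: score each agent session on whether the user's stated goal was completed, not just whether a purchase occurred. The session records and field names below are hypothetical.

```python
# Hypothetical agent-session logs: one row per end-to-end interaction.
sessions = [
    {"goal": "reorder weekly groceries", "goal_completed": True,  "purchased": True},
    {"goal": "find a gift under $20",    "goal_completed": True,  "purchased": True},
    {"goal": "compare TV warranties",    "goal_completed": True,  "purchased": False},
    {"goal": "track a late delivery",    "goal_completed": False, "purchased": False},
]

def conversion_rate(rows) -> float:
    """Traditional funnel metric: fraction of sessions ending in a purchase."""
    return sum(r["purchased"] for r in rows) / len(rows)

def goal_completion_rate(rows) -> float:
    """Agentic metric: fraction of sessions where the stated goal was achieved."""
    return sum(r["goal_completed"] for r in rows) / len(rows)

conv = conversion_rate(sessions)
goals = goal_completion_rate(sessions)
```

The warranty-comparison session converts nothing yet still succeeded, so conversion undercounts delivered value while goal completion captures it, which is exactly why working back from the benefit produces different metrics.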

Enterprise lessons from Walmart’s AI transformation

Walmart’s Transform 2025 session delivers actionable intelligence for enterprise AI deployment. The company’s operational approach provides a framework that has been validated at scale.

  • Apply architectural discipline from day one. The shift from monolithic to distributed systems taught Walmart the lessons it now applies to AI deployments: build proper foundations before scaling, and define a systematic approach that prevents expensive rework.
  • Match solutions to specific user needs. One-size-fits-all AI fails every time. Store associates need different tools than merchants. Suppliers require different capabilities than developers. Walmart’s targeted approach drives adoption.
  • Build trust through proven value. Start with clear wins that deliver measurable results. Walmart moved from basic inventory management to predictive commerce step by step. Each success earns insights and knowledge for the next.
  • Turn employee knowledge into enterprise assets. Decades of specialist expertise exists within your organization. Walmart systematically captures merchant intelligence and operationalizes it across 255 million weekly transactions. This institutional knowledge creates competitive advantage no algorithm can replicate from scratch.
  • Measure what matters in autonomous systems. Conversion rates miss the point when AI handles entire workflows. Focus on problem resolution and value delivery. Walmart’s metrics evolved to match operational reality.
  • Standardize before complexity hits. Integration failures killed more projects than bad code ever did. Walmart’s protocol decisions prevent the chaos that derails most AI initiatives. Structure enables speed.

“It always comes back to basics,” Gosby advised. “Take a step back and first understand what problems do you really need to solve for your customers, for our associates. Where is there friction? Where is there manual work that you can now start to think differently about?”

Walmart’s blueprint scales beyond retail

Walmart demonstrates how enterprise AI succeeds through engineering discipline and systematic deployment. The company processes millions of daily transactions across 4,700 stores by treating each stakeholder group as a distinct challenge requiring tailored, real-time solutions.

“It’s permeating everything it is that we do,” Gosby explained. “But at the end of the day, the way that we look at it is we always start with our customers and our members and really understanding how it’s going to impact them.”

Their framework applies across industries. Financial services organizations balancing customer needs with regulatory requirements, healthcare systems coordinating patient care across providers, manufacturers managing complex supply chains are all facing similar multi-stakeholder challenges. Walmart’s approach provides a tested methodology for addressing this complexity.

“Our customers are trying to solve a problem for themselves. Same thing for our associates,” Gosby stated. “Did we actually solve that problem with these new tools?” This focus on problem resolution rather than technology deployment drives measurable outcomes. Walmart’s scale validates the approach for any enterprise ready to move beyond pilot programs.
