Get paid faster: How Intuit’s new AI agents help businesses get funds up to 5 days faster and save 12 hours a month with autonomous workflows

Intuit has spent the last several years incorporating generative AI into its services, including QuickBooks, Credit Karma, TurboTax and Mailchimp.

Today the company is taking the next step with a series of AI agents that go beyond embedded assistance to transform how small and mid-market businesses operate. The new agents work as a virtual team that automates workflows and delivers real-time business insights, with capabilities spanning payments, accounting and finance that directly affect business operations. According to Intuit, customers save up to 12 hours per month and, on average, get paid up to five days faster thanks to the new agents.

“If you look at the trajectory of our AI experiences at Intuit in the early years, AI was built into the background, and with Intuit Assist, you saw a shift to provide information back to the customer,” Ashok Srivastava, chief AI and data officer at Intuit, told VentureBeat. “Now what you’re seeing is a complete redesign. The agents are actually doing work on behalf of the customer, with their permission.”

Technical architecture: From starter kit to production agents

Intuit has been working on the path from assistants to agentic AI for some time.

In September 2024, the company detailed its plans to use AI to automate complex tasks. It’s an approach built firmly on the company’s generative AI operating system (GenOS) platform, the foundation of its AI efforts.

Earlier this month, Intuit announced a series of efforts that further extend those capabilities. The company has developed its own prompt optimization service that tunes queries for any large language model (LLM). It has also built what it calls an intelligent data cognition layer, which can interpret the varied data sources that enterprise workflows require.
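Intuit has not published the service's internals, but the pattern it describes (rewriting a raw query with model-specific guidance before it reaches any LLM) can be illustrated with a minimal, hypothetical sketch in Python. The MODEL_HINTS table, OptimizedPrompt type and optimize function below are invented for illustration and are not Intuit's GenOS APIs.

from dataclasses import dataclass

# Hypothetical sketch only -- not Intuit's prompt optimization service.
# Model-specific guidance a prompt optimizer might maintain.
MODEL_HINTS = {
    "gpt-4o": "Answer concisely. Use bullet points for lists.",
    "claude-3-5-sonnet": "Think step by step before answering.",
    "default": "Answer the user's question directly.",
}

@dataclass
class OptimizedPrompt:
    model: str
    system: str
    user: str

def optimize(query: str, model: str, context: str = "") -> OptimizedPrompt:
    """Rewrite a raw user query into a model-specific prompt.

    A real optimization service would likely learn these rewrites from
    evaluation data; here we simply combine a per-model hint, optional
    business context and the original query.
    """
    hint = MODEL_HINTS.get(model, MODEL_HINTS["default"])
    system = f"You are a small-business finance assistant. {hint}"
    user = f"Context:\n{context}\n\nQuestion: {query}" if context else query
    return OptimizedPrompt(model=model, system=system, user=user)

# Example: the same query tailored for two different models.
for m in ("gpt-4o", "claude-3-5-sonnet"):
    p = optimize("Which invoices are most likely to be paid late?", m)
    print(p.model, "->", p.system)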

Going a step further, Intuit developed an agent starter kit that builds on the company’s technical foundation to enable agentic AI development.

The agent portfolio: From cash flow to customer management

With the technical foundation in place, including agent starter kits, Intuit has built out a series of new agents that help business owners get things done.

Intuit’s agent suite demonstrates the technical sophistication required to move from predictive AI to autonomous workflow execution. Each agent coordinates prediction, natural language processing (NLP) and autonomous decision-making within complete business processes. They include:

Payments agent: Autonomously optimizes cash flow by predicting late payments, generating invoices and executing follow-up sequences (a minimal sketch of this pattern appears after this list).

Accounting agent: Represents Intuit’s evolution from rules-based systems to autonomous bookkeeping. The agent now autonomously handles transaction categorization, reconciliation and workflow completion, delivering cleaner and more accurate books.

Finance agent: Automates strategic analysis traditionally requiring dedicated business intelligence (BI) tools and human analysts. Provides key performance indicator (KPI) analysis, scenario planning and forecasting based on how the company is doing against peer benchmarks while autonomously generating growth recommendations.
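To make the "prediction plus autonomous execution" pattern concrete, here is a minimal, hypothetical sketch of a payments-agent loop: score each open invoice for late-payment risk, then queue follow-ups for the risky ones. The Invoice type, risk formula and threshold are invented for illustration and do not reflect Intuit's implementation.

from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical sketch -- not Intuit's payments agent.
@dataclass
class Invoice:
    customer: str
    amount: float
    due: date
    days_late_history: float  # customer's average days late on past invoices

def late_payment_risk(inv: Invoice) -> float:
    """Toy risk score: past lateness plus proximity to the due date."""
    days_to_due = (inv.due - date.today()).days
    urgency = max(0, 7 - days_to_due) / 7          # rises as the due date nears
    history = min(inv.days_late_history, 30) / 30  # 0..1 from payment history
    return round(0.6 * history + 0.4 * urgency, 2)

def plan_follow_ups(invoices: list[Invoice], threshold: float = 0.5) -> list[str]:
    """Autonomously draft reminders for invoices above the risk threshold."""
    actions = []
    for inv in invoices:
        risk = late_payment_risk(inv)
        if risk >= threshold:
            actions.append(
                f"Send reminder to {inv.customer}: ${inv.amount:,.2f} due {inv.due} "
                f"(risk {risk})"
            )
    return actions

invoices = [
    Invoice("Acme Co", 1200.0, date.today() + timedelta(days=2), days_late_history=12),
    Invoice("Blue LLC", 450.0, date.today() + timedelta(days=20), days_late_history=1),
]
for action in plan_follow_ups(invoices):
    print(action)

In a production system the risk score would come from a trained model and the reminders would actually be sent and tracked, but the shape of the loop is the same: predict, decide, act.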

Intuit is also building customer hub agents to help with customer acquisition tasks, and payroll and project management agents are part of future release plans.

Beyond conversational UI: Task-oriented agent design

The new agents mark an evolution in how AI is presented to users.

Intuit’s interface redesign reveals important user experience principles for enterprise agent deployment. Rather than bolting AI capabilities onto existing software, the company fundamentally restructured the QuickBooks user experience for AI.

“The user interface now is really oriented around the business tasks that need to be done,” Srivastava explained. “It allows for real time insights and recommendations to come to the user directly.”

This task-centric approach contrasts with the chat-based interfaces dominating current enterprise AI tools. Instead of requiring users to learn prompting strategies or navigate conversational flows, the agents operate within existing business workflows. The system includes what Intuit calls a “business feed” that contextually surfaces agent actions and recommendations.
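Intuit has not documented the business feed's internals, but the idea of contextually surfacing agent actions can be sketched as a simple event stream ranked by urgency. The FeedItem structure and ranking rule below are hypothetical, not Intuit's schema.

from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical sketch of a "business feed" item -- not Intuit's schema.
@dataclass
class FeedItem:
    agent: str          # which agent produced the item
    kind: str           # "action_taken" or "recommendation"
    summary: str        # one-line text shown to the business owner
    urgency: float      # 0..1, used for ordering
    created: datetime = field(default_factory=datetime.now)

def render_feed(items: list[FeedItem], limit: int = 3) -> list[str]:
    """Surface the most urgent items first, newest breaking ties."""
    ranked = sorted(items, key=lambda i: (-i.urgency, -i.created.timestamp()))
    return [f"[{i.agent}] {i.summary}" for i in ranked[:limit]]

feed = [
    FeedItem("payments", "action_taken", "Reminder sent to Acme Co for $1,200 invoice", 0.7),
    FeedItem("accounting", "action_taken", "14 transactions categorized and reconciled", 0.3),
    FeedItem("finance", "recommendation", "Gross margin is 5 pts below peer benchmark", 0.8),
]
for line in render_feed(feed):
    print(line)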

Trust and verification: The closed-loop challenge

One of the most technically significant aspects of Intuit's implementation addresses a critical challenge in autonomous agent deployment: verification and trust. Enterprise AI teams often struggle with the black box problem: How do you ensure AI agents are performing correctly when they operate autonomously?

“In order to build trust with artificial intelligence systems, we need to provide proof points back to the customer that what they think is happening is actually happening,” Srivastava emphasized. “That closed loop is very, very important.”

Intuit’s solution involves building verification capabilities directly into GenOS, allowing the system to provide evidence of agent actions and outcomes. For the payments agent, this means showing users that invoices were sent, tracking delivery and demonstrating the improvement in payment cycles that results from the agent’s actions.

This verification approach offers a template for enterprise teams deploying autonomous agents in high-stakes business processes. Rather than asking users to trust AI outputs, the system provides auditable trails and measurable outcomes.
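As a sketch of what such an auditable trail might contain, the record below pairs an agent action with its evidence and a measurable outcome the user can verify. The field names are invented for illustration and are not part of GenOS.

from dataclasses import dataclass
from datetime import date

# Hypothetical closed-loop verification record -- not Intuit's GenOS schema.
@dataclass
class AgentActionRecord:
    agent: str
    action: str                   # what the agent did
    evidence: str                 # proof the action happened (e.g. a delivery receipt id)
    baseline_days_to_paid: float  # typical payment cycle before the agent acted
    observed_days_to_paid: float  # payment cycle after the agent acted
    completed_on: date

    def outcome(self) -> str:
        delta = self.baseline_days_to_paid - self.observed_days_to_paid
        return f"{self.action}: paid {delta:.1f} days faster than baseline ({self.evidence})"

record = AgentActionRecord(
    agent="payments",
    action="Invoice #1042 reminder sent to Acme Co",
    evidence="email delivery receipt msg-8c21",
    baseline_days_to_paid=32.0,
    observed_days_to_paid=27.0,
    completed_on=date.today(),
)
print(record.outcome())  # the auditable trail the user can inspect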

What this means for enterprises looking to get into agentic AI

Intuit’s evolution offers a concrete roadmap for enterprise teams planning autonomous AI implementations:

Focus on workflow completion, not conversation: Target specific business processes for end-to-end automation rather than building general-purpose chat interfaces.

Build agent orchestration infrastructure: Invest in platforms that coordinate prediction, language processing and autonomous execution within unified workflows, not isolated AI tools.

Design verification systems upfront: Include comprehensive audit trails, outcome tracking and user notifications as core capabilities rather than afterthoughts.

Map workflows before building technology: Use customer advisory programs to define agent capabilities based on actual operational challenges.

Plan for interface redesign: Optimize UX for agent-driven workflows rather than traditional software navigation patterns.

“As large language models become commoditized, the experiences that are built upon them become much more important,” Srivastava said.
