Get paid faster: How Intuit’s new AI agents help businesses get funds up to 5 days faster and save 12 hours a month with autonomous workflows

Join the event trusted by enterprise leaders for nearly two decades. VB Transform brings together the people building real enterprise AI strategy. Learn more


Intuit has spent the last several years on a journey with generative AI, incorporating the technology into its services across QuickBooks, Credit Karma, TurboTax and Mailchimp.

Today the company is taking the next step with a series of AI agents that go beyond those features to transform how small and mid-market businesses operate. The new agents work as a virtual team, automating workflows and providing real-time business insights. They include payments, accounting and finance capabilities that directly impact business operations. According to Intuit, customers save up to 12 hours per month and, on average, get paid up to five days faster thanks to the new agents.

“If you look at the trajectory of our AI experiences at Intuit in the early years, AI was built into the background, and with Intuit Assist, you saw a shift to provide information back to the customer,” Ashok Srivastava, chief AI and data officer at Intuit, told VentureBeat. “Now what you’re seeing is a complete redesign. The agents are actually doing work on behalf of the customer, with their permission.”

Technical architecture: From starter kit to production agents

Intuit has been working on the path from assistants to agentic AI for some time.

In September 2024, the company detailed its plans to use AI to automate complex tasks. It’s an approach built firmly on the company’s generative AI operating system (GenOS) platform, the foundation of its AI efforts.

Earlier this month, Intuit announced a series of efforts that further extend its capabilities. The company has developed its own prompt optimization service that will optimize queries for any large language model (LLM). It has also developed what it calls an intelligent data cognition layer for enterprise data that can understand different data sources required for enterprise workflows.

Going a step further, Intuit developed an agent starter kit that builds on the company’s technical foundation to enable agentic AI development.

The agent portfolio: From cash flow to customer management

With the technical foundation in place, including agent starter kits, Intuit has built out a series of new agents that help business owners get things done.

Intuit’s agent suite demonstrates the technical sophistication required to move from predictive AI to autonomous workflow execution. Each agent coordinates prediction, natural language processing (NLP) and autonomous decision-making within complete business processes. They include:

Payments agent: Autonomously optimizes cash flow by predicting late payments, generating invoices and executing follow-up sequences. 

Accounting agent: Represents Intuit’s evolution from rules-based systems to autonomous bookkeeping. The agent now autonomously handles transaction categorization, reconciliation and workflow completion, delivering cleaner and more accurate books.

Finance agent: Automates strategic analysis traditionally requiring dedicated business intelligence (BI) tools and human analysts. Provides key performance indicator (KPI) analysis, scenario planning and forecasting based on how the company is doing against peer benchmarks while autonomously generating growth recommendations.

Intuit is also building customer hub agents that will help with customer acquisition tasks. Payroll processing and project management agents are also part of the future release plans.
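Intuit has not published the payments agent's internals, but the behavior described above (predict which invoices are at risk of paying late, then autonomously execute follow-ups) can be sketched abstractly. Everything below (the `Invoice` shape, the `risk_threshold_days` parameter, and the reminder format) is invented for illustration:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Invoice:
    customer: str
    amount: float
    due: date
    paid: bool = False
    reminders: list = field(default_factory=list)

def follow_up(invoices, today, risk_threshold_days=3):
    """Flag unpaid invoices nearing (or past) their due date and queue reminders.

    Returns the list of reminder messages generated on this run; each
    reminder is also recorded on the invoice itself for auditability.
    """
    actions = []
    for inv in invoices:
        if inv.paid:
            continue
        days_left = (inv.due - today).days
        if days_left <= risk_threshold_days:
            msg = f"Reminder: {inv.customer} owes ${inv.amount:.2f}, due {inv.due}"
            inv.reminders.append(msg)
            actions.append(msg)
    return actions
```

A production agent would replace the threshold rule with a learned late-payment predictor and route the reminders through an actual invoicing system, but the loop structure (score, decide, act, record) is the same.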

Beyond conversational UI: Task-oriented agent design

The new agents mark an evolution in how AI is presented to users.

Intuit’s interface redesign reveals important user experience principles for enterprise agent deployment. Rather than bolting AI capabilities onto existing software, the company fundamentally restructured the QuickBooks user experience for AI.

“The user interface now is really oriented around the business tasks that need to be done,” Srivastava explained. “It allows for real time insights and recommendations to come to the user directly.”

This task-centric approach contrasts with the chat-based interfaces dominating current enterprise AI tools. Instead of requiring users to learn prompting strategies or navigate conversational flows, the agents operate within existing business workflows. The system includes what Intuit calls a “business feed” that contextually surfaces agent actions and recommendations.

Trust and verification: The closed-loop challenge

One of the most technically significant aspects of Intuit’s implementation addresses a critical challenge in autonomous agent deployment: verification and trust. Enterprise AI teams often struggle with the black box problem — how do you ensure AI agents are performing correctly when they operate autonomously?

“In order to build trust with artificial intelligence systems, we need to provide proof points back to the customer that what they think is happening is actually happening,” Srivastava emphasized. “That closed loop is very, very important.”

Intuit’s solution involves building verification capabilities directly into GenOS, allowing the system to provide evidence of agent actions and outcomes. For the payments agent, this means showing users that invoices were sent, tracking delivery and demonstrating the improvement in payment cycles that results from the agent’s actions.

This verification approach offers a template for enterprise teams deploying autonomous agents in high-stakes business processes. Rather than asking users to trust AI outputs, the system provides auditable trails and measurable outcomes.
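Intuit builds this verification into GenOS, whose internals are not public. As a generic sketch of the closed-loop pattern (an append-only audit trail that attaches evidence to every autonomous action; all names here are hypothetical):

```python
import json
from datetime import datetime, timezone

class AuditTrail:
    """Append-only log of agent actions with evidence a user can inspect.

    Each entry records which agent acted, what it did, what the outcome
    was, and supporting evidence (e.g. a delivery receipt for an invoice).
    """
    def __init__(self):
        self._entries = []

    def record(self, agent, action, outcome, evidence):
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent": agent,
            "action": action,
            "outcome": outcome,
            "evidence": evidence,
        }
        self._entries.append(entry)
        return entry

    def report(self, agent=None):
        """Return a JSON report, optionally filtered to a single agent."""
        rows = [e for e in self._entries if agent is None or e["agent"] == agent]
        return json.dumps(rows, indent=2)
```

The key design choice is that the trail is written by the same code path that performs the action, so there is no "trust me" gap between what the agent claims and what the user can verify.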

What this means for enterprises looking to get into agentic AI

Intuit’s evolution offers a concrete roadmap for enterprise teams planning autonomous AI implementations:

Focus on workflow completion, not conversation: Target specific business processes for end-to-end automation rather than building general-purpose chat interfaces.

Build agent orchestration infrastructure: Invest in platforms that coordinate prediction, language processing and autonomous execution within unified workflows, not isolated AI tools.

Design verification systems upfront: Include comprehensive audit trails, outcome tracking and user notifications as core capabilities rather than afterthoughts.

Map workflows before building technology: Use customer advisory programs to define agent capabilities based on actual operational challenges.

Plan for interface redesign: Optimize UX for agent-driven workflows rather than traditional software navigation patterns.

“As large language models become commoditized, the experiences that are built upon them become much more important,” Srivastava said.

Similar Posts

  • Obvio’s stop sign cameras use AI to root out unsafe drivers

    American streets are incredibly dangerous for pedestrians. A San Carlos, California-based startup called Obvio thinks it can change that by installing cameras at stop signs — a solution the founders also say won’t create a panopticon. 

    That’s a bold claim at a time when other companies, like Flock, have been criticized for how their license plate-reading cameras have become a crucial tool in an overreaching surveillance state. 

    Obvio founders Ali Rehan and Dhruv Maheshwari believe they can build a big enough business without indulging those worst impulses. They’ve designed the product with surveillance and data-sharing limitations to ensure they can follow through with that claim.

    They’ve found deep pockets willing to believe them, too. The company has just completed a $22 million Series A funding round led by Bain Capital Ventures. Obvio plans to use those funds to expand beyond the first five cities where it’s currently operating in Maryland. 

    Rehan and Maheshwari met while working at Motive, a company that makes dashboard cameras for the trucking industry. While there, Maheshwari told TechCrunch, the pair realized “a lot of other normal passenger vehicles are awful drivers.” 

    The founders said they were stunned the more they looked into road safety. Not only were streets and crosswalks getting more dangerous for pedestrians, but in their eyes, the U.S. was also falling behind on enforcement. 

    “Most other countries are actually pretty good at this,” Maheshwari said. “They have speed camera technology. They have a good culture of driving safety. The U.S. is actually one of the worst across all the modern nations.”

    Maheshwari and Rehan began studying up on road safety by reading books and attending conferences. They found that people in the industry gravitated toward three general solutions: education, engineering, and enforcement. 

    In their eyes, those approaches were often too separated from each other. It’s hard to quantify the impact of educational efforts. Local officials may try to fix a problematic intersection by, say, installing a roundabout, but that can take years of work and millions of dollars. And law enforcement can’t camp out at every stop sign.

    Rehan and Maheshwari saw promise in combining them. 

    The result is a pylon (often brightly-colored) topped with a solar-powered camera that can be installed near almost any intersection. It’s designed not to blend in — part of the education and awareness aspect — and it’s also carefully engineered to be cheap and easy to install.

    The on-device AI is trained to spot the worst stop-sign violations and other infractions. (The company also claims on its website that it can catch speeding, crosswalk violations, illegal turns, unsafe lane changes, and even distracted driving.) When one of these things happens, the system matches the car’s license plate to the state’s DMV database. 

    All of that information — the accuracy of the violation, the license plate — is verified by either Obvio staff or contractors before it’s sent to law enforcement, which then has to review the infractions before issuing a citation.

    Obvio gives the tech to municipalities for free and makes money from the citations. Exactly how that citation revenue will get split between Obvio and the governments will vary from place to place, as Maheshwari said regulations about such agreements differ by state.

    That clearly creates an incentive for increasing the number of citations. But Rehan and Maheshwari said they can build a business around stopping the worst offenses across a wide swath of American cities. They also said they want Obvio to remain present in — and responsive to — the communities that use their tech.

    “Automated enforcement should be used in conjunction with community advocacy and community support, it shouldn’t be this camera that you put up that does revenue grab[s] and gotchas,” Maheshwari said. The goal is to “start using these cameras in a way to warn and deter the most egregious drivers [so] you can actually create communitywide support and behavior change.”

    Cities and their citizens “need to trust us,” Maheshwari said. 

    There’s also a technological explanation for why Obvio’s cameras may not become an overpowered surveillance tool for law enforcement beyond their intended use.

    Obvio’s camera pylon records and processes its footage locally. It’s only when a violation is spotted that the footage leaves the device. Otherwise, all other footage of vehicles and pedestrians passing through a given intersection stays on the device for about 12 hours before it gets deleted. (The footage is also technically owned by the municipalities, which have remote access.)

    This doesn’t eliminate the chance that law enforcement will use the footage to surveil citizens in other ways. But it does reduce that chance.
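The retention behavior described above can be sketched in a few lines. Obvio’s actual storage format is not public; the tuple representation and the `RETENTION` constant below are assumptions for illustration:

```python
from datetime import datetime, timedelta

# Assumed retention window, per the description: non-violation footage
# stays on the device for about 12 hours before being deleted.
RETENTION = timedelta(hours=12)

def prune_footage(clips, now):
    """Keep flagged violation clips; drop unflagged clips older than RETENTION.

    `clips` is a list of (recorded_at, flagged) pairs standing in for
    on-device footage metadata. Only flagged clips ever leave the device;
    everything else ages out locally.
    """
    return [c for c in clips if c[1] or now - c[0] < RETENTION]
```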

    That focus is what drove Bain Capital Ventures partner Ajay Agarwal to invest in Obvio.

    “Yes, in the short term, you can maximize profits, and erode those values, but I think over time, it will limit the ability of this company to be ubiquitous. It’ll create enemies or create people who don’t want this,” he told TechCrunch. “Great founders are willing to sacrifice entire lines of business, frankly, and lots of revenue, in pursuit of the ultimate mission.”

  • Walmart cracks enterprise AI at scale: Thousands of use cases, one framework

    Walmart continues to make strides in cracking the code on deploying agentic AI at enterprise scale. Their secret? Treating trust as an engineering requirement, not some compliance checkbox you tick at the…

  • Gridcare thinks more than 100 GW of data center capacity is hiding in the grid

    Hyperscalers and data center developers are in a pickle: They all want to add computing power tomorrow, but utilities frequently play hard to get, citing years-long waits for grid connections.

    “All the AI data centers are struggling to get connected,” Amit Narayan, founder and CEO of Gridcare, told TechCrunch. “They’re so desperate. They are looking for solutions, which may or may not happen. Certainly not in the five-year timelines they cite.”

    That has led many data centers to pursue what’s called “behind the meter” power sources — basically, they build their own power plants, a costly endeavor that hints at just how desperate they are for electricity.

    But Narayan knew there was plenty of slack in the system, even if utilities themselves haven’t discovered it yet. He has studied the grid for the last 15 years, first as a Stanford researcher then as a founder of another company. “How do we create more capacity when everyone thinks that there is no capacity on the grid?” he said.

    Narayan said that Gridcare, which has been operating in stealth, has already discovered several places where extra capacity exists, and it’s ready to play matchmaker between data centers and utilities.

    Gridcare recently closed an oversubscribed $13.5 million seed round, the company told TechCrunch. The round was led by Xora, Temasek’s deep tech venture firm, with participation from Acclimate Ventures, Aina Climate AI Ventures, Breakthrough Energy Discovery, Clearvision, Clocktower Ventures, Overture Ventures, Sherpalo Ventures, and WovenEarth.

    For Narayan and his colleagues at Gridcare, the first step to finding untapped capacity was to map the existing grid. Then the company used generative AI to help forecast what changes might be implemented in the coming years. It also layers on other details, including the availability of fiber optic connections, natural gas, water, extreme weather, permitting, and community sentiment around data center construction and expansion. 

    “There are 200,000-plus scenarios that you have to consider every time you’re running this study,” Narayan said.

    To make sure it’s not running afoul of regulations, Gridcare then takes that data and weighs it against federal guidelines that dictate grid usage. Once it finds a spot, it starts talking with the relevant utility to verify the data.

    “We’ll find out where the maximum bang for the buck is,” Narayan said.

    At the same time, Gridcare works with hyperscalers and data center developers to identify where they are looking to expand operations or build new ones. “They have already told us what they’re willing to do. We know the parameters under which they can operate,” he said.

    That’s when the matchmaking begins.

    Gridcare sells its services to data center developers, charging them a fee based on how many megawatts of capacity the startup can unlock for them. “That fee is significant for us, but it’s negligible for data centers,” Narayan said.

    For some data centers, the price of admission might be forgoing grid power for a few hours here and there, relying on on-site backup power instead. For others, the path might be clearer if their demand helps green-light a new grid-scale battery installation nearby. In the future, the winner might be the developer that is willing to pay more. Utilities have already approached Gridcare inquiring about auctioning access to newfound capacity.

    Regardless of how it happens, Narayan thinks that Gridcare can unlock more than 100 gigawatts of capacity using its approach. “We don’t have to solve nuclear fusion to do this,” he said.

    Update: Corrected spare capacity on the grid to gigawatts from megawatts.

  • Google DeepMind’s new AI can help historians understand ancient Latin inscriptions

    Google DeepMind has unveiled new artificial-intelligence software that could help historians recover the meaning and context behind ancient Latin engravings. Aeneas can analyze words written in long-weathered stone to say when and where they were originally inscribed. It follows Google’s previous archaeological tool Ithaca, which also used deep learning to reconstruct and contextualize ancient text, in its case Greek. But while Ithaca and Aeneas use some similar systems, Aeneas also promises to give researchers jumping-off points for further analysis.

    To do this, Aeneas takes in partial transcriptions of an inscription alongside a scanned image of it. Using these, it gives possible dates and places of origin for the engraving, along with potential fill-ins for any missing text. For example, a slab damaged at the start and continuing with …us populusque Romanus would likely prompt Aeneas to guess that Senat comes before us to create the phrase Senatus populusque Romanus, “The Senate and the people of Rome.”

    This is similar to how Ithaca works. But Aeneas also cross-references the text with a stored database of almost 150,000 inscriptions, which originated everywhere from modern-day Britain to modern-day Iraq, to give possible parallels—other catalogued Latin engravings that feature similar words, phrases, and analogies.

    This database, alongside a few thousand images of inscriptions, makes up the training set for Aeneas’s deep neural network. While it may seem like a good number of samples, it pales in comparison to the billions of documents used to train general-purpose large language models like Google’s Gemini. There simply aren’t enough high-quality scans of inscriptions to train a language model to learn this kind of task. That’s why specialized solutions like Aeneas are needed.

    The Aeneas team believes it could help researchers “connect the past,” said Yannis Assael, a researcher at Google DeepMind who worked on the project. Rather than seeking to automate epigraphy—the research field dealing with deciphering and understanding inscriptions—he and his colleagues are interested in “crafting a tool that will integrate with the workflow of a historian,” Assael said in a press briefing.

    Their goal is to give researchers trying to analyze a specific inscription many hypotheses to work from, saving them the effort of sifting through records by hand. To validate the system, the team presented 23 historians with inscriptions that had been previously dated and tested their workflows both with and without Aeneas. The findings, which were published today in Nature, showed that Aeneas helped spur research ideas among the historians for 90% of inscriptions and that it led to more accurate determinations of where and when the inscriptions originated.

    In addition to this study, the researchers tested Aeneas on the Monumentum Ancyranum, a famous inscription carved into the walls of a temple in Ankara, Turkey. Here, Aeneas managed to give estimates and parallels that reflected existing historical analysis of the work, and in its attention to detail, the paper claims, it closely matched how a trained historian would approach the problem. “That was jaw-dropping,” Thea Sommerschield, an epigrapher at the University of Nottingham who also worked on Aeneas, said in the press briefing.

    However, much remains to be seen about Aeneas’s capabilities in the real world. It doesn’t guess the meaning of texts, so it can’t interpret newly found engravings on its own, and it’s not clear yet how useful it will be to historians’ workflows in the long term, according to Kathleen Coleman, a professor of classics at Harvard. The Monumentum Ancyranum is considered to be one of the best-known and most well-studied inscriptions in epigraphy, raising the question of how Aeneas will fare on more obscure samples.

    Google DeepMind has now made Aeneas open-source, and the interface for the system is freely available for teachers, students, museum workers, and academics. The group is working with schools in Belgium to integrate Aeneas into their secondary history education.

    “To have Aeneas at your side while you’re in the museum or at the archaeological site where a new inscription has just been found—that is our sort of dream scenario,” Sommerschield said.
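    Aeneas itself is a deep neural network, but the parallel-matching idea behind the Senatus populusque Romanus example can be illustrated with a toy corpus lookup. This is not DeepMind’s method; `suggest_fill` and its matching rule are invented for illustration:

```python
def suggest_fill(fragment, corpus):
    """Suggest known inscriptions that could complete a damaged fragment.

    `fragment` is text whose beginning is lost, marked with leading dots
    (e.g. "...us populusque Romanus"). Returns corpus entries whose
    ending matches the surviving text, as candidate restorations.
    """
    tail = fragment.lstrip(".…").strip()
    return [text for text in corpus if text.endswith(tail)]
```

    A real system would score fuzzy matches and rank thousands of candidates rather than requiring an exact tail match, but the core idea of grounding restorations in a catalogue of known inscriptions is the same.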

  • The hidden scaling cliff that’s about to break your agent rollouts

    Enterprises that want to build and scale agents also need to embrace another reality: agents aren’t built like other software.  Agents are “categorically different” in how they’re built, how they operate, and…
