The hidden scaling cliff that’s about to break your agent rollouts



Enterprises that want to build and scale agents also need to embrace another reality: agents aren’t built like other software. 

Agents are “categorically different” in how they’re built, how they operate, and how they’re improved, according to Writer CEO and co-founder May Habib. This means ditching the traditional software development life cycle when dealing with adaptive systems.

“Agents don’t reliably follow rules,” Habib said on Wednesday while on stage at VB Transform. “They are outcome-driven. They interpret. They adapt. And the behavior really only emerges in real-world environments.”

Knowing what works — and what doesn’t work — comes from Habib’s experience helping hundreds of enterprise clients build and scale enterprise-grade agents. According to Habib, more than 350 of the Fortune 1000 are Writer customers, and more than half of the Fortune 500 will be scaling agents with Writer by the end of 2025.

Using non-deterministic tech to produce powerful outputs can even be “really nightmarish,” Habib said — especially when trying to scale agents systemically. Even if enterprise teams can spin up agents without product managers and designers, Habib thinks a “PM mindset” is still needed for collaborating, building, iterating and maintaining agents.

“Unfortunately or fortunately, depending on your perspective, IT is going to be left holding the bag if they don’t lead their business counterparts into that new way of building.”


Why goal-based agents are the right approach

One of the shifts in thinking includes understanding the outcome-based nature of agents. For example, she said that many customers request agents to assist their legal teams in reviewing or redlining contracts. But that’s too open-ended. Instead, a goal-oriented approach means designing an agent to reduce the time spent reviewing and redlining contracts.

“In the traditional software development life cycle, you are designing for a deterministic set of very predictable steps,” Habib said. “It’s input in, input out in a more deterministic way. But with agents, you’re seeking to shape agentic behavior. So you are seeking less of a controlled flow and much more to give context and guide decision-making by the agent.”

Another difference is building a blueprint for agents that instructs them with business logic, rather than providing them with workflows to follow. This includes designing reasoning loops and collaborating with subject experts to map processes that promote desired behaviors.
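Habib didn't share implementation details, but the blueprint-plus-reasoning-loop idea can be sketched in a few lines. The sketch below is illustrative, not Writer's actual design: all names (`AgentBlueprint`, `reasoning_loop`, the contract-review rules) are hypothetical stand-ins, and the action proposal would be an LLM call in practice.

```python
from dataclasses import dataclass

@dataclass
class AgentBlueprint:
    """Capture the goal and business logic an agent reasons against,
    rather than a fixed sequence of workflow steps."""
    goal: str
    business_rules: list[str]
    escalation_policy: str

def reasoning_loop(blueprint, propose_action, outcome_met, max_steps=10):
    """Minimal reasoning loop: the agent proposes actions against the
    blueprint until the outcome is met or a step budget runs out."""
    history = []
    for _ in range(max_steps):
        action = propose_action(blueprint, history)  # an LLM call in practice
        history.append(action)
        if outcome_met(history):
            return history
    return history  # budget exhausted; escalate per blueprint.escalation_policy

# Illustrative stand-ins for the model call and the outcome check
blueprint = AgentBlueprint(
    goal="Reduce time spent reviewing and redlining contracts",
    business_rules=["Flag liability clauses", "Never auto-accept indemnity changes"],
    escalation_policy="Route unresolved clauses to legal counsel",
)
steps = reasoning_loop(
    blueprint,
    propose_action=lambda bp, h: f"review clause {len(h) + 1}",
    outcome_met=lambda h: len(h) >= 3,
)
print(steps)  # -> ['review clause 1', 'review clause 2', 'review clause 3']
```

The point of the shape: the blueprint constrains decision-making with context and rules, while the loop leaves the sequence of steps up to the agent.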

While there’s a lot of talk about scaling agents, Writer is still helping most clients build them one at a time. That’s because it’s important first to answer questions about who owns and audits the agent, who keeps it relevant, and who checks that it’s still producing the desired outcomes.

“There is a scaling cliff that folks get to very, very quickly without a new approach to building and scaling agents,” Habib said. “There is a cliff that folks are going to get to when their organization’s ability to manage agents responsibly really outstrips the pace of development happening department by department.”

QA for agents vs software

Quality assurance is also different for agents. Instead of an objective checklist, agentic evaluation means accounting for non-binary behavior and assessing how agents act in real-world situations. Failure isn’t always obvious — it’s not as black and white as checking whether something broke. Instead, Habib said it’s better to check whether an agent behaved well: asking if fail-safes worked, and evaluating outcomes and intent. “The goal here isn’t perfection. It is behavioral confidence, because there is a lot of subjectivity in this here.”
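One way to picture “behavioral confidence” instead of pass/fail QA is a graded scorecard over an agent run. This is a hypothetical sketch, not Writer’s evaluation framework: the check names and scores are invented, and in practice the graded checks might be LLM judges or human rubrics rather than lambdas.

```python
def evaluate_behavior(transcript, checks):
    """Behavioral evaluation sketch: instead of a binary pass/fail,
    score an agent run against several graded checks and report
    a confidence profile rather than a verdict."""
    scores = {name: check(transcript) for name, check in checks.items()}
    scores["overall"] = sum(scores.values()) / len(scores)
    return scores

# Hypothetical graded checks; each returns a score in [0, 1]
checks = {
    "fail_safes_triggered": lambda t: 1.0 if "escalated" in t else 0.0,
    "intent_alignment": lambda t: 0.8,   # stand-in for an LLM-judge score
    "outcome_quality": lambda t: 0.9,    # stand-in for a human rubric score
}
report = evaluate_behavior("agent escalated ambiguous clause to legal", checks)
print(round(report["overall"], 2))  # -> 0.9
```

The output is a profile to iterate against, not a red/green light — which matches the “launch safely, iterate” framing below.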

Businesses that don’t understand the importance of iteration end up playing “a constant game of tennis that just wears down each side until they don’t want to play anymore,” Habib said. Teams also need to be okay with agents being less than perfect, focusing instead on “launching them safely and running fast and iterating over and over and over.”

Despite the challenges, there are examples of AI agents already helping bring in new revenue for enterprise businesses. For example, Habib mentioned a major bank that collaborated with Writer to develop an agent-based system, resulting in a new upsell pipeline worth $600 million by onboarding new customers into multiple product lines.

New version controls for AI agents

Agentic maintenance is also different. Traditional software maintenance involves checking the code when something breaks, but Habib said AI agents require a new kind of version control for everything that can shape behavior. It also requires proper governance and ensuring that agents remain useful over time, rather than incurring unnecessary costs.

Because models don’t map cleanly to AI agents, Habib said maintenance includes checking prompts, model settings, tool schemas and memory configuration. It also means fully tracing executions across inputs, outputs, reasoning steps, tool calls and human interactions. 
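One way to make “version everything that can shape behavior” concrete is to hash the full configuration — prompt, model settings, tool schemas, memory config — so any drift is detectable, and to log every traced event in a run. This is a minimal sketch under those assumptions; the function names and fields are illustrative, not Writer’s tooling.

```python
import hashlib
import json
import time

def snapshot_agent_config(prompt, model_settings, tool_schemas, memory_config):
    """Version everything that can shape agent behavior, not just code:
    hash the full configuration so any drift is detectable."""
    config = {
        "prompt": prompt,
        "model_settings": model_settings,
        "tool_schemas": tool_schemas,
        "memory_config": memory_config,
    }
    digest = hashlib.sha256(json.dumps(config, sort_keys=True).encode()).hexdigest()
    return {"config": config, "version": digest[:12], "captured_at": time.time()}

def trace_step(run_log, kind, payload):
    """Append one traced event (input, reasoning step, tool call,
    output, or human interaction) to an execution log."""
    run_log.append({"kind": kind, "payload": payload})

snap = snapshot_agent_config(
    prompt="You reduce contract review time...",
    model_settings={"temperature": 0.2},
    tool_schemas={"redline": {"params": ["clause_id"]}},
    memory_config={"window": 20},
)
run_log = []
trace_step(run_log, "input", "contract_147.pdf")
trace_step(run_log, "tool_call", {"tool": "redline", "clause_id": 3})
trace_step(run_log, "output", "flagged indemnity clause")
print(snap["version"], len(run_log))
```

Because the version hash covers more than the code, a prompt tweak or tool-schema change produces a new version even when the git history is unchanged — exactly the gap Habib describes.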

“You can update an LLM [large language model] prompt and watch the agent behave completely differently even though nothing in the git history actually changed,” Habib said. “The model links shift, retrieval indexes get updated, tool APIs evolve and suddenly the same prompt does not behave as expected…It can feel like we are debugging ghosts.”
