Gridcare thinks more than 100 GW of data center capacity is hiding in the grid

Hyperscalers and data center developers are in a pickle: They all want to add computing power tomorrow, but utilities frequently play hard to get, citing years-long waits for grid connections.

“All the AI data centers are struggling to get connected,” Amit Narayan, founder and CEO of Gridcare, told TechCrunch. “They’re so desperate. They are looking for solutions, which may or may not happen. Certainly not in the five-year timelines they cite.”

That has led many data centers to pursue what’s called “behind the meter” power sources — basically, they build their own power plants, a costly endeavor that hints at just how desperate they are for electricity.

But Narayan knew there was plenty of slack in the system, even if utilities themselves hadn’t discovered it yet. He has studied the grid for the last 15 years, first as a Stanford researcher, then as the founder of another company. “How do we create more capacity when everyone thinks that there is no capacity on the grid?” he said.

Narayan said that Gridcare, which has been operating in stealth, has already discovered several places where extra capacity exists, and it’s ready to play matchmaker between data centers and utilities.

Gridcare recently closed an oversubscribed $13.5 million seed round, the company told TechCrunch. The round was led by Xora, Temasek’s deep tech venture firm, with participation from Acclimate Ventures, Aina Climate AI Ventures, Breakthrough Energy Discovery, Clearvision, Clocktower Ventures, Overture Ventures, Sherpalo Ventures, and WovenEarth.

For Narayan and his colleagues at Gridcare, the first step to finding untapped capacity was to map the existing grid. The company then uses generative AI to help forecast what changes might be implemented in the coming years, layering on other data: the availability of fiber optic connections, natural gas, and water; extreme weather risk; permitting; and community sentiment around data center construction and expansion.
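Gridcare hasn’t published the details of its models, but the layering step can be pictured as a weighted suitability score computed over those datasets for each candidate location. Below is a minimal sketch of that idea; the factor names, weights, and values are invented for illustration and are not Gridcare’s actual inputs.

```python
# Hypothetical illustration of layering siting factors into one score.
# Factors, weights, and data are invented; Gridcare's real models are not public.

SITE_FACTORS = {
    "grid_headroom":       0.35,   # spare transmission capacity (normalized)
    "fiber_access":        0.15,   # proximity to long-haul fiber
    "gas_access":          0.10,   # natural gas availability
    "water_access":        0.10,   # cooling water availability
    "weather_risk":       -0.10,   # exposure to extreme weather (penalty)
    "permitting_ease":     0.10,   # expected speed of local permitting
    "community_sentiment": 0.10,   # local attitude toward data centers
}

def suitability(site: dict[str, float]) -> float:
    """Weighted sum of normalized (0-1) factor values for one candidate site."""
    return sum(weight * site.get(name, 0.0)
               for name, weight in SITE_FACTORS.items())

candidate = {
    "grid_headroom": 0.8, "fiber_access": 0.9, "gas_access": 0.4,
    "water_access": 0.6, "weather_risk": 0.2, "permitting_ease": 0.7,
    "community_sentiment": 0.5,
}
print(f"suitability score: {suitability(candidate):.2f}")
```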


“There are 200,000-plus scenarios that you have to consider every time you’re running this study,” Narayan said.

To make sure it’s not running afoul of regulations, Gridcare then takes that data and weighs it against federal guidelines that dictate grid usage. Once it finds a spot, it starts talking with the relevant utility to verify the data.

“We’ll find out where the maximum bang for the buck is,” Narayan said.
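In effect, that screening works like a Monte Carlo filter: a site only advances to utility review if its spare capacity survives nearly every simulated future while staying inside the applicable rules. A toy version of the idea, with made-up thresholds and a stand-in scenario engine:

```python
# Toy scenario screen; thresholds and the headroom model are invented
# for illustration and are not Gridcare's actual methodology.
import random

N_SCENARIOS = 200_000        # order of magnitude Narayan cites per study
REQUIRED_HEADROOM_MW = 50    # hypothetical data center load
PASS_RATE = 0.99             # tolerate shortfall in at most 1% of futures

def simulate_headroom_mw() -> float:
    """Stand-in for a real scenario engine: sample spare capacity at one
    location under randomized load growth, weather, and outage futures."""
    return random.gauss(mu=110, sigma=20)

def site_passes() -> bool:
    ok = sum(simulate_headroom_mw() >= REQUIRED_HEADROOM_MW
             for _ in range(N_SCENARIOS))
    return ok / N_SCENARIOS >= PASS_RATE

print("cleared for utility review" if site_passes() else "rejected")
```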

At the same time, Gridcare works with hyperscalers and data center developers to identify where they’re looking to expand existing operations or build new facilities. “They have already told us what they’re willing to do. We know the parameters under which they can operate,” he said.

That’s when the matchmaking begins.

Gridcare sells its services to data center developers, charging them a fee based on how many megawatts of capacity the startup can unlock for them. “That fee is significant for us, but it’s negligible for data centers,” Narayan said.

For some data centers, the price of admission might be forgoing grid power for a few hours here and there, relying on on-site backup power instead. For others, the path might be clearer if their demand helps green-light a new grid-scale battery installation nearby. In the future, the winner might be the developer that is willing to pay more. Utilities have already approached Gridcare inquiring about auctioning access to newfound capacity.
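That flexibility trade is straightforward to quantify in principle: if a line only hits its limit for a handful of hours a year, a data center willing to drop to backup power during those hours can connect without new wires. A rough sketch of the calculation, using invented numbers in place of real metered feeder data:

```python
# Back-of-envelope curtailment estimate; the line rating, load size, and
# synthetic load profile are all hypothetical.
import random

LINE_RATING_MW = 500         # hypothetical feeder limit
DATA_CENTER_MW = 100         # proposed new flexible load

# Stand-in for a year of hourly metered feeder load (8,760 hours).
random.seed(7)
hourly_load_mw = [random.gauss(300, 40) for _ in range(8760)]

# Hours in which adding the data center would exceed the rating,
# i.e., hours it must ride through on on-site backup power.
curtail_hours = sum(load + DATA_CENTER_MW > LINE_RATING_MW
                    for load in hourly_load_mw)

print(f"curtailment needed: {curtail_hours} h/yr "
      f"({curtail_hours / 8760:.1%} of the year)")
```

Under these assumed numbers, the data center gives up grid power for well under 1% of the year in exchange for a connection that would otherwise wait on new transmission.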

Regardless of how it happens, Narayan thinks that Gridcare can unlock more than 100 gigawatts of capacity using its approach. “We don’t have to solve nuclear fusion to do this,” he said.

Update: Corrected spare capacity on the grid to gigawatts from megawatts.
