Published: Oct 21, 2025
How Portland General Electric is Unlocking Hidden Capacity for Data Centers
By partnering with startup GridCARE, Portland General Electric (PGE) is investing in AI-powered flexibility solutions to accelerate the connection of new data centers to its transmission-constrained grid.
The Pacific Northwest finds itself at an unexpected crossroads where the relentless appetite of artificial intelligence meets the stubborn physics of electrical infrastructure. Portland General Electric has managed to free up over 80 megawatts of capacity for data center interconnections using AI-enabled flexibility tools developed by California startup GridCARE, marking a pragmatic departure from the industry's traditional "build first, connect later" approach to grid expansion.
This isn't merely a technical achievement; it's a philosophical shift in how utilities approach capacity constraints. GridCARE uses artificial intelligence, detailed hourly demand modeling, and optimized flexible resources like batteries and onsite generators to identify spare capacity, allowing PGE to interconnect multiple data center customers years earlier than initially expected.
What makes this development particularly noteworthy is that it leverages existing assets rather than waiting years for new transmission lines to materialize.
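To make the idea concrete, here is a minimal sketch of the kind of hourly headroom analysis at play, using entirely synthetic numbers. The load profile, the 1,000 MW limit, and the battery sizing below are all hypothetical illustrations, not GridCARE's actual model:

```python
# Toy question: given an hourly load forecast and a line/substation limit,
# how much new flat load fits if an onsite battery covers the tight hours?
import numpy as np

rng = np.random.default_rng(0)
HOURS = 8760
limit_mw = 1000.0
base_mw = (700
           + 150 * np.sin(2 * np.pi * np.arange(HOURS) / 24)  # daily cycle
           + rng.normal(0, 30, HOURS))                        # noise

def fits(new_mw, batt_power_mw, batt_energy_mwh):
    """True if a flat `new_mw` load stays under the limit every hour, with
    an onsite battery bridging any deficit. The daily-energy check is a
    crude proxy for overnight recharging."""
    deficit = np.clip(base_mw + new_mw - limit_mw, 0, None)  # MW short, hourly
    if deficit.max() > batt_power_mw:
        return False                            # battery too small in power
    daily_mwh = deficit.reshape(365, 24).sum(axis=1)
    return daily_mwh.max() <= batt_energy_mwh   # and in daily energy

sizes = np.arange(0, 301, 5)
rigid = max(s for s in sizes if fits(s, 0, 0))
flex = max(s for s in sizes if fits(s, 50, 200))
print(f"firm grid capacity only:        {rigid} MW of new load")
print(f"with a 50 MW / 200 MWh battery: {flex} MW of new load")
```

Even this toy version shows the pattern: the grid is tight for only a handful of hours, so a modest flexible resource can substantially raise the load a constrained system can host.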
Hillsboro's Unlikely Rise as a Pacific Gateway
The backdrop to this innovation is Hillsboro's transformation from an Intel-dominated semiconductor suburb into a transcontinental digital crossroads. Hillsboro's position at the mouth of an undersea fiber highway connecting North America and Asia has driven its development into a major data center hub, with 800 megawatts of data center capacity already operating on a grid with total system load of approximately 4.5 gigawatts.
Multiple transpacific submarine cable systems, including the New Cross Pacific Cable and Hawaiki Cable, terminate at data centers in Hillsboro after landing at Pacific City, Oregon, connected via the Hillsboro Data Center Fiber Ring. This infrastructure creates a compelling value proposition: low-latency access to Asia Pacific markets, comprehensive local connectivity, and more favorable tax incentives than California or Washington.
Yet this strategic advantage has created its own problem. PGE has approximately 3 gigawatts of active data center load requests, with over 400 megawatts potentially energizing by 2029 - a staggering queue that would have spelled years-long delays under conventional interconnection procedures.
The Flexibility Paradigm: From Liability to Asset
What GridCARE fundamentally recognizes is that the steady deployment of distributed energy resources over the past decade has created opportunities that traditional grid planning methodologies weren't designed to capture. The company's CEO, Amit Narayan, who previously founded and led virtual power plant platform AutoGrid before its acquisition by Uplight, notes that customers typically use batteries, onsite generation, and microgrids for reliability and price hedging rather than mitigating strain on the grid.
The crucial insight is moving these proven operational tools into the planning phase. PGE's senior vice president Larry Bekkedahl explains that data centers eager to connect to the transmission-constrained Pacific Northwest grid are realizing that more onsite or nodal flexibility can accelerate the interconnection process. That is a significant change from just a few years ago, when data centers presented as flat, round-the-clock loads that could strain the system during the five or ten days each year when demand peaks.
This represents a maturation in how the industry conceptualizes data center operations. Rather than demanding perfect, uninterrupted grid capacity for theoretical peak loads that occur only hours per year, facilities can now negotiate more nuanced service agreements. The economic incentive is substantial: every megawatt of additional capacity adds between $30 million and $40 million in value for large data centers.
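By that back-of-envelope math, the 80-plus megawatts PGE has already freed up represents somewhere between $2.4 billion and $3.2 billion in value to its data center customers ($30M to $40M per megawatt times 80 megawatts), which goes a long way toward explaining their newfound willingness to accept flexible service terms.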
The Interconnection Crisis Context
To appreciate GridCARE's significance, one must understand the broader interconnection crisis paralyzing American grid expansion. Hyperscale companies like Meta and Microsoft have experienced data center delays due to interconnection bottlenecks, with Northern Virginia seeing seven-year delays despite hosting over 300 data centers contributing $9.1 billion annually to the state economy.
In Texas, CenterPoint Energy reported a 700% increase in large load interconnection requests, growing from one gigawatt to eight gigawatts between late 2023 and late 2024. The fundamental problem is straightforward: the power system lacks sufficient transmission capacity and generation to serve dozens of gigawatts of new, high-utilization demand continuously, and building new transmission infrastructure requires years of permitting, land acquisition, supply chain management, and construction.
Speculative load requests, dubbed "phantom projects" or "vaporwatts", further exacerbate the situation by flooding utilities with applications from developers hedging their bets across multiple sites, inflating load growth predictions and creating delays for all projects seeking connection.
Projects that came online in 2023 spent an average of five years between interconnection request and commercial operation, a significant increase over typical timelines in years past.
The Federal Energy Regulatory Commission has responded with Order 2023, implementing cluster studies and enhanced readiness requirements, but the rule alone cannot solve underlying transmission capacity constraints.
Virtual Power Plants: The Technology Foundation
GridCARE's approach builds on the virtual power plant concept that has gained traction over the past decade. Virtual power plants aggregate distributed energy resources at scale to provide grid services, strategically adjusting demand to maintain grid reliability as variable renewable energy capacity increases.
The Department of Energy estimates that tripling VPP capacity this decade, reaching 80 to 160 gigawatts, could address 10 to 20 percent of peak demand and save over $10 billion in annual grid costs by 2030.
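Unpacking those figures: for 80 to 160 gigawatts to cover 10 to 20 percent of peak demand, the estimate implies a national coincident peak on the order of 800 gigawatts (80 / 800 = 10 percent; 160 / 800 = 20 percent).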
AutoGrid's VPP platform, where Narayan honed his expertise, collectively represented 5 gigawatts of capacity and 37 gigawatt-hours of energy across 15 countries as of summer 2021, with assets dispatched 1,500 times to meet grid needs. The technology has proven its reliability during stress tests - during California's extreme August-September 2022 heat wave, OhmConnect's VPP automatically dispatched member devices 1.3 million times in response to real-time signals from the grid operator.
What distinguishes GridCARE's application is the extension of these operational capabilities into the planning domain. Rather than deploying flexibility resources reactively during grid emergencies, the company helps utilities incorporate flexibility parameters into interconnection studies and capacity planning from the outset.
Data Center Flexibility: Promise and Pragmatism
The conversation around data center flexibility requires careful calibration between theoretical potential and operational reality. Recent analysis suggests that voluntary curtailment could unlock up to 76 gigawatts of U.S. grid capacity for data centers, and that with average curtailment events of about two hours, flexibility could facilitate up to 100 gigawatts of new large loads on the grid.
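The arithmetic behind curtailment-enabled headroom is easy to sketch. The toy calculation below, with an invented feeder profile and limit, finds the largest flat load whose required curtailment stays within a small fraction of its own annual energy; none of these numbers come from the cited analysis:

```python
# For a synthetic constrained feeder, how big a flat new load fits if it may
# shed a small share of its annual energy during constrained hours?
import numpy as np

rng = np.random.default_rng(1)
HOURS = 8760
limit_mw = 1000.0
base_mw = (700
           + 150 * np.sin(2 * np.pi * np.arange(HOURS) / 24)
           + rng.normal(0, 30, HOURS))

def max_load_for_budget(base, limit, budget_frac):
    """Largest flat new load (MW) that fits if it may curtail up to
    `budget_frac` of its annual energy during constrained hours."""
    lo, hi = 0.0, float(limit)
    for _ in range(40):                         # bisection on load size
        mid = (lo + hi) / 2
        shed = np.clip(base + mid - limit, 0, mid).sum()  # MWh curtailed
        if shed <= budget_frac * mid * len(base):
            lo = mid
        else:
            hi = mid
    return lo

firm = max_load_for_budget(base_mw, limit_mw, 0.0)
for frac in (0.0025, 0.005):                    # 0.25% and 0.5% of annual energy
    mw = max_load_for_budget(base_mw, limit_mw, frac)
    print(f"shedding {frac:.2%} of annual energy -> ~{mw:.0f} MW fits "
          f"(firm headroom: ~{firm:.0f} MW)")
```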
Google has pioneered demand response capabilities at its data center fleet, partnering with utilities in Belgium, Taiwan, and more recently Indiana and Washington to shift or reduce power demand during peak hours. The company emphasizes that flexible demand enables large electricity loads to interconnect more quickly, helps reduce the need to build new transmission and power plants, and assists grid operators in managing power systems more effectively.
Yet industry veterans urge caution about overstating flexibility potential. Data center infrastructure expert Brian Janous argues that the incredibly high value of data center facilities - often costing $30 billion - makes using them primarily as demand response machines economically irrational, and that flexibility will more likely come from leveraging existing backup generators, batteries, and onsite storage rather than fundamentally rearchitecting computing workloads.
The distinction matters: Internet data centers handling near-instantaneous tasks cannot easily provide temporal flexibility, while AI compute data centers that maintain job queues offer more opportunities for workload modulation.
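A toy scheduler illustrates why queue-based AI workloads are the easier case. Assuming a hypothetical hourly grid-stress signal and fully deferrable jobs (all names and numbers here are invented), shifting compute into off-peak hours reduces to a simple sorting problem:

```python
# Deferrable batch jobs can be packed into the calmest hours of the day;
# interactive serving has no equivalent degree of freedom.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    hours_needed: int       # compute-hours required
    deadline_hour: int      # must finish within this many hours

# Hypothetical grid-stress signal for the next 24 hours (say, from a utility
# demand-response program); higher means defer load if possible.
stress = [0.2, 0.1, 0.1, 0.2, 0.3, 0.5, 0.8, 0.9, 0.9, 0.7,
          0.6, 0.5, 0.5, 0.6, 0.7, 0.9, 1.0, 1.0, 0.8, 0.6,
          0.4, 0.3, 0.2, 0.2]

def schedule(jobs, stress):
    """Place each job's compute-hours into the lowest-stress hours before
    its deadline, handling earliest deadlines first (one job per hour)."""
    plan, used = {}, set()
    for job in sorted(jobs, key=lambda j: j.deadline_hour):
        candidates = sorted(
            (h for h in range(job.deadline_hour) if h not in used),
            key=lambda h: stress[h],
        )
        chosen = sorted(candidates[:job.hours_needed])
        plan[job.name] = chosen
        used.update(chosen)
    return plan

jobs = [Job("train-checkpoint", 4, 24), Job("batch-eval", 2, 12)]
for name, hours in schedule(jobs, stress).items():
    print(f"{name}: run during hours {hours}")
```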
A Department of Energy report found that aside from Google's activities, researchers identified no examples of grid-aware flexible operation at U.S. data centers, potentially because electricity providers only recently started declining data center interconnection requests.
Geopolitical Dimensions and National Security
Narayan frames GridCARE's mission in explicitly strategic terms. He argues that because the United States takes longer to build energy infrastructure than its principal geopolitical competitors, there's a national security dimension to maximizing existing grid capacity, noting that while the U.S. leads in chips and algorithms, the speed at which things are being built in China and elsewhere creates competitive risk.
This perspective resonates with broader concerns about AI competitiveness and critical infrastructure. The ability to bring computing capacity online rapidly (or delay it for years) has implications beyond quarterly earnings reports. It affects which nations can train the largest models, deploy the most sophisticated applications, and maintain technological leadership in an increasingly digital economy.
Limitations and Scale Considerations
PGE's Bekkedahl is refreshingly candid about the technology's boundaries. He emphasizes that GridCARE's tools cannot solve the challenge posed by gigawatt-scale hyperscale AI data center proposals, such as former Energy Secretary Rick Perry's Fermi America project, which plans to deploy 11 gigawatts in northwest Texas, or Meta's Louisiana facility, which could require up to 2.2 gigawatts.
What GridCARE does enable is bringing on data center loads ranging from 50 to 500 megawatts with confidence, helping utilities and data centers develop operational parameters and ramp up service in parallel as new computing capacity comes online.
This is the pragmatic middle ground between business-as-usual transmission expansion and the moonshot promises of entirely flexible, grid-responsive computing infrastructure.
The Risk-Averse Utility Culture and Proof of Concept
Narayan acknowledges that famously risk-averse utilities typically require proof of concept before investing in non-wires alternatives, making PGE's adoption particularly significant for GridCARE, which emerged from the Stanford Sustainability Accelerator program.
This first real-world deployment at utility scale provides the validation necessary to approach other grid operators facing similar constraints.
The pattern mirrors the slow adoption curve of other grid innovations, from smart meters to battery storage to renewable integration, where early-adopter utilities shoulder disproportionate risk and provide the operating track record that enables broader industry adoption. PGE's willingness to partner with a relatively unproven startup suggests both the severity of interconnection pressures and the company's confidence in the underlying technical approach.
Broader Implications for Grid Planning
GridCARE's success challenges some fundamental assumptions embedded in grid planning methodology. Traditional reliability studies evaluate worst-case scenarios: peak system demand, contingency events like transmission line failures, and loads operating at maximum capacity simultaneously. Under these conservative assumptions, many otherwise feasible projects are flagged as too risky and forced to wait until sufficient infrastructure is constructed to ensure reliability even under extreme circumstances.
By incorporating flexibility parameters and probabilistic analysis, planning tools can more accurately assess what capacity is actually needed versus what might theoretically be required during the most extreme conceivable conditions. This shift from deterministic to more sophisticated modeling approaches represents what Narayan describes as taking tools that have been well understood and utilized for many years and moving them into the grid planning space, revealing considerable hidden capacity.
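The contrast between the two philosophies fits in a few lines. In this illustrative sketch on synthetic data, a deterministic study sizes new load against the single worst hour of the year, while a flexibility-aware study sizes it against a high percentile and assumes the handful of remaining hours are covered by curtailment or onsite resources (real studies also model contingencies and network topology):

```python
# Deterministic worst-hour headroom vs. percentile-based headroom that
# credits flexibility for the few extreme hours. Synthetic data only.
import numpy as np

rng = np.random.default_rng(2)
HOURS = 8760
load = (800
        + 120 * np.sin(2 * np.pi * np.arange(HOURS) / 24)
        + rng.normal(0, 35, HOURS))
limit = 1100.0
headroom = limit - load                         # MW available each hour

deterministic = headroom.min()                  # size against the worst hour
p_credit = np.quantile(headroom, 0.001)         # ignore the worst ~9 hours/yr
print(f"deterministic headroom:        {deterministic:6.1f} MW")
print(f"flexibility-credited headroom: {p_credit:6.1f} MW")
print(f"hours needing mitigation:      {(headroom < p_credit).sum()} h/yr")
```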
The methodology has broader applicability beyond data centers. Any large load with inherent flexibility - manufacturing facilities with batch processes, hydrogen production, desalination plants, electric vehicle charging networks - could potentially benefit from similar analytical approaches that credit flexibility in interconnection studies rather than assuming rigid, constant demand.
The Path Forward
As electricity demand growth returns for the first time in decades, driven by data centers, manufacturing reshoring, building electrification, and electric vehicles, the GridCARE model offers a blueprint for extracting more capacity from existing infrastructure while longer-term transmission expansion proceeds. Narayan's observation that getting more from existing infrastructure benefits all consumers using that infrastructure captures the essential value proposition.
The innovation here isn't revolutionary technology but rather the application of existing capabilities in new contexts with appropriate analytical frameworks. It demonstrates that sometimes the most impactful solutions come not from inventing entirely new approaches but from intelligently deploying proven tools in ways that challenge conventional practices.
For an industry facing unprecedented demand growth against the backdrop of decade-long transmission development timelines, GridCARE's work with PGE suggests that creative applications of flexibility - properly modeled and confidently integrated into planning processes - can bridge at least part of the chasm between computing ambitions and grid reality.