The rapid expansion of artificial intelligence data centres in the US is exposing critical vulnerabilities in the nation’s power infrastructure, risking blackouts, economic losses, and higher consumer energy costs unless urgent upgrades and demand management are enacted.
The accelerating demand for electricity driven by artificial intelligence (AI) data centres is creating unprecedented strain on the United States' power grid, exposing critical vulnerabilities that could have wide-reaching consequences for the technology sector and beyond. Recent reports reveal that despite massive investment in high-tech facilities, many AI data centres are unable to operate at full capacity because local power infrastructure cannot keep pace with explosive growth in energy needs.
Two flagship data centres in Silicon Valley, just minutes from Nvidia's headquarters, stand as stark examples of this challenge. The SJC37 facility by Digital Realty and STACK Infrastructure's SVY02A were purpose-built for high-density AI processing, each requiring roughly 48 megawatts of power. Yet the local utility, Silicon Valley Power, lacks sufficient grid capacity to energise these buildings. According to the publicly owned utility's plans, a $450 million upgrade involving new substations and power lines will not be completed until around 2028, meaning these state-of-the-art centres may remain idle or underpowered for years. Nearly 100 megawatts of AI data centre capacity in Silicon Valley sits dark due to this shortfall, a problem mirrored in other regions as AI expansion accelerates.
A Department of Energy (DOE) study estimates that U.S. data centres consumed around 176 terawatt-hours of electricity in 2023, accounting for approximately 4.4% of all U.S. power usage. This figure is forecast to nearly triple by 2028, potentially reaching up to 12% of national consumption as AI workloads surge. Utilities and grid operators are struggling to meet requests totalling gigawatts of new demand; AEP Ohio, for example, has received load-study requests reaching roughly 13 gigawatts within its territory alone. Major corporations like Amazon have publicly accused utilities, such as PacifiCorp in Oregon, of failing to deliver contracted power to new data centre campuses.
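The DOE figures above imply a rough picture of the overall grid. A back-of-envelope sketch, using only the numbers quoted in this article (the flat-total-demand assumption is illustrative, not from the DOE study):

```python
# Back-of-envelope check of the DOE figures cited above.
dc_consumption_twh = 176   # US data-centre consumption, 2023 (per the article)
dc_share = 0.044           # share of total US power usage, 2023 (per the article)

# Implied total US electricity consumption in 2023
total_us_twh = dc_consumption_twh / dc_share
print(f"Implied total US consumption: {total_us_twh:.0f} TWh")  # 4000 TWh

# If data-centre demand nearly triples by 2028 while total demand stays
# roughly flat (an assumption), the share approaches the ~12% the article cites.
projected_dc_twh = dc_consumption_twh * 3
print(f"Projected 2028 share: {projected_dc_twh / total_us_twh:.1%}")  # 13.2%
```

In practice total demand will also grow, which pulls the projected share back toward the 12% figure in the forecast.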
The economic implications for companies like Nvidia are significant. Industry estimates suggest a 1-gigawatt "hyperscale AI" campus can cost around $35 billion to build, with substantial investment allocated to GPUs and infrastructure. Even a small fraction of that capacity left idle, such as 100 megawatts, represents billions of dollars of capital generating no return while power is unavailable. Nvidia's annual cadence of GPU releases underscores the urgency; each idle year represents lost performance milestones and pricing power in an intensely competitive market. Nvidia CEO Jensen Huang has acknowledged the paradox, noting that any weaker earnings could be misinterpreted as evidence of an AI bubble, while strong results could further fuel the bubble. However, he clarifies that these supply issues stem not from Nvidia's technology but from broader power limitations.
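The stranded-capital arithmetic above is simple to make explicit. A minimal sketch, assuming the article's $35 billion-per-gigawatt industry estimate scales linearly:

```python
# Rough illustration of the stranded-capital figure discussed above.
cost_per_gw_usd = 35e9     # estimated build cost of a 1 GW hyperscale AI campus
idle_capacity_gw = 0.1     # 100 MW of capacity sitting dark

# Capital tied up in unpowered capacity, assuming cost scales linearly
idle_capital = cost_per_gw_usd * idle_capacity_gw
print(f"Capital earning no return: ${idle_capital / 1e9:.1f}B")  # $3.5B
```

Linear scaling overstates precision (fixed costs don't divide evenly across megawatts), but it shows why even "a fraction" of an unpowered campus runs into the billions.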
This power bottleneck is not merely a localised Silicon Valley challenge but a national one. Reports warn that U.S. power outages could double by 2030 unless electricity suppliers expand capacity significantly. The Department of Energy links this risk partly to the continued decommissioning of traditional power sources, driven by green energy policies that have not yet been matched by sufficient new capacity. Emerging technologies, including AI and expansive data centres, exacerbate this growing supply-demand gap.
To address peak load challenges, some technology firms are beginning to engage in demand-response programs. For instance, Google has agreed with utilities in Indiana and Tennessee to curb power usage at its AI data centres during peak demand periods starting August 2025. Such initiatives, more common in manufacturing and crypto mining sectors, aim to alleviate stress on grids by temporarily scaling back large energy consumers.
Meanwhile, states are reconsidering energy policy to safeguard grid stability amid surging demand. Texas, for example, has enacted legislation permitting the disconnection of large electricity consumers, including data centres, during emergencies. Other regions, including grids managed by PJM and the Southwest Power Pool, are exploring similar strategies to manage the rapidly increasing load from the tech sector.
The scale of electricity consumption by AI infrastructure is staggering. Elon Musk's 'Colossus' supercomputer alone reportedly consumes 260 megawatts, comparable to a quarter of a nuclear reactor's output. Combined with data centres operated by Nvidia, OpenAI, Microsoft, Meta, Amazon, and Google, total AI-related demand amounts to tens of gigawatts. Projections suggest national demand will rise 25% by 2030 and 78% by 2050, with consumer electricity prices potentially increasing by 15% to 40% in response. This trajectory signals a looming energy crisis unless infrastructure upgrades and policy measures accelerate.
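The comparisons above can be made concrete. A short sketch, where the ~1-gigawatt reactor output and the $150 monthly bill are illustrative assumptions not stated in the article:

```python
# Consumption comparison from the paragraph above.
colossus_mw = 260     # reported draw of the 'Colossus' supercomputer
reactor_mw = 1000     # assumed output of one large nuclear unit (illustrative)
print(f"Colossus vs one reactor: {colossus_mw / reactor_mw:.0%}")  # 26%

# Projected consumer price increases (15% to 40%) applied to a
# hypothetical $150 monthly electricity bill.
monthly_bill = 150.0
low, high = monthly_bill * 1.15, monthly_bill * 1.40
print(f"Projected bill range: ${low:.2f} to ${high:.2f}")  # $172.50 to $210.00
```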
In summary, the explosive growth of AI data centres exposes a critical tension: cutting-edge technology expansion is outpacing the electricity grid’s ability to supply power reliably. The consequences extend beyond financial losses for companies like Nvidia, who supply the essential chips and systems for these centres, to broader economic and societal risks including increased blackout frequency and higher consumer energy costs. Addressing this power challenge requires coordinated infrastructure investment, innovative demand management, and proactive regulatory frameworks to ensure the promise of AI is not dimmed by insufficient power.
📌 Reference Map:
- [1] (Contrarian Unicus) – Paragraph 1, Paragraph 4, Paragraph 7
- [2] (Reuters, August 2025) – Paragraph 6
- [3] (Reuters, July 2025) – Paragraph 5
- [4] (Tom's Hardware) – Paragraph 1, Paragraph 4
- [5] (Le Monde) – Paragraph 8
- [6] (AP News) – Paragraph 7
Source: Fuse Wire


