As artificial intelligence models grow larger and denser, the industry's physical limits are being tested: a race for grid access, land and advanced cooling is shaping the future of hyperscale data centres amid mounting infrastructure pressures and regulatory hurdles.
The expansion of artificial intelligence is colliding with the physical limits of the infrastructure that must host it, shifting the bottleneck of the industry away from chips and towards power, land and cooling, according to analysis from Heligan Group and corroborating industry reports. As hyperscale operators and cloud providers pursue ever-larger models and denser compute, investors and engineers are now competing for grid access, expandable megawatts and sites that can be retrofitted quickly for high‑density workloads. [1][3][5]
Heligan’s study frames the problem simply: AI hardware is power‑hungry and thermally intense. “As hyperscalers push past historic investment levels and model sizes continue to accelerate, the physical layer of AI infrastructure has become the limiting factor to global innovation. A single NVIDIA H100 draws up to 1,000 watts; racks exceed 100 kW. This is no longer a race for silicon – it’s a battle for grid access, land and engineering capability,” Andrew Dickinson, Head of Infrastructure Services at Heligan Group, said. Industry analysis and consultancy work reflect the same trajectory: average rack power densities have risen sharply and are forecast to climb further in the coming years. [1][3][5]
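The rack-density arithmetic behind Dickinson's numbers can be sketched in a few lines. The per-accelerator figure comes from the quote above; the server and rack counts are illustrative assumptions, not figures from the report:

```python
# Back-of-envelope rack power estimate. Only the ~1 kW per-accelerator
# figure is taken from the article; everything else is an assumption.
GPU_POWER_KW = 1.0      # per-accelerator draw cited above (up to 1,000 W)
GPUS_PER_SERVER = 8     # assumption: a typical dense AI server
SERVERS_PER_RACK = 16   # assumption: a dense rack layout
OVERHEAD = 1.10         # assumption: ~10% for CPUs, NICs, fans, PSU losses

rack_kw = GPU_POWER_KW * GPUS_PER_SERVER * SERVERS_PER_RACK * OVERHEAD
print(f"Estimated rack load: {rack_kw:.0f} kW")
```

Under these assumptions a single rack lands around 140 kW, comfortably past the 100 kW threshold the report describes, which is why legacy air-cooled designs built for 5 to 10 kW racks cannot simply be repurposed.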
That rise in density is driving rapid change in how data centres are designed and valued. Where older facilities were engineered around predictable, air‑cooled loads, the newest AI centres favour liquid cooling technologies (direct‑to‑chip cooling, immersion tanks and other liquid loops) and require power distribution and mechanical systems capable of very different thermal dynamics. According to technology coverage and industry primers, the shift to liquid cooling and real‑time thermal management is now a mainstream engineering response to AI workloads. [2][4][6]
Power architecture is also being rethought. Operators are increasingly relying on vertical substations, on‑site generation, battery storage and dynamic load balancing rather than conventional cabling and legacy redundancy patterns. Heligan argues that platforms able to bring in and scale power rapidly, demonstrate low‑carbon delivery and retrofit to changing chip designs command a premium from buyers and investors. McKinsey and other consultancies echo this, noting the need for larger power distribution units and rack‑level reconfiguration as densities rise. [1][3][5]
The market response is visible in transaction activity and capital allocation. Heligan forecasts an acceleration in large deals and says global data‑centre mergers and acquisitions could top USD 80 billion in 2025, while hyperscale and technology companies may invest roughly USD 400 billion this year to secure power‑ready capacity. The firm describes deals in excess of USD 10 billion as “routine” as buyers seek platforms with expandable megawatts and proximity to reliable power. Industry observers report similar consolidation trends as customers prioritise grid‑adjacent campuses and retrofit potential. [1][3]
These pressures are already stressing electricity networks and permitting systems in established data‑centre hubs. Heligan highlights planning delays, moratoria and multi‑year construction lead times in regions such as Northern Virginia, Dublin and Frankfurt. Other industry sources document grid strain and utility limits as a recurring bottleneck, with lead times for significant capacity increases often stretching beyond three years and permitting and community resistance becoming material project risks. [1][6][7]
Energy and sustainability considerations are altering investment criteria. Cooling can account for a large share of overall site energy use, and investors are increasingly attracted to assets that offer waste‑heat recovery, district heating integration or access to lower‑carbon baseloads. Regulatory scrutiny, including tighter reporting and efficiency rules in some jurisdictions, is prompting operators to design for demonstrable decarbonisation as well as capacity. These attributes are emerging as drivers of higher valuations for AI‑ready platforms. [1][5][6]
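Cooling's share of site energy is usually expressed through power usage effectiveness (PUE), the ratio of total facility energy to IT energy. A minimal sketch, using purely hypothetical annual figures chosen only to show the mechanics:

```python
# Illustrative PUE / cooling-share calculation. All figures below are
# hypothetical assumptions, not numbers from the Heligan report.
it_load_mwh = 100.0   # assumption: annual IT equipment energy
cooling_mwh = 35.0    # assumption: annual cooling energy
other_mwh = 10.0      # assumption: lighting, UPS losses, etc.

total_mwh = it_load_mwh + cooling_mwh + other_mwh
pue = total_mwh / it_load_mwh            # power usage effectiveness
cooling_share = cooling_mwh / total_mwh  # cooling's fraction of site energy

print(f"PUE: {pue:.2f}")                     # 1.45
print(f"Cooling share: {cooling_share:.0%}")  # 24%
```

Assets that cut the cooling term (via liquid loops or waste‑heat recovery) push PUE toward 1.0, which is one concrete reason investors price them at a premium.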
The UK is singled out by Heligan as a market that has the momentum and engineering depth to capture a significant share of AI infrastructure investment. The firm cites a national project pipeline exceeding GBP 36 billion and a government Compute Roadmap projection that data‑centre‑dedicated power could reach 6 GW by 2030. Heligan says UK construction and engineering firms are scaling modular and advanced‑cooling builds, positioning the country as one of Europe’s more competitive markets for hyperscale campuses. Other industry coverage notes similar shifts toward modular, grid‑adjacent campuses and the practical difficulties of upgrading legacy sites to meet AI demands. [1][3][7]
The result is a changed competitive landscape in which control of physical systems (land with permissions, robust grid connections, flexible cooling and the engineering capability to reconfigure facilities) is as strategically important as chip design or cloud software. “AI’s expansion is no longer defined by chips; it is constrained by infrastructure. Control of physical systems is now driving competitive edge, and the winners will be those who can build, power, and cool at speed,” Dickinson said. Investors and operators are already aligning strategies accordingly, privileging platforms that can deliver power, cooling and scalability at pace while meeting emerging sustainability and regulatory expectations. [1][3]
📌 Reference Map:
- [1] (ITBrief / Heligan Group coverage) – Paragraphs 1, 2, 4, 5, 6, 8, 9
- [2] (TechRadar Pro) – Paragraph 3
- [3] (Heligan Group report) – Paragraphs 1, 2, 5, 8, 9
- [4] (Intelligent Data Centres) – Paragraph 3
- [5] (McKinsey) – Paragraphs 2, 4, 7
- [6] (DataCenterKnowledge) – Paragraphs 3, 6, 7
- [7] (WWT) – Paragraphs 6, 8
Source: Fuse Wire Services


