The data centre industry is scrambling to accommodate a wave of demand unlike anything we’ve seen before.
According to research from Goldman Sachs, global data centre capacity demand currently sits at around 62 GW, a figure predicted to grow by roughly 50% to 92 GW by 2027. Right now, cloud workloads account for 58% of demand, traditional enterprise workloads 29%, and AI workloads 13%. By next year, however, AI is on track to represent around 28% of total demand, while cloud drops to 50% and traditional workloads to 21%. The five largest US hyperscale technology companies are expected to spend a combined $736 billion (slightly higher than the GDP of Belgium, to put that into perspective) across 2025 and 2026 alone.
As data centre builders race to support demand for AI workloads, the industry is experiencing a sea change in the way these organisations approach site selection and data centre design. In the AI age, securing access to power, not to mention managing that power within a data centre facility, is posing new questions. Questions without simple answers. Questions that are sending ripples through the data centre supply chain.
Site selection around stumbling blocks in the AI race
For decades, site selection in Europe has revolved around land availability, fibre connectivity and proximity to end users. When you’re hosting responsive cloud services and streaming platforms, it pays to be as close as possible to the end user. Today, power has overtaken physical footprint as the decisive factor.
These changing priorities are due primarily to the fact that, across much of Europe, grid investment has lagged behind rising electrification demands for decades. AI deployments drive order-of-magnitude density increases versus conventional colocation, and developers are facing the reality that available megawatts, not square metres, are what make a site viable. Some operators are being forced to leave entire floors of new facilities empty, not for lack of demand (far, far from it), but because the grid connections needed to meet that demand are insufficient.
In order to secure that access to power (as well as other bonuses like lower ambient temperatures that allow for free cooling), data centre operators are starting to cast their eyes farther afield.
Luckily, AI workloads are less latency-sensitive than traditional cloud applications like streaming or payments infrastructure. That fact has opened up the possibility of building more facilities outside the saturated FLAP-D markets.
That being said, relocating outside established regions introduces its own pain points: under-industrialised areas with less mature construction ecosystems, transport bottlenecks, and shortages of skilled labour. Solving one problem creates several more.
AI power loads pose a “spiky” problem
Clearly, the next generation of data centres built to support AI workloads will not only need more power coming in, but will need to be designed to handle that power very differently once it's inside the facility.
At the core of the issue is power management and the difference between managing an AI workload versus a more traditional cloud or colocation one.
AI server clusters are built around high-density GPU arrays. When running AI training or inference, these racks behave very differently to conventional workloads. Loads ramp from idle to full capacity in a few seconds, then drop off again just as quickly. And because deep learning systems are fundamentally a black box, predicting these jumps is challenging. These "spiky" load profiles introduce issues throughout power systems, and concerns are emerging over premature ageing of UPS batteries, overstressing of the UPS systems themselves, transformer fatigue, and potential impacts on low-voltage switchgear protection modules.
Mitigation strategies are already emerging. Some projects are evaluating supercapacitors and battery energy storage systems positioned upstream of UPS installations to buffer volatility and smooth load transitions before they propagate through the network.
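The smoothing idea can be sketched in a few lines of code. The model below is purely illustrative (the function name, the 5 MW/50 MW load figures, and the 10 MW-per-interval ramp limit are all invented for the example, not drawn from any real facility): the grid-side draw is rate-limited, and an upstream buffer, whether supercapacitor or battery, covers the shortfall or absorbs the surplus during each transition.

```python
# Illustrative sketch: an upstream energy buffer (supercapacitor or BESS)
# smoothing a "spiky" AI load profile before it reaches the grid connection.
# All figures are hypothetical assumptions, not measured data.

def smooth_with_buffer(load_mw, max_ramp_mw_per_step):
    """Rate-limit the grid-side draw; the buffer absorbs the difference.

    load_mw: facility load per interval (MW)
    max_ramp_mw_per_step: max change in grid draw per interval (MW)
    Returns (grid_draw, buffer_flow); buffer_flow > 0 means the buffer
    is discharging to cover load the grid has not yet ramped up to.
    """
    grid, flows = [], []
    draw = load_mw[0]
    for load in load_mw:
        # Move the grid draw toward the actual load, limited by ramp rate.
        delta = max(-max_ramp_mw_per_step,
                    min(max_ramp_mw_per_step, load - draw))
        draw += delta
        grid.append(draw)
        # Shortfall covered (or surplus absorbed) by the buffer.
        flows.append(load - draw)
    return grid, flows

# A spiky profile: idle at 5 MW, jumping to 50 MW for training bursts.
profile = [5] * 4 + [50] * 6 + [5] * 4 + [50] * 6 + [5] * 4

grid, flows = smooth_with_buffer(profile, max_ramp_mw_per_step=10)

# Unbuffered, the grid would see 45 MW swings in a single interval;
# buffered, it never changes by more than 10 MW per interval.
steps = [abs(b - a) for a, b in zip(grid, grid[1:])]
print(max(steps))                   # worst ramp seen by the grid: 10 MW
print(max(abs(f) for f in flows))   # peak power the buffer must deliver: 35 MW
```

The trade-off this sketch makes visible: the gentler the ramp limit imposed on the grid side, the larger the instantaneous power (and stored energy) the buffer must provide during each transition.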
Building for the unknown
There’s a problem, however: the first generation of “AI native” data centres is still being built. The facilities of tomorrow are still holes in the ground awaiting concrete, or metal skeletons. Facilities coming online today were designed over 12 months ago, when the challenges AI workloads pose were less well understood. Very few sites today are running 100% AI workloads, and it’s difficult to predict exactly which problems will arise when the rubber inevitably meets the road.
For now, many of the risks are still theoretical, and the next wave of facilities will test assumptions in real-world conditions. If the AI boom continues, and these power management issues aren’t resolved, it could present a major hurdle for the sector. Or it might be more akin to Y2K and never materialise into the doomsday scenario many engineers spent the late 90s dreading.
It’s worth noting that, while Y2K didn’t result in a digital apocalypse, the world was only spared disaster by the efforts of skilled engineers working very hard behind the scenes to fix the issue before the clocks ticked over into 2000. With the looming power problems of the AI boom, the data centre sector could be facing its own Y2K. Whether the rest of the world notices will be down to the steps taken today. Flexible electrical architectures specified now can reduce the cost of redesigns later.
A generational infrastructure challenge
The AI boom is often framed as a software revolution. Or a productivity revolution. Or as the unstoppable rise of the machines. Take your pick.
The reality is that no revolution will scale without a re-imagined electrical backbone. If meeting the demands of the AI buildout means building bigger, more complex facilities in remote locations without an established data centre industry, then the sector can’t expect the same supply chain, procurement, construction, and design techniques to succeed as they would on a more traditional project. The challenge is not only getting significantly more power into sites but also distributing that power within facilities in a way that accounts for new kinds of workloads without burning out switchgear, transformers, or a multi-million dollar rack of GPUs.
The next generation of data centre projects will be defined by their ability to absorb, manage and optimise unprecedented power densities. They will be defined by where they are built, how they are designed, and how quickly those designs are delivered.
Whether the industry experiences disruption or navigates these hurdles will depend on the decisions being made now in design offices, and on factory floors.