The promise of artificial intelligence (AI) is enlightenment. The pressure it places on infrastructure is far less elegant.
Across every layer of the data centre stack, AI is exposing structural limits – from cooling thresholds and power capacity to build timelines and failure modes. What many operators are now discovering is that legacy facility designs, even those only a few years old, are struggling to accommodate what AI-scale workloads demand.
This isn’t simply a matter of scale – it is a shift in shape. AI doesn’t distribute evenly; it lands hard, in dense blocks of compute that concentrate energy, heat and physical weight into single systems or racks. Traditional data hall layouts, airflow assumptions and power provisioning logic were not built for those conditions. The once-exceptional densities of 30kW or 40kW per rack are quickly becoming the baseline for graphics processing unit (GPU)-heavy deployments.
The consequences are significant. Facilities must now support greater thermal precision, faster provisioning and closer coordination across design and operations. And they must do so while maintaining resilience, efficiency and security.
Design Under Pressure
The architecture of the modern data centre is being rewritten in response to three intersecting forces. First, there is density – AI accelerators demand compact, high-power configurations that increase structural and thermal load on individual cabinets. Second, there is volatility – AI workloads spike unpredictably, requiring cooling and power systems that can track and respond in real time. Third, there is urgency – AI development cycles move fast, often leaving little room for phased infrastructure expansion.
In this environment, assumptions that once underpinned data centre design begin to erode. Air-only cooling no longer reaches critical components effectively, uninterruptible power supply (UPS) capacity can no longer be sized on the assumption of steady, linear load growth, and procurement lead times no longer match project delivery windows.
To adapt, operators are adopting strategies that prioritise speed, integration and visibility. Modular builds and factory-integrated systems are gaining traction – not for convenience, but for the reliability that controlled environments can offer. In parallel, greater emphasis is being placed on how cooling and power are architected together, rather than as separate functions.
Exploring the Physical Gap
There is a growing disconnect between the digital ambition of AI-led organisations and the physical readiness of their facilities. A rack might be specified to run the latest AI training cluster. The space around it, however, may not support the necessary airflow, load distribution or cable density. Minor mismatches in layout or containment can result in hot spots, inefficiencies or equipment degradation.
Operators are now approaching physical design through a different lens. They are evaluating structural tolerances, rebalancing containment zones, and planning for both current and future cooling scenarios. Liquid cooling, once a niche consideration, is becoming a near-term requirement. In many cases, it is being deployed alongside existing air systems to create hybrid environments that can handle peak loads without overhauling entire facilities.
What this requires is careful sequencing. Introducing liquid cooling means introducing new infrastructure: secondary loops, pump systems, monitoring and maintenance regimes. These elements must be designed with the same rigour as the electrical backbone. They must also be integrated into commissioning and telemetry from day one.
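To make “telemetry from day one” more concrete, the short sketch below shows one way commissioning checks for a secondary coolant loop could be expressed in software. It is a minimal illustration only: the sensor names, limits and readings are assumptions chosen for the example, not figures from any specific liquid cooling system.

```python
from dataclasses import dataclass

# Hypothetical commissioning limits for a secondary coolant loop.
# Illustrative values only, not vendor specifications.
LIMITS = {
    "supply_temp_c": (18.0, 32.0),   # facility-side supply temperature
    "return_temp_c": (25.0, 45.0),   # rack-side return temperature
    "flow_rate_lpm": (40.0, 120.0),  # loop flow rate, litres per minute
    "pump_pressure_bar": (1.0, 4.0),
}

@dataclass
class LoopReading:
    sensor: str
    value: float

def commissioning_check(readings: list[LoopReading]) -> list[str]:
    """Return a list of out-of-range findings for a set of loop readings."""
    findings = []
    for r in readings:
        low, high = LIMITS[r.sensor]
        if not (low <= r.value <= high):
            findings.append(f"{r.sensor}={r.value} outside {low}-{high}")
    return findings

if __name__ == "__main__":
    sample = [
        LoopReading("supply_temp_c", 21.5),
        LoopReading("return_temp_c", 47.2),   # deliberately out of range
        LoopReading("flow_rate_lpm", 85.0),
        LoopReading("pump_pressure_bar", 2.3),
    ]
    for finding in commissioning_check(sample):
        print("COMMISSIONING FAIL:", finding)
```

The point of the sketch is not the thresholds themselves but the discipline: the liquid loop gets the same machine-checkable acceptance criteria as the electrical plant, from the first day it is in service.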
Risk in the Seams
The more complex the system, the more attention must be paid to the seams. AI infrastructure often relies on a patchwork of new and existing technologies – from cooling and power to management software and physical access control. When these systems are not properly aligned, risk accumulates quietly.
Hybrid cooling loops that lack thermal synchronisation can create blind spots. Overlapping monitoring systems may provide fragmented data, hiding early signs of imbalance. Delays in commissioning or last-minute changes in hardware specification can introduce vulnerabilities that remain undetected until something fails.
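As a simple illustration of the kind of cross-checking that closes those blind spots, the sketch below compares two hypothetical monitoring feeds for the same racks: air-side inlet temperatures from a building management system and coolant return temperatures from a coolant distribution unit controller. The feed names, fields and the 45°C limit are assumptions made for the example.

```python
# A minimal sketch of cross-checking two monitoring feeds for the same racks.
# Feed names, fields and thresholds are hypothetical and purely illustrative.

air_side = {   # rack -> inlet air temperature (deg C) from the BMS
    "rack-01": 24.1,
    "rack-02": 31.8,
    "rack-03": 25.0,
}

liquid_side = {  # rack -> coolant return temperature (deg C) from the CDU controller
    "rack-01": 38.5,
    "rack-02": 52.0,
    "rack-03": None,  # sensor offline: a potential blind spot
}

MAX_RETURN_TEMP_C = 45.0

for rack in sorted(set(air_side) | set(liquid_side)):
    air = air_side.get(rack)
    liquid = liquid_side.get(rack)
    if air is None or liquid is None:
        print(f"{rack}: missing data from one feed, blind spot to investigate")
    elif liquid > MAX_RETURN_TEMP_C:
        print(f"{rack}: coolant return {liquid} C over limit while air inlet reads {air} C")
    else:
        print(f"{rack}: feeds consistent")
```

A rack that looks healthy in either feed alone can still be drifting out of balance; it is the comparison across systems that surfaces the early warning.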
Avoiding these scenarios requires joined-up design. From early-stage planning through to testing and operation, infrastructure must be treated as a whole. That includes the physical plant, the digital control layer and the operational processes that bind them.
Physical Security Under AI Conditions
As infrastructure becomes more specialised and high-value, the importance of physical security rises. AI racks often contain not only critical data but hardware that is financially and strategically valuable. Facilities are responding with enhanced perimeter control, real-time surveillance, and tighter access segmentation at the rack and room level.
More organisations are adopting role-based access tied to operational state. Maintenance windows, for example, may trigger temporary access privileges that expire after use. Integrated access and monitoring logs allow operators to correlate physical movement with system behaviour, helping to identify unauthorised activity or unexpected patterns.
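As a sketch of how access tied to operational state might look in logic, the example below creates a grant scoped to a maintenance window that lapses on its own. The badge identifiers, zones and window lengths are placeholders invented for the illustration; they do not describe any particular access control product.

```python
from datetime import datetime, timedelta, timezone

# A minimal sketch of maintenance-window access that expires automatically.
# Identifiers, zones and durations are hypothetical.

grants = {}  # (person, zone) -> expiry time

def open_maintenance_window(person: str, zone: str, hours: int = 4) -> None:
    """Grant temporary access to a zone for the duration of a maintenance window."""
    grants[(person, zone)] = datetime.now(timezone.utc) + timedelta(hours=hours)

def has_access(person: str, zone: str) -> bool:
    """Check whether a grant exists and has not yet expired."""
    expiry = grants.get((person, zone))
    return expiry is not None and datetime.now(timezone.utc) < expiry

open_maintenance_window("tech-042", "gpu-hall-2", hours=2)
print(has_access("tech-042", "gpu-hall-2"))  # True while the window is open
print(has_access("tech-042", "ups-room"))    # False: no grant for that zone
```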
In environments where automation and remote management are becoming standard, physical security must be designed to support low-touch operations, with intelligent systems able to flag anomalies and initiate response workflows without constant human oversight.
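The fragment below sketches that idea using the same hypothetical naming as above: badge events are compared against expected activity in the zone, and anything unexplained starts a response workflow automatically. The event fields and the start_response_workflow stand-in are assumptions for illustration, not a real integration.

```python
# A minimal sketch of low-touch anomaly handling: physical access events are
# correlated with system activity, and unexplained movement triggers a workflow.
# Event fields and the response function are hypothetical.

def start_response_workflow(event: dict) -> None:
    """Stand-in for whatever ticketing or alerting system is in use."""
    print("RESPONSE WORKFLOW:", event)

badge_events = [
    {"person": "tech-042", "zone": "gpu-hall-2", "authorised": True},
    {"person": "unknown-badge", "zone": "gpu-hall-2", "authorised": False},
]

# System-side activity recorded in the same period, keyed by zone.
system_changes = {"gpu-hall-2": ["rack-07 powered down for service"]}

for event in badge_events:
    explained = event["authorised"] and system_changes.get(event["zone"])
    if not explained:
        # No valid grant, or movement with no corresponding system activity.
        start_response_workflow(event)
```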
Infrastructure as an Adaptive System
The direction of travel is clear. Infrastructure must be able to evolve as quickly as the workloads it supports. This means designing for flexibility and for the full lifecycle. It means understanding where capacity is needed today, and how that might shift in six months. It means choosing platforms that support interoperability, rather than locking into closed systems.
The goal is not simply to survive the shift to AI-scale compute. It is to build a foundation that can keep up with whatever comes next – whether that is a new training model, a change in energy market conditions, or a new set of regulatory constraints.
Discover more at vertiv.com
