How AI Is Forcing Asia‑Pacific Data Centres to Become ‘AI Factories’ — Infrastructure Changes You Need to Know
- 26 October 2025
Why AI is reshaping data-centre design in Asia‑Pacific
AI isn’t a small upgrade to existing operations. From what I’ve seen talking with operators across the region (over coffee, on data-centre tours, in long technical calls), it’s a tectonic shift. Legacy halls built for general-purpose servers are gasping under high-density GPU racks that spit out heat like tiny furnaces and draw wildly variable power. Incremental retrofits help in a pinch, but they’re a bandage. The market is pivoting toward purpose-built, AI-optimised facilities, the kind of sites people now call “AI factories.”
How big is the change? Market scale and demand signals
The numbers are blunt: industry estimates put the AI data-centre market at roughly $236 billion in 2025, rising to nearly $934 billion by 2030, largely driven by GPU-centric workloads. In Asia-Pacific, add national digitalisation pushes, rapid 5G rollouts and a flood of generative AI apps, and you’ve got demand that isn’t incremental. It rewrites capacity planning, sustainability targets, and how operators handle peak shaving and real-time load balancing.
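Those endpoints imply an eye-watering growth rate. Here’s a minimal sanity-check sketch in Python, using nothing beyond the dollar figures quoted above:

```python
# Implied compound annual growth rate (CAGR) from the market estimates above.
start, end, years = 236e9, 934e9, 5  # USD, 2025 -> 2030

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~31.7% per year
```

Anything compounding at over 30% a year outruns a normal capacity-planning cycle, which is the practical point here.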
Key takeaway
Demand at this scale outruns retrofit cycles: plan AI capacity as purpose-built infrastructure, with cooling, power and sustainability designed for GPU-era densities from the start.
What are the main technical challenges?
It boils down to three gnarly problems: cooling, power delivery, and operational flexibility. Air cooling hits a brick wall as you push rack density. Power systems must be rethought to tolerate higher voltages and fast, unpredictable swings. And you can’t scale at hyperscaler speed with bespoke, monolithic builds. The practical answer? Rethink the stack from chip to grid.
Cooling: from air to hybrid liquid systems
Direct-to-chip liquid cooling, often paired with air handling in hybrid setups, is fast becoming the default for high-density pods. Why? Because it extracts heat where it originates (at the silicon) using much less energy and, importantly, less water if designed right. In practice that looks like coolant distribution units (CDUs) feeding cold plates at the card or chip level, while air pathways handle lower-density zones and service access. It’s a middle ground that keeps racks serviceable without baking technicians in server rooms. I’ve seen it twice this year alone, at sites in Singapore and Seoul, and the difference in thermal control is night and day.
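To make the “extract heat at the silicon” point concrete, here’s a back-of-envelope sketch of the coolant flow a direct-to-chip loop has to deliver. The 100 kW pod and 10 K supply/return temperature difference are illustrative assumptions, not vendor figures:

```python
# Back-of-envelope: coolant flow needed to carry heat off a high-density pod.
# Q = m_dot * c_p * dT  ->  m_dot = Q / (c_p * dT)
C_P_WATER = 4186  # specific heat of water, J/(kg*K)

def required_flow_lpm(heat_kw: float, delta_t_k: float) -> float:
    """Litres/minute of water-like coolant to absorb heat_kw at a delta_t_k rise."""
    m_dot = (heat_kw * 1000) / (C_P_WATER * delta_t_k)  # kg/s
    return m_dot * 60  # ~1 kg of water per litre

# A 100 kW pod with a 10 K supply/return temperature difference:
print(f"{required_flow_lpm(100, 10):.0f} L/min")  # ~143 L/min
```

Around 140 L/min of water handles a 100 kW pod; moving the same heat with air takes orders of magnitude more volumetric flow, which is exactly the wall air cooling hits at these rack densities.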
Power delivery: smarter, higher‑voltage architectures
AI workloads are spiky. One minute idle, the next running inferencing farms at full tilt. That requires smarter distribution: higher voltage rails to cut conversion losses, robust busway systems for rapid capacity changes, and intelligent controls that smooth demand in real time. In markets with flaky grids, these upgrades aren’t luxury — they’re survival tools. I recall an operator in the Philippines telling me their brownout mitigation strategy was the same thing that saved them when an unexpected grid event hit: a mix of local storage, DC distribution and fast transfer switching.
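As a sketch of what “smooth demand in real time” can mean, here’s a toy peak-shaving loop: a battery caps what the facility draws from the grid and recharges in the troughs. The load profile, grid cap and battery size are all hypothetical:

```python
# Toy peak-shaving loop: a battery caps grid draw at a threshold and
# recharges in the troughs. All inputs are hypothetical illustrations.

def peak_shave(load_kw, grid_cap_kw, batt_kwh, step_h=0.25, charge_kw=500):
    """Return grid draw per interval when a battery shaves peaks above grid_cap_kw."""
    soc = batt_kwh  # state of charge, start full (kWh)
    grid = []
    for load in load_kw:
        if load > grid_cap_kw:
            # discharge: cover as much of the spike as stored energy allows
            need_kwh = (load - grid_cap_kw) * step_h
            used_kwh = min(need_kwh, soc)
            soc -= used_kwh
            grid.append(load - used_kwh / step_h)
        else:
            # recharge using headroom below the cap, limited by charger and capacity
            room_kw = min(charge_kw, grid_cap_kw - load, (batt_kwh - soc) / step_h)
            soc += room_kw * step_h
            grid.append(load + room_kw)
    return grid

profile = [800, 2400, 2600, 900, 2500, 700]  # spiky inference load, kW per 15 min
print([round(g) for g in peak_shave(profile, grid_cap_kw=2000, batt_kwh=400)])
# -> [800, 2000, 2000, 1400, 2000, 1200]  (grid draw never exceeds the cap)
```

The point of the toy model: grid draw never exceeds the cap, so a weak or brownout-prone connection is sized for typical load rather than the worst-case inference spike.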
Operational flexibility: modular and prefabricated buildouts
Modularity is the practical lifeline. Factory-tested modules (pods, containerised racks, vertical stacks) let operators add capacity in predictable chunks. That matters when land is scarce, capital is constrained, or skilled labour is thin on the ground. Prefab modules can shave deployment time by up to 50% and dramatically lower onsite surprises. I’ve watched teams move from planning to live traffic in six months with prefab pods, compared with the 12–18 months that traditional builds demanded.
Designing an “AI factory” — what changes architecturally?
An AI factory isn’t just a new machine in an old hall. It’s holistic design: floorplans, coolant distribution, power topology, monitoring and maintenance all conceived together. You’ll see deeper integration from chip to grid. If you’re drawing a blueprint, expect:
- Hybrid liquid/air cooling designed into the floorplan, with CDUs and cold plates at rack level
- Higher-voltage or DC distribution, with busways sized for rapid capacity changes
- Factory-tested modular pods as the unit of expansion
- Local battery storage to smooth spiky GPU loads
- Monitoring and controls that span the whole chain, chip to grid
Practical example: a hyperscale campus might roll out 250 kW modular rack pods, each with direct-to-chip cooling, DC power shelves at rack level, and local lithium-ion storage to smooth peaks, all factory tested and dropped in as units. Faster to deploy. Easier to operate. More predictable performance. I’ve seen similar architectures validated in vendor testbeds, and they behave much the same way in the wild, with fewer surprises when they’re planned end to end.
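Scaling that example up, a quick sizing sketch shows how the modular arithmetic works: pods needed for a target IT load, plus storage to ride through a short grid event. The 20 MW target and 15-minute ride-through are hypothetical inputs:

```python
import math

# Rough campus sizing for the example above: 250 kW pods plus local storage.
# The 20 MW target and 15-minute ride-through are hypothetical inputs.
POD_KW = 250

def size_campus(target_it_mw: float, ride_through_min: float):
    """Whole pods for the target IT load, plus MWh to ride out a grid event."""
    pods = math.ceil(target_it_mw * 1000 / POD_KW)
    storage_mwh = target_it_mw * ride_through_min / 60
    return pods, storage_mwh

pods, mwh = size_campus(target_it_mw=20, ride_through_min=15)
print(f"{pods} pods, ~{mwh:.0f} MWh of storage")  # 80 pods, ~5 MWh
```

Because the pod is the unit of expansion, the answer is always a whole number of factory-tested modules, which is what keeps deployment predictable.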
DC power: efficient and grid‑friendly
DC architectures are making a comeback — and for good reasons. Fewer AC/DC conversions mean fewer losses. That alignment works well with renewables and battery storage, and in energy‑constrained markets (I’m thinking Vietnam, the Philippines) it makes a measurable difference to uptime and operating costs. Don’t expect everyone to flip overnight, but expect hybrid AC/DC topologies to appear more often — particularly where on‑site renewables and storage are part of the P&L calculation.
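The “fewer conversions” argument is simple multiplication: end-to-end efficiency is the product of every stage in the chain. A minimal comparison follows; the stage counts and efficiency numbers are illustrative assumptions, not measurements of any particular product:

```python
from math import prod

# Why fewer conversion stages matter: end-to-end efficiency is the product
# of every stage. These stage efficiencies are illustrative assumptions.
ac_chain = [0.96, 0.97, 0.96, 0.94]  # grid AC -> UPS -> PDU -> rack PSU
dc_chain = [0.97, 0.98]              # rectify once -> DC busway to rack shelf

print(f"AC path: {prod(ac_chain):.1%}")  # ~84.0%
print(f"DC path: {prod(dc_chain):.1%}")  # ~95.1%
```

At tens of megawatts, a gap like that is a material line item, which is why hybrid AC/DC topologies keep appearing wherever on-site renewables and storage are in the P&L.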
Sustainability: a central piece, not an afterthought
Regulators and increasingly sophisticated customers are forcing sustainability out of the margins. Operators are responding with hybrid energy systems, lithium‑ion banks, solar‑backed UPS designs and grid‑interactive UPS strategies. Cooling choice matters here too: modern liquid systems can cut PUE materially and often use less water than legacy evaporative or once‑through cooling setups. It’s not just good PR — it’s risk management and cost control.
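Since PUE carries a lot of weight in these discussions, it’s worth being explicit about the arithmetic: PUE is total facility energy divided by IT energy. The before/after figures below are illustrative, not from a real site:

```python
# PUE = total facility energy / IT equipment energy.
# The kWh figures below are illustrative, not from a real site.

def pue(total_kwh: float, it_kwh: float) -> float:
    return total_kwh / it_kwh

it = 10_000  # kWh of IT load over some window
print(f"Legacy air-cooled hall:   PUE {pue(16_000, it):.2f}")  # 1.60
print(f"Direct-to-chip retrofit:  PUE {pue(13_000, it):.2f}")  # 1.30
```

A move from 1.60 to 1.30 is a bit under 19%, the same ballpark as the hypothetical case study later in this piece.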
Practical sustainability moves
- Pair lithium-ion banks with grid-interactive UPS so the facility can smooth, and even support, the grid
- Back UPS systems with solar where rooftop or adjacent land allows
- Choose liquid cooling to cut PUE and reduce water draw versus evaporative or once-through systems
- Track PUE and water metrics as live operational KPIs, not annual-report afterthoughts
Regional realities: why Asia‑Pacific is unique
Asia‑Pacific is a patchwork: rapid capital flows and greenfield builds in some cities, constrained grids and land in others, and wildly varying regulation across borders. That forces flexibility. The same AI factory concept will look different in Jakarta versus Tokyo. Where land is cheap, go denser and spread out; where space is tight, vertical or containerised modules win. The nuance here matters — one‑size solutions fail fast.
Operational roadmap: stages to transition
You don’t flip a switch and get an AI factory overnight. In my experience, a pragmatic roadmap works best:
- Assess: baseline current workloads, thermal headroom and grid constraints
- Pilot: deploy a prefab pod or two with liquid cooling and local storage
- Migrate: move non-latency-sensitive AI workloads onto the new capacity first
- Expand: add modules in predictable increments as demand proves out
- Optimise: tune cooling, power controls and maintenance as operating data accumulates
Teams that do this avoid costly downtime and scale more predictably as workloads — and costs — evolve.
Business impact and risk management
Yes, building AI‑ready infrastructure is capital intensive. But the hidden cost of not doing it — repeated retrofits, degraded performance, unhappy customers and compliance headaches — is real and recurring. Early movers gain uptime advantages, lower long‑term energy bills, and stronger ESG credentials. Those are defensible competitive edges, not just buzzwords.
One hypothetical case study
Picture a mid-sized cloud provider in Indonesia suddenly swamped by local generative AI demand. Instead of gutting an old hall, they choose a phased AI factory path: two prefab pods with liquid cooling, DC rack power and a 10 MWh lithium bank. They migrate non-latency-sensitive workloads to the new pods first, expand every six months, and tune operations as they go. Result: deployment time down ~40%, PUE improved by ~18%, and much better availability during local grid events. It’s not magic; it’s planning and modular execution.
References and further reading
If you want to dig deeper, vendor white papers and recent market analyses on AI data-centre spending and direct-to-chip cooling are the practical next steps. Learn more in our guide to enterprise AI infrastructure. Look for technical briefs that include field test data; those are the ones that separate glossy slides from real engineering outcomes.
Final thoughts — preparing for an AI‑first future
AI is rewriting the data‑centre playbook across Asia‑Pacific. Thermal systems, power architectures and modular footprints all need rethinking. If you run a site, start with integrated planning and modular deployment. If you’re an enterprise customer, ask data‑centre partners about liquid cooling, DC options and renewables integration. Plan for AI — not for yesterday’s servers. Because if history is any guide, the vendors and operators who treat this as a strategic rebuild — not a short‑term patch — will be the ones still standing when the next wave hits.
Photo credit: İsmail Enes Ayhan