As the appetite for AI compute surges among start-ups and enterprises building Generative AI solutions and deploying them across their networks, data centre companies have begun to shift gears to meet the demand for high-end computing capabilities.

They are deploying high-density GPU infrastructure (up to 130 kW per rack) with advanced liquid and immersion cooling and sustainable, modular designs. Alive to the new demand, they are also fanning out to Tier-II and Tier-III cities to set up edge computing facilities, with a focus on cloud rebalancing and data sovereignty. Liquid cooling, reliance on renewable power sources and locating edge data centres close to users are among the other strategies they are working on.

According to Anarock, India’s data centre capacity stands at 1.4 GW at present, up from just 590 MW in 2019. It is set to grow to 2.0 GW by the end of 2025, with the potential to hit 2.6 GW by the end of 2027.

It said that the growth in AI-led demand was a critical driver, with AI technologies requiring vast amounts of data storage, computational power and connectivity. 

Sharad Agarwal, Chief Executive Officer of Sify Infinit Spaces, said the company has been receiving a large number of enquiries from start-ups and enterprises about access to AI computing capacity.

“The infrastructure should be ready to take care of these demands. One element of that is floor loading capacity, so that the same floor can take heavier racks,” he said.

“Higher power density means getting ready with the liquid cooling architecture. It also means more demand for power. Getting your sustainability-proof design is also a key element,” he said.

Manoj Paul, Managing Director, Equinix India, said that data centres in the country were transforming rapidly to meet the newer requirements. 

“Operators are deploying high-density computing environments with GPU and TPU clusters and embracing liquid cooling technologies to manage rising thermal loads efficiently. With the growing need for real-time AI at the edge, edge data centres are being considered closer to end-users,” he said.

To manage the high-power requirements, data centres are turning to renewable energy and AI-driven energy optimisation tools to improve operational efficiency. To meet the scale of AI adoption, providers are turning to modular, hyperscale-ready designs that can flex with demand.

“What once averaged 10–15 kW per rack is now escalating to 40–100 kW in AI-intensive environments, forcing operators to rethink infrastructure at every level. Traditional air cooling methods are no longer sufficient,” he said.

“Liquid cooling technologies, including direct-to-chip and immersion cooling, are becoming mainstream. Enterprises are shifting some of their workloads from cloud to on-premises infrastructure for cost optimisation, better control and higher security,” he said.

Vipin Jain, President, Datacentre Operations, CtrlS Datacenters, said operators are now prioritising “AI-optimised campuses, with dedicated high-density zones for GPU clusters and integrating advanced technologies like liquid and immersion cooling, modular power systems and low-latency network fabrics.”

“There’s a rapid acceleration in AI cluster deployments and hyperscale AI data centre parks. Demand for sovereign AI cloud infrastructure is growing, particularly from BFSI, healthcare and government sectors,” he said.

“Edge data centre deployments in Tier-2 and Tier-3 cities are gaining momentum to support low-latency AI, 5G, and IoT workloads,” he said.


Published on July 20, 2025