Asia-Pacific is racing into the AI era, with spending on AI and generative AI expected to soar to US$175 billion by 2028, growing more than fourfold in just five years.

But here is the uncomfortable truth: Despite the billions pouring in, adoption is not translating into results. A recent IDC study commissioned by Akamai, The Edge Evolution: Powering Success from Core to Edge, found that 31% of APAC enterprises already have generative AI in production and another 64% are actively testing applications. Yet IBM's 2025 CEO Study, CEO Decision-Making in the Age of AI, found that only 25% of AI initiatives have delivered their expected return on investment.

This gap between ambition and impact is concerning, all the more so because its source is not a lack of ideas or investment. It is the invisible foundation beneath them: infrastructure. As organisations deploy generative AI, they are discovering that the centralised cloud architectures that powered the last decade of digital transformation are fundamentally mismatched with AI's real-time demands.

APAC is in the middle of an innovation race, but for enterprises relying solely on cloud architectures, it is like entering a relay and insisting on running every leg with the same runner.

The infrastructure trap

Over the last decade or so, cloud has been the default answer to every digital challenge. Enterprises lifted and shifted workloads to hyperscalers, consolidated data, and centralised control. That model worked well for traditional business systems such as large-scale ERPs, centralised data warehouses, and media transcoding farms, where latency was a secondary concern to raw compute power.

But AI workloads are different. They do not just crunch data and store it. AI requires constant real-time processing, and this exposes three critical bottlenecks that are preventing APAC enterprises from realising generative AI’s full potential.

Performance bottlenecks: Customers expect experiences that feel instant — responses in under 100 milliseconds. That may not sound like much, but when millions of people are logging in at once, those fractions of a second add up. In industries such as finance or e-commerce, even a delay of 100 milliseconds can mean a missed trade or an abandoned cart.

As generative AI models scale to hundreds of billions of parameters, delivering sub-100 millisecond responses becomes impossible if computation happens thousands of kilometres away. The only way forward is smarter workload placement and bringing processing closer to where customers actually are.
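The physics behind this claim can be sketched with a back-of-envelope calculation. Light in optical fibre travels at roughly 200,000 km/s (about two-thirds of its speed in a vacuum), so distance alone imposes a floor on round-trip time before any inference work begins. The fibre speed and distances below are illustrative assumptions, not figures from the studies cited above:

```python
# Minimum round-trip time imposed by distance alone, ignoring routing,
# queueing, and processing delays that real networks add on top.
# Assumption: light in optical fibre covers ~200,000 km/s, i.e. 200 km per ms.

FIBRE_SPEED_KM_PER_MS = 200.0

def propagation_rtt_ms(distance_km: float) -> float:
    """Lower bound on round-trip time (ms) for a request travelling distance_km each way."""
    return 2 * distance_km / FIBRE_SPEED_KM_PER_MS

# Illustrative distances: a nearby edge node, a regional data centre,
# and a data centre thousands of kilometres away.
for distance in (100, 1_000, 7_000):
    print(f"{distance:>6} km -> {propagation_rtt_ms(distance):.0f} ms minimum RTT")
```

At 7,000 km the propagation delay alone consumes around 70 ms of a 100-millisecond budget, leaving almost nothing for model inference, which is why moving computation closer to users is the only way to stay within that budget.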

Regulatory complexity: Asia-Pacific is one of the most fragmented regulatory environments in the world. According to a 2025 IDC Infobrief, half of the region’s top 1,000 enterprises will struggle with divergent regulations and fast-changing compliance standards. This will result in a compliance trap, where well-funded AI projects stall because the centralised cloud cannot satisfy both performance needs and regulatory obligations at once. Enterprises across APAC are already adopting edge strategies to process and store sensitive data closer to where it is generated, reducing both compliance risk and latency.

Spiralling costs: Scaling AI workloads is expensive. As models grow larger and data demands multiply, cloud bills can quickly get out of control. Many enterprises discover too late that their ROI assumptions no longer hold, forcing them to rethink deployment strategies midstream. Some have responded by re-evaluating their cloud strategies to reduce dependency on single providers, achieving significant decreases in data-transfer costs and latency through more distributed infrastructure designs.

Rethinking cloud and edge 

This doesn’t mean abandoning the cloud altogether. Instead, enterprises need to reframe cloud and edge as teammates in a relay race, each with a distinct leg to run.

Cloud platforms excel at scale. Their strengths align with the power and flexibility of large data centres. When organisations need immense computational power, handle massive data volumes, or face unpredictable storage spikes, cloud delivers. Think of cloud as an endurance runner, built to shoulder the heavy lifting of training AI models where demands are massive but intermittent.

Edge platforms, on the other hand, are the nimble sprinters who shine in the final stretch, where speed and proximity matter most. By processing data closer to end users and devices, edge servers reduce latency, strengthen security, and naturally meet compliance needs. For ultra-low-latency applications and real-time analysis, the edge handles critical moments close to users, where every millisecond counts.

The winning strategy is clear: train in the cloud for scale, deploy at the edge for speed. Together, they form a hybrid relay team built to handle both the marathon and the sprint.

Crossing the finishing line 

The time for monolithic infrastructure strategies is over, and enterprises can no longer afford to sit on the sidelines. As leaders, we must ask ourselves: Are we building an AI architecture for the future, or are we running today's race on yesterday's track? The difference between leading and lagging organisations comes down to infrastructure agility: the ability to run the AI relay effectively, with cloud and edge each taking their leg.

Enterprises that delay modernising their infrastructure risk running the entire race with the same runner: slow, inefficient, and unable to respond in real time. This leaves them with AI initiatives that underperform, consume excessive resources, and fail to deliver measurable business outcomes.

The strategic imperative is clear: Organisations that act now can train their AI models in the cloud for scale while deploying workloads at the edge for speed, proximity, and compliance. This hybrid approach improves performance, reduces operational risk, and delivers the near-instant experiences customers expect. Those that cling to monolithic, centralised architectures risk falling behind competitors who are already executing the relay efficiently and capturing opportunities through faster, smarter AI deployment.

Beyond competitive positioning, the economic stakes are substantial. AI investments are costly, and delays in adopting hybrid strategies reduce ROI while inflating operational costs. Every quarter of hesitation widens the gap between leaders and laggards, making it harder to catch up as the pace of innovation accelerates.

Ultimately, the race is on. Enterprises must embrace the hybrid relay model today, training in the cloud and sprinting at the edge, to secure performance, resilience, and lasting competitive advantage. The organisations that execute this relay decisively will lead the AI era, while those that wait risk watching from the sidelines.