AROUND the world, governments and industries are moving quickly to secure leadership in artificial intelligence. For many countries, this momentum intersects with pressing demographic realities: aging populations and shrinking workforces, and with them an urgent need to reinvent productivity. We cannot afford to fall behind, and the rise of agentic AI promises to accelerate this transformation.
Unlike traditional AI models, agentic AI does not simply respond to queries; it reasons, plans and takes action across systems. For example, instead of merely answering a question about travel recommendations, an agentic system could book flights, update calendars, send reminders and adjust itineraries based on weather or delays — without being prompted for each step. This marks a shift from passive AI responses to proactive, collaborative systems that work alongside humans. The rise of agentic AI will require significantly more compute power, not just for single tasks or queries, but for extended workflows involving reasoning, planning and continuous adaptation.
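To make the distinction concrete, the loop below is a minimal Python sketch of that reason-plan-act cycle. Every function in it, from the planner to the booking, calendar and disruption-check calls, is a hypothetical placeholder rather than any particular product's API:

```python
# Minimal sketch of an agentic loop: the system plans, acts and adapts
# without a human prompting each step. All tool functions here are
# hypothetical placeholders, not a real travel or calendar API.

def plan_trip(request: str) -> list[str]:
    """A planner (typically an LLM call) decomposes the goal into steps."""
    return ["book_flight", "update_calendar", "send_reminder"]

def check_disruptions() -> bool:
    """Hypothetical check for weather delays or cancellations."""
    return False

TOOLS = {
    "book_flight": lambda: print("Flight booked"),
    "update_calendar": lambda: print("Calendar updated"),
    "send_reminder": lambda: print("Reminder sent"),
}

def run_agent(request: str) -> None:
    for step in plan_trip(request):            # reason and plan
        TOOLS[step]()                          # act across systems
    if check_disruptions():                    # continuously adapt
        run_agent("rebook around the delay")   # replan without a new prompt

run_agent("Plan my Manila-to-Singapore trip next week")
```

Each pass through that loop is another round of reasoning and tool use, which is precisely why extended agentic workflows consume far more compute than a single query.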
As the technology matures and adoption expands, the world is effectively adding billions of virtual users into the compute fabric. The question for every country, including the Philippines, is whether AI infrastructure is ready to support this scale and complexity.

AI is more than GPUs
High-performance graphics processing units often dominate AI discussions, especially for training and running large-scale models. However, central processing units are just as critical in powering AI systems behind the scenes, handling tasks such as data movement, memory management, thread coordination and the orchestration of GPU workloads.
In fact, many AI workloads — including language models with up to 13 billion parameters, image recognition, fraud detection and recommendation systems — can run efficiently on CPU-only servers, particularly when powered by high-performance CPUs such as the AMD EPYC 9005 Series processors.
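The pattern is simple in practice. Below is an illustrative sketch of CPU-only inference using the open-source Hugging Face transformers library; the small model named here is a stand-in, and a 13-billion-parameter model follows the same pattern on a server with sufficient memory:

```python
# Minimal sketch of CPU-only inference with Hugging Face transformers.
# The model name is illustrative; any similarly packaged open model
# works, subject to available memory. No GPU is required.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="gpt2",   # small stand-in; larger models follow the same pattern
    device=-1,      # -1 pins the pipeline to CPU
)

print(generator("Agentic AI will", max_new_tokens=20)[0]["generated_text"])
```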
As AI models evolve into more modular architectures, such as the mixture-of-experts systems popularized by DeepSeek and others, the need for smarter resource orchestration grows. CPUs must deliver high instructions-per-clock performance, fast input/output and the ability to manage many concurrent tasks with precision.
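To see why orchestration matters, consider a toy sketch of the gating idea behind mixture-of-experts: a lightweight router scores the experts and only a few run per token, so the system juggles many small concurrent tasks instead of one monolithic computation. The dimensions and expert count below are arbitrary, illustrative choices:

```python
# Toy mixture-of-experts routing: a gate scores each expert and only the
# top-k experts run, so most parameters stay idle for any given token.
# Shapes and expert count are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_experts, d = 8, 16
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]
gate = rng.standard_normal((d, n_experts))

def moe_forward(x: np.ndarray, top_k: int = 2) -> np.ndarray:
    scores = x @ gate                       # gate scores, one per expert
    top = np.argsort(scores)[-top_k:]       # pick the k best experts
    weights = np.exp(scores[top]) / np.exp(scores[top]).sum()  # softmax
    # Only the selected experts do any work; the rest are skipped.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(d)
print(moe_forward(token).shape)  # (16,): same width, a fraction of the compute
```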
Equally critical is connectivity, the “glue” that binds modern AI systems together. Advanced networking components, such as smart network interface controllers, help route data efficiently and securely between components, offloading traffic from GPUs and reducing latency. High-speed, low-latency interconnects ensure data flows seamlessly across systems, while scalable fabric ties nodes together into powerful distributed AI clusters.
In the age of agentic AI, heterogeneous system design becomes essential. AI infrastructure must go beyond raw compute and integrate CPUs, GPUs, networking and memory in a flexible and scalable way. Systems built this way can deliver the speed, coordination and throughput needed to support real-time interactions among billions of intelligent agents. As adoption scales, rack-level optimization — where compute, storage and networking are tightly co-designed — will be key to delivering the next wave of performance and efficiency.
Why openness matters in the AI race
As AI systems grow more complex and distributed, openness in software, hardware and systems design becomes a strategic imperative. Closed ecosystems risk vendor lock-in, limit flexibility and constrain innovation at a time when adaptability is critical to scaling AI.
This is why open software stacks such as AMD ROCm are essential. ROCm gives developers and researchers the freedom to build, optimize and deploy AI models across a wide range of environments. It supports popular frameworks such as PyTorch and TensorFlow, includes advanced tools for performance tuning and offers portability across hardware, all available as open source. In the context of the Philippines’ ambition to foster innovation across academia, startups and industry, open AI software offers broader accessibility, faster iteration and lower barriers to entry.
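That portability is visible at the code level. In ROCm builds of PyTorch, AMD GPUs are exposed through the familiar torch.cuda interface, so a device-agnostic script such as the sketch below runs unchanged on ROCm, CUDA or a plain CPU:

```python
# Minimal sketch of hardware-portable PyTorch code. ROCm builds of
# PyTorch expose AMD GPUs through the same torch.cuda interface, so the
# same script runs on ROCm, CUDA or CPU without modification.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(512, 512).to(device)
x = torch.randn(8, 512, device=device)

with torch.no_grad():
    y = model(x)
print(f"Ran on {device}, output shape {tuple(y.shape)}")
```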
Openness at the hardware and systems level is equally vital. As AI compute evolves toward large-scale, heterogeneous deployments, rack-scale architecture becomes foundational. Open standards such as the Open Compute Project support modular system design, while emerging collaborations such as the Ultra Accelerator Link (UALink) aim to create open, high-bandwidth connections between AI accelerators across servers. Meanwhile, the Ultra Ethernet Consortium (UEC) is defining next-generation networking standards purpose-built for AI, enabling low-latency, high-throughput data movement across distributed systems.
These open initiatives give cloud and data center operators the ability to build flexible, interoperable infrastructure that keeps pace with AI’s growth. For the Philippines, embracing an open ecosystem positions the country to benefit from global innovation while cultivating local differentiation. It enables governments and businesses to build infrastructure that is performant, energy-efficient and tailored to domestic needs, without being locked into proprietary limitations.
In the emerging era defined by multi-agent AI, openness is not just a philosophy; it is a prerequisite for scale, sovereignty and sustained leadership.
Looking ahead to 2026
As agentic AI reshapes how work is done, the focus must go beyond GPUs to encompass CPUs, high-speed interconnects and smart networking, all essential for orchestrating complex, real-time decisions at scale. Equally critical is an open ecosystem, with open software such as ROCm, industry standards for rack-scale design and collaborative efforts such as UALink and UEC enabling greater flexibility, faster innovation and interoperability from edge to cloud.
This is why AMD is advancing its vision with “Helios,” a next-generation rack-scale reference design for AI infrastructure to be released in 2026. It is designed to unify high-performance compute, open software and scalable architecture to meet the demands of agentic AI.
For the Philippines, building open, heterogeneous and scalable infrastructure is more than a technology choice; it is a strategic foundation for national competitiveness. As the country navigates rising automation needs and growing regional AI ambitions, future-ready AI infrastructure will be essential to unlocking sustainable growth, innovation and resilience.
Alexey Navolokin is the general manager for Asia-Pacific at AMD, a US-based semiconductor company that designs high-performance computing hardware used in personal computers, data centers, gaming consoles and embedded systems.