Supercomputers are the giants of the computing world. While a normal laptop might let you browse the web, type essays or play games, a supercomputer is designed to solve problems that are so large, complex, and calculation-heavy that they would take ordinary machines years to finish. Weather forecasting, simulating nuclear reactions, and modelling the early universe all depend on them.
How does a supercomputer work?
At the heart of a supercomputer is the idea that, instead of making one processor work extremely fast, it’s often better to use thousands or even millions of processors working at the same time. This is called parallel computing.
Each processor, or core, tackles a small part of the problem. When their solutions are combined, they add up to a complete answer more quickly than one processor could manage on its own.
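To make this concrete, here is a toy sketch in C using OpenMP, a parallel programming tool discussed later in this article: many cores add up slices of an array at the same time, and their partial sums are combined into one answer. The array and its contents are made up for illustration.

```c
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void) {
    static double data[N];
    double total = 0.0;

    /* Fill the array with some numbers to add up. */
    for (int i = 0; i < N; i++)
        data[i] = 1.0;

    /* Each core sums a slice of the array in parallel;
       the 'reduction' clause combines the partial sums. */
    #pragma omp parallel for reduction(+:total)
    for (int i = 0; i < N; i++)
        total += data[i];

    printf("sum = %f\n", total);
    return 0;
}
```

Compiled with OpenMP support (for example, gcc -fopenmp), the same loop runs across however many cores are available, which is the divide-and-combine idea in miniature.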
Modern supercomputers are not single machines in the everyday sense but a vast collection of components organised into layers:
(i) The processors are a supercomputer’s workers. A Central Processing Unit (CPU) is good at handling general tasks, while a Graphics Processing Unit (GPU) — originally designed for video games — is very good at handling repetitive mathematical operations, which are common in scientific simulations. Many supercomputers use both.
(ii) A node is a group of processors bundled together with memory — a package that resembles a small computer. A supercomputer might contain thousands of such nodes.
(iii) The nodes are connected to each other in a high-speed network in which data is transmitted much faster than through home internet connections.
(iv) Each node has its own memory for quick access to data. For larger amounts of information, supercomputers use storage systems that can hold petabytes (millions of gigabytes) of data, with specialised file systems that let thousands of nodes read and write information without chaos.
(v) All this activity produces enormous heat. Supercomputers often require bespoke cooling, including water pipes and refrigeration units; some systems are even immersed in special liquids. Without cooling, these machines would quickly destroy themselves.
(vi) Finally, running a supercomputer can require as much electricity as a small town. Efficient power distribution and utilisation are thus required.
What software do supercomputers use?
The software of a supercomputer controls how its processors work together. The operating system must schedule tasks across thousands of processors, manage memory, and handle communication. So engineers have developed parallel programming tools like MPI (Message Passing Interface), a library with which processes running on different nodes exchange messages, and OpenMP, which divides work among the cores within a single node. These tools tell each processor what to do and when to exchange information.
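For a flavour of what MPI code looks like, here is a minimal sketch in C, assuming a standard MPI installation (compiled with mpicc and run with at least two processes, e.g. mpirun -np 2): one process sends a number and another receives it. The value 42 is arbitrary.

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which process am I? */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many processes in total? */

    if (rank == 0) {
        /* Process 0 waits for a message from process 1. */
        int answer;
        MPI_Recv(&answer, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 0 of %d received %d from rank 1\n", size, answer);
    } else if (rank == 1) {
        /* Process 1 sends a number to process 0. */
        int answer = 42;
        MPI_Send(&answer, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```

Real scientific codes do the same thing at vastly larger scale, with thousands of processes exchanging boundary data between slices of a simulation.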
Load balancing is another key issue. Given both the need for speed and a supercomputer’s immense power consumption, it is wasteful for some processors to finish their work early and sit idle while others struggle with heavier tasks. So the software must also include algorithms that distribute tasks in ways that minimise such waste.
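One common strategy is dynamic scheduling, where idle processors pull the next chunk of work from a shared queue instead of being handed a fixed share up front. Here is a minimal sketch in C with OpenMP; the uneven work() function is made up precisely so that a fixed even split would leave some cores idle.

```c
#include <stdio.h>
#include <omp.h>

/* A made-up task whose cost grows with its index, so
   splitting the loop evenly would balance the load badly. */
static double work(int i) {
    double x = 0.0;
    for (int k = 0; k < i * 1000; k++)
        x += 1.0 / (k + 1.0);
    return x;
}

int main(void) {
    double total = 0.0;

    /* 'schedule(dynamic, 4)' hands out tasks four at a time,
       so cores that finish early immediately grab more work. */
    #pragma omp parallel for schedule(dynamic, 4) reduction(+:total)
    for (int i = 0; i < 2000; i++)
        total += work(i);

    printf("total = %f\n", total);
    return 0;
}
```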
Supercomputer performance is usually measured in flops, or floating-point operations per second. A laptop might perform billions of flops. Today’s top supercomputers operate in the range of exaflops, performing a quintillion (10¹⁸) or more operations every second. In other words, if every person on Earth performed one calculation per second, they would together need more than four years to do what a single exaflop machine does in one second.
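A back-of-the-envelope check of that comparison, assuming a world population of roughly eight billion:

```latex
\frac{10^{18}\ \text{operations}}
     {8 \times 10^{9}\ \text{people} \times 1\ \text{operation/s}}
= 1.25 \times 10^{8}\ \text{seconds} \approx 4\ \text{years}
```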
What does a supercomputer look like?
Interacting with a supercomputer is very different from using a personal computer. Because these machines are designed for large teams of researchers, they’re typically housed in special facilities and accessed remotely.
For example, a scientist can log in from their laptop or workstation using a secure network connection, often through a terminal window rather than a graphical interface. Once connected, the user won’t directly ‘operate’ the machine with mouse clicks. Instead, they prepare job scripts, which are text files that describe what program to run, how much computing power it will need, and how long it should run.
These jobs are submitted to a scheduler, which is a computer programme that manages the queue of tasks from many users and assigns them to available nodes in a supercomputer.
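As an illustration, here is a minimal sketch of such a job script, assuming the widely used SLURM scheduler; the job name, program name (my_simulation), and resource figures are hypothetical.

```bash
#!/bin/bash
#SBATCH --job-name=my_simulation   # label shown in the queue
#SBATCH --nodes=4                  # how many nodes to reserve
#SBATCH --ntasks-per-node=32       # processes to run on each node
#SBATCH --time=02:00:00            # maximum run time (2 hours)
#SBATCH --output=result_%j.log     # file for the job's output

# Launch the program across all the reserved processors.
srun ./my_simulation input_data.txt
```

The user submits this file with a single command, and the scheduler decides when and where on the machine the job actually runs.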
The actual computation might take minutes or even days depending on the size of the problem. When a job finishes, the output — whether numbers, images, simulation data or something else — is stored in the supercomputer’s file system. The user can download this data to their own computer for analysis and visualisation.
In this way, the supercomputer is less like a giant laptop and more like a telescope or a particle accelerator.
Does India have supercomputers?
India’s supercomputing effort began in the late 1980s when Western countries refused to export high-end machines. This spurred the Centre for Development of Advanced Computing (C-DAC), founded in 1988, to develop indigenous systems. The PARAM series, starting with PARAM 8000 in 1991, was among the first India-built supercomputers.
The Indian government launched the National Supercomputing Mission (NSM) in 2015 as a joint project of the Department of Science and Technology (DST) and the Ministry of Electronics and IT (MeitY), implemented by C-DAC and the Indian Institute of Science (IISc), Bengaluru. Its goal is to build a network of 70+ high-performance computing facilities across India, with capacities ranging from teraflops to petaflops, and to develop indigenous hardware and software.
Supercomputers have also been installed at many academic and research institutions. India’s fastest supercomputer at present is AIRAWAT-PSAI at C-DAC in Pune; it has ranked within the top 100 of the TOP500 list of the world’s fastest supercomputers. Other PARAM-series systems are located at IITs, IISERs, IISc, and central laboratories across India. The Indian Institute of Tropical Meteorology, also in Pune, hosts Pratyush, a supercomputer used for weather and climate modelling. The National Centre for Medium Range Weather Forecasting in Noida has Mihir for the same purpose.
C-DAC remains India’s principal designer and integrator of supercomputers. It collaborates with the Indian Institute of Science, IITs, and private-sector vendors for hardware. Of late, there has been greater emphasis on indigenously developed hardware under the NSM, such as the Rudra server platform and the AUM high-performance computing processor.
Aside from weather, especially monsoon, and climate modelling, researchers in India use supercomputers for long-term modelling of the Indian Ocean and the Himalayas; for molecular dynamics, drug discovery, and nanotechnology; to simulate black holes, gravitational waves, galactic structures, and defence scenarios; and to train cutting-edge artificial intelligence models.
What does the future look like?
Future supercomputers may look different. Quantum computers, which use the principles of quantum mechanics, promise a fundamentally different way of handling certain types of problems, potentially with much lower hardware and energy demands.
At the same time, exascale computing, which is already beginning to arrive, is pushing the limits of what classical machines can do. For example, on September 5, a machine called JUPITER in Germany became Europe’s first exascale supercomputer. A statement from the European Commission said it’s powered entirely by renewable energy.
There are also new designs inspired by the human brain, called neuromorphic computing, in which processing and memory sit together on the same chip, potentially yielding significant gains in energy efficiency and speed.
That said, all these machines will continue to rely on the same fundamental concept: parallel computing.