At an ever-increasing rate, artificial intelligence (AI) and automation are revolutionising operational technology (OT) across Australia’s most critical infrastructure sectors, writes Les Williamson, Regional Director Australia and New Zealand, Check Point Software Technologies

From electricity grids and water treatment plants to advanced manufacturing, AI-driven systems are enabling unprecedented levels of efficiency, predictive capability and autonomous decision-making. However, amid the promise of progress lies the dangerous illusion that more automation and AI necessarily equate to more control.

Autonomy ascending

Across the country, AI is quietly reshaping how infrastructure is managed. In electricity grids, smart systems are already balancing voltage and frequency in real time.

Meanwhile, predictive maintenance algorithms in manufacturing plants are forecasting equipment failures days in advance, using everything from thermal sensors to vibration analysis. Water treatment facilities are also relying on machine learning models to dynamically adjust chemical dosages based on live sensor data.
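
To make the idea concrete, here is a deliberately minimal sketch of such an early-warning check – a rolling z-score over a window of vibration readings. All names and thresholds are illustrative; production systems use learned models fusing many sensor channels, not a single statistical rule:

```python
import statistics
from collections import deque

def vibration_alert(readings, window=50, threshold=3.0):
    """Flag samples deviating sharply from recent history (rolling z-score).

    A simplified stand-in for the learned predictive-maintenance models
    described above, which fuse thermal, acoustic and vibration channels
    rather than applying a single statistical rule.
    """
    history = deque(maxlen=window)
    alerts = []
    for i, value in enumerate(readings):
        if len(history) == window:
            mean = statistics.fmean(history)
            spread = statistics.pstdev(history) or 1e-9
            if abs(value - mean) / spread > threshold:
                alerts.append((i, value))   # candidate early-failure signal
        history.append(value)
    return alerts
```

It is exactly when transparent rules like this are replaced by learned, adaptive models that the opacity problem begins.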

These capabilities represent the vanguard of autonomous industrial operations. But they also come with a hidden cost: complexity. With each new layer of automation – every sensor, algorithm, networked control system – the web of dependencies becomes more tangled. Decisions are increasingly made by systems whose inner workings are opaque even to their designers.

The AI models at the heart of these systems are often so-called ‘black boxes’, making decisions based on vast troves of data with little transparency. Worse, they are adaptive, learning and evolving in real time. This means the decision-making logic is not only complex but constantly changing, making audits, explanations and troubleshooting far more difficult.
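
The audit problem is easy to demonstrate. In the sketch below – scikit-learn's SGDClassifier on synthetic data, standing in for an adaptive OT model – a single incremental update visibly shifts the decision boundary, so an explanation recorded before the update no longer describes the system's behaviour:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(random_state=0)

# Initial fit on one batch of synthetic 'sensor' features.
X0 = rng.normal(size=(200, 3))
y0 = (X0[:, 0] > 0).astype(int)
model.partial_fit(X0, y0, classes=np.array([0, 1]))
before = model.coef_.copy()

# One incremental update on newer data shifts the decision boundary.
X1 = rng.normal(loc=0.5, size=(200, 3))
y1 = (X1[:, 1] > 0.5).astype(int)
model.partial_fit(X1, y1)

print("coefficient drift:", np.round(model.coef_ - before, 3))
# Any audit trail therefore has to record *when* a decision was made,
# not just which model made it.
```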

Converging risks in OT environments

Traditionally, OT systems in sectors like power, water and transport were air-gapped – intentionally isolated from wider networks – and secure largely by virtue of that isolation. But the rise of cloud computing, smart sensors and always-on connectivity has seen the boundaries between IT and OT blur.

The benefits – faster response times, remote monitoring, data-driven optimisation – are undeniable. But so too are the risks.

In Australia’s energy sector, for example, systems that once required on-site technicians can now be managed remotely using smart grids. Yet these grids, when interconnected and autonomous, can suffer cascading failures. A single miscalculated input can ripple through the system, causing widespread blackouts.

Smart traffic systems are another example. While AI-based controls optimise traffic flow in real time, they’re also vulnerable to manipulation. A corrupted sensor feed could disrupt an entire city’s road network.

The paradox of control

The paradox of autonomy is stark. On one hand, autonomous systems reduce the need for human oversight, enabling faster, more efficient decision-making. On the other, they reduce our ability to understand, influence or even predict those decisions.

This, in turn, creates a dangerous dynamic: systems that operate independently while their human operators lose visibility into how and why choices are being made.

Moreover, many safety mechanisms, such as fail-safes and override systems, are built on outdated assumptions. These static rules struggle to cope with the fluid, adaptive nature of modern AI. And when systems fail, they often do so silently, until the consequences are felt in the real world.

When autonomy becomes a threat

Perhaps most alarming is the rise of what some experts call ‘threat autonomy’. This new class of risks arises when autonomous systems are manipulated not by brute force, but by subtle exploitation of their logic. An attacker doesn’t need to hack a system – they can simply poison its inputs.

Imagine a scenario where AI-powered systems controlling a city’s water network are fed slightly skewed sensor data. The AI might overcompensate, adding incorrect chemical doses to the supply.

Or in a smart factory, attackers might introduce subtle variations in input data, prompting the AI to degrade product quality in pursuit of flawed efficiency goals. In a hyper-connected system, these distortions can trigger cascading failures across entire industrial ecosystems.
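
A toy simulation shows how little this takes. In the hypothetical dosing loop below (all constants illustrative), an attacker subtracts a small, constant bias from each sensor reading. Every reported value stays comfortably inside the static fail-safe bounds, so no alarm ever fires – yet the controller steadily overdoses, failing silently in exactly the sense described earlier:

```python
TARGET_PPM = 2.0          # desired chemical concentration (illustrative)
SANE_RANGE = (0.5, 4.0)   # static fail-safe bounds on reported readings
GAIN = 0.8                # proportional dosing gain

def dose(reported_ppm):
    """Return chemical to add; reject only wildly implausible readings."""
    lo, hi = SANE_RANGE
    if not lo <= reported_ppm <= hi:
        raise RuntimeError("sensor fault - operator intervention")
    return max(0.0, GAIN * (TARGET_PPM - reported_ppm))

true_ppm = 2.0   # held fixed for clarity; real chemistry would respond
excess = 0.0
for _ in range(100):
    reported = true_ppm - 0.15   # attacker's subtle, constant bias
    excess += dose(reported) - dose(true_ppm)
    # Every 'reported' value sat inside SANE_RANGE, so the static
    # fail-safe never fired - the failure is silent.
print(f"cumulative excess dosing: {excess:.1f} units")
```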

Rethinking oversight and accountability

To mitigate these risks, Australian organisations must rethink how control is defined and exercised in autonomous systems. First and foremost is the need for explainability. If a machine decides to shut down a turbine or reroute power in the grid, decision-makers need to know why. Incorporating explainable AI (XAI) frameworks is essential for transparency, compliance and trust. Indeed, preventative, Zero Trust hybrid OT architectures should form the foundation of any rethink of risk mitigation for autonomous operations.
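
As a sketch of what explainability can look like in practice, attribution libraries such as the open-source shap package can report how much each input pushed a model toward a given decision. The model and data below are synthetic stand-ins, not a real OT controller:

```python
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for an OT decision model, e.g. 'trip the turbine or not'.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Attribute a single decision to its input features.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])
# 'contributions' shows, per feature, how strongly each input pushed the
# model toward or away from the action - the 'why' an operator or auditor
# needs when a turbine is shut down or power is rerouted.
print(contributions)
```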

Second, CISOs should invest in AI-focused red teams that simulate cyber-physical attacks and identify weaknesses in decision logic. This goes beyond conventional cybersecurity testing, probing how AI-driven systems might respond to deceptive inputs or unforeseen scenarios.
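
One of the simplest probes such a team can run is an input-fuzzing check: perturb a decision point within plausible sensor noise and measure how often the model's output flips. A hypothetical, model-agnostic harness (assuming a scikit-learn-style predict method) might look like this:

```python
import numpy as np

def decision_flip_rate(model, x, noise_scale=0.02, trials=1000, seed=0):
    """Fraction of small, plausible perturbations that change the decision.

    A crude stand-in for AI red-teaming: a high flip rate near an
    operating point suggests the control logic can be steered by
    sensor-level manipulation.
    """
    rng = np.random.default_rng(seed)
    baseline = model.predict(x.reshape(1, -1))[0]
    perturbed = x + rng.normal(scale=noise_scale, size=(trials, x.size))
    return (model.predict(perturbed) != baseline).mean()
```

A flip rate near zero suggests a stable operating point; anything approaching 0.5 marks a knife-edge decision that an adversary could tip.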

Third, retaining a human-in-the-loop (HITL) capability is non-negotiable. Even as automation increases, human oversight must remain embedded in high-stakes decision-making.
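
Embedded oversight can be enforced in code rather than left to procedure. In the minimal, hypothetical policy gate below (the impact scoring and threshold are illustrative), any action above a defined impact score is queued for an operator instead of executed autonomously:

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    impact: float  # 0.0 (routine) .. 1.0 (safety-critical), illustrative

HITL_THRESHOLD = 0.7   # policy value; would come from a risk assessment

def execute(action: ProposedAction, operator_approved: bool = False) -> str:
    """Run low-impact actions autonomously; gate high-impact ones."""
    if action.impact >= HITL_THRESHOLD and not operator_approved:
        return f"QUEUED for operator review: {action.description}"
    return f"EXECUTED: {action.description}"

print(execute(ProposedAction("re-balance feeder load", 0.2)))
print(execute(ProposedAction("shut down turbine 3", 0.9)))
```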

Businesses must resist the seduction of the control illusion. Visibility does not equal authority, speed does not guarantee safety, and automation does not absolve responsibility.

For Australia’s industrial sector, now is the time to interrogate every assumption about control, resilience and risk in the autonomous era. Because in this new world, true control isn’t about issuing commands – it’s about understanding and securing the intent behind them.