
We instinctively reach for new tools when cyber risk rises. Yet the evidence repeatedly shows that the decisive battleground is not IT systems but the humans using them. The 2025 Verizon Data Breach Investigations Report finds that nearly 60% of breaches involve the human element: manipulation, mistakes, or misuse. Technology is necessary, but it isn’t sufficient if the person behind the keyboard can still be tricked.

The gap between “what people know” and “what people do under pressure” is widening because AI has supercharged social engineering. Flawless emails, credible voices, and convincing video calls now arrive at scale, on demand. In Hong Kong, a finance worker joined a video conference where the “CFO” and colleagues were deepfaked; the result was a transfer of more than US$25 million. The criminals didn’t defeat encryption; they exploited routine human behaviour.

If awareness training alone could fix this, it would have done so by now. Most organisations already run simulations and annual modules. The problem is conversion: training improves knowledge, but breaches are driven by behaviour in the moment. That is where human risk analytics can make a difference.

Human risk analytics in the context of security

In security, human risk analytics is the continuous analysis of how people interact with communications and systems, and the use of those patterns to predict and reduce risk. Think of it as moving from “did the user pass an annual phishing test?” to “how does this person typically act, and is today different in a risky way?”

Practical examples include:

Detecting employees who repeatedly click on high-risk links or respond unusually quickly to urgent requests.

Flagging anomalies such as sudden new payment instructions, messages sent outside typical working hours, or atypical communication routes.

Correlating identity signals like impossible travel or device mismatches with risky user actions.

On their own, these are clues. Combined, they create a risk picture per person and per message. The outcome is not a punitive label but a just-in-time intervention calibrated to the situation.
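
To make that concrete, here is a minimal sketch of how per-message signals might be combined into a score. The signal names, weights, and threshold are illustrative assumptions, not any vendor’s actual model:

```python
from dataclasses import dataclass, field

# Illustrative weights; a production system would learn these from
# historical incident data rather than hard-coding them.
WEIGHTS = {
    "new_payee": 0.30,         # payment details never seen before
    "off_hours": 0.15,         # message outside the sender's usual window
    "atypical_route": 0.20,    # first-time or unusual communication path
    "identity_anomaly": 0.25,  # e.g. impossible travel, device mismatch
    "fast_responder": 0.10,    # recipient historically replies to seniors fast
}

HIGH_RISK_THRESHOLD = 0.5  # assumed cut-off for triggering an intervention


@dataclass
class MessageContext:
    signals: dict = field(default_factory=dict)  # signal name -> fired (bool)


def risk_score(ctx: MessageContext) -> float:
    """Sum the weights of every signal that fired, capped at 1.0."""
    score = sum(w for name, w in WEIGHTS.items() if ctx.signals.get(name))
    return min(score, 1.0)


def is_high_risk(ctx: MessageContext) -> bool:
    return risk_score(ctx) >= HIGH_RISK_THRESHOLD


# A lone off-hours message scores 0.15 and passes; the deepfake-CEO pattern
# (new payee + off hours + first-time route) scores 0.65 and is flagged.
ceo_fraud = MessageContext(signals={"new_payee": True, "off_hours": True,
                                    "atypical_route": True})
assert is_high_risk(ceo_fraud)
```

The design choice mirrors the point above: no single clue crosses the threshold on its own; it is the combination that does.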

How does this work in practice?

A junior finance analyst receives a message from their “CEO” approving an urgent supplier payment. Human risk analytics notes the following: the supplier bank details are new; the “CEO” has never messaged this analyst directly about payments; the request arrives outside normal business hours; and the analyst’s historical pattern shows rapid responses to senior leaders. The message is auto-flagged as high risk.

Before the analyst can act, two things happen:

Friction is added in the right place. The payment is temporarily held; the analyst is prompted to verify through an independent channel (a preregistered number or secure chat); and step-up authentication is required to proceed.

A contextual nudge appears. A short, onscreen reminder flags the unusual circumstances: a new payee, a message outside office hours, and a first-time contact from the “CEO.” The analyst is encouraged to verify before proceeding.

If the analyst reports the message, that behaviour is reinforced; if they ignore the prompt, the system increases scrutiny for similar requests in the future. The loop closes when leaders review leading indicators: reduced time to report, fewer high-risk clicks among previously at-risk cohorts, and an overall decline in near misses. This is behaviour change you can measure, not training completion rates you can only audit.
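
Expressed as code, that feedback loop might look like the toy sketch below; the per-user “scrutiny” scalar and the adjustment step sizes are assumptions made for illustration:

```python
from enum import Enum, auto


class Action(Enum):
    RELEASE = auto()
    HOLD_AND_VERIFY = auto()  # hold payment, prompt out-of-band verification


BASE_THRESHOLD = 0.5  # assumed intervention cut-off


def intervene(message_score: float, scrutiny: float) -> Action:
    """Add friction when the message score, amplified by the user's
    accumulated scrutiny, crosses the threshold."""
    if message_score * (1.0 + scrutiny) >= BASE_THRESHOLD:
        return Action.HOLD_AND_VERIFY
    return Action.RELEASE


def update_scrutiny(scrutiny: float, reported: bool) -> float:
    """Close the loop: reporting a flagged message is reinforced by easing
    scrutiny; ignoring the prompt raises scrutiny for similar requests."""
    if reported:
        return max(0.0, scrutiny - 0.1)
    return min(1.0, scrutiny + 0.2)


# An analyst who ignored a previous prompt (scrutiny 0.2) now trips the
# hold on a 0.45 message that would otherwise have been released.
assert intervene(0.45, 0.0) is Action.RELEASE
assert intervene(0.45, 0.2) is Action.HOLD_AND_VERIFY
```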

Five ways to make human risk analytics stick in the enterprise

Across the Asia-Pacific region, five principles consistently separate successful programs from box-ticking exercises:

Instrument the human layer first. Start where attackers start: inbound communications and workflow tools. Aggregate message risk, user behaviour, and identity context to see and score real-world exposure.

Personalise risk without stigmatising people. Different roles face different pressures. A regional sales lead on the road isn’t a “repeat clicker”; they’re context-challenged. Tailor prompts and friction to the job, not just the individual’s history.

Make interventions microscopic and immediate. The smaller the nudge and the closer to the risky moment, the better the conversion. Replace annual modules with 20-second, in-flow cues attached to real messages.

Close the loop with outcomes, not effort. Track behaviours that predict incidents: time to report, verification rate for payment changes, percentage of high-risk messages auto-quarantined before a user sees them, and repeat-risk reduction over 30/60/90 days (a sketch of how such indicators might be computed follows this list).

Build trust by design. Limit who can see user-level metrics; mask personal data where possible; explain to employees what is being measured and why. People comply with systems they perceive as fair.
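
To ground the measurement point, here is a sketch of how those leading indicators could be computed from an event log. The field names (`delivered_at`, `reported_at`, `verified_out_of_band`, and so on) are assumptions about the logging schema, not a standard:

```python
from datetime import timedelta
from statistics import median
from typing import Optional


def median_time_to_report(events: list[dict]) -> Optional[timedelta]:
    """Median gap between a risky message arriving and being reported."""
    gaps = [e["reported_at"] - e["delivered_at"]
            for e in events if e.get("reported_at")]
    return median(gaps) if gaps else None


def verification_rate(payment_changes: list[dict]) -> float:
    """Share of payment-detail changes verified out-of-band before release."""
    if not payment_changes:
        return 0.0
    verified = sum(1 for p in payment_changes if p.get("verified_out_of_band"))
    return verified / len(payment_changes)


def auto_quarantine_rate(messages: list[dict], threshold: float = 0.5) -> float:
    """Share of high-risk messages quarantined before any user saw them."""
    high_risk = [m for m in messages if m["risk_score"] >= threshold]
    if not high_risk:
        return 0.0
    return sum(1 for m in high_risk if m.get("quarantined")) / len(high_risk)
```

Trending these figures per cohort over 30/60/90 days yields the repeat-risk reduction the list above calls for, measured in behaviour rather than training completions.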

Ultimately, this is not about replacing technology; it’s about orchestrating technology around people. Controls still matter — email authentication, sandboxing, identity security, and the like — but the decisive layer is whether a rushed employee at 6:43 PM pauses before releasing a million-dollar payment because your system put the right speed bump in the right place.

Attackers already optimise for human behaviour. Defenders must do the same. Organisations that succeed will treat human risk analytics as a control that turns messy, everyday actions into measurable, improvable security outcomes. In an era of AI-generated deception, that is what will decide your next breach.
