Researchers have found that a single recorded blue whale call can be used to train an AI system that accurately detects the same song across vast and long-running ocean recordings.

The study turns rare, hard-to-capture animal sounds into a scalable way to search decades of archived data that would otherwise remain largely unused.

A hidden archive

Across decades of underwater recordings, blue whale songs appear as repeating acoustic signals embedded within vast, continuous streams of ocean noise.

A team led by Ben Jancovich, a PhD candidate at the University of New South Wales (UNSW), reshaped one call into many realistic versions.

The results showed that sparse evidence could still train a detector.

Jancovich’s team built software that flags target sounds, so old audio could reveal whale calls missed by earlier searches.
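
The study's detector is a trained neural network, but the core task of flagging a known sound in long audio can be illustrated with a simple normalized cross-correlation (a matched filter). This sketch is a stand-in for the idea, not the team's code; all names and values are invented for the demo.

```python
import numpy as np

def flag_matches(recording, template, threshold=0.8):
    """Slide a call template across a long recording and flag windows
    whose normalized cross-correlation exceeds the threshold.
    (Illustrative matched filter, not the study's neural detector.)"""
    t = (template - template.mean()) / (template.std() + 1e-12)
    n = len(t)
    hits = []
    for start in range(len(recording) - n + 1):
        w = recording[start:start + n]
        w = (w - w.mean()) / (w.std() + 1e-12)
        score = float(w @ t) / n          # correlation in [-1, 1]
        if score > threshold:
            hits.append((start, score))
    return hits

# Toy demo: bury a synthetic chirp "call" in noise, then recover it.
rng = np.random.default_rng(0)
n = np.arange(200)
call = np.sin(2 * np.pi * (0.02 + 0.0002 * n) * n)   # rising chirp
audio = 0.1 * rng.standard_normal(2000)
audio[700:900] += call                                # hide the call
detections = flag_matches(audio, call)
```

In practice a neural detector tolerates far more distortion than a matched filter, which is exactly why the team trained one, but the input/output contract is the same: long audio in, time-stamped flags out.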

That approach points toward a bigger problem: rare animals often leave evidence long before people can interpret it.

One single call can train AI

Instead of collecting thousands of examples, the researchers copied one clean call and altered it in controlled ways.

Those edits relied on data augmentation: generating altered copies of a training example, in this case by changing its pitch, length, and background noise.

Altered copies taught the computer what natural variation and ocean distortion could do to the same basic song.

“The surprising outcome is that a relatively simple data augmentation process enables really good performance from that one single training example,” said Jancovich.
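
A minimal sketch of that kind of augmentation, assuming the call is a raw audio waveform held in a NumPy array (the specific transforms and parameter ranges below are generic stand-ins, not the study's exact pipeline):

```python
import numpy as np

def stretch(call, factor):
    """Change duration (and, with naive resampling, pitch) by
    linearly resampling the waveform."""
    idx = np.linspace(0, len(call) - 1, int(len(call) * factor))
    return np.interp(idx, np.arange(len(call)), call)

def add_noise(call, snr_db, rng):
    """Mix in Gaussian noise at a chosen signal-to-noise ratio."""
    sig_power = np.mean(call ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))
    return call + rng.normal(0.0, np.sqrt(noise_power), len(call))

def augment(call, n_copies=100, seed=0):
    """Turn one clean call into many varied training examples."""
    rng = np.random.default_rng(seed)
    copies = []
    for _ in range(n_copies):
        c = stretch(call, rng.uniform(0.8, 1.2))    # vary length/pitch
        c = add_noise(c, rng.uniform(0.0, 15.0), rng)  # vary noise level
        copies.append(c)
    return copies

one_call = np.sin(2 * np.pi * 0.05 * np.arange(400))  # stand-in recording
training_set = augment(one_call)
```

Each copy is a plausible version of the same song as it might arrive through a noisy, distorting ocean, which is what lets a single clean recording stand in for a large labelled dataset.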

Blue whales sing highly consistent songs

Blue whales were heavily hunted in the 20th century and their populations crashed. Some groups are slowly recovering, but they are still endangered.

NOAA Fisheries, the U.S. agency for marine resources, reports that vessel strikes, entanglement, and ocean noise remain ongoing threats to blue whales.

Unlike many animals, groups of blue whales sing highly consistent songs, so one population often repeats one signature pattern.

That sameness gave the model a narrow target, because it only had to recognize a call with stable sound features.

For conservation work, reliable listening can help researchers track where whales travel without needing constant sightings.

Training without waste

Training large computer models can burn time, money, and electricity when every pattern starts from scratch.

Here, transfer learning, the practice of reusing a model trained for one task on another, let the team adapt software originally built for human speech.

A neural network, a system built from layers that learn patterns, then compared whale-call shapes against the altered examples.

The finished system trained on a standard laptop within hours, which keeps the method within reach for smaller research groups.
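
The essence of transfer learning is the frozen-backbone pattern: keep a pretrained feature extractor fixed and train only a small head on top. The sketch below shows that pattern in NumPy with a toy random "backbone" and synthetic labels; everything here is an invented stand-in for the study's speech-pretrained network, chosen only to make the pattern runnable.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a pretrained backbone: a fixed, FROZEN feature map.
# (In the study this role is played by a network pretrained on
# human speech; here it is just a random projection plus tanh.)
W_backbone = 0.5 * rng.standard_normal((8, 16))

def features(x):
    return np.tanh(x @ W_backbone)   # frozen: never updated below

# Tiny labelled task standing in for "target call vs. not".
X = rng.standard_normal((200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Train ONLY the small logistic head (17 parameters) on top.
w, b = np.zeros(16), 0.0
F = features(X)
for _ in range(500):
    p = 1 / (1 + np.exp(-(F @ w + b)))
    grad = p - y
    w -= 0.5 * F.T @ grad / len(y)
    b -= 0.5 * grad.mean()

p = 1 / (1 + np.exp(-(F @ w + b)))
train_acc = float(np.mean((p > 0.5) == (y > 0.5)))
```

Because only the head is optimized, training needs few examples and little compute, which is consistent with the article's point that the finished system trained on a standard laptop within hours.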

Numbers that matter

For the Chagos pygmy blue whale, a smaller subspecies tied to the central Indian Ocean, one-call training worked best.

A 99.4% recall score means the model found almost every call already present in the test set.

The share of flagged sounds that were true calls reached 91.2% after disputed labels were checked.

At 95.1%, the F1 score – one number balancing misses and false alarms – marked strong performance without proving perfection.
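
Those three numbers are mutually consistent: the F1 score is the harmonic mean of precision and recall, which can be checked directly.

```python
precision = 0.912   # share of flagged sounds that were true calls
recall = 0.994      # share of true calls that were flagged

# Harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
print(round(f1 * 100, 1))  # → 95.1
```

The harmonic mean punishes imbalance, so a high F1 requires both few misses and few false alarms at the same time.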

AI pinpoints meaningful sounds

Old recordings often come from passive acoustic monitoring, leaving recorders to listen unattended for months or years.

Underwater microphones called hydrophones can capture ocean sound continuously, but the resulting files quickly become overwhelming.

Human analysts once inspected visual sound charts by hand, and fatigue could cause missed calls or inconsistent labels.

Automation does not remove expert judgment, but it can narrow the pile until humans review the most meaningful sounds.

Limitations of the method

Consistency makes the method powerful, yet that same requirement limits where it can work.

Dolphins, for example, often use individually distinctive signature whistles, so one call would not represent enough of their vocal range.

A crowded whale chorus can also confuse the results, because many overlapping calls can resemble the target without clear boundaries.

Future versions may need examples of chorus and other difficult noise added to the training set as negative examples, so the detector learns what to reject.

Archives become evidence

Across the central Indian Ocean, the team plans to apply the detector to 25 years of recordings.

That long record could show whether whale songs changed over time, moved through seasons, or responded to human noise.

Because sound travels far underwater, acoustic archives can reveal animals that no ship, plane, or satellite happened to see.

Such records can turn scattered moments of listening into evidence about behavior, movement, and recovery.

Beyond the ocean

Forest microphones, desert recorders, and remote listening stations face the same problem as ocean archives.

Birds, insects, frogs, and other animals with repeatable calls could benefit if one strong recording can train a useful detector.

That possibility matters for rare species, especially when scientists hear them only a few times before conditions change.

“If accurate detectors can be trained from a single good recording, this can help us study rare and elusive species that have seldom been heard by humans,” said Jancovich.

A single whale song will not solve every monitoring challenge, but it lowers the threshold for turning overlooked recordings into usable evidence.

Future tools will need rigorous validation, clearly defined limits, and open access to ensure researchers can apply them reliably at scale.

The study is published in the journal Nature.
