Launching a website or an online community brings people together to create and share. Sadly, the operators of these platforms also have to navigate what happens when bad actors attempt to misuse them to spread heinous content like child sexual abuse material (CSAM).

We are committed to helping anyone on the Internet protect their platform from this kind of misuse. We first launched a CSAM Scanning Tool several years ago, in partnership with the National Center for Missing and Exploited Children (NCMEC), Interpol, and dozens of other organizations committed to protecting children, to give any website on the Internet the ability to programmatically scan uploaded content for instances of CSAM. That release took technology that was previously available only to the largest social media platforms and provided it to any website.

However, the tool we offered still required setup work that added friction to its adoption. To help our customers file reports to NCMEC, they needed to create their own credentials. That step of creating and sharing credentials was too confusing, or simply too much work, for small site owners. We did our best to help them with secondary reports, but we needed a method that made this seamless to encourage adoption.

Today’s announcement makes that process significantly easier for site owners, helping them contribute to keeping the Internet safer with even less manual effort. The tool no longer requires website operators to create and provide their own unique NCMEC credentials. The result is that we have seen monthly adoption of the tool increase by 1,600% since the introduction of this change in February.

Services that attempt to flag and stop the spread of CSAM rely on partner organizations, like NCMEC, that maintain lists of hashes of known CSAM. These hashes are numerical representations of images, generated by an algorithm to create a kind of digital fingerprint for a photo. Partners who operate these tools, like Cloudflare, check the hashes of submitted content against the lists maintained by organizations like NCMEC to see if there is a match. You can read about the operation in detail in our previous announcement here.
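The check described above can be sketched as a lookup against a list of known hashes. This is a simplified illustration, not Cloudflare's implementation: the 64-bit hash values, the `threshold` parameter, and the function names are hypothetical, and real services compare perceptual hashes with carefully tuned thresholds.

```python
def hamming(a: int, b: int) -> int:
    """Number of bits that differ between two 64-bit hashes."""
    return bin(a ^ b).count("1")

def matches_known_csam(content_hash: int, known_hashes: set, threshold: int = 5) -> bool:
    """Return True if the content hash is within `threshold` bits of any
    hash on the known list (an exact match when threshold is 0)."""
    return any(hamming(content_hash, h) <= threshold for h in known_hashes)

# Hypothetical 64-bit hash values standing in for a real list of known hashes.
known = {0x9F3B00D415C2A7E8, 0x0123456789ABCDEF}

assert matches_known_csam(0x9F3B00D415C2A7E9, known)      # one bit off: a match
assert not matches_known_csam(0xFFFFFFFFFFFFFFFF, known)  # unrelated: no match
```

The threshold comparison is what makes near-duplicate detection possible; an exact set lookup would miss any image that had been modified at all.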

We rely on fuzzy hashing, a technique that goes beyond the simple one-to-one matching of traditional cryptographic hashes. With a traditional hash, if a photo of CSAM is altered even slightly (by adding a filter, cropping it, or adding some noise) the fingerprint changes completely.

A fuzzy hash, on the other hand, creates a “perceptual fingerprint.” Even if an image is modified, its fuzzy hash will remain similar to the original. This allows our tool to identify matches with a high degree of confidence, even if the abuser tries to disguise the content.
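To make the idea of a perceptual fingerprint concrete, here is a minimal sketch of one well-known perceptual hashing technique, average hashing ("aHash"). It is not the algorithm our tool uses; it simply shows how a hash can stay stable under small edits. The 8x8 grid and the brightness tweak are illustrative assumptions.

```python
def average_hash(pixels):
    """Compute a 64-bit average hash ("aHash") from an 8x8 grayscale grid.

    `pixels` is an 8x8 list of rows of brightness values (0-255); in a real
    pipeline the image would first be resized down and converted to grayscale.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    # Each bit records whether a pixel is brighter than the grid's mean.
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a, b):
    """Bits that differ between two hashes; a small distance means similar images."""
    return bin(a ^ b).count("1")

# A toy "image" and a lightly brightened copy of it.
original = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
brightened = [[min(p + 10, 255) for p in row] for row in original]

h1 = average_hash(original)
h2 = average_hash(brightened)

# The perceptual hashes stay close even though every pixel value changed,
# whereas a cryptographic hash of the two images would differ completely.
assert hamming_distance(h1, h2) <= 8
```

Because each bit only encodes "brighter or darker than average," uniform edits like filters or mild noise flip few bits, which is the property that lets matching tolerate disguised copies.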

Removing the requirement to share credentials with Cloudflare eliminates one more step in deploying and enabling our tool, but site operators are still expected to file their own reports with NCMEC or their regional equivalent.

The process for using the tool is now straightforward:

Enable the Tool: Activate the CSAM Scanning Tool on your Cloudflare zone and verify your notification email address.

Scan and Detect: Our tool scans your cached content for potential CSAM, creating a fuzzy hash of each image. If a match is found with a known bad hash, a detection event is created.

Remediate: Cloudflare blocks the URLs of any identified matches and notifies you so that you may take further action.

We believe that the tools for a safer Internet should be available to everyone, not just a few large companies.

We invite you to enable the CSAM Scanning Tool on your website today. For more technical details on how it works, please visit our developer documentation. We also welcome you to join our community to discuss the technology and help us continue to build a better Internet.