The European Commission has published its first report under the Digital Services Act (DSA), assessing what it described as ‘systemic online risks’. The report triggered immediate online backlash and ridicule after warning that emojis such as a pill, a snowflake, or a leaf may be used as coded language in online drug sales.
The findings suggest that criminal networks are increasingly relying on symbolic communication to evade detection, with platforms now ‘experimenting’ with identifying such patterns. ‘Among the notable mitigation measures highlighted are, for example, the use of automated systems to detect emojis used as code for illegal activities online, such as the sale of illegal drugs,’ the report states.
💊 Emojis used as coded language to promote illegal activities online?
Some platforms are now detecting emojis used as code for drug sales.
This is one of the key findings of the first EU-wide report on systemic online risks.
Dive in → https://t.co/oAlxoNdogu #DSAForReal pic.twitter.com/OMq0YyIEF5
— Digital EU 🇪🇺 (@DigitalEU) April 22, 2026
The reaction online was swift and overwhelmingly critical, particularly among right-wing circles, with many pointing to what they described as ‘Orwellian’ regulatory overreach into everyday communication.
On X, one widely shared response mocked the finding bluntly: ‘The EU is now cracking down on emojis—this is getting absurd.’ Another user wrote: ‘So sending a pill emoji makes you a criminal now?’ reflecting a broader wave of ridicule that quickly turned the issue into a viral talking point.
The findings regarding emojis were first posted by the DigitalEU X account, accompanied by an image depicting several emojis with the caption ‘an emoji isn’t always just an emoji’. This caught the imagination of many users, who flooded the platform with memes ridiculing the original post, altering the emojis and captions to criticize the Commission. ‘Cope harder, you authoritarian clowns,’ wrote Simon Goddek, a researcher and biotechnologist, in response to the report.
https://t.co/NrrGBum0WK pic.twitter.com/pYX4SayikV
— Dr. Simon Goddek (@goddek) April 23, 2026
Beyond mockery, critics have also raised concerns about the implications for free speech and digital governance. A frequently cited argument is that identifying ‘coded language’ requires platforms to interpret context and intent—moving beyond clearly illegal content into more subjective territory.
This concern ties into broader scepticism surrounding the DSA, which obliges large platforms to assess and mitigate so-called ‘systemic risks’, including illegal content and threats to public security. Critics argue that this framework increasingly incentivizes proactive and interpretive moderation, rather than responses limited to clearly unlawful material, and in doing so risks encroaching on online free speech.
The debate also highlights the technical challenges involved. Emojis are inherently ambiguous and context-dependent, making accurate detection difficult and increasing the likelihood of false positives. Experts have long noted that content moderation already operates in ‘grey areas’, where meaning is fluid and difficult to define, particularly as platforms rely more heavily on automated systems.
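To see why false positives are so likely, consider a minimal sketch of the kind of naive watchlist matching at issue. The emoji list and function below are purely hypothetical illustrations, not drawn from the report or from any platform's actual moderation system:

```python
# Hypothetical illustration: a naive emoji "code word" detector.
# The watchlist below is invented for this example.
SUSPECT_EMOJIS = {"💊", "❄️", "🍁"}  # pill, snowflake, leaf

def flag_message(text: str) -> bool:
    """Flag a message if it contains any watch-listed emoji."""
    return any(emoji in text for emoji in SUSPECT_EMOJIS)

# Entirely innocuous messages trip the same rule, because the
# emoji alone carries no information about intent or context.
print(flag_message("Don't forget to take your 💊 after lunch"))  # True
print(flag_message("First ❄️ of the winter!"))                   # True
print(flag_message("Meet you at the library"))                   # False
```

A rule this blunt cannot distinguish a medication reminder from a drug listing, which is why any serious system would need contextual signals beyond the emoji itself, precisely the interpretive territory critics worry about.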