TikTok Rolls Out AI Age Detection Across Europe Amid Ban Debates
TikTok deploys surveillance-based age detection to identify underage users in Europe
PUBLISHED: Fri, Jan 23, 2026, 5:13 PM UTC | UPDATED: Fri, Jan 23, 2026, 6:22 PM UTC
The Buzz
• TikTok is implementing AI age-detection systems across Europe following a UK pilot that removed thousands of underage accounts, flagging suspicious profiles for human moderators instead of automatic bans
• The rollout comes as Australia bans social media for under-16s, the EU debates mandatory age limits, and 25 US states pass age-verification legislation
• Privacy experts warn the system requires expanded surveillance and data collection without proven safety benefits, creating risks for false positives and potential government abuse
• Third-party verification vendor Yoti processes over 1 million age checks daily for TikTok, Meta, and Spotify, despite user concerns about data security
TikTok is rolling out AI-powered age-detection technology across Europe to identify and remove users under 13, but the move is sparking fresh debate about whether enhanced surveillance is the right answer to child safety concerns. The system, which analyzes profile data, content, and behavioral signals before flagging suspected underage accounts for human review, represents the platform’s response to mounting regulatory pressure as governments worldwide consider outright bans for minors. With Australia already prohibiting social media for kids under 16 and 25 US states enacting age-verification laws, TikTok’s approach offers a middle ground that privacy experts say still comes at a steep cost.
TikTok just became the latest tech giant to bend to regulatory pressure over child safety, but its solution is raising questions about whether the cure might be worse than the disease. The company announced it’s implementing a new age-detection system across Europe designed to keep kids under 13 off the platform, using AI to analyze user behavior rather than simply banning young accounts outright.
The technology, which builds on a yearlong pilot in the UK, relies on a combination of profile data, content analysis, and behavioral signals to evaluate whether an account likely belongs to a minor. According to a statement from TikTok, the system doesn’t automatically boot users. Instead, it flags suspicious accounts and forwards them to human moderators for review. The company declined to comment further on the European expansion.
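TikTok has not published how its classifier actually works, but the flag-for-review flow described above can be sketched in Python. Every signal name, weight, and threshold below is an illustrative assumption, not TikTok's real system; the only structural point taken from the article is that a high score routes an account to a human moderator queue rather than triggering an automatic ban.

```python
# Hypothetical sketch of a flag-for-human-review pipeline like the one
# described above. All signals, weights, and thresholds are invented for
# illustration -- TikTok has not disclosed its model.
from dataclasses import dataclass


@dataclass
class AccountSignals:
    stated_age: int        # age the user entered at signup
    profile_score: float   # 0..1, how child-like the profile data looks
    content_score: float   # 0..1, how child-like posted content looks
    behavior_score: float  # 0..1, how child-like usage patterns look


def underage_likelihood(s: AccountSignals) -> float:
    """Combine per-signal scores into one likelihood (weights are made up)."""
    return 0.3 * s.profile_score + 0.4 * s.content_score + 0.3 * s.behavior_score


def triage(s: AccountSignals, threshold: float = 0.7) -> str:
    """Never auto-ban: suspicious accounts go to a human moderator queue."""
    if s.stated_age < 13:
        return "reject_at_signup"       # caught by the existing age gate
    if underage_likelihood(s) >= threshold:
        return "flag_for_human_review"  # a moderator decides; user can appeal
    return "no_action"


if __name__ == "__main__":
    suspicious = AccountSignals(stated_age=15, profile_score=0.9,
                                content_score=0.8, behavior_score=0.9)
    print(triage(suspicious))  # flag_for_human_review
```

The design choice the sketch highlights is the one privacy critics focus on: the probabilistic score is only ever a trigger for human review, so every false positive still costs a wrongly flagged adult an appeals process.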
The move comes at a pivotal moment for social media regulation worldwide. Governments are questioning whether platforms can police themselves, and they’re increasingly willing to impose solutions by force. Australia became the first country to ban social media entirely for children under 16 last year, covering Instagram, YouTube, Snap, and TikTok. The European Parliament is pushing for mandatory age limits, while Denmark and Malaysia are considering similar restrictions for under-16s.
“We are in the middle of an experiment where American and Chinese tech giants have unlimited access to the attention of our children and young people for hours every single day almost entirely without oversight,” Christel Schaldemose, a Danish lawmaker and vice president of the European Parliament, said in November during a parliamentary session reported by Reuters. She called for an EU-wide ban on platform access for children under 16 without parental consent and an outright prohibition for those younger than 13.
“Legislatures in the US, just in the calendar year 2026, are likely to pass dozens or possibly hundreds of new laws requiring online age authentication,” says Eric Goldman, a law professor and associate dean at Santa Clara University who has argued that government-compelled censorship should be viewed as constitutionally suspect. “Unless something dramatically changes, regulators around the globe are building a legal infrastructure that will require most websites and apps to be age-authenticated.”
But does TikTok’s surveillance-based approach actually solve the problem, or does it just create new ones? Privacy advocates say it’s the latter.
“This is a fancy way of saying that TikTok will be surveilling its users’ activities and making inferences about them,” Goldman tells WIRED. He refers to age verification mandates as “segregate-and-suppress laws” because platform governance often serves political motives, and policy solutions sometimes expose children to more harm than help. “Users probably aren’t thrilled about this extra surveillance, and any false positives – like incorrectly identifying an adult as a child – will have potentially major consequences for the wrongly identified user.”
Goldman adds that even if this approach works for TikTok, most services don’t have enough data about their users to reliably guess people’s ages, making the system difficult to scale across other platforms.
Alice Marwick, director of research at the tech policy nonprofit Data & Society, says TikTok’s age-detection tech does seem marginally better than automatic bans, but it still requires the platform to monitor users far more closely. “This will inevitably expand systematic data collection, creating new privacy risks without any clear evidence that it improves youth safety,” she says. “Any systems that try to infer age from either behavior or content are based on probabilistic guesses, not certainty, which inevitably proceed with errors and bias that are more likely to impact groups that TikTok’s moderators do not have cultural familiarity with.”
The irony of the situation isn’t lost on Goldman. Last October, in testimony before the New Zealand Parliament’s education and workforce committee, he noted that if the goal of age verification is to keep children safer, “it is cruelly ironic to force children to regularly disclose highly sensitive private information and increase their exposure to potentially life-changing data-security violations.”
TikTok acknowledged that no globally accepted method exists to verify age without undermining user privacy, even as its UK pilot led to the removal of thousands of accounts belonging to children under 13. The company said the appeals process for its age-detection tech relies on third-party verification vendor Yoti, and also uses traditional verification tools like credit cards and government-issued IDs – mechanisms that raise their own concerns about privacy and trust.
Yoti, which is also used by Spotify and Meta’s Facebook, has drawn criticism from users worried about excessive data collection and potential leaks. The UK company says it has conducted more than 1 billion age checks and completes an estimated 1 million per day. A Yoti spokesperson told WIRED the company estimates ages without identifying individuals and permanently deletes images after an age result is given. Yoti says it has never reported a data breach related to facial age estimation.
While TikTok’s approach may be viable under EU regulatory frameworks, it faces steeper challenges in the US. “Here, the legal exposure is significantly higher,” says Jess Miers, an assistant professor at the University of Akron School of Law, given that many state laws are getting repeatedly tied up in First Amendment litigation. Without a federal privacy law, “there are no meaningful guardrails on how this data is stored, shared, or abused, not just by the companies collecting it but by the government itself. It could be handed to ICE. It could be used to target women searching for reproductive care. It could be used against LGBTQ+ teens seeking information on gender-affirming treatment. And it absolutely will be used to chill speech.”
Lloyd Richardson, director of technology at the Canadian Centre for Child Protection, believes stricter approaches like Australia’s are the right path forward. “Historically, if you look at the internet, it was borderless, no-holds-barred, and the rule of law within a country was irrelevant in many ways. But we’re starting to see a shift in that now,” he says. “Organizationally, we believe that the road they’re going down in Australia is the right approach in terms of having a social media delay.”
Richardson dismisses concerns about verification technology, saying there’s “a lot of disinformation around the topic” and that “there are absolutely ways to do age verification without AI face scanning, without the disclosure of personal information.” But experts like Marwick remain skeptical that technology alone can resolve what is fundamentally a policy and societal challenge.
TikTok’s age-detection rollout across Europe represents a watershed moment in the global debate over child safety online, but it’s far from a perfect solution. The platform’s bet on AI surveillance instead of outright bans may satisfy regulators in the short term, but it forces users into a Faustian bargain – trading privacy for access while creating new risks around data collection, false positives, and potential government abuse. As Australia, the EU, and dozens of US states race to implement their own age verification mandates, the tech industry is discovering there’s no way to keep kids safe online without fundamentally reshaping how platforms operate. The question now isn’t whether age verification is coming – it’s whether the systems being built will protect children or simply create new vulnerabilities in the name of safety.