TikTok is introducing new technology to detect underage users in Europe

20th January 2026

TikTok is introducing new technology to detect underage users in Europe, as regulators push for stronger online protections for children.

Pressure Mounts on Platforms to Protect Children

TikTok has announced plans to roll out new technology across Europe designed to identify and manage accounts believed to belong to children under the age of 13. The move marks a significant step by the video-sharing platform to address growing concerns from European regulators about the safety of minors online and the effectiveness of existing age-verification systems.

Owned by Chinese technology company ByteDance, TikTok has faced increasing scrutiny over how young users interact with content on its platform. With governments and regulators demanding tougher safeguards, the company says its latest initiative aims to strike a careful balance between child protection, user privacy, and fairness.

According to TikTok, the new age-detection system will be introduced gradually across European countries in the coming weeks. Rather than relying solely on self-declared ages at sign-up, the technology will assess a range of signals to determine whether an account may belong to a child under 13.

These signals include information from a user’s profile, the type of videos they post, and behavioural patterns such as how they interact with content and other users. TikTok says this approach allows it to spot potential underage accounts that may otherwise go unnoticed using traditional methods.

Importantly, accounts flagged by the system will not be removed automatically. Instead, they will be sent to trained human moderators for review. The company argues this step reduces the risk of errors and ensures that legitimate users are not wrongly penalised.
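To make the described pipeline concrete, here is a minimal, purely hypothetical sketch: behavioural signals are combined into a weighted score, and only accounts above a threshold are queued for human review rather than removed automatically. TikTok has not published its model; every signal name, weight, and threshold below is an illustrative assumption.

```python
from dataclasses import dataclass, field

# Hypothetical signal weights -- TikTok has not disclosed its actual
# model; these names and numbers are illustrative only.
SIGNAL_WEIGHTS = {
    "profile_hints_minor": 0.5,   # e.g. profile details suggesting a young user
    "content_style_young": 0.3,   # video topics typical of younger users
    "interaction_pattern": 0.2,   # engagement skewed toward child-oriented content
}

REVIEW_THRESHOLD = 0.6  # assumed cut-off for escalation to a human moderator


@dataclass
class Account:
    user_id: str
    # Maps signal name -> detected strength in [0, 1]
    signals: dict = field(default_factory=dict)


def underage_score(account: Account) -> float:
    """Weighted sum of detected signals, clamped to [0, 1]."""
    score = sum(SIGNAL_WEIGHTS.get(name, 0.0) * strength
                for name, strength in account.signals.items())
    return min(score, 1.0)


def triage(accounts: list[Account]) -> list[str]:
    """Return IDs flagged for HUMAN review -- nothing is removed automatically."""
    return [a.user_id for a in accounts if underage_score(a) >= REVIEW_THRESHOLD]


if __name__ == "__main__":
    accounts = [
        Account("u1", {"profile_hints_minor": 0.9, "content_style_young": 0.8}),
        Account("u2", {"interaction_pattern": 0.4}),
    ]
    print(triage(accounts))  # only the high-scoring account is escalated
```

The key design point this illustrates is the one TikTok emphasises: the automated stage only *ranks and routes* accounts; the final decision sits with trained moderators, which reduces the cost of a false positive.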

Lessons from a UK Trial

The decision to expand the technology across Europe follows a trial of similar tools in the United Kingdom. TikTok says the pilot helped refine the system and demonstrated that automated detection combined with human oversight can improve accuracy.

European regulators have repeatedly warned that age-verification methods used by social media platforms remain weak, often allowing children to access services intended for older users. Officials have raised concerns that young users may be exposed to content that is inappropriate, misleading, or potentially harmful.

By extending the technology beyond the UK, TikTok appears to be responding directly to these regulatory criticisms, as well as to broader public pressure for reform.

Safety Concerns and Legal Pressure

The rollout also comes against the backdrop of legal action in the United States, where several British families have filed lawsuits alleging that their children died after being exposed to dangerous content on TikTok. Some of the cases involve viral challenges that reportedly encouraged self-harm.

TikTok has said it strictly bans any content that promotes or glorifies dangerous behaviour and insists that it actively removes such material when identified. The company has also expressed condolences to the families involved, while maintaining that it continues to strengthen its safety policies.

Although the lawsuits are taking place outside Europe, they have intensified global scrutiny of how social media platforms protect vulnerable users.

A Broader Shift in Online Child Protection

The rollout arrives at a time when the impact of social media on children is under closer examination than ever before. Regulators across the continent are considering tougher rules that would force platforms to take greater responsibility for young users’ safety.

For TikTok, the new system represents both a technological and regulatory response — one that acknowledges growing concerns while attempting to preserve user trust. Whether the measures will satisfy regulators remains to be seen, but the move signals a clear shift towards stronger safeguards for children in the digital age.