Why the age debate in Europe is asking the wrong question
Across Europe, policymakers are asking what age children should be allowed online. 5Rights is asking what kind of digital environment we are prepared to offer them once they are.

What is the right age for children to access social media? It is the question driving legislative action across Europe, with several Member States moving to restrict children’s access to social media and the European Commission signalling its ambition to set an age threshold. The concern is legitimate. In Denmark, 94% of children have a social media account before they turn 13. In Greece, six in ten children aged 9 to 12 use social media every day. Societies are right to be alarmed and right to demand action.
On its own, however, this question is incomplete. Focusing solely on the point of entry lets companies off the hook for the recommender systems that push harmful content, the persuasive design that keeps children compulsively engaged, and the data practices that exploit their attention for profit – all the while pushing children towards other environments, such as gaming platforms, AI chatbots and EdTech services, where they face equivalent risks with even less scrutiny.
The right question, then, is what platforms should do once they know the user is a child. Age assurance, done right, should function as a trigger: one that activates a fundamentally different experience, calibrated to the child’s developmental stage, rights and evolving capacities. For younger children, that means no exposure to behavioural targeting or persuasive design features. For teenagers, it means access with meaningful safeguards, including limits on recommender systems, greater control over how their data is used, and a prohibition on the most harmful features.
But for any of this to work, age assurance itself must be something children can trust. This is the framework set out in 5Rights’ Age Assurance as a Spectrum: age assurance that is privacy-preserving, proportionate to the risks, and rights-respecting. Recognising a child online without exploiting their data is the starting point. Now tech companies must take that knowledge and use it to build digital experiences that genuinely serve children’s best interests. As 5Rights Head of EU Affairs, Manon Letouche, argued in her keynote at this week’s Global Age Assurance Standards Summit in Manchester:
“How do we give children access to online experiences that are appropriate for them? Not by asking what age they should be let in but by asking what kind of digital environment we are prepared to offer them.
We do it by: recognising children, through effective, privacy-preserving age assurance; designing for children, by embedding age-appropriate protections and experiences across services; and investing beyond restriction, in a digital ecosystem that offers real, meaningful alternatives. A continuum where children are not treated as small adults, nor excluded as risks to be managed, but recognised as rights-holders, with distinct needs, capacities, and entitlements.”
The responsibility to keep children safe online belongs to the tech companies, not to children navigating systems that were never designed for them and not to parents left to fill the gap. Done right, age assurance can be more than a compliance tool. It can be a bridge to a digital environment where children are not treated as an afterthought, but recognised as rights-holders, able not just to be safe online, but to learn, participate and thrive.