Final DSA guidelines deliver historic win for children’s rights online after years of 5Rights advocacy

The European Commission’s final guidelines on Article 28.1 of the Digital Services Act incorporate key recommendations from 5Rights’ baseline and coalition advocacy that shaped the framework from inception to adoption.

The European Commission has adopted its final guidelines on Article 28.1 of the Digital Services Act, establishing a landmark framework for protecting millions of children online across Europe. These guidelines are not only a regulatory milestone but the culmination of 5Rights’ years of campaigning to place children’s rights at the heart of system design.

Platform responsibility replaces parental burden

The guidelines establish a comprehensive risk-based approach grounded in the understanding that certain design practices and features of online platforms create new risks for children or exacerbate existing ones. The responsibility therefore rests with platforms, not with children or parents, to adapt their services accordingly, in line with children's rights. In that regard, the guidelines stress that all children's rights must be considered, with a particular focus on accessibility, non-discrimination and children's evolving capacities.

Settings should default to the highest level of privacy and safety, with features like geolocation and push notifications disabled and interactions limited to previously accepted contacts. Most importantly, platforms cannot nudge children to change their settings, and recommender systems must prioritise explicit user signals over implicit engagement-based metrics.

The guidelines also emphasise the importance of meaningful child participation throughout platform design and evaluation processes, recognising that companies can design for children only with their active involvement.

Strengthened AI safeguards and transparency

The final guidelines mark a decisive step forward from the draft version, incorporating crucial improvements that directly reflect 5Rights’ sustained advocacy.

Most significantly, the guidelines now include substantially strengthened requirements to address and mitigate risks related to artificial intelligence, reflecting key concerns raised by 5Rights during the consultation process. AI tools must now undergo thorough risk assessments before being made available to children, cannot be activated automatically, and must include clear warnings informing children that they are interacting with an AI system. Crucially, AI-based support cannot replace human interaction as the primary support mechanism for young users.

The guidelines also deliver stronger transparency requirements that 5Rights advocated for. Tech companies should publicly report their risk review outcomes, content moderation metrics and any documentation assessing the appropriateness and proportionality of age assurance measures, creating robust public oversight mechanisms.

Looking ahead to implementation and continued advocacy

With implementation expected over the coming months and a one-year review process built in, there will be ongoing opportunities to assess effectiveness and address emerging gaps. For 5Rights, this represents not an endpoint but a foundation for continued advocacy to ensure this groundbreaking framework delivers real-world protection for the millions of children in Europe who navigate digital spaces daily.