Classroom AI apps expose children to porn site trackers and give UK students wrong US helplines, new report reveals
Children using well-known AI-powered classroom apps such as Grammarly and Character.AI are being tracked by adult website advertisers, given dangerous misinformation about self-harm and taught false facts.
(London, 10 September 2025) Children using well-known AI-powered apps in classrooms, such as Grammarly and Character.AI, are being tracked by adult website advertisers, given dangerous misinformation about self-harm and taught false facts, according to new research by the Digital Futures for Children centre, a joint LSE and 5Rights Foundation initiative.
The alarming findings emerge as the House of Lords prepares to debate crucial amendments to the Children’s Wellbeing and Schools Bill next week. The amendments would give the Government new powers to regulate the use of technology in the classroom, including AI.
Major failures across popular classroom apps
Researchers carried out a child-rights audit of five AI tools widely used in education settings – Character.AI, Grammarly, MagicSchool AI, Microsoft Copilot and Mind’s Eye – uncovering systematic rights-based concerns related to privacy, safety and accuracy:
- Children’s data exposed to commercial tracking from adult websites: despite its claims about safety and privacy, the AI-powered personalisation app MagicSchool AI enables tracking cookies by default for users as young as 12, exposing children to commercial tracking by advertisers from adult websites, including erotic and friend-finder sites. Similarly, Grammarly allows the marketing platforms of companies such as Facebook, Microsoft and Google to use education account data for commercial purposes.
- AI chatbots teaching false facts and creating dangerous dependencies: AI chatbots used for classroom learning have been found at times to confuse fictional characters with real figures and to give students inconsistent information, neither of which is conducive to learning. The design of these platforms can also foster unhealthy emotional dependency, with some child users reporting severe mental health struggles.
- Vulnerable children abandoned when seeking help: researchers found that children in the UK reporting bullying or suicidal thoughts to the MagicSchool AI chatbot can be given US emergency helpline numbers instead of UK resources. When researchers tested the tool, it also refused to engage with requests for help until the user explicitly mentioned suicide multiple times.
- Plagiarism detectors falsely accusing students: AI plagiarism detection tools, available in apps such as Grammarly, have well-known limitations, including falsely accusing students. This has not stopped Grammarly from advertising the feature’s effectiveness. Confusingly, the app also tells teachers to continue using their own professional judgement, leaving the burden of verification on school staff.
- Data of children with disabilities shared without consent: Mind’s Eye (Smartbox), designed for adults and children with disabilities, shares children’s data with its group companies in the US and EU without explaining why or offering any option to refuse, while biased outputs risk making these children feel excluded rather than supported.
Parliament considers action
EdTech remains unregulated in England beyond basic data protection laws. There is no central list of AI-facilitated EdTech products in schools, and no public list of approved products meeting expected safety standards.
The proposed amendments being discussed would require that:
- EdTech must be effective and safe, and must do what it claims to do
- Where AI is used, it must be clearly labelled
- Children’s personal data should not be stored outside the school by third parties
With GenAI being used across most school subjects, according to the Department for Education, Parliament must act now before these unregulated tools become even more entrenched in children’s education.
Dr Ayca Atabey, lead author of the study, said: “Across all GenAI tools we studied, children’s perspectives were largely excluded from their design, governance and evaluation, and all tools undermine children’s rights to privacy and protection from commercial exploitation.”
Colette Collins-Walsh, Head of UK Affairs at 5Rights Foundation, said: “The pandemic saw a rapid digitalisation of education, but in the five years since, no one has stopped to think if this is benefiting children. This is having serious consequences: children are being tracked by erotic websites and chatbots are providing wrong emergency helplines, risking lives and creating dependencies that can damage mental health.
“As the Government presses ahead with spreading AI far and wide, we must have rules in place to protect children and their education. In the Children’s Wellbeing and Schools Bill, Parliament has a chance to ensure this happens.”
The research forms the first report of A Better EdTech Future for Children, a joint LSE and 5Rights project that will develop best practice, make rights-based recommendations and stimulate public debate over how best to achieve more inclusive, transparent and accountable digital learning environments.