Baroness Kidron on Putting Children at the Start of the AI Debate

On the 2nd of November 2023, our Chair, Baroness Kidron, discussed the need to put children at the start of the AI debate, rather than pushing them to the fringes. This was part of Queen Mary University’s event “AI at a Turning Point: How Can We Create Equitable AI Governance Futures?” that ran in partnership with The Alan Turing Institute, the All-Party Parliamentary Group on AI and Big Innovation Centre.

See below for the transcript of Baroness Kidron’s speech and for a video of the event livestream.

At the risk of providing my own spoiler alert, let me start by telling you what I will conclude: it is a great omission that the needs of children are not front and centre of the AI Summit, but have been – quite literally – pushed to the fringes.

Last Thursday the prime minister set out his great hopes for an AI-enabled world and paused to say that he would tackle the potential dangers head on. Among the problems he cited was that “Criminals could exploit AI for cyber-attacks, disinformation, fraud, or even child sexual abuse.”

On Monday of this week, at a summit fringe event, Secretary of State Suella Braverman, the National Crime Agency and the Internet Watch Foundation set out the terrifying scale of AI-altered and AI-generated child sexual abuse. The material that formed the background for this event was in part captured by a covert police unit with whom I have been working. Over the last six months I have seen images, videos, text, voice and increasingly realistic synthetic environments in which sexual and violent abuse takes on the attributes of gaming, creating worlds where distorted appetites and imaginations, without a single real-world limitation, produce material of such depravity that, even after a decade of engaging with this material, I am shocked to the core.

I don’t want to give the impression that the relationship between AI and children is confined to the question of child sexual abuse. Absolutely not, as I will come to shortly. But I do want to make the point forcefully that AI-generated child sexual abuse is not – as the PM suggested – a problem of the future. It is a problem of the here and now.

The creation, distribution and consumption of CSA content are illegal in the UK, albeit covered by at least three separate laws – the oldest of which is from 1978. But the models, or ‘plug-ins’, trained on and specialising in the creation of CSA material – built on readily available image, text and video creation sites – are not. There is undoubtedly an argument for a new bill to bring together and update all legislation that touches on the creation of CSA, but at a minimum we can use legislation currently going through parliament to update definitions and plug the gaps, creating the necessary friction for those who profit from, create and consume CSA. That includes requiring informed consent for the use of images in training models – so that a child’s image cannot be scraped from social media or the school website and turned into abuse material – and making it an offence to train models on CSAM. This is a here-and-now problem, not a problem that should be left to the fringes.

The reason I am setting out this scenario in such detail is that, in a week of existential angst, it offers three important lessons.

First, we already have laws in many of the areas that are causing concern. So before raising the alarm about existential threats of the future, it would seem prudent to look at the present and see how our existing rights and laws apply, or could be updated to apply. I have just applied this thought to the CSA context, but how about intellectual property, data protection rights, collective bargaining – as the Hollywood writers have just done – consumer rights and safety standards? What about human rights? Children’s rights?

The routine application and robust enforcement of these existing legal and rights frameworks would radically change the way in which AI is being developed and deployed.

Second, the language we use is critical. The language of existential threat – that AI will replace humans – disempowers most of us. But ask us if we want to supercharge the creation of child sexual abuse material and I would hazard a guess that the answer is no. Or whether we think it is OK to have facial recognition trained only on white faces, so that a black head teacher, visitor, parent or child cannot pass security to enter the school – again, no. We have language that provides for shared human values, and that language gives us agency. Existential threat gives us none.

I will leave to another contributor the task of considering whether AI is either artificial or intelligent – but I challenge the idea that computational systems, however powerful, have agency without human systems that give them free rein.

Nuclear and biological weapons, disease contagion, even climate change – all have the capacity to bring the world as we know it to an end. On the first two, the global community curtailed both development and spread with a degree of success that has, at a minimum, prevented global annihilation. The pandemic saw human agency at scale, as every part of the world moved to contain the virus. Perhaps climate change is simultaneously the best and worst example, in that we see a struggle for human agency over vested interests, marked by enormous equity disparities: between polluters and polluted, between the natural world that provides oxygen and the human behaviour that gobbles it up, between the short-term interests of politicians and business and the longer-term interests of the young. This battle is in full swing, and it offers a glimpse of how it is possible to make a question so big that it creates an environment in which the immediate and practical actions that might really contain the threat are overlooked in favour of an as yet unidentified silver bullet that will save us when the time comes.

AI is not separate and different, and the language we use to describe either its benefits or its threats must make that clear. AI is built, used and purveyed by business, governments, civil society and, as I have already pointed out, criminals. It is part of human-built systems over which we still have agency. Who owns the AI, who benefits, who is responsible and who gets hurt are, at this point, still in question. The language that suggests AI is too late and too difficult for us to deal with is a carryover of decades of a deliberate strategy of tech exceptionalism that has privatised the wealth of technology and outsourced the cost to society. Existential threat is the language of tech exceptionalism. It is tech exceptionalism that poses an existential threat to humanity, not the technology itself.

Thirdly and finally: children are early adopters of technology – the canaries in the coalmine, as we have seen with social media, gaming and other carelessly developed and poorly regulated digital environments. The AI debate should start with children, not push them to the fringes. Children have no electoral capital, so they need representation; they have an enhanced rights framework, so there are more strings to pull; they are likely to live longer, so the impact of automated decisions will have a longer lifespan; and the vast majority of adults from all disciplines, ideological, geographic, cultural and socioeconomic groupings have a vested interest in making life OK – either for their own children or for children in general.

AI in ed tech is already such a problem that UNESCO recently published a 500+ page book, The Ed-Tech Tragedy, which forensically points out the failure to ask basic questions about the quality of outcomes for children – social, developmental and pedagogical/educational – before creating an ed tech market that is cannibalising education systems across the world. AI in recruitment is routinely untrustworthy, discriminating on the basis of race and gender; more subtly, it has taken human agency out of a process that could – if human judgement were applied – provide a leap for young people without the right qualifications. And I know that because I am one such person.

Policing, the distribution of welfare, the algorithmic targeting of three- and four-year-olds because they click more often on advertising links than other demographics – these are just a handful of areas where AI has been adopted in ways that undermine human agency, impacting children here and now. Not because the tech is bad, but because it is primarily deployed on the promise of efficiency, without sufficient care for pedagogy, diversity, non-computational qualities (or should I say human qualities), children’s rights or developmental needs…

In the race for AI prominence and the vast riches that they envisage, the tech bros have come to town to warn us that the future they are creating is untrammelled, unprincipled and insecure, whilst loudly proclaiming that society must get a grip. Meanwhile, in the race to be ‘the one’, they are failing to apply existing rules about privacy, intellectual property, children’s rights or safety rails to their models. It is not even on the table that they become corporately or personally responsible for the outcomes of AI, even as they seek to monetise the benefits.

With wars raging across the world with no global consensus to bring them to an end, with elections pending and the widespread use of mis- and disinformation anticipated, and with the very survival of the planet in plain sight, it would be beyond foolish to stand here and suggest that I have the answer for the development of secure, enabling, trustworthy AI. But technology is largely neutral and human agency is not. If we wish to build the digital world that children deserve, we must bring to an end the existential threat of tech exceptionalism, and reverse the failure of governments – most worryingly in the US – to apply rigorous standards to the tech sector as they do to all other sectors.

Thank you.