Too little too late? Instagram’s latest changes and what they will mean for kids

Last week, Instagram announced it will be making changes to keep young people “even safer.” The day after the announcement, Members of the US Senate Subcommittee on Consumer Protection, Product Safety, and Data Security cross-examined Instagram’s chief, Adam Mosseri. The Subcommittee’s chair, Senator Blumenthal, concluded that Instagram’s latest attempts to improve safety were “underwhelming” – here’s our take on it all.

It hasn’t been a good few months for Instagram. Thanks to Frances Haugen and other former tech-company employees turned whistle-blowers, the cat is well and truly out of the bag on how the company has conveniently ignored evidence that its product is harmful to children. Haugen revealed internal research showing that Instagram made body image issues worse for one in three teenage girls and caused suicidal thoughts in a substantial minority – evidence that the company failed to act upon. Her testimonies painted a picture of a company that prioritises growth at all costs, even when that cost is the wellbeing of children the world over. For those of us working in the field of child online safety, it confirmed what we already knew, so forgive us for viewing Instagram’s recent announcements with a degree of scepticism.

Every incremental change that produces better outcomes for children is, of course, to be welcomed. But will these changes really make kids safer?

Recommendation systems

The headline change is that Instagram will be taking “a stricter approach to what they recommend to teens in search, explore, hashtags and suggested accounts” by expanding their sensitive content controls and exploring chronological ranking of feeds.

Instagram’s Sensitive Content Controls, introduced in July this year, allow users to decide how much ‘sensitive’ content is shown on their Explore page. The control has three options: ‘allow’, ‘limit’ and ‘limit even more’. ‘Limit’ is the default for all users, and ‘limit even more’ gives the option to see “fewer photos or videos that might be offensive”, such as content depicting violence, or sexually explicit or suggestive material. The ‘allow’ option is unavailable to people under the age of 18, and Instagram is considering introducing the ‘limit even more’ setting for search, hashtags, reels and suggested accounts for teens. Given recent revelations about the sheer scale of harmful material teens are exposed to on Instagram, why on earth is ‘limit even more’ not the default for 13-17s, not only on Explore, but across the service?

Instagram’s recommendation systems are curated by algorithms that show you what the service thinks you are most likely to engage with, determined in part by ‘engagement’ metrics such as the number of likes, shares and comments a piece of content has amassed. This is known as engagement-based ranking. Chronological ranking shows content in the order in which it is shared, and would mark the start of a return to the good old days of early social media, before feeds were flooded with ads and other ‘recommended content’ served up to under-18s regardless of its nature, veracity or propensity to harm.
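
For readers unfamiliar with the jargon, the difference is easy to show in code. The sketch below is purely illustrative – the post fields and scoring weights are our own invention, not Instagram’s actual system – but it captures the contrast between ranking by accumulated engagement and ranking by recency.

```python
# Illustrative sketch only: hypothetical Post fields and scoring weights,
# not Instagram's ranking system.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    created_at: datetime
    likes: int
    shares: int
    comments: int

def engagement_score(post: Post) -> float:
    # Engagement-based ranking: surface whatever has amassed the most
    # interaction, regardless of when it was posted or what it depicts.
    return post.likes + 2 * post.comments + 3 * post.shares

def rank_by_engagement(feed: list[Post]) -> list[Post]:
    return sorted(feed, key=engagement_score, reverse=True)

def rank_chronologically(feed: list[Post]) -> list[Post]:
    # Chronological ranking: newest first, with no popularity signal at all.
    return sorted(feed, key=lambda p: p.created_at, reverse=True)

if __name__ == "__main__":
    feed = [
        Post("friend_a", datetime(2021, 12, 10, 9, 0), likes=4, shares=0, comments=1),
        Post("viral_account", datetime(2021, 12, 1, 18, 30), likes=90_000, shares=12_000, comments=3_400),
    ]
    print([p.author for p in rank_by_engagement(feed)])    # viral content wins
    print([p.author for p in rank_chronologically(feed)])  # the most recent post wins
```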

A better approach to recommendation would see algorithms designed not solely to maximise engagement, extend reach and increase activity, but to diversify what teens are shown, to filter out self-harm and suicide material, pro-eating-disorder content, pornography and violence, and to spare children from intrusive advertising. As former Culture Secretary Jeremy Wright QC said, “algorithms that can show us the adverts they think we might respond to can also be used to filter out harmful material.”

Time to take a break?

Instagram also launched its new ‘Take A Break’ tool, designed to “empower people to make informed decisions about how they’re spending their time.” So, for example, when a user appears to be spending too much time scrolling (on the infinite scroll designed to keep users scrolling), they are nudged to take a break from the platform. Instagram is designed to extend engagement, and to do so, its features (popularity metrics, frictionless sharing, etc.) exploit natural human tendencies to seek out social affirmation and connection – traits that are particularly prominent in teens. This has led to what Frances Haugen describes as “problematic use”:

“When kids describe their usage of Instagram, Facebook’s own research describes it as an addict’s narrative. The kids say, ‘This makes me unhappy. I feel like I don’t have the ability to control my usage of it. And I feel that if I left, I’d be ostracized.’”

At the Senate Subcommittee hearing, Mosseri rejected Senator Blumenthal’s claim that Instagram is addictive. Perhaps he never got the brief from Facebook’s founding president, Sean Parker, who in a 2017 interview said:

“The thought process that went into building these applications, Facebook being the first of them, was all about, ‘how do we consume as much of your time and conscious attention as possible?’ That means that we need to sort of give you a little dopamine hit every once in a while, because someone liked or commented on a photo or a post… that’s going to get you to contribute more content, and that’s going to get you more likes and comments. It’s a social-validation feedback loop… exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology.”

Instagram suggesting to teens that they take a break from a platform that is designed to harness their attention is like asking an alcoholic at the bar if they would consider herbal tea.

“Parents know best”

Instagram will be introducing its first set of parental controls in March next year, including the ability for parents and guardians to see how much time their children spend on the service and to set time limits. This raises two concerns. The first is around the efficacy of time caps as a safety measure. The tenor of the ‘screen time’ debate has evolved over the last few years, from moral panic about kids’ eyes turning square to concern about what those screens are actually showing. Research has shown that the negative effects of screen-based activities on children depend as much on the quality of those activities as on their quantity.

Limiting the amount of time a child spends using a screen or engaging with a particular service does not address the risks created by the way the service is designed or how it engages with the child; it merely limits their exposure. Importantly, these new tools do not give parents or guardians the ability to alter the quality of the time their children spend on Instagram – for example, by limiting how much advertising they are exposed to, adapting the design features that feed social comparison, or influencing the type of content they are shown.

The second issue is that parental controls require a degree of adult involvement and a level of ‘digital literacy.’ Not all children have active or engaged parents, and many adults do not feel they have the requisite knowledge to use tools designed to help them oversee their child’s online use. There is also a concern that such tools will introduce a level of parental surveillance that may be inappropriate for older children, or cause tension within families.

Age assurance

To better understand the real age of Instagram’s users, Mosseri described how they are “building classifiers which try to predict age.” These classifiers would draw on data from users’ activity, such as the ages of other people in their network or their comments on birthday posts. There are obvious problems with this approach. Firstly, an underage user would need to already be using the service for it to be able to identify them as underage. Secondly, it could lead to data-hungry services like Instagram justifying even greater data collection under the guise of age assurance.
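
To see both problems at once, here is a hypothetical sketch of what classifiers built on activity signals could look like in principle. Every feature and threshold below is invented for illustration – none of it comes from Meta – but it makes the chicken-and-egg issue plain: the signals only exist once a child is already active on the service, and every one of them is another piece of data to be collected.

```python
# A deliberately simple, hypothetical sketch of signal-based age estimation.
# The features and thresholds here are invented for illustration;
# they are not Meta's classifiers.
from dataclasses import dataclass, field
from statistics import median

@dataclass
class AccountActivity:
    stated_age: int                                              # age claimed at sign-up
    network_ages: list[int] = field(default_factory=list)       # known ages of connections
    birthday_comments: list[str] = field(default_factory=list)  # e.g. "Happy 14th!"

def likely_underage(activity: AccountActivity) -> bool:
    # If the account already states an age under 18, there is nothing to infer.
    if activity.stated_age < 18:
        return True
    signals = []
    if activity.network_ages:
        # Signal 1: most of the account's connections are themselves under 18.
        signals.append(median(activity.network_ages) < 18)
    # Signal 2: birthday comments on the account mention a teenage age.
    teen_mentions = [c for c in activity.birthday_comments
                     if any(f"{n}th" in c.lower() for n in range(13, 18))]
    signals.append(bool(teen_mentions))
    # The catch the article points out: every signal above only exists once the
    # account is already in use, and each one requires collecting more data.
    return any(signals)

print(likely_underage(AccountActivity(
    stated_age=21,
    network_ages=[14, 15, 15, 16],
    birthday_comments=["Happy 14th bestie!!"],
)))  # -> True
```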

Mosseri said, “It’s difficult for companies like ours to verify age for those so young they don’t yet have an ID.” You would be forgiven for finding this difficult to believe, given Instagram’s parent company is developing an immersive and embodied internet that blurs augmented, virtual and physical reality – or, as it’s fondly known, the metaverse. The technology for privacy-preserving age assurance already exists and is being developed at speed by the safety tech sector, whether through anonymised age tokens, capacity testing, or facial or voice analysis. It is disingenuous for Instagram to cast age assurance as a technical challenge when it is well within their capabilities to introduce privacy-preserving age assurance now.

Targeted advertising

When asked by Senator Markey if he would support legislation to ban targeted advertising to children, Mosseri did not give a definitive answer. Instead, he toed the party line, claiming to support the limits on targeting options that Meta has announced it will introduce, under which ads shown to teens can be targeted only by age, gender and location, and to uphold existing bans on ads for weight-loss products, dating apps and other age-restricted goods and services. Later in the hearing, when asked if Instagram’s own machine-learning ad delivery system targets children using factors other than age, gender and location, he indicated that the platform does not use off-platform data. But he did reveal that it uses in-app (‘behavioural’) data from users to determine which ads are shown to children. So it turns out the data being gathered is not limited to age, gender and location after all. Not quite the step away from behavioural advertising that the announcement might have us believe.

“High standards and universal protections”

Mosseri also called for the establishment of an industry body to set standards for user safety – to determine best practices for how to verify age, how to build age-appropriate experiences and how to build parental controls – with input from policymakers and experts in the field of child safety. Well, happily, here’s one we made earlier:

5Rights Foundation and the Institute of Electrical and Electronics Engineers Standards Association (IEEE SA) have developed IEEE 2089-2021, the Standard for an Age Appropriate Digital Services Framework, which sets out practical steps that companies like Instagram can follow to design digital products and services that are age appropriate. Perhaps Mosseri would like to make Instagram ‘even safer’ by following its guidance.

All in all, these changes are too little, too late. It’s hard to believe the likes of Mosseri when they claim they are doing all they can to make their services safer, and that they welcome regulation to support those efforts, when time and time again we see them failing their most vulnerable users and deploying their lobbying power to water down proposals for tighter online regulation. This was certainly the feeling among the Senators, who left the hearing with a clear instruction for Mosseri: “Have some empathy. Take some responsibility.”