UK Tech Companies and Child Safety Agencies to Examine AI's Capability to Create Abuse Content

Technology companies and child safety agencies will be granted permission to assess whether artificial intelligence tools can generate child exploitation images under new UK legislation.

Substantial Rise in AI-Generated Harmful Content

The announcement coincided with figures from a child protection watchdog showing that reports of AI-generated child sexual abuse material have risen dramatically in the past year, from 199 in 2024 to 426 in 2025.

Updated Legal Framework

Under the amendments, designated AI developers and child safety groups will be permitted to examine AI models – the foundational systems behind chatbots and image generators – and verify that they have adequate safeguards to stop them from creating depictions of child sexual abuse.

"This is ultimately about stopping exploitation before it occurs," stated the minister for AI and online safety, adding: "Experts, under rigorous conditions, can now identify the risk in AI models early."

Tackling Legal Obstacles

The amendments were needed because it is against the law to create or possess CSAM, meaning that AI developers and other parties could not generate such images even as part of an evaluation regime. Until now, officials could act only after AI-generated CSAM had been uploaded online.

The law aims to prevent that problem by enabling experts to halt the creation of such material at its source.

Legislative Framework

The changes are being introduced as amendments to the crime and policing bill, which also establishes a prohibition on owning, creating or distributing AI models designed to generate child sexual abuse material.

Practical Consequences

This week, the minister visited the London headquarters of a children's helpline and listened to a mock-up of a call to advisers featuring a report of AI-based abuse. The call portrayed an adolescent seeking help after being blackmailed with a sexualised deepfake of himself, created using AI.

"When I hear about children experiencing extortion online, it causes intense anger in me, and justified anger amongst parents," he said.

Concerning Statistics

A leading internet monitoring foundation said that instances of AI-generated abuse material – a single instance may be a webpage containing multiple files – had more than doubled so far this year.

Instances of the most severe content – the gravest category of exploitation – increased from 2,621 images or videos to 3,086.

  • Girls were predominantly targeted, accounting for 94% of illegal AI depictions in 2025
  • Portrayals of newborns to two-year-olds rose from five in 2024 to 92 in 2025

Industry Reaction

The legislative amendment could "constitute a vital step to guarantee AI products are safe before they are launched," said the head of the online safety organization.

"Artificial intelligence systems have made it possible for survivors to be targeted all over again with just a few clicks, giving criminals the ability to create potentially limitless amounts of sophisticated, photorealistic exploitative content," she added. "Content which further commodifies victims' suffering, and makes children, particularly girls, less safe online and offline."

Support Session Information

The children's helpline also released details of support interactions where AI has been mentioned. AI-related harms discussed in the sessions include:

  • Using AI to evaluate weight, physique and appearance
  • Chatbots discouraging children from consulting trusted adults about abuse
  • Facing harassment online with AI-generated content
  • Digital extortion using AI-faked images

Between April and September this year, the helpline delivered 367 counselling sessions in which AI, chatbots and related terms were discussed – four times as many as in the equivalent period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.

Joseph Willis