UK Technology Firms and Child Protection Agencies to Examine AI's Capability to Create Exploitation Content

Tech firms and child safety organizations will receive authority to evaluate whether AI systems can produce child exploitation material under new British legislation.

Significant Increase in AI-Generated Harmful Material

The declaration coincided with revelations from a protection watchdog showing that cases of AI-generated child sexual abuse material have increased dramatically in the past year, growing from 199 in 2024 to 426 in 2025.

Updated Legal Framework

Under the changes, designated AI developers and child protection organizations will be permitted to inspect AI models – the underlying systems behind chatbots and image-generation tools – and verify that they have sufficient protective measures to prevent them from producing depictions of child exploitation.

"This is ultimately about preventing abuse before it happens," declared the minister for AI and online safety, adding: "Specialists, under strict protocols, can now identify these risks in AI models early."

Addressing Regulatory Obstacles

The changes address a legal obstacle: because creating and possessing CSAM is illegal, AI developers and other parties could not generate such content even as part of a safety-testing process. Previously, authorities had to wait until AI-generated CSAM was uploaded online before acting on it.

The law is aimed at averting that problem by enabling experts to halt the creation of such images at their source.

Legislative Framework

The changes are being introduced by the government as modifications to the crime and policing bill, which is also establishing a prohibition on owning, creating or distributing AI systems developed to create exploitative content.

Real-World Impact

Recently, the minister toured the London base of Childline and listened to a simulated call to counsellors involving a report of AI-based exploitation. The scenario portrayed a teenager seeking help after being blackmailed with an explicit deepfake of himself, created using AI.

"When I learn about young people experiencing extortion online, it fills me with intense anger and causes justified concern amongst parents," he said.

Concerning Data

A leading online safety foundation stated that instances of AI-generated abuse content – such as online pages that may include multiple images – had significantly increased so far this year.

Cases of the most severe content – the gravest form of exploitation – increased from 2,621 visual files to 3,086.

  • Female children were predominantly victimized, accounting for 94% of prohibited AI images in 2025
  • Depictions of newborns to toddlers increased from five in 2024 to 92 in 2025

Sector Reaction

The legislative amendment could "constitute a crucial step to ensure AI products are secure before they are released," stated the head of the online safety organization.

"Artificial intelligence systems have made it so survivors can be targeted repeatedly with just a few clicks, giving offenders the ability to create possibly endless amounts of advanced, photorealistic child sexual abuse material," she added. "Content which additionally exploits victims' trauma, and makes young people, particularly girls, less safe both online and offline."

Support Session Information

Childline also released data from counselling sessions in which AI was referenced. AI-related risks discussed in the sessions include:

  • Employing AI to rate weight, physique and appearance
  • Chatbots dissuading young people from talking to trusted guardians about harm
  • Being bullied online with AI-generated content
  • Online blackmail using AI-manipulated images

Between April and September this year, the helpline delivered 367 support sessions in which AI, chatbots and associated topics were mentioned, significantly more than in the same period last year.

Half of the references to AI in the 2025 sessions were connected with mental health and wellbeing, including the use of chatbots for support and AI therapy applications.

Cassandra Morales

A seasoned business consultant and tech enthusiast with over a decade of experience in digital transformation.