British Tech Firms and Child Safety Agencies to Test AI's Ability to Generate Abuse Images

Under recently introduced British laws, tech firms and child protection agencies will be granted permission to assess whether AI tools can produce child exploitation material.

Substantial Rise in AI-Generated Harmful Material

The announcement came as a safety watchdog revealed that cases of AI-generated child sexual abuse material (CSAM) have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.

New Regulatory Structure

Under the changes, the authorities will allow designated AI companies and child protection organizations to inspect AI models, the underlying systems behind chatbots and image generators, and verify that they have adequate safeguards in place to stop them from producing depictions of child sexual abuse.

"Ultimately about preventing abuse before it occurs," declared Kanishka Narayan, adding: "Specialists, under rigorous protocols, can now identify the risk in AI models early."

Tackling Regulatory Obstacles

The changes have been introduced because creating and possessing CSAM is against the law, which meant that AI developers and other parties could not generate such images as part of an evaluation regime. Previously, authorities had to wait until AI-generated CSAM was published online before dealing with it.

The new law is intended to close that gap by helping to halt the production of such images at the source.

Legislative Structure

The government is introducing the changes as amendments to the Crime and Policing Bill, which also establishes a ban on possessing, creating or distributing AI models designed to generate such exploitative content.

Real-World Consequences

Narayan recently toured the London base of Childline, the children's helpline, where he listened to a mock-up of a call to counsellors involving a report of AI-related abuse. The call depicted a teenager seeking help after being blackmailed with an explicit deepfake of himself created using AI.

"When I hear about children experiencing blackmail online, it is a cause of extreme anger in me and justified concern amongst families," he said.

Concerning Statistics

A prominent online safety foundation stated that instances of AI-generated abuse material, each of which can be a web page containing multiple files, had risen significantly so far this year.

Instances of the most severe material, the gravest category of exploitation, rose from 2,621 image and video files to 3,086.

  • Girls were predominantly victimized, appearing in 94% of prohibited AI images in 2025
  • Portrayals of infants and toddlers increased from five in 2024 to 92 in 2025

Industry Reaction

The legislative amendment could "represent a vital step to ensure AI products are secure before they are released," the foundation's chief executive commented.

"Artificial intelligence systems have made it so victims can be victimised repeatedly with just a few clicks, providing offenders the ability to make potentially limitless quantities of advanced, lifelike child sexual abuse material," she added. "Material which further exploits survivors' suffering, and renders children, especially female children, less safe both online and offline."

Support Session Data

Childline also released data from counselling sessions in which AI was mentioned. AI-related harms raised in those conversations include:

  • Using AI tools to assess weight, body image and appearance
  • Chatbots discouraging young people from talking to trusted adults about abuse
  • Online bullying involving AI-generated content
  • Online blackmail using AI-manipulated images

Between April and September this year, Childline delivered 367 counselling sessions in which AI, chatbots and associated topics were mentioned, four times as many as in the same period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and of AI therapy apps.

Eric Ellis

A cybersecurity analyst with over a decade of experience in digital forensics and threat intelligence, passionate about educating others on online safety.