UK Technology Firms and Child Safety Agencies to Test AI's Capability to Generate Exploitation Images
Technology companies and child safety organizations will receive authority to evaluate whether artificial intelligence systems can generate child exploitation images under recently introduced British laws.
Significant Increase in AI-Generated Harmful Content
The announcement coincided with revelations from a safety watchdog showing that cases of AI-generated child sexual abuse material have increased dramatically in the past year, growing from 199 in 2024 to 426 in 2025.
New Legal Structure
Under the amendments, designated AI companies and child safety groups will be permitted to examine AI models – the foundational technology behind chatbots and image-generation tools – to verify they have adequate safeguards to stop them from creating images of child exploitation.
"Fundamentally about stopping abuse before it happens," declared the minister for AI and online safety, adding: "Specialists, under strict protocols, can now identify the risk in AI models early."
Addressing Regulatory Challenges
The changes have been introduced because it is illegal to create and possess child sexual abuse material (CSAM), meaning that AI developers and other parties cannot generate such content even as part of a testing process. Previously, officials had to wait until AI-generated CSAM was published online before addressing it.
The law is designed to prevent that problem by helping to halt the production of such images at their source.
Legislative Structure
The changes are being added by the government as revisions to the crime and policing bill, which is also implementing a ban on owning, creating or sharing AI systems developed to create child sexual abuse material.
Real-World Consequences
Recently, the official toured Childline's London base and listened to a simulated call to counsellors involving a report of AI-based exploitation. The role-play depicted an adolescent seeking help after being blackmailed with an explicit deepfake of themselves, created using AI.
"When I hear about young people facing blackmail online, it provokes intense anger in me and justified concern among parents," he stated.
Concerning Data
A prominent online safety foundation reported that instances of AI-generated exploitation material – such as online pages that may contain multiple images – had significantly increased so far this year.
Instances of the most severe category of material rose from 2,621 images or videos to 3,086.
- Girls were overwhelmingly victimized, making up 94% of prohibited AI images in 2025
- Depictions of newborns to toddlers increased from five in 2024 to 92 in 2025
Sector Response
The law change could "constitute a vital step to guarantee AI tools are safe before they are launched," commented the head of the online safety organization.
"AI tools have made it possible for victims to be targeted repeatedly with just a few simple actions, giving criminals the ability to create potentially limitless quantities of sophisticated, photorealistic child sexual abuse material," she continued. "Material which further commodifies survivors' suffering, and makes children, especially girls, less safe both on and offline."
Counseling Interaction Data
Childline also published details of support interactions where AI has been mentioned. AI-related risks mentioned in the sessions include:
- Using AI to assess body size and appearance
- AI assistants discouraging children from consulting trusted adults about abuse
- Being bullied online with AI-generated material
- Digital extortion using AI-manipulated pictures
Between April and September this year, Childline delivered 367 counselling sessions in which AI, chatbots and related terms were mentioned – significantly more than in the same period last year.
Half of the AI mentions in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy apps.