British Technology Companies and Child Safety Officials to Examine AI's Capability to Generate Abuse Images
Tech firms and child protection organizations will receive authority to evaluate whether AI systems can produce child exploitation material under new UK laws.
Significant Increase in AI-Generated Harmful Content
The announcement coincided with findings from a protection watchdog showing that cases of AI-generated child sexual abuse material have more than doubled in the last twelve months, rising from 199 in 2024 to 426 in 2025.
New Regulatory Framework
Under the changes, the government will allow approved AI developers and child safety organizations to inspect AI models – the underlying systems behind conversational AI and image-generation tools – to ensure they have adequate safeguards to prevent them from creating images of child sexual abuse.
"This is ultimately about preventing abuse before it occurs," declared Kanishka Narayan, adding: "Experts, under rigorous protocols, can now identify the risk in AI systems promptly."
Tackling Regulatory Obstacles
The amendments have been introduced because it is illegal to produce and possess CSAM, meaning that AI developers and other parties cannot create such images as part of an evaluation process. Previously, officials had to wait until AI-generated CSAM was uploaded online before addressing it.
This legislation is designed to avert that problem by helping to stop the production of those images at source.
Legislative Structure
The amendments are being added by the government as modifications to the criminal justice legislation, which is also implementing a prohibition on possessing, creating or sharing AI models developed to generate child sexual abuse material.
Real-World Consequences
This week, the official toured the London base of Childline and listened to a simulated call to counsellors involving an account of AI-based abuse. The interaction depicted a teenage boy seeking help after facing extortion over a sexualised AI-generated image of himself.
"When I hear about children facing extortion online, it is a cause of extreme anger in me and rightful anger amongst parents," he stated.
Concerning Data
A leading internet monitoring organization stated that instances of AI-generated abuse content – such as webpages that may include multiple images – had more than doubled so far this year.
Instances of the most severe material – the most serious category of exploitation imagery – rose from 2,621 images or videos to 3,086.
- Female children were overwhelmingly victimized, accounting for 94% of prohibited AI depictions in 2025
- Depictions of newborns to two-year-olds rose from five in 2024 to 92 in 2025
Sector Reaction
The law change could "constitute a vital step to ensure AI products are secure before they are released," commented the chief executive of the online safety organization.
"Artificial intelligence systems have made it possible for victims to be victimised all over again with just a few clicks, giving offenders the ability to create potentially limitless amounts of sophisticated, photorealistic child sexual abuse material," she continued. "Material which further exploits victims' suffering, and makes young people, especially female children, less safe both online and offline."
Support Interaction Data
The children's helpline also released data from counselling sessions in which AI was mentioned. AI-related harms discussed in the sessions include:
- Using AI to assess weight, body shape and appearance
- AI assistants dissuading children from consulting trusted guardians about harm
- Facing harassment online with AI-generated content
- Online blackmail using AI-faked images
Between April and September this year, Childline delivered 367 support sessions in which AI, chatbots and related topics were mentioned – significantly more than in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy applications.