British Technology Companies and Child Protection Agencies to Test AI's Capability to Generate Exploitation Images
Tech firms and child safety agencies will receive authority to assess whether AI tools can produce child exploitation images under recently introduced British legislation.
Substantial Increase in AI-Generated Harmful Material
The announcement coincided with figures from a safety watchdog showing that reports of AI-generated child sexual abuse material have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.
Updated Regulatory Framework
Under the amendments, the government will permit approved AI companies and child safety organizations to inspect AI models – the foundational technology behind conversational and image-generating AI tools – and check that they have sufficient safeguards to prevent them from producing depictions of child sexual abuse.
"Fundamentally about preventing exploitation before it occurs," declared Kanishka Narayan, adding: "Specialists, under rigorous conditions, can now detect the danger in AI systems early."
Addressing Regulatory Challenges
The changes have been introduced because creating and possessing CSAM is illegal, meaning that AI developers and other parties could not generate such images as part of an evaluation process. Previously, authorities had to wait until AI-generated CSAM was published online before addressing it.
This legislation is designed to prevent that problem by making it possible to halt the creation of such images at source.
Legal Framework
The amendments are being introduced by the government as revisions to the crime and policing bill, which also establishes a prohibition on possessing, creating or distributing AI models designed to generate child sexual abuse material.
Practical Consequences
Recently, the official toured the London headquarters of a children's helpline and heard a simulated call to counsellors featuring a report of AI-based exploitation. The interaction portrayed an adolescent requesting help after being blackmailed with a sexualised deepfake of himself, constructed using AI.
"When I hear about children facing blackmail online, it is a cause of extreme frustration in me and rightful concern amongst families," he stated.
Concerning Statistics
A prominent internet monitoring organization stated that cases of AI-generated exploitation content – such as webpages that can each contain numerous files – had increased significantly so far this year.
Cases involving the most severe category of material increased from 2,621 images or videos to 3,086.
- Girls were predominantly targeted, accounting for 94% of illegal AI depictions in 2025
- Depictions of infants up to two years old increased from five in 2024 to 92 in 2025
Sector Response
The legislative amendment could "represent a vital step to ensure AI products are secure before they are launched," stated the chief executive of the internet monitoring organization.
"AI tools have made it so victims can be victimised all over again with just a simple actions, providing offenders the ability to make possibly limitless quantities of sophisticated, photorealistic exploitative content," she continued. "Material which additionally exploits victims' trauma, and renders young people, especially girls, less safe on and off line."
Counselling Interaction Data
Childline also published details of counselling sessions in which AI was mentioned. AI-related risks raised in the conversations include:
- Using AI to rate body size and appearance
- Chatbots dissuading young people from talking to trusted adults about harm
- Being bullied online with AI-generated content
- Digital blackmail using AI-faked images
Between April and September this year, Childline conducted 367 counselling interactions in which AI, chatbots and related topics were mentioned, significantly more than in the same period last year.
Half of the references to AI in the 2025 interactions concerned mental health and wellbeing, including the use of chatbots for support and AI therapeutic applications.