British Technology Companies and Child Protection Agencies to Test AI's Ability to Generate Exploitation Images
Technology companies and child protection organizations will receive permission to assess whether artificial intelligence systems can generate child exploitation material under new British laws.
Significant Increase in AI-Generated Harmful Content
The announcement came as a protection monitoring body published findings showing that reports of AI-generated CSAM have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
New Legal Structure
Under the changes, the government will allow approved AI companies and child safety groups to examine AI models – the foundational technology behind conversational AI and image generators – to ensure they have sufficient safeguards to stop them from producing depictions of child sexual abuse.
"Fundamentally about preventing abuse before it happens," stated Kanishka Narayan, noting: "Experts, under strict conditions, can now detect the risk in AI systems early."
Addressing Regulatory Challenges
The changes have been introduced because producing and possessing CSAM is against the law, meaning AI developers and others have been unable to generate such images as part of an evaluation regime. Until now, authorities had to wait until AI-generated CSAM was published online before acting on it.
The law is designed to avert that problem by helping to stop the production of such images at source.
Legislative Amendments
The government is introducing the changes as amendments to the crime and policing bill, which also establishes a ban on possessing, creating or distributing AI models designed to generate child sexual abuse material.
Practical Impact
Recently, the minister visited the London headquarters of a children's helpline, where he listened to a mock-up call to counsellors involving a report of AI-based abuse. The call depicted a teenager seeking help after being blackmailed with a sexualised deepfake of themselves created using AI.
"When I learn about children facing blackmail online, it is a cause of intense frustration in me and rightful concern amongst parents," he said.
Concerning Data
A prominent online safety organization reported that instances of AI-generated exploitation material – where a single instance can be a web page containing multiple files – had more than doubled so far this year.
Instances of category A content – the most serious form of exploitation material – rose from 2,621 images and videos to 3,086.
- Female children were overwhelmingly victimized, making up 94% of illegal AI images in 2025
- Depictions of newborns to two-year-olds rose from five in 2024 to 92 in 2025
Industry Reaction
The legislative amendment could "constitute a vital step to guarantee AI tools are secure before they are launched," stated the chief executive of the online safety organization.
"AI tools have made it so victims can be targeted all over again with just a few clicks, giving criminals the ability to create potentially limitless quantities of advanced, photorealistic exploitative content," she added. "Material which additionally exploits victims' suffering, and makes young people, especially girls, more vulnerable both online and offline."
Counselling Session Data
The children's helpline also released details of counselling interactions where AI has been mentioned. AI-related harms discussed in the sessions include:
- Using AI to evaluate body size, physique and looks
- Chatbots discouraging children from talking to trusted adults about abuse
- Facing harassment online with AI-generated material
- Digital blackmail using AI-faked images
Between April and September this year, Childline delivered 367 counselling sessions in which AI, conversational AI and associated topics were mentioned – significantly more than in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.