British Tech Firms and Child Protection Officials to Test AI's Capability to Create Exploitation Images
Under recently introduced UK legislation, tech firms and child safety organizations will be granted the authority to evaluate whether AI tools can generate child exploitation images.
Significant Increase in AI-Generated Illegal Material
The announcement coincided with findings from a safety monitoring body showing that cases of AI-generated CSAM have increased dramatically over the past twelve months, from 199 in 2024 to 426 in 2025.
New Regulatory Structure
Under the amendments, the authorities will permit designated AI companies and child protection groups to examine AI models – the underlying technology behind conversational AI and image generators – to ensure they have sufficient safeguards against creating images of child exploitation.
"Ultimately about stopping exploitation before it occurs," declared the minister for AI and online safety, adding: "Specialists, under strict protocols, can now detect the risk in AI models early."
Tackling Regulatory Challenges
The changes address a legal obstacle: because producing and possessing CSAM is against the law, AI developers and other parties have been unable to generate such content even as part of a testing process. Until now, officials could not act until AI-generated CSAM had already been uploaded online.
The new law aims to prevent that by helping to stop the creation of these images at source.
Legal Structure
The changes are being introduced by the authorities as amendments to the crime and policing bill, which also creates a prohibition on possessing, creating or distributing AI systems designed to generate exploitative content.
Practical Impact
Recently, the official toured the London headquarters of Childline and listened to a mock-up call to advisers featuring an account of AI-based abuse. The interaction portrayed a teenager seeking help after being blackmailed with an explicit AI-generated image of himself.
"When I learn about young people facing blackmail online, it is a source of extreme frustration in me and rightful anger amongst families," he stated.
Alarming Data
A prominent online safety organization said that instances of AI-generated exploitation content – each of which can refer to a web page containing multiple images – had risen sharply so far this year.
- Cases of category A material – the gravest form of exploitation – increased from 2,621 images or videos to 3,086
- Female children were predominantly targeted, making up 94% of prohibited AI depictions in 2025
- Portrayals of newborns to two-year-olds rose from five in 2024 to 92 in 2025
Sector Response
The legislative amendment could "represent a crucial step to guarantee AI tools are safe before they are launched," commented the chief executive of the online safety organization.
"Artificial intelligence systems have made it so survivors can be targeted all over again with just a simple actions, giving criminals the capability to make potentially endless quantities of advanced, lifelike exploitative content," she added. "Material which additionally commodifies victims' suffering, and makes children, especially girls, less safe both online and offline."
Counseling Session Data
The children's helpline also released details of counselling sessions where AI has been mentioned. AI-related risks discussed in the sessions include:
- Using AI to rate their weight, body and appearance
- Chatbots dissuading young people from talking to trusted adults about harm
- Being harassed online with AI-generated material
- Online extortion using AI-faked images
Between April and September this year, Childline conducted 367 support sessions where AI, chatbots and associated topics were mentioned, four times as many as in the equivalent timeframe last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.