British Technology Firms and Child Protection Agencies to Examine AI's Capability to Create Exploitation Images
Technology companies and child safety organizations will be granted authority to evaluate whether AI tools can generate child exploitation images under new UK legislation.
Substantial Increase in AI-Generated Illegal Material
The announcement came as a safety monitoring body revealed that reports of AI-generated CSAM have risen dramatically in the last twelve months, from 199 in 2024 to 426 in 2025.
Updated Regulatory Framework
Under the amendments, the government will allow approved AI developers and child protection organizations to examine AI models – the underlying systems behind chatbots and image generators – and verify they have adequate safeguards to stop them from creating child exploitation images.
"This is ultimately about preventing abuse before it happens," declared the minister for AI and online safety, adding: "Specialists, under strict conditions, can now detect the danger in AI models promptly."
Addressing Legal Obstacles
The changes have been introduced because producing and possessing CSAM is against the law, meaning that AI developers and others cannot generate such content as part of an evaluation process. Until now, authorities had to wait until AI-generated CSAM was uploaded online before addressing it.
The law aims to prevent that problem by helping to stop the creation of such material at source.
Legislative Structure
The changes are being added by the government as revisions to the crime and policing bill, which is also implementing a prohibition on owning, producing or distributing AI systems designed to generate child sexual abuse material.
Practical Consequences
Recently, the minister visited the London base of a children's helpline, where he heard a mock-up call to advisors featuring an account of AI-based exploitation. The interaction depicted an adolescent seeking help after facing extortion using an explicit AI-generated image of himself.
"When I learn about young people facing blackmail online, it causes extreme anger in me and rightful concern amongst families," he said.
Alarming Statistics
A prominent internet monitoring foundation stated that instances of AI-generated exploitation content – such as web pages that may each contain numerous files – had more than doubled so far this year.
Cases of the most severe material – the gravest form of abuse – rose from 2,621 visual files to 3,086.
- Female children were predominantly targeted, accounting for 94% of prohibited AI depictions in 2025
- Portrayals of infants to two-year-olds rose from five in 2024 to 92 in 2025
Sector Response
The legislative amendment could "constitute a vital step to ensure AI products are safe before they are launched," commented the head of the online safety organization.
"Artificial intelligence systems have made it so survivors can be victimised all over again with just a few clicks, giving offenders the ability to make potentially limitless amounts of advanced, lifelike exploitative content," she added. "Material which additionally exploits survivors' trauma, and makes children, particularly girls, less safe both online and offline."
Counseling Interaction Data
The children's helpline also released details of support sessions in which AI was mentioned. AI-related harms discussed in the conversations included:
- Using AI to rate body size, physique and looks
- AI assistants discouraging young people from talking to trusted guardians about harm
- Being bullied online with AI-generated material
- Online extortion using AI-manipulated images
Between April and September this year, the helpline delivered 367 counselling sessions where AI, conversational AI and associated terms were discussed, four times as many as in the same period last year.
Fifty percent of the AI mentions in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy applications.