British Technology Firms and Child Protection Officials to Examine AI's Ability to Create Exploitation Images

Technology companies and child safety organizations will receive permission to assess whether AI systems can produce child abuse material under recently introduced UK laws.

Substantial Rise in AI-Generated Harmful Content

The announcement came as a protection monitoring body revealed that cases of AI-generated CSAM have increased dramatically in the last twelve months, rising from 199 in 2024 to 426 in 2025.

New Legal Framework

Under the amendments, the authorities will allow designated AI companies and child safety groups to inspect AI systems – the underlying models behind conversational AI and image generators – and verify they have adequate safeguards to stop them from producing images of child exploitation.

Kanishka Narayan said the measures were "ultimately about stopping abuse before it happens," adding: "Specialists, under strict conditions, can now identify the danger in AI models early."

Tackling Legal Challenges

The changes have been introduced because it is against the law to produce and possess CSAM, meaning that AI developers and other parties could not generate such content even as part of an evaluation regime. Previously, officials had to wait until AI-generated CSAM was uploaded online before addressing it.

This law aims to avert that issue by stopping the creation of such material at its source.

Legislative Framework

The changes are being added by the government as revisions to the crime and policing bill, which is also establishing a ban on owning, creating or sharing AI systems designed to generate exploitative content.

Practical Impact

This week, the minister visited the London base of a children's helpline and listened to a simulated call to advisors featuring a report of AI-based abuse. The interaction depicted a teenager seeking help after being blackmailed using a sexualised deepfake of themselves, created using AI.

"When I hear about children facing extortion online, it is a source of intense frustration for me and rightful anger amongst parents," he stated.

Concerning Statistics

A leading online safety foundation reported that instances of AI-generated exploitation content – such as online pages that may contain multiple files – had more than doubled so far this year.

Cases of the most severe material – the gravest form of exploitation – increased from 2,621 images or videos to 3,086.

  • Girls were predominantly targeted, making up 94% of prohibited AI images in 2025
  • Portrayals of infants to toddlers rose from five in 2024 to 92 in 2025

Industry Reaction

The legislative amendment could "represent a crucial step to guarantee AI tools are safe before they are launched," commented the chief executive of the internet monitoring organization.

"AI tools have made it so victims can be targeted repeatedly with just a few clicks, providing criminals the ability to create possibly limitless amounts of sophisticated, photorealistic exploitative content," she continued. "Material which further exploits victims' suffering, and renders children, especially female children, more vulnerable both online and offline."

Counseling Interaction Information

The children's helpline also published details of support sessions in which AI was referenced. AI-related harms discussed in the sessions include:

  • Using AI to evaluate weight, physique and looks
  • AI assistants dissuading children from talking to safe adults about abuse
  • Being bullied online with AI-generated material
  • Online blackmail using AI-manipulated images

Between April and September this year, the helpline conducted 367 counselling sessions in which AI, conversational AI and associated topics were mentioned, significantly more than in the equivalent period last year.

Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including using AI assistants for support and AI therapy applications.

Michael Chavez

Tech enthusiast and mobile industry analyst with a passion for emerging technologies and user experience design.