British Tech Companies and Child Protection Agencies to Test AI's Ability to Generate Exploitation Content

Technology companies and child protection organizations will be granted authority to evaluate whether AI systems can produce child exploitation material under recently introduced British laws.

Substantial Increase in AI-Generated Harmful Content

The declaration coincided with revelations from a protection monitoring body showing that reports of AI-generated child sexual abuse material have increased dramatically in the last twelve months, growing from 199 in 2024 to 426 in 2025.

Updated Regulatory Framework

Under the amendments, the government will permit designated AI companies and child protection groups to inspect AI systems – the foundational technology for conversational AI and image generators – and verify they have adequate protective measures to stop them from producing depictions of child sexual abuse.

"Fundamentally, this is about stopping exploitation before it happens," stated Kanishka Narayan, adding: "Experts, under rigorous conditions, can now detect the danger in AI models promptly."

Addressing Legal Obstacles

The changes have been introduced because it is illegal to produce and possess CSAM, meaning that AI developers and other parties cannot create such content as part of an evaluation regime. Until now, authorities had to wait until AI-generated CSAM was uploaded online before dealing with it.

This law is aimed at preventing that issue by helping to halt the creation of those images at source.

Legal Structure

The amendments are being added by the authorities as revisions to the crime and policing bill, which is also establishing a prohibition on possessing, creating or distributing AI systems developed to generate child sexual abuse material.

Practical Consequences

This week, the minister toured the London headquarters of Childline and listened to a simulated call to counsellors featuring a report of AI-based exploitation. The call depicted a teenager requesting help after being blackmailed with an explicit deepfake of themselves, created using AI.

"When I learn about young people facing extortion online, it is a source of intense frustration for me and of rightful concern for families," he stated.

Concerning Data

A prominent online safety organization stated that cases of AI-generated exploitation material – such as online pages that may contain numerous files – had significantly increased so far this year.

Instances of the most severe content – the gravest form of exploitation – increased from 2,621 images or videos to 3,086.

  • Female children were overwhelmingly targeted, accounting for 94% of prohibited AI depictions in 2025
  • Portrayals of newborns to toddlers rose from five in 2024 to 92 in 2025

Industry Response

The legislative amendment could "constitute a vital step to guarantee AI tools are secure before they are launched," commented the head of the internet monitoring organization.

"Artificial intelligence systems have made it possible for survivors to be targeted all over again with just a few simple actions, giving criminals the capability to create potentially limitless quantities of sophisticated, lifelike child sexual abuse material," she continued. "Material which additionally exploits survivors' trauma, and renders young people, especially girls, more vulnerable online and offline."

Counseling Session Information

The children's helpline also published details of counselling sessions in which AI was referenced. AI-related harms discussed in the conversations include:

  • Employing AI to rate body size and appearance
  • AI assistants dissuading children from talking to safe adults about abuse
  • Being bullied online with AI-generated content
  • Digital extortion using AI-manipulated images

Between April and September this year, Childline conducted 367 counselling sessions in which AI, chatbots and related topics were discussed, significantly more than in the same period last year.

Half of the mentions of AI in the 2025 sessions were connected with mental health and wellbeing, including using chatbots for support and AI therapy apps.

David Oconnell