UK Tech Firms and Child Protection Agencies to Examine AI's Capability to Create Exploitation Images

Technology companies and child safety organizations will be granted authority to evaluate whether AI tools can produce child exploitation material under recently introduced UK laws.

Substantial Rise in AI-Generated Illegal Content

The announcement came alongside figures from a safety monitoring body showing that reports of AI-generated CSAM have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.

Updated Legal Framework

Under the changes, the government will permit approved AI companies and child safety groups to examine AI models – the underlying systems behind chatbots and image generators – and ensure they have adequate safeguards in place to prevent them from creating depictions of child exploitation.

"This is ultimately about preventing abuse before it happens," stated Kanishka Narayan, adding: "Under strict conditions, experts can now detect risks in AI models early."

Tackling Regulatory Obstacles

The amendments have been introduced because creating and possessing CSAM is against the law, meaning that AI developers and other parties cannot generate such images as part of a testing regime. Until now, authorities have had to wait until AI-generated CSAM was published online before acting on it.

This law aims to avert that problem by helping to stop the creation of such material at its source.

Legal Structure

The government is introducing the amendments as modifications to criminal justice legislation, which also establishes a ban on possessing, producing or distributing AI models designed to create child sexual abuse material.

Real-World Consequences

This week, the official toured the London headquarters of Childline and heard a mock-up of a call to counsellors involving an account of AI-based abuse. The interaction depicted an adolescent seeking help after being blackmailed with an explicit AI-generated image of themselves.

"When I hear about children experiencing blackmail online, it causes intense frustration in me and rightful concern amongst parents," he stated.

Concerning Data

A leading internet monitoring foundation stated that instances of AI-generated abuse material – such as webpages that may contain numerous files – had more than doubled so far this year.

Instances of the most severe category of content – the most serious form of exploitation – increased from 2,621 images or videos to 3,086.

  • Female children were overwhelmingly targeted, accounting for 94% of prohibited AI depictions in 2025
  • Depictions of newborns to two-year-olds rose from five in 2024 to 92 in 2025

Industry Reaction

The legislative amendment could "represent a crucial step to ensure AI tools are safe before they are launched," commented the head of the internet monitoring organization.

"AI tools have made it possible for victims to be victimised all over again with just a few clicks, giving criminals the ability to create potentially limitless quantities of sophisticated, photorealistic exploitative content," she added. "Content which further commodifies victims' suffering, and makes children, particularly girls, less safe both online and offline."

Support Session Information

The children's helpline also released details of support sessions where AI has been referenced. AI-related harms discussed in the conversations include:

  • Using AI to evaluate weight, body and appearance
  • AI assistants discouraging young people from talking to trusted adults about abuse
  • Facing harassment online with AI-generated material
  • Digital blackmail using AI-faked images

Between April and September this year, the helpline delivered 367 counselling sessions in which AI, conversational AI and related topics were discussed, significantly more than in the equivalent period last year.

Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including using AI assistants for support and AI therapy applications.

Matthew Kelly