The UK government has moved decisively to criminalise the creation of non-consensual intimate images using AI, bringing forward provisions of the Data (Use and Access) Act (DUAA) that were previously expected to come into force more gradually. From this week, creating or requesting the creation of AI-generated intimate images without consent is a criminal offence, alongside existing offences relating to sharing or threatening to share such material.
While much of the public attention has focused on X and its AI tool Grok, this is not a “big tech only” issue. The changes have direct implications for UK organisations of all sizes that develop, deploy, host, integrate or allow the use of AI tools, including internal systems, customer-facing products, workplace software and third-party AI embedded into business processes.
What has changed and why it matters
The government’s response follows widespread concern about the use of generative AI to create sexualised deepfake images of women and children, including content described by ministers as violent, degrading and abusive. Ofcom has launched a formal investigation into X’s compliance with the Online Safety Act, and ministers have been explicit that enforcement powers, including fines of up to 10% of global turnover and, in extreme cases, blocking access in the UK, are all on the table.
Significantly, the new offence targets creation itself, not just distribution. This closes a legal gap that previously left AI-generated intimate images of adults in a grey area if they were not shared.
In parallel, the government has confirmed its intention to criminalise nudification apps through the Crime and Policing Bill, making it illegal for companies to supply tools designed to generate non-consensual intimate images. This is an explicit shift towards regulating AI at source, not just moderating outcomes after harm has occurred.
How does this affect my UK organisation?
These changes sit at the intersection of several regulatory regimes that already apply to UK organisations:
- Online Safety Act: Platforms and services must proactively prevent illegal content, not simply respond after the fact.
- Data protection law (UK GDPR) and DUAA: Using personal data to train, prompt or generate AI outputs, including images, engages controller obligations and now also attracts specific criminal liability where non-consensual intimate images are created or requested using AI.
- Criminal law: Individuals and companies can now be directly exposed where AI tools facilitate or enable criminal conduct.
- Corporate governance and risk: Failure to assess foreseeable misuse of AI systems can create regulatory, reputational and litigation risk.
This means organisations cannot rely on the fact that:
- the AI tool is supplied by a third party
- the use is “experimental” or informal
- the organisation did not intend harmful use
- safeguards are contractual rather than technical or operational
If an organisation deploys AI in a way that foreseeably enables illegal content creation, regulators will expect evidence that risks were identified, mitigated and actively managed.
What should my organisation be doing now?
UK organisations using AI should urgently review their position across four areas:
- Map AI use, including informal and embedded tools
Many organisations underestimate how widely AI is used across teams. Internal tools, plugins, image generators, chatbots and “productivity” AI may all pose risk if they can be misused to create or manipulate images of real people.
- Assess misuse risk, not just intended use
Risk assessments should consider foreseeable abuse cases, including harassment, deepfakes and non-consensual imagery, particularly where tools process images, video or personal data.
- Update governance, policies and controls
Organisations should:
- set clear rules on acceptable AI use
- restrict or technically block high-risk functionality
- ensure moderation, filtering and logging are in place where relevant (see the sketch after this list)
- train staff on prohibited uses and legal consequences
- Re-examine supplier arrangements but don’t rely on them
While supplier due diligence and contractual protections matter, they do not transfer legal responsibility. Organisations must be able to demonstrate their own compliance with UK law, regardless of who built the tool.
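To make “restrict or technically block high-risk functionality” and “moderation, filtering and logging” more concrete, here is a minimal Python sketch of a guarded image-generation entry point. Everything in it is illustrative and assumed rather than drawn from any particular product: the generate_image placeholder, the pattern blocklist, the user_id parameter and the logger stand in for whatever model, moderation service and audit pipeline an organisation actually uses. A real deployment would pair simple pattern matching with a trained moderation model, human escalation and retention rules agreed with legal and data protection teams.

```python
import logging
import re
from datetime import datetime, timezone

# Hypothetical audit logger; in practice this would feed the organisation's
# monitoring or SIEM tooling rather than standard output.
audit_log = logging.getLogger("ai_image_audit")
logging.basicConfig(level=logging.INFO)

# Illustrative blocklist only; real controls would not rely on keywords alone.
BLOCKED_PATTERNS = [
    r"\bnudify\b",
    r"\bundress\b",
    r"\bremove (her|his|their) clothes\b",
    r"\bdeepfake\b.*\bnude\b",
]


def is_prohibited(prompt: str) -> bool:
    """Return True if the prompt matches any known high-risk pattern."""
    return any(re.search(p, prompt, re.IGNORECASE) for p in BLOCKED_PATTERNS)


def generate_image(prompt: str) -> str:
    """Stand-in for a call to whichever image model the organisation deploys."""
    return f"<image generated for: {prompt}>"


def request_image(prompt: str, user_id: str) -> str:
    """Gate every image-generation request: filter, log, then (maybe) generate."""
    timestamp = datetime.now(timezone.utc).isoformat()
    if is_prohibited(prompt):
        # Block and record the refusal so misuse attempts are auditable.
        audit_log.warning("BLOCKED user=%s time=%s prompt=%r", user_id, timestamp, prompt)
        raise PermissionError("Request refused: prohibited content category")
    # Log permitted requests as well, to evidence active monitoring.
    audit_log.info("ALLOWED user=%s time=%s prompt=%r", user_id, timestamp, prompt)
    return generate_image(prompt)


if __name__ == "__main__":
    print(request_image("a watercolour of the office dog", user_id="u123"))
```

The design point the sketch illustrates is that the control sits in front of the model call, refuses rather than silently rewrites high-risk requests, and leaves an audit trail for both blocked and permitted use, which is the kind of evidence regulators are likely to ask for.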
A wider signal on AI regulation in the UK
Beyond the immediate issue of deepfakes, this move sends a broader message that the UK is willing to act quickly and forcefully where AI causes real-world harm, particularly to women and children. Innovation remains a priority, but not at the expense of safety, dignity or the rule of law.
AI governance can no longer be treated as a future concern or a narrow technical issue. It is now a live legal, criminal and reputational risk and one that boards, compliance teams and senior leadership need to own.
The focus is no longer just on what AI can do, but on what it must not be allowed to do.