The UK is once again at the centre of a high-stakes debate over AI and copyright. At issue is a proposal reportedly under consideration by ministers for a new commercial research exception, or CRE, which would let AI developers use copyrighted material to train models without permission during research and development, with licensing required only later, before market entry.
At first glance, it sounds like a compromise: AI companies get room to innovate, and creators get paid at the end. But the reality looks different, and many believe the model is simply not workable.
It’s important to clarify what the law is now, what a CRE would change, and why the proposal has prompted such a strong reaction from creators, publishers and peers in the House of Lords.
What the law is now
Under current UK law, there is already a limited copyright exception for text and data mining, but it applies only to non-commercial research and only where the person has lawful access to the work. The Copyright, Designs and Patents Act 1988 does not create a general right for commercial AI developers to scrape and train on copyrighted material for product development.
That matters because the present framework still treats copyrighted works as something that generally must be licensed for commercial use. In practical terms, if an AI developer wants to train a commercial model on protected books, articles, music, images or scripts, the lawful route is to secure permission before training. As Ed Newton-Rex notes, licensing before copying avoids the worst legal and commercial pitfalls of a CRE.
What a commercial research exception would do
The proposed CRE would expand the current non-commercial exception so that commercial AI developers could use copyrighted works during pre-market research and development without consent or upfront payment, with transparency and licensing requirements only kicking in later, at or around market entry. That is the model described in the News Media Association supplementary evidence to the House of Lords inquiry, and it reflects the broader debate now taking place around AI training and copyright reform in the UK.
Supporters present this as a pro-innovation solution. Critics say it is neither workable nor fair.
That criticism is becoming harder for the government to ignore. The UK government’s 2024 consultation on copyright and AI explored possible legal changes, while a December 2025 government progress statement confirmed it was still assessing responses. More recently, the Financial Times reported that ministers were delaying contentious reforms after a backlash from the creative industries.
Why is the proposal facing such resistance?
The House of Lords Communications and Digital Committee has now warned against weakening copyright protections to favour AI model builders. In its report, the committee said ministers should not undermine the UK’s creative industries for speculative future gains from AI. The report notes that the UK creative industries contributed £124 billion to the economy in 2023 and employed 2.4 million people, compared with an AI sector contribution of £12 billion in 2024 and 86,000 jobs.
That is not an anti-AI position. It is a warning about economic trade-offs. If the UK weakens a copyright regime that underpins real jobs, licensing revenue and export value today, it risks doing so in favour of a policy whose benefits are uncertain and whose enforcement model remains vague.
The Lords committee was particularly sceptical of a text and data mining exception for commercial AI training, especially one with an opt-out structure. It called on the government not to introduce such an exception and instead to strengthen protections for creators, including digital replicas and “in the style of” uses.
Is licensing after training backwards?
The most persuasive criticism of a CRE is also the simplest. If permission only has to be sought after a model has already been trained, both sides face unnecessary risk.
For creators and rights holders, the problem is that their work may already have been copied, ingested and used to build value before they even know about it. At that stage, much of their bargaining power has gone. That is one reason the News Media Association argues a CRE would collapse the emerging licensing market by removing the incentive to pay upfront for lawful access.
For developers, the problem is different: if a model has been trained on millions of copyrighted works and one rights holder later refuses to license, what happens then?
Newton-Rex calls this the “single-dissenter problem”. His argument is that a CRE without compulsory licensing makes commercial release highly uncertain, because even one refusal could leave a trained model legally unusable after millions have already been spent on compute, labour and infrastructure.
As a point of commercial logic, it is hard to dismiss: no serious business wants to invest heavily in a product that may become unreleasable because licensing was left until after the value had already been extracted.
The hidden consequence: pressure for compulsory licensing
This leads to the more politically explosive question. If post-training licensing is too unstable because of the risk of refusals, the only way to make a CRE reliably work may be to force rights holders to license. Newton-Rex argues that this is the unspoken end point of the model.
That is where the proposal becomes even more controversial. Compulsory licensing for AI training would mean creators could be forced to license their work to companies building tools that may compete directly with them. The News Media Association evidence also argues that a CRE could breach international copyright rules, including the Berne Convention’s three-step test, because the exception would be too broad, would conflict with normal exploitation of works, and would unreasonably prejudice rightsholders’ interests.
That legal claim would ultimately be tested by courts, not commentators. But as a policy risk, it is serious enough that the government should be very cautious.
What counts as market entry?
Even if ministers wanted to push ahead, they still face a major practical problem: defining when “market entry” happens.
Is it when a model is launched publicly? When it is made available by API? When it is piloted with a small commercial partner? What if the commercial product is not the model itself, but something built using an interim model or synthetic data generated upstream?
Newton-Rex’s paper argues that all of these create loopholes or enforcement headaches. A CRE either becomes unfair because it misses large parts of the commercial pipeline, or it becomes so broad and complex that enforcement becomes impossible.
The News Media Association submission makes a similar point. It says “market entry” is not a workable boundary in commercial AI because research and commercialisation are iterative, blurred and difficult for rights holders to monitor in real time.
A better route is licence first, train second
The strongest argument against a CRE is not simply that it is unfair. It is that it solves the wrong problem in the wrong order.
If the UK wants a healthy AI market and a healthy creative economy, it should focus on making licensing faster, clearer and more scalable before training begins. That gives developers certainty. It gives creators payment and control. And it avoids years of litigation over whether a model was still in research, had entered the market, or had become tainted by one unlicensed source.
That approach is also more aligned with the current direction of concern from Parliament. The House of Lords committee has effectively warned the government not to trade away a proven creative sector for a speculative AI upside.
The UK does need a durable framework for AI and copyright. But a commercial research exception looks less like a compromise and more like a legal and commercial dead end. The UK should not weaken copyright protections in the hope that the market sorts itself out later. On this issue, clarity before training is better than conflict after it.