AI and Copyright
The UK is deciding whether artificial intelligence companies can use copyrighted material to train their systems. The answer affects every creator who publishes work online. It also determines whether AI developers need permission before using that work.
The debate over who controls creative work
Under current UK law, using large datasets of copyrighted material to train AI models can infringe copyright. AI companies need either permission from rights holders or a legal exception that allows them to proceed without it.
In December 2024, the government proposed changing this arrangement. The Intellectual Property Office and the Department for Science, Innovation and Technology published a consultation paper titled Copyright and Artificial Intelligence. The proposals included introducing a copyright exception for AI developers and creating a new rights reservation model.
The response was immediate and substantial. The consultation generated over 11,000 submissions. Most came from creative industries expressing serious concerns about the proposals.
What the government proposed
The central proposal was a text and data mining exception. This would permit AI developers to use copyrighted works for training purposes. It would be coupled with a rights reservation model.
Rights reservation works as an opt-out system. Unless a creator proactively reserves their rights through a technical notice or metadata flag, AI companies could use their content under the proposed exception. The burden falls on creators to exclude their work, not on AI companies to seek permission.
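The consultation does not specify what the "technical notice or metadata flag" would look like. As an illustration only, the closest existing mechanism is the voluntary robots.txt convention, which publishers can already use to ask known AI training crawlers (such as OpenAI's GPTBot or Google's Google-Extended token) to stay away:

```
# robots.txt — a voluntary signal; it works only if the crawler honours it
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# Ordinary crawlers, e.g. search indexing, remain allowed
User-agent: *
Allow: /
```

Any statutory rights reservation scheme would need something like this, but standardised and with legal effect rather than depending on crawler goodwill.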
The government framed this approach as balancing three objectives: giving rights holders more control over, and payment for, use of their works; enabling wide and lawful access to data for AI development in the UK; and fostering transparency between the technology and creative sectors.
This model mirrors the European Union's 2019 Copyright Directive. The EU introduced a broad text and data mining exception for commercial AI development, from which rights holders can expressly opt out. The UK proposal essentially sought to replicate this approach.
How creators responded
The creative industries mobilised against the proposal. A broad coalition formed under the banner "Creative Rights in AI". Members included the Society of Authors, Publishers Association, UK Music, Association of Illustrators, and Photographers' Association.
The coalition argued the opt-out model was fundamentally unfair. They said AI companies should only use protected works with express permission from rights holders, not by default. They advocated for an opt-in consent regime rather than placing the burden on every creator to opt out.
By the time the consultation closed in late February 2025, the numbers told a clear story. According to a parliamentary progress report, 88% of respondents said AI developers should be required to obtain licences for any copyrighted material used in training. Only 3% supported the government's opt-out scheme.
In February 2025, 1,000 UK musicians released a protest album titled "Is This What We Want?". The album consisted of 12 tracks of silence. The track titles, when read sequentially, spelled out a message: "The British Government Must Not Legalise Music Theft To Benefit AI Companies."
The musicians included Kate Bush, Imogen Heap, and Annie Lennox. Their campaign website explained that allowing AI training exceptions without permission or payment would harm musicians who rely on licensing income.
The practical problems with opt-out
Critics identified several issues with the opt-out model beyond the principle of fairness.
Creative works often proliferate beyond a creator's direct control. A photograph gets reposted by others. An article is screenshot and shared. Making an opt-out regime effective requires creators to track their work across the entire internet.
Ed Newton-Rex, a British composer and AI expert who formerly worked at Stability AI, called the proposal "extremely worrying". He argued that only a robust automatic content recognition system could protect works at internet scale. No such reliable system exists yet.
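A minimal Python sketch (illustrative only; the registry and names here are hypothetical) shows why this is hard. Suppose creators registered an exact SHA-256 fingerprint of each work, and scrapers checked candidates against the registry before training:

```python
import hashlib

# Naive "content registry" sketch (hypothetical): creators register an exact
# SHA-256 fingerprint of each work; a scraper checks candidate files against
# the registry before adding them to a training set.
registry = set()

def register(work: bytes) -> str:
    """Record a work's exact fingerprint in the registry."""
    digest = hashlib.sha256(work).hexdigest()
    registry.add(digest)
    return digest

def is_reserved(candidate: bytes) -> bool:
    """True only if the candidate is byte-for-byte identical to a registered work."""
    return hashlib.sha256(candidate).hexdigest() in registry

original = b"A photograph, as originally published."
register(original)

print(is_reserved(original))                                    # True: exact copy is caught
print(is_reserved(b"A photograph,  as originally published."))  # False: a trivially altered copy slips through
```

Exact matching catches verbatim copies but fails the moment a work is re-encoded, resized, or trivially edited. Robust recognition would need perceptual fingerprinting that survives all such transformations, which is precisely the system Newton-Rex says does not yet exist at internet scale.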
The burden falls hardest on individual creators and small businesses. Large platforms might manage to implement opt-outs. Individual artists, freelance photographers, and small publishers face a different reality. They lack the resources to monitor the web for AI data scraping and lodge objections case by case.
One MP put it bluntly during parliamentary debate. "Most creators, especially individual artists and small publishers, simply wouldn't stand a chance."
What Parliament said
On 28 January 2025, the House of Lords voted on amendments to the Data (Use and Access) Bill. Baroness Kidron introduced provisions aimed at strengthening protections for creators. The Lords passed these amendments 145 votes to 126.
The amendments would require AI developers to comply with UK copyright law regardless of where their training takes place. They would mandate that AI companies disclose any web crawling tools they use to gather data. They would oblige AI companies to notify rights holders when and how their works have been used in training datasets.
Two days later, the Lords held a dedicated debate on the rights reservation model. Lord Foster of Bath opened by declaring that mass scraping content without permission amounts to "theft on a grand scale". He emphasised that creative industries are not anti-AI. They simply expect to be paid when their content fuels AI innovation.
Lord Black of Brentwood called the opt-out model "deeply flawed". He warned it would impose "an immense administrative burden and unsustainable cost" on content creators. He noted that over 40% of the top 100 English-language news sites weren't blocking any AI web crawlers. This was likely due to lack of resources or knowledge, not choice.
Lord Vallance of Balham responded for the government. He acknowledged the concerns and outlined areas being explored. Metadata and watermarking technologies might allow creators to signal that their content should not be used for training. Web crawler protocols could be improved to respect those signals.
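The crawler-protocol improvements he described would build on machinery that already exists. As a minimal sketch (the publisher's robots.txt content here is hypothetical; the parser is Python's standard urllib.robotparser), a well-behaved training crawler can check the reservation signal before fetching anything:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt a publisher might serve to exclude an AI training
# crawler while leaving ordinary crawling open.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

def may_fetch(url: str, user_agent: str) -> bool:
    """Return True if a crawler identifying as `user_agent` may fetch `url`."""
    parser = RobotFileParser()
    parser.parse(ROBOTS_TXT.splitlines())
    return parser.can_fetch(user_agent, url)

print(may_fetch("https://example.com/articles/1", "GPTBot"))     # False: training bot excluded
print(may_fetch("https://example.com/articles/1", "NewsIndex"))  # True: other crawlers allowed
```

The limitation, as Lord Black's figures suggest, is that this only works if publishers know to serve the signal and crawlers choose to honour it: robots.txt is a convention, not a legal requirement.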
He made a key commitment. The government will not proceed with any new exception unless it is confident the solution is effective, simple, and accessible for rights holders of all sizes.
The Commons takes up the issue
On 23 April 2025, the House of Commons held a Westminster Hall debate on the impact of AI on intellectual property. MP James Frith opened by emphasising that embracing AI must not come at the expense of economic fairness.
Multiple MPs shared evidence from their constituents. One noted that 58% of professional photographers in the UK have already lost work due to AI image generators. The average loss was over £14,000 per photographer in commissions.
Another MP highlighted a recent controversy. Meta had allegedly used 7.5 million pirated books from the illegal LibGen repository to train its Llama 3 AI model. The Society of Authors called this a blatant infringement of copyright.
MPs argued that no creator should lose their job or signature style to a machine that imitates them without permission. The phrase "no yes, no use" was echoed. Consent must come first. If a creator didn't say yes, an AI shouldn't be using their work.
Minister Chris Bryant responded by finding common ground. "Creators deserve to be paid. I completely and utterly agree, and so do the Government."
He noted that using pirated material to train AI models is "patently wrong". He condemned the practice after discovering some of his own writing had been scraped in a dataset allegedly used by Meta.
Bryant committed to listening to creative industries before introducing any legislation. He outlined the areas being worked on: transparency mechanisms so creators know if their works have been used; technical solutions for rights reservation; and possible collective licensing schemes to help individuals enforce rights at scale.
He admitted that "a technical solution for rights reservation does not yet exist". But he posed the challenge: why don't we make it happen?
Bryant concluded with a striking line. "Artificial intelligence was made for humanity by humanity, not humanity made for artificial intelligence, and we need to make sure that we get the balance right."
What happens next
The Data (Use and Access) Act 2025 received Royal Assent on 19 June 2025. The Act doesn't change copyright law directly. Instead, it creates a roadmap for resolving the AI training issue.
The Act requires the Secretary of State for Science, Innovation and Technology to produce two things by 18 March 2026: a detailed economic impact assessment of the various options for AI and copyright, and a comprehensive report on the use of copyrighted works in AI development, including policy proposals.
The report must consider each of the four options presented in the December 2024 consultation. It can also consider other alternatives. It must examine technical measures for control, the effect of copyright on AI developers' access to data, transparency obligations, licensing frameworks, and enforcement mechanisms.
In December 2025, the government provided an interim progress statement to Parliament. This confirmed the overwhelming opposition to the opt-out model. Of the approximately 10,000 creators who responded via online survey, 88% favoured requiring a licence for any AI training use of their work. Only 3% supported the opt-out idea.
The progress report indicated that the government is exploring technical solutions being developed by industry for content tagging and tracking. Some AI companies have started engaging with rights holders. Certain academic publishers and news organisations have been negotiating voluntary licensing agreements with AI firms.
The statutory report due in March 2026 will determine the path forward. It will likely inform any new copyright and AI provisions the government might introduce in future legislation.
The international context
The UK's debate is unfolding alongside similar discussions globally.
The EU AI Act was formally adopted in 2024. The Act includes provisions requiring companies deploying generative AI in Europe to document training data sources. They must ensure that copyrighted data was obtained lawfully and that opt-outs by rights holders are respected.
The United States has no specific AI training exception. American courts are addressing the issue through fair use litigation. Several lawsuits by authors and artists against AI companies are attempting to set precedents.
The UK is calibrating its policy with an eye on both the EU and US approaches. Divergent rules could complicate matters for AI firms and rights holders operating across borders. They could also affect where AI companies choose to locate their operations.
Where things stand
UK AI and copyright policy is at a crossroads. Parliament has made clear that creators must be at the heart of any new approach. The government has shifted from its initial proposal to a more consultative process.
No final decisions have been made. The government has committed not to proceed without workable solutions that give creators genuine control over their work.
The creative industries have spoken with one voice. They want an opt-in model where AI companies must obtain licences before using copyrighted material. They want transparency about what content is used for training. They want fair compensation for use of their work.
AI developers argue that legal certainty and broad access to data are crucial for UK competitiveness. Some maintain that strict licensing requirements would be prohibitively cumbersome given the vast datasets needed to train advanced models.
The challenge is finding a framework where AI can flourish in partnership with human creativity. One where innovation doesn't override the interests of the individuals who create the content that feeds AI systems.
The March 2026 report will be a critical milestone. It promises to be one of the most in-depth governmental analyses of AI training and intellectual property anywhere. The policy that emerges will set a precedent for how a modern nation balances technological advancement with creators' rights.