The EU is quietly entering a new era of digital regulation, one that could reshape how data protection and AI governance work across Europe. Three interconnected developments are driving this shift:
- A leaked draft of the “GDPR Digital Omnibus”, which may introduce a new lawful basis for AI model training.
- The Digital Omnibus proposal to simplify AI Act implementation, giving businesses more breathing room.
- Reports that the European Commission (EC) is considering delays to some AI Act provisions amid industry pressure.
Together, they signal a significant recalibration of Europe’s digital rulebook with major implications for UK organisations that handle EU data or deploy AI systems.
The GDPR Digital Omnibus: A new lawful ground for AI model training?
A leaked draft of the EC’s GDPR Digital Omnibus has revealed the most radical potential reform to the GDPR since it took effect in 2018: the creation of a “legitimate interest” equivalent for training AI models, including when special categories of personal data are used.
This represents a clear shift from strict prohibition to risk-based flexibility. The proposed approach focuses on risk mitigation at the level of outputs, assessing AI systems by how they perform and the safeguards in place, rather than solely by the data they ingest.
If adopted, this could provide long-awaited legal clarity for developers and researchers navigating the grey area of data scraping and model training. However, the European Data Protection Board (EDPB) has stressed that Article 9’s prohibitions on processing sensitive data still apply unless a valid exception exists. In other words, innovation won’t trump fundamental rights.
Still, the direction of travel is clear: the EU is trying to reconcile privacy protection with AI innovation, and that balance could redefine global AI governance.
Simplifying the AI Act: From burden to balance
In parallel, the Commission has unveiled a companion reform, the Digital Omnibus for the AI Act, designed to make the landmark regulation easier to implement. Early experience revealed major practical hurdles, such as delays in designating national authorities, missing technical standards, and heavy paperwork, particularly for SMEs.
The new package promises targeted relief:
- Grace periods for watermarking and transparency requirements
- Reduced registration requirements for non–high-risk AI uses
- Expanded exemptions and privileges for SMEs and small mid-caps
- Permission to process sensitive data for bias detection and correction (under safeguards)
- More flexible post-market monitoring
- Stronger AI literacy initiatives and a broader use of AI sandboxes
Crucially, the EU insists this is not deregulation. The core of the AI Act, protecting safety, rights, and trust, remains intact. The goal is faster, more proportionate compliance that fosters innovation rather than smothering it.
Possible delays and the politics of AI regulation
Adding another layer, reports indicate the Commission may delay certain AI Act provisions, granting a one-year “grace period” for companies already deploying generative AI or high-risk systems. This move follows pressure from European industry leaders, who have urged Brussels to ensure Europe remains competitive, as well as diplomatic friction with the US administration.
Proposals under review include postponing fines for transparency breaches until 2027 and offering more flexible monitoring rules for high-risk AI. While critics fear this could dilute the regulation, others argue it’s a necessary reset to give businesses time to adapt responsibly.
What this means for UK businesses
For UK organisations, these developments are more than distant EU policy shifts; they’re a strategic signal. Many UK companies still process EU personal data or operate within EU digital supply chains. Divergence between the UK GDPR and the evolving EU regime could create new compliance headaches:
- Dual compliance models for data and AI governance
- Reassessment of lawful bases for data used in AI training
- Cross-border transfer risks if adequacy is questioned
- Competitive pressure to keep pace with a more innovation-friendly EU approach
If the EU moves toward a risk-based model that balances innovation with accountability, UK policymakers may face renewed pressure to modernise their own frameworks or risk falling behind in both regulatory credibility and digital competitiveness.
For compliance teams, this is the moment to future-proof privacy governance. That means revisiting Data Protection Impact Assessments (DPIAs), aligning AI oversight with data ethics, and preparing for a landscape where accountable innovation replaces checkbox compliance.
It seems clear that Europe is pivoting from restriction to responsible innovation. The GDPR Digital Omnibus and AI Act reforms mark a broader shift toward regulation that enables technology while still protecting individuals.
For UK businesses, this is both a challenge and an opportunity. Those that understand and adapt early will not only stay compliant but gain a strategic edge in building AI and data-driven systems that are trusted, transparent, and future-ready.
Vinciworks’ new conversational learning course on data protection rights and responsibilities puts you at the heart of data protection, turning policy into practical action. Guided by AI-powered experts, it explores how personal data should be handled, shared and stored through realistic workplace scenarios. Try it here.