GDPR is being simplified, but that makes AI compliance more complicated

The EU’s flagship data protection regime, the General Data Protection Regulation (GDPR), is facing its most significant overhaul since it came into force in 2018. Under the banner of “simplification,” the European Commission is pursuing a sweeping deregulation agenda, supported by the European Parliament and several centre-right member state governments. These changes, framed as a way to reduce regulatory burdens, will have major consequences for data protection, and for how AI is regulated, across the EU and beyond.

Why is GDPR simplification on the agenda now?

The political backdrop to this shift began with the 2024 European Parliament elections. Green parties lost significant ground, and the reappointment of Commission President Ursula von der Leyen relied on a coalition of centrist and centre-right support. With deregulation high on the agenda, the Commission has already begun scaling back its sustainability frameworks. Key directives such as the CSRD, CSDDD, and EUDR have been delayed, watered down or amended to reduce their scope. The same logic is now being applied to GDPR through what has been dubbed the “Omnibus” approach: a broad, multi-track simplification process.

What are the proposed changes to GDPR?

At the heart of these proposed GDPR reforms is a push to ease compliance obligations for small and mid-sized firms. Currently, businesses with fewer than 250 employees are exempt from maintaining detailed records of processing activities unless they engage in high-risk processing. The new proposals would raise that threshold dramatically. A new “Small-Mid Cap” (SMC) category is being proposed, covering businesses with up to 750 employees, a turnover of less than €150 million or total assets under €129 million. These firms would benefit from reduced reporting obligations unless they process special category data or engage in high-risk activities such as profiling or large-scale health data processing.
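As a rough illustration, the proposed SMC test described above can be encoded as a simple check. The thresholds are taken from the proposal as summarised here; the function names, and the reading that the employee cap combines with either financial test, are illustrative assumptions rather than the legislative text.

```python
# Hypothetical sketch of the proposed Small-Mid Cap (SMC) test.
# Thresholds are the figures quoted in the article; the structure of the
# test (employee cap plus either financial criterion) is an assumption.

def is_smc(employees: int, turnover_eur: float, total_assets_eur: float) -> bool:
    """Return True if a firm would fall within the proposed SMC band."""
    return (
        employees <= 750
        and (turnover_eur < 150_000_000 or total_assets_eur < 129_000_000)
    )

def reduced_record_keeping(employees: int, turnover_eur: float,
                           total_assets_eur: float,
                           high_risk_processing: bool,
                           special_category_data: bool) -> bool:
    """Reduced reporting would apply only if the firm is an SMC and its
    processing is neither high-risk nor involves special category data."""
    return (
        is_smc(employees, turnover_eur, total_assets_eur)
        and not high_risk_processing
        and not special_category_data
    )
```

Note how the exemption falls away as soon as high-risk processing or special category data enters the picture, regardless of company size.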


Importantly, the proposals don’t remove the need for good internal data governance. SMCs could still be required to conduct Data Protection Impact Assessments (DPIAs) if the nature of their processing meets the risk threshold. Moreover, the Commission has proposed that new templates, codes of conduct, and certification mechanisms be tailored to this growing class of businesses to help them navigate compliance expectations.


What else might the EU Commission change in GDPR?

Yet these changes could just be the beginning. Senior MEPs such as Axel Voss have floated even more radical ideas, including a three-tier GDPR structure. Under this model, “Mini GDPR” would apply to companies handling fewer than 100,000 data subjects and no sensitive data, eliminating DPO requirements and capping fines at €500,000. “Normal GDPR” would retain the current framework for most firms, while a new “GDPR Plus” category would be introduced for Big Tech and data-heavy platforms. These entities would face increased transparency demands, audit obligations, and much higher fines, potentially up to €100 million or 10% of global turnover.
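The three-tier idea can be sketched in a few lines of code. The tier boundaries and fine caps come from the figures quoted above; everything else, including the “whichever is higher” reading of the caps, is an illustrative assumption, not draft legislation.

```python
# Illustrative encoding of the three-tier model floated by MEPs.
# Tier names and thresholds are from the article's summary; the
# "whichever is higher" fine logic is an assumed reading.

def gdpr_tier(data_subjects: int, handles_sensitive_data: bool,
              is_big_tech: bool) -> str:
    if is_big_tech:
        return "GDPR Plus"      # heavier transparency and audit duties
    if data_subjects < 100_000 and not handles_sensitive_data:
        return "Mini GDPR"      # no DPO requirement, fines capped
    return "Normal GDPR"

def max_fine_eur(tier: str, global_turnover_eur: float) -> float:
    if tier == "Mini GDPR":
        return 500_000.0
    if tier == "GDPR Plus":
        # up to €100m or 10% of global turnover
        return max(100_000_000.0, 0.10 * global_turnover_eur)
    # current GDPR ceiling (Art. 83(5)): €20m or 4% of global turnover
    return max(20_000_000.0, 0.04 * global_turnover_eur)
```

The point of the sketch is the asymmetry: a small firm's exposure collapses to a fixed cap, while a platform's exposure scales with turnover.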


The legislative process for these changes is now under way, moving through the ordinary legislative procedure that includes negotiation between the Commission, Parliament and Council. Crucially, all three arms of the EU appear united in their appetite for deregulation, echoing the fate of other compliance-heavy regimes like sustainability reporting. There are very few advocates left for the original GDPR in its full, unaltered form.


How do GDPR and the AI Act overlap?

What complicates this landscape even further is the overlap between GDPR and the EU AI Act. The AI Act, which came into force in 2024 and will become fully applicable by mid-2026, imposes detailed compliance obligations for high-risk AI systems, including documentation, risk assessment, bias testing and human oversight. But there are growing signs that the AI Act may also be paused or simplified.


This development is driven in part by pressure from the United States, where the regulatory climate is moving in the opposite direction. The US House of Representatives recently passed a Budget Bill that would block all state-level enforcement of AI laws for the next decade, effectively giving US companies a free hand to experiment and deploy AI technologies without the same constraints facing their EU counterparts. President Trump has repealed several AI-related executive orders from the Biden era, issued pro-US procurement memoranda, and helped launch a $500 billion AI infrastructure investment known as the Stargate project. With no serious enforcement mechanisms looming in the US, the EU’s comparatively rigid AI Act starts to look out of sync with global realities.


Will the EU pause the AI Act?

In this context, the Commission is reportedly considering a “Stop the Clock” pause for the AI Act, during which simplification amendments could be introduced; Poland has already called for such a pause. This mirrors the process that has already played out with sustainability rules and is now reshaping GDPR. A pause in the AI Act would introduce further uncertainty for compliance professionals, particularly those tasked with navigating the intersection of GDPR and AI.


That intersection is already fraught with complexity. For instance, while the AI Act allows the use of sensitive data like ethnicity or health information in certain high-risk scenarios (such as biometric identification), GDPR typically prohibits such processing without explicit consent or a narrow exemption. A company could therefore find itself in compliance with the AI Act but still in breach of GDPR. Even for firms newly exempted from certain GDPR obligations under the SMC category, the AI Act will impose its own documentation and governance requirements for high-risk AI applications.
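The gap described above can be made concrete with a toy model. The flags and rules below are deliberate simplifications of both regimes, invented for illustration; a real assessment requires legal analysis, not boolean logic.

```python
# Toy model of the "compliant with one regime, in breach of the other"
# trap. Both permission functions are gross simplifications invented
# here for illustration.

def ai_act_permits(sensitive_data: bool, high_risk_use: bool,
                   permitted_purpose: bool) -> bool:
    # The AI Act may allow sensitive data in certain high-risk scenarios
    # (e.g. biometric identification), collapsed here into one flag.
    return (not sensitive_data) or (high_risk_use and permitted_purpose)

def gdpr_permits(sensitive_data: bool, explicit_consent: bool,
                 narrow_exemption: bool) -> bool:
    # GDPR Art. 9: special category data needs explicit consent
    # or another narrow exemption.
    return (not sensitive_data) or explicit_consent or narrow_exemption

# A system can pass the AI Act test yet still breach GDPR:
ok_under_ai_act = ai_act_permits(True, True, True)   # True
ok_under_gdpr = gdpr_permits(True, False, False)     # False
```

The two final lines capture the article's warning: satisfying one regime says nothing about the other, so both checks must be run side by side.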


What should DPOs do now?

For compliance officers, especially Data Protection Officers (DPOs) at mid-sized companies, this represents both a challenge and an opportunity. AI risk management is fast becoming a natural extension of GDPR compliance. Many of the competencies involved, including DPIAs, lawful basis assessments, transparency, fairness, and data minimisation, are already part of the DPO’s toolkit. The European Data Protection Board has gone so far as to list “advice or feedback from the DPO” as a necessary part of an AI project’s documentation when personal data is involved. In practice, this means that compliance teams will increasingly need to weigh in on algorithmic transparency, bias mitigation, and data governance for AI systems.


For smaller companies and startups, the simplification of GDPR could lower the barrier to AI experimentation by removing some of the procedural burdens. But this should not be mistaken for an AI compliance free pass. High-risk AI uses, such as credit scoring or automated recruitment tools, will still trigger substantial obligations under the AI Act. In these cases, reduced GDPR record-keeping does not negate the need for strong internal governance and risk management.


Data protection and AI compliance are entering a new phase, marked by deregulation on paper but deepening complexity in practice. GDPR may be getting lighter, but the responsibilities around AI are getting heavier. For compliance professionals, now is the time to invest in upskilling, revise internal policies, and re-evaluate risk frameworks to align with what is likely to be a new and highly dynamic compliance environment.


Listen again to our webinar on the latest changes to GDPR.