Global AI governance entered a new phase this week when President Donald Trump signed a sweeping executive order on artificial intelligence. The order seeks to curb state-level AI regulation in favour of a single, nationally coordinated framework, a move that could significantly recalibrate the US approach to AI oversight, with implications extending well beyond US borders.
The executive order does not introduce new AI safety obligations or user protections but rather reduces regulatory fragmentation by curbing the ability of individual US states to impose their own AI rules. At the same time, it lays the groundwork for a more centralised, lighter-touch federal approach to AI regulation.
This approach reflects the concern within the US government that a patchwork of state-level requirements could hinder innovation, complicate compliance and weaken the country’s competitive position in the global AI market.
From state experimentation to federal centralisation
In the absence of comprehensive federal legislation, US states have taken the lead on AI governance over the past few years. The result is a diverse set of laws addressing transparency, algorithmic discrimination, chatbot disclosures, deepfakes, child safety and other uses of AI.
The executive order hits pause on this trajectory. It establishes mechanisms for federal agencies to identify and challenge state AI laws considered overly burdensome, inconsistent with interstate commerce, or in conflict with a future national AI policy. It also introduces financial levers, allowing certain federal funds to be withheld from states that continue to enforce laws seen as incompatible with this direction.
At the same time, the order anticipates future federal legislation that would pre-empt conflicting state rules while preserving some areas of state autonomy, such as child safety protections and government procurement.
A regulatory approach distinct from the EU
This development sharpens the contrast between US and EU approaches to AI governance.
The EU’s AI Act sets out risk-based obligations, mandatory safeguards for high-risk systems, and enforceable rights for individuals, reflecting a precautionary approach that requires organisations to implement safeguards before AI systems are deployed. At the same time, proposals under the Digital Omnibus initiative could adjust or partially roll back parts of the AI Act, particularly around cross-sector harmonisation and innovation flexibility. The timing and scope of these changes remain uncertain, with adoption expected in early 2026, making close monitoring important.

By contrast, the US executive order prioritises regulatory simplicity, innovation incentives, and national competitiveness, relying on future federal standards, existing consumer protection law, and sector-specific oversight rather than a single horizontal AI statute.

For global organisations, these differences in regulatory approach, now potentially compounded by EU reforms, mean that AI compliance strategies must be tailored to each jurisdiction, even for systems developed centrally.
Fragmentation at the international level
Internationally, the order underscores the absence of a shared global consensus on AI regulation. While the EU is moving toward enforceable, binding rules, and other jurisdictions are adopting hybrid or principles-based models, the US is signalling caution around prescriptive legal constraints.
This may influence ongoing discussions in international forums such as the G7 and the OECD, as well as bilateral trade negotiations, where interoperability and mutual recognition of AI standards are becoming increasingly important.
For countries designing their own AI regimes, the US position may be attractive from an innovation perspective, but it also raises questions about accountability, transparency and cross-border trust in AI systems.
What this means for UK organisations
For UK organisations, the implications are both practical and strategic. The UK continues to pursue its own pro-innovation, sector-led approach to AI governance, relying on regulators rather than a single overarching AI statute. In this respect, the UK sits somewhere between the EU and US models.
However, UK organisations operating internationally will need to navigate a complex environment:
- EU-facing AI systems will still need to comply with the EU AI Act’s risk-based requirements.
- US-facing operations may encounter fewer formal AI-specific obligations at the federal level, but will face continued exposure to litigation, consumer protection enforcement and evolving federal standards.
- UK domestic expectations will continue to emphasise accountability, transparency and alignment with data protection law, particularly where AI is used in decision-making affecting individuals.
The executive order also reinforces the importance of robust internal AI governance, as reliance on minimal legal requirements in one jurisdiction may not translate across borders.
Looking ahead to 2026
This development should be viewed as a transitional moment rather than a settled endpoint. Legal challenges to federal pre-emption are likely, and the effectiveness of the approach will depend heavily on whether comprehensive federal AI legislation follows.
What is clear is that by 2026, AI governance will be defined less by convergence and more by regulatory divergence. Organisations that invest now in flexible, principles-based AI governance frameworks will be best placed to adapt as legal regimes continue to evolve in different directions.
In that sense, the executive order is not just a US policy shift but a reminder that AI regulation is becoming a core element of global economic and regulatory strategy, with real implications for how and where AI is developed, deployed and trusted.
AI can transform how work gets done, but companies and firms need to understand the opportunities and risks inherent in this emerging technology. Our innovative AI compliance courses provide training that will help you stay ahead of the curve, avoid compliance fines and protect against reputational damage. Try it here.