Wednesday 25 February, 12pm UK | Artificial intelligence is moving into a new regulatory era. The EU is progressing the Digital Omnibus package, the EU AI Act is entering its implementation phase, and regulators worldwide are issuing new rules on automated decision-making, transparency, bias testing and the safe deployment of AI. At the same time, data protection authorities have opened landmark AI cases on profiling, training data, consent and explainability, with several major enforcement actions expected in 2026.
In this webinar, we will take a closer look at how organisations can build a safe and compliant AI framework. We will explore the next steps under the EU AI Act, the UK’s Data (Use and Access) Act, and the most important AI investigations and fines from the past year. We will also examine AI developments in the United States, where federal and state regulators are moving towards sector-specific rules, algorithmic accountability requirements and updated guidance for workplace use of AI.
This session will give practical guidance on governance, risk scoring, documentation, training and the implementation of appropriate safeguards, so that AI tools and models can be deployed responsibly without exposing your organisation to regulatory or ethical risk.
What this webinar will cover
- EU Digital Omnibus updates affecting digital services, data governance and platform obligations
- EU AI Act requirements for high-risk, general-purpose and foundation models in 2026
- Key AI enforcement actions and case law from UK and European data protection authorities
- Global AI regulation trends including the US approach to algorithmic accountability and workplace AI
- Practical steps to build an AI compliance programme, including governance, documentation, DPIAs and model oversight
- Data protection issues in AI development, including training data, transparency, accuracy and user rights