How to build a compliant AI programme: The webinar FAQs 

In a breathtakingly short amount of time, AI has moved from novelty to necessity. What felt experimental just a few years ago is now deeply embedded in everyday business tools. Whether it’s drafting documents, analysing contracts, reviewing CVs, triaging customer queries, predicting risk or even writing code, AI is involved in the process. And from finance to construction, legal services to education, it’s becoming difficult to find a sector that isn’t testing, deploying, or actively relying on AI in some form.

But as adoption has accelerated, so have the questions.

We’re no longer asking whether AI is useful. We’re asking whether it’s compliant, whether it’s secure, whether it’s being used fairly and how we can explain how we use it. Regulators are moving but not always at the same speed as the technology. The EU AI Act is entering implementation. The UK is regulating through existing bodies. GDPR is already being used to scrutinise training data, profiling, and automated decisions. Meanwhile, new tools such as agentic browsers, embedded copilots, and AI assistants are changing risk profiles seemingly overnight.

That’s why the questions you asked during our webinar, How to build a compliant AI programme, are so telling. They’re not abstract or speculative. They’re operational and strategic, and they reflect organisations trying to introduce AI responsibly while protecting client data, intellectual property, reputation, and regulatory standing.

This is a clear inflection point. AI capability is expanding rapidly, enforcement expectations are crystallising, and governance maturity is becoming a competitive advantage. These FAQs are designed to meet that moment. You’ve asked practical, thoughtful questions about implementation, liability, compliance, security, sustainability, and strategy. What follows are our answers, designed to help you implement AI safely, strategically, and effectively.

What key takeaways should UK construction consultancies consider when using or implementing AI tools?

The focus should be on data quality, clear contracts and safety risks. Construction data often includes personal data such as CCTV footage, site access logs and accident reports, so you must identify lawful bases, conduct DPIAs for higher-risk tools, and ensure outputs don’t undermine health and safety obligations. AI does not replace professional judgement. It supports it.

Is data provided to AI tools reused in any way on the web?

It depends on the tool and configuration. Enterprise versions usually restrict training reuse contractually, but consumer versions may use prompts for model improvement. Always check the terms, disable training where possible, and avoid entering confidential or personal data unless you have clear safeguards.

What is the impact of AI on the education landscape from a policy perspective?

Education policy is shifting toward greater transparency, stronger controls on academic integrity, safeguarding of minors’ data, and clear guidance on acceptable AI use. Schools and universities must balance innovation with GDPR compliance, bias risk, and safeguarding responsibilities.

How does compliance work regarding AI companies that store data overseas?

You must assess cross-border transfers under UK GDPR and EU GDPR, using adequacy decisions, Standard Contractual Clauses, and Transfer Risk Assessments. Data location alone isn’t the issue. It’s whether enforceable safeguards and effective redress mechanisms exist.

What risks should I be considering when using AI to generate and use data with users’ information?

Key risks include unlawful processing, bias, inaccuracy, over-profiling, data leakage, and automated decision-making challenges. There are also reputational risks if outputs are misleading or discriminatory.

What are the essential requirements for a small firm?

At a minimum: an AI use policy, a risk assessment template (including DPIA triggers), an approved AI tool list, staff training, and contractual safeguards with vendors. Keep it proportionate but well documented.

How can I manage AI so it doesn’t take over critical thinking and only use it to support my organisation?

Make sure a human is involved in material decisions. Require review and verification of all AI outputs. Train staff on limitations and bias. The bottom line is to make it clear that AI assists but does not decide.

How do I protect my files when using AI?

Use enterprise accounts, disable training reuse, restrict uploads of sensitive data, apply role-based access controls, encrypt data in transit and at rest, and block high-risk AI browser tools from accessing corporate systems.

Where can I start in understanding the complexities of AI when assessing it as part of a DPIA?

Start with mapping what data goes in, what comes out, who is affected, and what decisions are influenced. Then assess lawful basis, fairness, bias risk, security, and transfer risk. The ICO’s AI guidance is a good practical starting point.
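The mapping step can be captured in a simple structured record per AI use case. A minimal sketch in Python, with field names of our own choosing rather than any prescribed DPIA template:

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCaseRecord:
    """One row of a DPIA-style data-flow map.

    Field names are illustrative only, not an official template.
    """
    tool: str
    data_in: list[str]               # what data goes in
    data_out: list[str]              # what comes out
    affected_groups: list[str]       # who is affected
    decisions_influenced: list[str]  # what decisions the output feeds
    lawful_basis: str = "TBC"        # to be assessed per use case
    risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)

# Hypothetical example entry for a contract-review assistant
record = AIUseCaseRecord(
    tool="Contract-review assistant",
    data_in=["client contracts", "counterparty names"],
    data_out=["clause summaries", "risk flags"],
    affected_groups=["clients", "counterparties"],
    decisions_influenced=["whether terms are escalated to a partner"],
    lawful_basis="legitimate interests (to be assessed)",
    risks=["inaccuracy", "confidentiality leakage"],
    mitigations=["human review of all flags", "enterprise tool only"],
)
```

Even a spreadsheet with the same columns works; the point is that every AI use case gets the same structured questions answered before the fairness, security, and transfer assessments begin.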

How can I use AI as part of the client onboarding process for both client experience and efficiency?

AI can assist with document summarisation, risk flagging, KYC data extraction, and triage, but final decisions should be made by a human. Maintain transparency with clients and avoid solely automated decisions.

How do I deal with the challenges of managing employees using AI tools at work?

Create a clear AI acceptable-use policy, define approved tools, prohibit shadow AI, provide training, and monitor for data leakage risks. Make expectations explicit rather than assuming common sense.

How do I address AI in compliance and ethics in UK banks?

Banks must align AI governance with FCA principles, model risk management, operational resilience, the Consumer Duty, and SMCR accountability. AI decisions affecting customers require fairness, strong oversight, and the ability to withstand an audit.

Will AI replace human contribution?

AI can automate tasks but not accountability. High-risk decisions, client relationships, and ethical judgement still require human responsibility. Regulation reinforces that accountability cannot be outsourced to algorithms.

What are the UK data compliance steps when exploring AI use?

Identify lawful basis, assess high-risk processing, review vendor contracts, assess international transfers, implement security controls, train staff, and document governance decisions.

How can organisations ensure clarity in AI-driven decisions affecting customers?

Explain the logic in plain language, clarify key factors influencing outcomes, allow challenge or review, and ensure a human can intervene. It’s about clearly understanding impact, not revealing source code.

What are the top 3 things to implement or consider for an AI programme?

  1. A governance framework with board oversight
  2. A risk assessment and DPIA process
  3. A clear staff policy and training

What are the biggest tips for aligning AI use best practices with the current evolving landscape?

Stay aware of regulators’ expectations, review tools regularly, document decisions, prioritise transparency, and assume both models and risks will change quickly.

How do we balance the risk of innovation with the unknown in AI use?

Pilot tools in low-risk environments, apply staged approval processes, and require documented risk sign-off before scaling.

Is it likely that full-licence Copilot under Azure will remain compliant under proposed AI regulations?

Enterprise configurations reduce risk, but compliance depends on how it’s deployed. You, not Microsoft, remain responsible for data inputs, outputs, and use cases.

How can we ensure protection of a company’s intellectual property when using AI tools?

Avoid uploading proprietary data into consumer models, review vendor IP clauses, disable model training, and use enterprise environments with contractual safeguards.

How will various AI laws correlate with each other?

They layer rather than replace each other. GDPR covers data, the EU AI Act addresses system risk classification, sector regulators add domain rules, and cybersecurity law overlaps. Organisations must comply with all applicable regimes simultaneously.

What is the best way to protect personal data and confidentiality whilst using AI within a law firm?

Use enterprise tools only, conduct DPIAs, restrict uploads of client data, anonymise where possible, and ensure confidentiality clauses explicitly address AI processing.

Since the EU AI Act is entering implementation, how will the UK legal market be affected?

UK firms serving EU clients may fall within scope. Even domestically, expectations will align with EU standards. Firms should align governance now to avoid dual systems.

What should we be looking at when designing and planning an AI programme?

Risk classification, data flows, security controls, vendor management, bias testing, transparency, and escalation procedures.

How can I use AI systems to analyse accidents and near-miss reports?

AI can identify patterns and predictive indicators, but ensure anonymisation where possible, lawful basis for processing, and human review before safety decisions.

Where is the best place to start in communicating safe AI use?

Begin with a simple AI acceptable-use policy supported by real-world and scenario-based training.

Is AI really applicable in HSE fields?

Yes, for predictive analytics, document review, and risk trend analysis. But outputs must support, not override, professional safety judgement.

Where do you think UK regulation on AI will focus?

Expect a focus on transparency, consumer protection, financial services oversight, and coordination through the ICO and sector regulators rather than a single AI statute.

When everyone is using AI to answer questions, sometimes incorrectly, what guidelines should we use?

Adopt a verification rule: no AI output goes out externally without review. Fact-check critical information and document reliance on AI in regulated contexts.

Will AI be useful or is it a bubble?

AI is already delivering real productivity gains, but value depends on governance and fit-for-purpose deployment. It’s transformative, but it’s not magic.

What is the safe way to introduce AI usage for our company?

Run a pilot, define approved tools, train staff, implement oversight, and scale gradually with documented risk reviews.

Would an AI policy work?

Yes, if it’s practical, clear, supported by training, and enforced. A policy without governance is just paper.

How can AI tools be used to make compliance processes more efficient?

Automated document review, regulatory horizon scanning, contract analysis, risk triage, training content generation, and internal query assistance.

How can AI best help the workforce?

By automating repetitive tasks and enhancing analysis and drafting. This frees people to exercise judgement and creativity and to engage with clients.

As an HR manager, how can I work effectively with IT on AI?

HR should be focused on training, culture, and acceptable use. IT should manage security, access, and tool approval. Together, the teams can define governance, monitor adoption, and address workforce impact proactively.

Should we be looking at putting AI clauses in supplier contracts to ensure they do not use AI without our knowledge?

Yes. Supplier contracts should include AI transparency clauses requiring disclosure of AI use, restrictions on training with your data, data location clarity, audit rights, confidentiality protections, and clear allocation of liability. You should also require notice before introducing new AI-driven processors or functionality.

We have been looking at ways of getting clients’ consent to using AI in the course of their matter. Do you have any suggestions or advice on how to gain such consent?

Be careful about relying on consent where it is not freely given; in professional services, consent is often unclear and can be invalid. Instead, assess whether AI use is necessary for contract performance or covered by legitimate interests, explain it transparently in engagement terms, and offer an opt-out for higher-risk use cases.

What are some examples of AI-powered browsers? Is this like having a Chrome plug-in?

Examples of AI-powered browsers include ChatGPT Atlas, Perplexity’s Comet, Opera Neon, and Gemini-enabled Chrome. These are more than plug-ins. They embed AI agents directly into the browsing experience, allowing them to read, summarise, and act across sessions.

Do the agentic browsers have to be proactively utilised by a user or can they be used accidentally?

Typically they require activation, but once enabled they may act automatically within browsing sessions. The risk is not accidental installation. It’s that once active, they can process content and instructions embedded in webpages without clear user awareness.

If my organisation operates fully within the UK, but the data centres that store and process our data are based in the EU, does the EU AI Act apply to my organisation?

The EU AI Act applies based on market placement and use in the EU, not just server location. If you offer AI systems to EU users or your outputs are used in the EU, it may apply. Simply hosting data in the EU does not automatically bring you into scope.

Where does liability fall in the case of Annex III high-risk AI use cases under the EU AI Act, the service provider or the client who decides the use case? To what extent can liability be apportioned in the client contract?

Under the AI Act, obligations primarily attach to the “provider” of the high-risk system, but deployers also have duties. Liability can be contractually allocated between parties, but regulatory responsibility cannot be fully contracted out. Clear definition of roles (provider versus deployer) is critical in agreements.

For a Copilot chatbot, how can I ensure compliance if users add personal information in chats? Is a data retention period of 30 days sufficient?

Retention alone is not enough. You need a lawful basis, transparency, restricted access, enterprise-level configuration preventing model training, audit logging, and role-based controls. Thirty days may be proportionate depending on the purpose, but retention must match necessity and be supported by documented justification.

Any advice on preparing a DPIA for implementing AI within a finance system?

Map the data flows first. Identify whether profiling or automated decision-making is involved. Assess bias risk, financial harm risk, human review mechanisms, security safeguards, and vendor transparency. Document mitigation measures clearly and involve both compliance and IT early.

What are the best approaches to addressing the environmental/sustainability impacts of AI as part of a company sustainability policy?

Focus on energy consumption transparency, vendor ESG disclosures, cloud efficiency commitments, carbon reporting, and internal usage discipline. Treat AI as part of your digital sustainability footprint and require reporting from providers.

Is the Digital Omnibus Package sufficiently ‘future-proofed’ given the rapid rate of technological development?

No regulation is fully future-proof. The Omnibus aims to streamline and align regimes, but governance maturity inside organisations must adapt faster than legislative cycles. Internal oversight and risk scanning remain essential.

Do standard web browsers use AI agents in search results automatically or is it opt-in? If automatic, what should we tell staff?

Most AI summaries in search, such as AI Overviews, are automatic features, but full agentic functionality requires activation. Staff should be trained not to assume AI-generated summaries are accurate and to avoid entering confidential information into browser-based AI features.

For consultancies that handle NDAs, what are the options for utilising agents and LLMs while keeping sensitive data secure?

Use enterprise or self-hosted models, disable training reuse, anonymise data before processing, implement strict access controls, conduct DPIAs for high-risk use, and ensure contracts explicitly prohibit vendor reuse of confidential data. Sensitive projects should be processed in controlled environments, not consumer AI tools.
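Anonymising before processing can start with a pre-submission redaction pass. A minimal sketch in Python, using deliberately simple regexes of our own devising; a real deployment should use a proper PII-detection library and treat this as illustrative only:

```python
import re

# Illustrative patterns only -- genuine PII detection needs a dedicated
# library plus human review; these regexes will miss many identifiers.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b0\d{2,4}[\s-]?\d{3,4}[\s-]?\d{3,4}\b"),
    "NI_NUMBER": re.compile(r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-Z]\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with typed placeholders before the
    text is sent to any external AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@example.com or 020 7946 0958."))
# → Contact [EMAIL] or [PHONE].
```

Typed placeholders (rather than blanking the text) preserve enough context for the model to produce a useful answer while keeping the identifier out of the vendor’s hands.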

The guide, When Data Thinks, explores the critical role of data quality in ensuring effective compliance. It provides insights into how organisations can enhance data trust, improve decision-making, and optimise compliance processes by addressing data integrity, consistency, and accuracy. This guide is essential for teams looking to make data-driven decisions while meeting regulatory standards. Download it here.