The FAQs you want answers to
There’s no doubt that AI is transforming industries at an unprecedented pace, offering tremendous opportunities for innovation, increased efficiency and growth. But with this rapid advancement comes a complex and evolving regulatory and ethical landscape. The UK government and international bodies alike are scrambling to ensure AI is developed and deployed responsibly, trying to balance innovation with accountability, fairness and transparency. It’s not an easy balancing act.
For businesses and legal professionals, understanding AI compliance is no longer optional; it has become a necessity. Whether you’re integrating AI into your operations, advising clients on AI governance or navigating regulatory requirements, the question you are likely asking yourself is: how do we ensure AI is used responsibly, legally and ethically?
That question opens the door to many others, and you’ve asked them. From understanding how tools like ChatGPT actually work, to navigating the practical realities of regulation, to exploring how AI can support your specific goals, the curiosity is clear and the stakes are high.
This FAQ is here to provide clear, informed answers. You have the questions. We have the insights.
Is AI going to change the field of ethics and compliance?
Yes. AI introduces complex ethical and regulatory issues, including algorithmic bias, transparency and accountability. Ethics and compliance functions are now expected to manage AI governance, assess model risk and ensure compliance with evolving frameworks such as the EU AI Act and the UK’s Data (Use and Access) Bill.
Can an employee become too AI-dependent? Can AI give wrong answers, and how do you detect this?
Yes. Over-reliance can reduce critical thinking and decision quality. AI tools may provide inaccurate outputs or “hallucinate.” Detection involves human review, source checking, training staff on AI’s limitations, and enforcing a “human-in-the-loop” policy.
What are the best/safest AI tools to use within an organisation? Are there any we should ban?
The safest tools are those that offer enterprise-grade data protection and transparency and allow control over data usage, such as Microsoft 365 Copilot or Google Workspace AI (in enterprise environments). Tools lacking clear data handling policies, or with known risks of data leakage, should be restricted or banned.
How can we use AI to improve current practices without compromising copyright regulations?
Ensure AI tools used for content generation or analysis do not repurpose copyrighted materials unless explicitly licensed. Use AI to summarise, assist with formatting or identify trends, but always attribute sources and verify outputs. Avoid using public generative AI tools for client- or IP-sensitive content.
How can AI improve my business?
AI can streamline admin tasks, automate reports, enhance customer service with chatbots, support legal drafting and optimise forecasting. The key is using it to enhance, not replace, human judgment, and to make sure you remain compliant and apply quality controls.
How do I ensure that my company is AI compliant?
Conduct risk assessments, identify AI tools in use, classify their risk level, draft an AI use policy, train staff and monitor use. Ensure GDPR compliance for data use, and align with emerging EU AI Act requirements or UK guidelines.
Which AI model can be used internally in a company? Does MS CoPilot share confidential company and client data?
Microsoft 365 Copilot, when properly configured, operates within the customer’s Microsoft environment and does not train on user data. It’s suitable for internal use if compliance and security controls are enforced. Always check your settings and data options.
Will AI take over legal drafting?
AI can assist legal drafting by suggesting clauses, summarising documents and generating templates. However, final review and contextual accuracy still require human legal professionals. It’s a support tool, not a substitute.
What is the biggest risk of using AI?
Unintended data disclosure and over-reliance on inaccurate or biased outputs. Poor governance can also result in reputational harm, compliance breaches or even legal liability.
How do I make sense of AI compliance and ethical deployment in a company policy context?
Start with defining what types of AI your company uses. Classify tools based on risk and purpose. Create clear usage guidelines, ensure data protection, include fairness and transparency principles and set up regular reviews and monitoring.
What are the most impacted equality categories with AI? And what are suggested risk mitigation actions for an educational organisation?
Protected characteristics like race, gender and disability can be disproportionately affected. Mitigation includes bias testing, diverse training data, transparent algorithms, and staff training. In education, avoid using AI for decisions like admissions or grading without human oversight.
What are the benefits of AI for a construction cost consultant?
AI can automate quantity take-offs, generate cost estimates, identify project risks and predict overruns. It enhances decision-making, reduces manual input errors and improves timeline accuracy.
How safe is it to use AI in an ever-changing financial services sector?
It depends on governance. Financial firms must meet strict regulatory standards, such as those set by the FCA and PRA and the requirements of DORA. AI use must be auditable, transparent and stress-tested. Internal policies should control how AI is trained and deployed.
Will accountants be needed in the future due to AI?
Yes. AI will automate routine tasks like data entry and basic reconciliations but cannot replace judgment, strategy or nuanced advisory roles. Accountants will evolve into data interpreters and strategic advisors.
How can I draft an AI acceptable use policy and procedure to prevent plagiarism and cheating for reports?
Define permissible and impermissible uses. Require disclosure when AI is used, prohibit generative use in assessments unless stated and implement detection software. Be sure to include clear sanctions for misuse. Check out our data privacy template.
What are the best AI tools we can use to assist us in day-to-day jobs within health and safety?
Tools like Smartvid.io for site safety analytics, Microsoft Copilot for documentation, and natural language processing (NLP) tools for incident report analysis. Make sure to vet tools for data privacy and security compliance.
How do you handle AI risks through the ISMS (ISO27001)?
Identify AI use in the information asset register and include AI-related threats in risk assessments. Apply controls around data privacy, access and algorithm accountability. Ensure incident management plans include AI misuse scenarios.
How can we use AI but still be ethical and compliant? What are some examples of where it has gone wrong?
Use AI within clear ethical guidelines, prioritising transparency, bias mitigation and human oversight. Failures include Amazon’s biased hiring AI and the misuse of facial recognition in law enforcement. Learn from these by implementing safeguards and regular audits.
Where should we draw the line between the advantages of AI use and potential risks to our brand and business reputation?
Draw the line where transparency, fairness or legal compliance is compromised. Prioritise trust and user rights over operational convenience. If in doubt, err on the side of caution.
Will maximising AI save on resource costs?
In many areas, yes. These include admin, customer service and basic analysis. But the cost of governance, training and oversight should be factored in. Cost savings should not come at the expense of compliance or quality.
What tools can we put in place to monitor the use of AI-generated outputs like learner reports and assignments?
Use AI detection tools like Turnitin’s AI detection, mandate usage declarations and apply metadata analysis. Also, train educators to spot inconsistencies in writing style or reasoning.
What are my responsibilities as an employer in terms of using AI?
Ensure employees understand acceptable AI use, train them on compliance, protect customer and employee data and provide oversight. Establish a clear AI usage policy and regularly audit practice.
Are there AI use cases for regulatory work?
AI can support regulatory teams by automating policy scanning, regulatory intelligence monitoring, compliance report generation and risk prioritisation. It can also assist with anomaly detection in data-heavy environments.
Are there network integration risks, such as AI access to shared files?
AI tools integrated into company networks, such as via cloud services, can inadvertently access sensitive or confidential files. Limit access via permissions, apply DLP controls and log AI interactions with shared drives.
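For teams that want to see what that logging could look like in practice, here is a minimal, purely illustrative sketch in Python. The folder paths, log file name and function name are all hypothetical; the point is simply that any attempt to pass a file from a shared drive to an AI tool is checked against an allow-list and recorded for audit.

```python
# Purely illustrative sketch: the folders, log file and function are hypothetical.
# Before a document from a shared drive is sent to an AI tool, check an
# allow-list of permitted folders and record the attempt in an audit log.
import json
import time
from pathlib import Path

ALLOWED_FOLDERS = {Path("/shared/marketing"), Path("/shared/public-templates")}
AUDIT_LOG = Path("ai_access_audit.jsonl")

def can_send_to_ai(file_path: str) -> bool:
    """Return True only if the file sits under an approved shared folder."""
    path = Path(file_path).resolve()
    allowed = any(folder in path.parents for folder in ALLOWED_FOLDERS)
    # Log every attempt, allowed or not, so AI access to shared files stays auditable.
    with AUDIT_LOG.open("a") as log:
        log.write(json.dumps({
            "timestamp": time.time(),
            "file": str(path),
            "allowed": allowed,
        }) + "\n")
    return allowed
```

In a real deployment these controls would normally sit in your DLP or cloud access security tooling rather than in a script, but the principle of allow-listing plus logging is the same.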
Applying a 2×2 risk map analysis, is the EU AI Act only considering the impact, regardless of likelihood?
No. The EU AI Act considers both the likelihood and the severity of harm in determining whether a system is classified as high-risk. It builds on GDPR’s risk-based approach but sharpens the focus on the context of deployment. For example, even if the likelihood of harm is low, a use with very high potential impact, such as biometric identification by law enforcement, may still be regulated as high-risk. Don’t miss our Guide to the EU AI Act.
What happens to information a user puts into AI, from a privacy and copyright perspective?
When you input data into a generative AI tool, several things may happen depending on the provider:
- Privacy: If the tool is cloud-based like ChatGPT, inputs may be logged or temporarily stored. Enterprise versions often guarantee that data is not used for model training.
- Copyright: If you input copyrighted material, you’re still responsible for how it’s handled. The AI won’t take copyright ownership, but some terms of service allow reuse of inputs unless explicitly restricted.
It’s important to always check the tool’s privacy policy and terms.
How does data privacy protection work for face recognition in CCTV owned by regulators (such as a police department)?
Facial recognition used by public authorities like the police is subject to strict data protection laws under GDPR and the Law Enforcement Directive. Use must be lawful, necessary and proportionate. In many EU countries, its use is highly restricted and often requires a specific legal basis, court approval or legislative mandate.
What about SaaS tools implementing AI-powered features that are US-based?
SaaS providers based outside the EU/UK, including in the US, must comply with GDPR if they process the personal data of individuals in the EU/UK. The recent EU-US Data Privacy Framework offers a legal mechanism for compliant data transfers. That said, organisations must conduct Transfer Impact Assessments and ensure tools have appropriate safeguards, especially if they handle employee or customer data, such as HR platforms or CRMs.
How does ChatGPT do everything so quickly?
ChatGPT operates on powerful, pre-trained models that use deep learning and parallel computing across massive server infrastructure. When you ask a question, the model doesn’t search the internet; it generates responses based on patterns it learned during training. This allows for very fast, context-aware replies.
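To make that concrete, here is a minimal sketch using a small open-source model (GPT-2, loaded through the Hugging Face transformers library). It is far smaller and simpler than the models behind ChatGPT, but it works on the same principle: text is generated token by token from patterns stored in the model’s weights, with no internet lookup at the time of the request.

```python
# Minimal sketch of pattern-based text generation with a small open model.
# GPT-2 is not ChatGPT, but the principle is the same: the model predicts the
# next token repeatedly from its learned weights, without searching the web.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "AI compliance means"
inputs = tokenizer(prompt, return_tensors="pt")

output_ids = model.generate(**inputs, max_new_tokens=30, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The speed you see in ChatGPT comes from running much larger versions of this same next-token loop on heavily optimised, parallel server hardware.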
What concerns me is where my information goes when using AI tools, for instance when summarising an email. Who sees it?
In the free or basic versions of AI tools, inputs may be used to improve the model unless the provider states otherwise. However, in enterprise or API versions, like ChatGPT Enterprise or Microsoft Copilot, data is typically encrypted and not used for training or accessible by humans. Still, always confirm the data policy for the specific version you’re using.
Are any AI tools recommended or considered safer than others?
Yes. Tools like ChatGPT Enterprise, Microsoft Copilot, and Google Duet AI are designed with enterprise-grade privacy in mind and do not use your data for model training. Look for tools that:
- offer data isolation
- are GDPR-compliant
- provide transparent data handling policies
- are hosted in the EU/UK or, for US tools, certified under the EU-US Data Privacy Framework (which replaced the invalidated Privacy Shield)
Regarding web scraping, is it true that early AI training violated GDPR, and regulators ignored it?
Many early AI models were indeed trained on publicly available data, including scraped websites, some of which may contain personal data. The legality is now under scrutiny, especially in the EU. Regulators haven’t exactly turned a “blind eye” but are grappling with how to apply existing laws to foundation model training. The EU AI Act and recent guidance from data protection authorities are beginning to address this gap.
What does ChatGPT do with all the info users input? Is it stored? Or used for training?
In the free and Plus versions, inputs may be used to improve the model unless users disable that setting. In ChatGPT Enterprise, API, and Microsoft Copilot, your data is not used for training and is not stored long-term. Always check the data usage and privacy policy of the specific version you’re using.
What about legal AI tools like Lexis+ AI and Practical Law AI?
These tools are specifically tailored for legal use and are generally built to meet higher standards for confidentiality and compliance. They often operate within private cloud environments and are not trained on user inputs. These tools may also restrict generative features to summarisation or precedent drafting, reducing exposure to hallucinations and compliance risks.
How can information entered into an AI tool be used by the company that owns it?
Depending on the tool’s terms, user input may be:
- stored temporarily or long-term
- used to improve the AI model (if opted in)
- shared with subprocessors (such as for hosting)
For sensitive use cases, opt for tools offering enterprise-grade guarantees, like data isolation, retention controls, and no training on inputs.
What are the top 3 competitors/equivalents to ChatGPT?
- Claude by Anthropic – Known for safety-focused design and large context window
- Gemini (formerly Bard) by Google – Integrated with Google services
- Mistral or LLaMA (Meta) – Often used in open-source applications, but more technical to deploy
These all vary in openness, accuracy and privacy handling.
Is there a way to tell if content was written by AI?
It’s difficult, especially with well-edited output. Tools like GPTZero or Turnitin’s AI detection offer some analysis (OpenAI withdrew its own AI text classifier because of low accuracy), but none are 100% accurate. Look for unnatural phrasing, overuse of clichés or a lack of specific personal or contextual detail. Policy-wise, watermarking and disclosure are emerging as good practices.
Concerns about DeepSeek aside, are we being naive about Western AI models and government access?
There’s legitimate concern. While Western models are governed by data protection laws, intelligence agencies may still request access under legal frameworks like the US CLOUD Act. This raises questions about sovereignty, data transfers and government surveillance. Transparency reports and regional hosting, such as EU-only data centres, are part of the mitigation strategies.
Do you have a matrix of LLMs and how they compare for data privacy and EU compliance?
There isn’t a universal matrix yet, but some regulators and research bodies are starting to compare models. The EU AI Act will push for more transparent disclosures. For now, check:
- hosting location
- data retention policy
- enterprise offering availability
- certifications (such as ISO 27001 or SOC 2)
Will the model used in VinciWorks conversational learning apply to all of your future training?
If you’re referring to our training programmes that use interactive AI, yes: we are expanding these models across all courses to enhance engagement, offer personalised feedback and track learner progress. Learn more about our conversational learning courses.
If drafting an AI policy for a firm, should it only apply to Gen AI? How to define Gen AI and identify covered tools?
Yes, a policy should at least cover Generative AI, since that’s where most emerging risks lie.
- Definition: Gen AI refers to tools that can create new content (text, images, code) based on learned data patterns.
- How to identify tools: Conduct an internal audit of software in use. Look for features labelled as “AI Assistant,” “Copilot,” “Smart Suggestions,” or integrated language/image generation capabilities (a rough sketch of this kind of audit follows below).
Be sure to include provisions for procurement, data usage, and acceptable use.
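As a rough illustration of that audit step, the sketch below scans a hypothetical software inventory export (assumed here to be a CSV with “name” and “description” columns, which your own asset register may not match) for AI-related keywords. Keyword matching is only a first pass; anything flagged still needs manual review against the policy.

```python
# Rough illustration only: the inventory file and its columns are hypothetical.
# Scan a software inventory export for names or descriptions that suggest
# generative AI features, so those tools can be reviewed against the policy.
import csv

AI_KEYWORDS = ["ai assistant", "copilot", "smart suggestions",
               "generative", "chatbot", "gpt"]

def flag_possible_gen_ai(inventory_csv: str) -> list[str]:
    """Return the names of tools whose entry mentions an AI-related keyword."""
    flagged = []
    with open(inventory_csv, newline="") as f:
        for row in csv.DictReader(f):
            text = f"{row.get('name', '')} {row.get('description', '')}".lower()
            if any(keyword in text for keyword in AI_KEYWORDS):
                flagged.append(row.get("name", "unknown"))
    return flagged

# Example: flag_possible_gen_ai("software_inventory.csv")
```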
I need more guidance on an AI usage policy for a non-profit.
Non-profits should focus on:
- clear consent for any personal data input into AI tools
- restricting use of public AI tools for sensitive information
- documenting which AI tools are in use and their data policies
- ensuring compliance with GDPR and donor expectations
If drafting an AI policy for a law firm, my understanding is that this should apply to Gen AI, as this is where the risks and issues lie. How would you define Gen AI, and how would you go about recognising which of the products the firm uses or proposes to use fall within this definition and are covered by the policy?
Generative AI refers to AI systems that create new content, like text, summaries, or documents, based on user prompts. Think ChatGPT, Copilot, Lexis+ AI. In your policy, define Gen AI as tools that generate human-like outputs rather than just analysing data. To identify relevant tools, audit software currently in use, flag tools with AI-powered content generation, and require staff to register any new tools they plan to use. Focus the policy on tools handling legal content, client data, or internal documents, and maintain a list of approved Gen AI platforms.
Given that privately accessible AI platforms (like Microsoft) were seen as the answer, what are the implications with the US on privacy?
Even with private AI platforms like Microsoft’s Azure OpenAI, data sovereignty concerns remain, especially under US laws like the CLOUD Act, which could allow access to data by US authorities. Organisations must ensure contractual safeguards (like SCCs or DPF compliance) and, where possible, use EU or UK-based data centres.
Want to make the most of AI while keeping your business safe?
Try our collection of AI-at-work courses
With our AI courses you will…
Understand the concepts and terms used in discussing AI
Get advice on best practices for using AI in the workplace
Gain familiarity with the risks associated with AI use
Explore AI’s moral issues and challenges