Large AI models are rapidly moving into regulated sectors, and healthcare is no exception. Recent developments show regulators in the US and Europe increasing scrutiny of AI use in healthcare and life sciences, while insurers and healthcare providers accelerate AI adoption despite uneven governance readiness. This combination of regulatory pressure and rapid deployment helps explain why tools like GPT Health are emerging now: AI capabilities are maturing faster than formal oversight frameworks.
GPT Health, a newly released health-focused capability within ChatGPT, allows individuals to interact with medical records and health data to better understand conditions, prepare for appointments, and navigate healthcare systems. Although it is currently available only to a limited audience and aimed at individual users, its arrival has significant implications for businesses, particularly those handling sensitive data or operating in regulated environments.
A shift for compliance teams
GPT Health highlights a shift for compliance teams: AI tools are no longer limited to general productivity. They are beginning to touch high-risk data, regulated processes, and legal obligations.
For organisations in healthcare, insurance, life sciences, and digital health, GPT Health signals how AI can improve user engagement and operational efficiency. AI can help explain complex information, reduce administrative friction, and improve understanding of healthcare processes. Over time, similar capabilities are likely to be embedded into customer portals, support functions, and internal workflows.
However, this also raises expectations. Customers, employees, and patients may assume AI-assisted tools are safe, accurate, and compliant by default. Businesses that deploy or allow the use of AI in health-related contexts will be expected to demonstrate that they understand the risks and have controls in place. This applies not only to healthcare providers, but also to employers, insurers, and platforms that may indirectly process health-related information.
This expectation comes from regulations like HIPAA in the US and GDPR in the EU, which require organisations to protect health data and implement strong security controls. Emerging rules, such as the EU AI Act, also hold organisations responsible for assessing risks, ensuring transparency, and maintaining human oversight for high-risk AI. Compliance teams must show they understand these risks because failure to do so can lead to penalties, legal liability, and reputational damage.
Regulatory and compliance considerations
Health data is among the most tightly regulated categories of personal information. In many jurisdictions it falls under special protections, such as HIPAA in the US and GDPR special-category data rules in the EU. The use of AI tools like GPT Health does not remove these obligations.
From a compliance perspective, the key risk is not the AI itself, but how it is used. If employees input health data into AI tools without authorisation, safeguards, or contractual protections, organisations may be exposed to data protection breaches, regulatory penalties, and reputational harm. Even where vendors state that data is protected or isolated, organisations remain responsible for ensuring lawful processing, appropriate access controls, and clear accountability.
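To make the idea of safeguards concrete, the sketch below shows one way an internal gateway might screen prompts before they reach an external AI tool. This is a minimal illustration, assuming a hypothetical keyword screen and a placeholder vendor call; real deployments would rely on proper PHI/PII detection, contractual protections, and access controls rather than a simple pattern list.

```python
import re

# Illustrative patterns only; real health-data detection is far broader
# and typically uses dedicated PHI/PII classifiers, not keyword lists.
HEALTH_PATTERNS = [
    r"\bdiagnos(is|es|ed)\b",
    r"\bprescription\b",
    r"\bmedical record\b",
    r"\bpatient\b",
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt appears to contain health-related data."""
    return any(re.search(p, prompt, re.IGNORECASE) for p in HEALTH_PATTERNS)

def call_external_ai(prompt: str) -> str:
    """Placeholder for the vendor API call; not a real endpoint."""
    return "(model response)"

def submit_to_ai(prompt: str, user_authorised: bool) -> str:
    """Block unauthorised submissions that appear to contain health data."""
    if screen_prompt(prompt) and not user_authorised:
        raise PermissionError(
            "Prompt appears to contain health data; authorisation required."
        )
    return call_external_ai(prompt)
```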
Governance and oversight
GPT Health reinforces the need for strong AI governance because it introduces AI into highly sensitive and regulated areas, where errors or misuse can have serious legal, operational, and reputational consequences. Compliance teams must ensure that AI tools handling sensitive information are covered by existing policies on data protection, information security, and acceptable use. This includes defining who can use such tools, for what purpose, and under what conditions, so that organisations maintain accountability, traceability, and regulatory compliance.
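One way to make "who, for what purpose, and under what conditions" operational is a policy gate that checks each request against an explicit mapping. The sketch below is a simplified illustration; the roles, data classes, and purposes are hypothetical labels an organisation would define for itself.

```python
from dataclasses import dataclass

# Illustrative acceptable-use policy: which roles may apply AI tools to
# which classes of data, and for which approved purposes.
POLICY = {
    ("clinician", "health_data"): {"summarise_record", "draft_letter"},
    ("support_agent", "general"): {"draft_reply"},
}

@dataclass
class Request:
    role: str
    data_class: str
    purpose: str

def is_permitted(req: Request) -> bool:
    """Check a request against the acceptable-use policy."""
    return req.purpose in POLICY.get((req.role, req.data_class), set())

# A support agent may not summarise medical records; a clinician may.
print(is_permitted(Request("support_agent", "health_data", "summarise_record")))  # False
print(is_permitted(Request("clinician", "health_data", "summarise_record")))      # True
```

Encoding the policy as data rather than scattered conditionals keeps it reviewable and easy to update as rules change.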
Auditability is also critical. When AI systems are involved in interpreting or processing health information, organisations must be able to demonstrate how that data is handled, how decisions are made, and how risks are managed. Without clear logging, oversight, and documentation, compliance becomes difficult to prove.
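As an illustration of what such logging could look like, here is a minimal sketch of an append-only audit trail for AI interactions. The field names and JSON-lines format are assumptions for the example, not a regulatory schema; note that the prompt is stored only as a hash, so the log does not itself become another store of sensitive data.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(user_id: str, data_class: str, purpose: str, prompt: str) -> dict:
    """Build one audit entry recording who used the AI tool, on what, and why."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "data_class": data_class,
        "purpose": purpose,
        # Hash the prompt rather than storing it verbatim, so the audit
        # log does not become another repository of sensitive data.
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
    }

def append_audit(path: str, record: dict) -> None:
    """Append one JSON line to the audit log file."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: record a clinician summarising a patient record.
append_audit("ai_audit.jsonl",
             audit_record("u123", "health_data", "summarise_record", "..."))
```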
GPT Health is a signal of where AI is heading: into every aspect of daily life, handling sensitive and personal data. Compliance teams should act now to stay ahead of emerging risks. Organisations should review AI usage policies to explicitly cover health and other sensitive data, train employees on when AI tools can and cannot be used for regulated information, work with legal and IT teams to assess vendor assurances and data-handling practices, and embed AI risk into privacy impact assessments and broader risk frameworks. AI can bring real benefits, but those benefits are only realised when paired with clear rules, human oversight, and accountability.