For the past two years, most conversations about AI have focused on what AI produces. Concerns around hallucinations, bias, misinformation, and job displacement have dominated headlines and boardroom discussions. But Canada’s recent findings against OpenAI and ChatGPT suggest regulators are now looking at the data used to build AI systems in the first place.
That shift could have real implications for businesses around the world.
In a recently concluded investigation, Canadian privacy regulators determined that OpenAI’s original training and deployment of ChatGPT violated Canadian privacy laws. The investigation found that the company collected and used personal information without sufficient transparency or meaningful consent.
The regulators said ChatGPT had been trained using vast amounts of publicly accessible online information, alongside user conversations, in ways that many Canadians neither understood nor reasonably expected. Sensitive information, including political views, health conditions, and information relating to children, was potentially swept up in the process.
At the centre of the ruling was whether users were properly informed about the scale and purpose of the company’s data collection, and whether publicly available information should automatically be considered fair game for AI training.
That distinction matters. For years, tech companies have largely operated under the assumption that information posted publicly online could be freely scraped, analysed, and reused to develop AI systems. Canada’s findings challenge that assumption directly. Regulators are increasingly questioning whether “publicly available” should really mean “freely exploitable”, particularly when sensitive personal information is involved.
This represents a shift in AI governance: regulators are beginning to ask not only how a model behaves, but whether its foundations were built lawfully and responsibly. Once that question enters the conversation, the implications extend far beyond AI developers.
Why businesses should care
For businesses adopting generative AI, this ruling is a warning that AI governance can no longer be treated solely as an innovation issue. It is also becoming a compliance, procurement, and reputational issue.
Many organisations have adopted tools like ChatGPT primarily to enhance efficiency and productivity. But regulators are now signalling that businesses must also think carefully about accountability, data governance, and trust.
That is especially true for organisations handling sensitive information such as employee records, healthcare data, financial information, or confidential client materials. Questions that once seemed abstract are becoming practical concerns: Where was the model’s training data sourced from? Was consent obtained? What legal risks could arise from using a system trained on disputed datasets?
These concerns are likely to reshape vendor assessments and enterprise AI governance over the coming years. Businesses may demand stronger contractual assurances from AI providers, clearer data-handling practices, and greater transparency around model development.
AI due diligence is starting to look more like cybersecurity or privacy compliance.
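What might that look like in practice? Below is a minimal sketch, in Python, of how an organisation could encode an AI vendor assessment as a weighted checklist, the way security questionnaires are often tracked. The vendor name, questions, and weights are illustrative assumptions, not an established framework.

```python
from dataclasses import dataclass, field

# Hypothetical AI vendor due-diligence checklist.
# The questions and weights below are illustrative, not a standard.

@dataclass
class DiligenceItem:
    question: str
    weight: int          # relative importance of this check
    satisfied: bool = False

@dataclass
class VendorAssessment:
    vendor: str
    items: list[DiligenceItem] = field(default_factory=list)

    def score(self) -> float:
        """Weighted share of satisfied checks, from 0.0 to 1.0."""
        total = sum(item.weight for item in self.items)
        met = sum(item.weight for item in self.items if item.satisfied)
        return met / total if total else 0.0

assessment = VendorAssessment(
    vendor="ExampleAI",  # hypothetical provider
    items=[
        DiligenceItem("Is training-data provenance documented?", weight=3),
        DiligenceItem("Was consent obtained for personal data in training sets?", weight=3),
        DiligenceItem("Are enterprise inputs excluded from future training?", weight=2),
        DiligenceItem("Does the contract assign liability for disputed datasets?", weight=2),
    ],
)

print(f"{assessment.vendor}: {assessment.score():.0%} of weighted checks satisfied")
```

Treating these questions as scored, auditable checks rather than ad hoc conversations is exactly what makes AI due diligence start to resemble cybersecurity compliance.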
A global ripple effect
Canada is unlikely to remain an isolated case.
Privacy regulators around the world are moving at different speeds, but many are converging on the same core issue of whether existing privacy laws can accommodate the realities of modern AI development. Some jurisdictions appear willing to take a pragmatic approach to internet-scale data scraping. Others are becoming less comfortable with the idea.
That divergence could create growing complexity for multinational businesses deploying AI tools across borders.
At the same time, the investigation indicates that regulators are not necessarily trying to stop AI adoption. In fact, Canadian regulators acknowledged that OpenAI has since implemented stronger privacy protections, limited the sensitive information used for training, and improved transparency around how ChatGPT handles personal data.
The broader message is not that innovation should stop. It is that systems must be built responsibly before deployment, not after.
The future of ChatGPT in business
The future of generative AI in business will likely depend less on raw capability and more on trust. The next phase of AI adoption will favour companies that can demonstrate responsible governance and lawful data practices. That means robust privacy controls, clearer transparency around training data, stronger enterprise safeguards, and more mature AI governance structures will become competitive advantages rather than just regulatory burdens.
Canada’s ruling might be one of the first signals that the AI race is entering a new phase, where the central question is not just what AI can do but whether the systems behind it were built in a way that can withstand regulatory and public scrutiny.