EU moves to rein in ChatGPT

The European Commission is currently assessing whether ChatGPT should be formally classified as a Very Large Online Search Engine (VLOSE) under the Digital Services Act (DSA). This might sound like a technical designation, but it would mean that a general-purpose AI system is no longer treated as just software, but as a systemically important digital service.

If confirmed, ChatGPT would become the first standalone AI product to fall into the DSA’s most stringent regulatory category, placing it alongside platforms like Google, Meta, and Amazon. And the implications extend far beyond OpenAI.

A threshold crossed 

The immediate trigger for this regulatory movement is scale. Under the DSA, any online service with more than 45 million average monthly active users in the EU can be designated as “very large,” a label reserved for services considered capable of posing systemic risks. ChatGPT has comfortably exceeded that threshold, reaching over 120 million monthly users across Europe.

But this is not only about how many people use ChatGPT. It’s about what kind of service it actually is.

The DSA was designed around clear categories: online platforms that host and distribute content, and search engines that index and retrieve it. ChatGPT blurs those distinctions. It retrieves information like a search engine, generates new content like a publisher, and hosts interactions like a platform. Essentially, it’s a hybrid, and that is forcing regulators to stretch the boundaries of existing law.

This is why the EC has emphasised that classification will be handled on a case-by-case basis, rather than applying a blanket rule to all large language models. Even so, the fact that ChatGPT has crossed the user threshold, and has publicly disclosed as much, makes some form of designation increasingly likely.

Where the decision stands

Despite mounting expectations, the decision is not yet final. The EC is still in the process of reviewing OpenAI’s user data and consulting with national regulators, including Ireland’s media authority, which plays a key role in overseeing many major tech companies operating in Europe.

This phase may take several weeks, but it is procedural rather than speculative. Should the EC proceed with designation, the consequences will follow a well-defined path. OpenAI would be formally notified and given a four-month window to comply with the enhanced obligations attached to VLOSE status. After that, the company would enter a regime of ongoing supervision, including audits, reporting requirements, and financial contributions to regulatory oversight.

So, while the outcome is not yet official, the machinery of regulation is already in motion.

What would regulation mean?

Designation under the DSA is operationally demanding. For a service like ChatGPT, it would mean opening up parts of its inner workings to scrutiny in ways that have not previously been required of AI systems.

OpenAI would need to provide meaningful explanations of how its systems prioritise information, how recommendations are generated and, critically given recent developments, how advertising is integrated and targeted. This is particularly timely: ChatGPT has only recently begun experimenting with advertising features, bringing it closer to the commercial models long associated with traditional platforms.

The DSA would also require systematic risk assessment and mitigation. This includes examining how the service might contribute to the spread of illegal content, affect fundamental rights, influence democratic processes, or impact public health. These are the risks regulators have long associated with large digital platforms and are now beginning to apply to AI.

Oversight would not stop there. Independent audits would become mandatory, and vetted researchers could be granted access to platform data to study systemic risks. The financial dimension is also significant. OpenAI would be required to contribute to the cost of its own supervision, while facing potentially substantial fines for non-compliance, as seen in enforcement actions against companies like X (Twitter).

Why UK businesses should be paying attention

For organisations in the UK, it would be a mistake to view this as a purely European development. The reality is that EU digital regulation continues to exert influence far beyond its borders, a phenomenon often described as the “Brussels Effect.”

Many UK businesses operate within or alongside EU markets, rely on EU-based users, or integrate services that are themselves subject to EU law. If ChatGPT becomes regulated under the DSA, those businesses may find themselves indirectly affected through supply chains, contractual obligations, and customer expectations.

There is also a broader regulatory convergence to consider. The DSA does not exist in isolation. It sits alongside the EU AI Act, which governs the development and deployment of AI systems. Together, these frameworks create a layered model of oversight, one that addresses both how AI systems are built and how they operate at scale in society. UK regulators may not replicate this structure exactly, but they are unlikely to ignore it.

Importantly, this development signals a shift in how AI tools are perceived. They are no longer just productivity enhancers or experimental technologies. They are becoming regulated infrastructure, with all the responsibilities that entails. Businesses that rely on AI, whether for customer service, marketing, legal analysis, or internal operations, will need to think carefully about governance, transparency, and accountability.

What comes next?

ChatGPT has clearly exceeded the DSA’s user threshold, and regulators are under increasing pressure to ensure that AI services with systemic reach do not escape meaningful oversight. Although the EC is approaching classification cautiously, the probability of designation is high.

If it happens, the precedent will be huge. It will establish that AI systems can be regulated not only as technologies but as platforms of societal significance. It will also accelerate a broader reassessment of how existing laws apply to emerging digital services that do not fit neatly into traditional categories.

The regulatory environment for AI is becoming more complex and more consequential. And while the story of ChatGPT and the DSA is still being written, it is already beginning to shape the future of digital regulation.