Compliance alert: Do not use AI browsers

AI browsers such as ChatGPT Atlas, Perplexity’s Comet, and Gemini-enabled Chrome are being promoted as smarter, faster ways to browse the web. They can read pages, summarise content, fill in forms, and even perform tasks automatically. But behind the marketing, these tools pose serious cybersecurity and privacy risks.

 

They give AI agents direct access to your browser sessions, emails, and online accounts, creating new ways for data to be stolen or misused. Attacks have already shown how easily these systems can be tricked into leaking sensitive information.

 

Until developers can prove these browsers are secure, they should not be used in any environment that handles personal, client, or confidential data. Privacy and compliance teams should block or isolate AI browsers, disable any “agent” or “memory” features, and prevent them from connecting to corporate systems immediately.

 

The illusion of control with AI browsers

Traditional browsers are passive windows to the internet. They display information but don’t decide what to do with it. AI browsers change that. They embed autonomous “agents” capable of reading, interpreting, and acting on content. When you tell Atlas to “book a meeting” or ask Comet to “summarise this page,” those agents act with your credentials, your logins, cookies, and stored sessions.

 

That creates a single, devastating vulnerability. If a malicious instruction is hidden within a webpage, a Reddit comment, or even an image, the agent may treat it as a real command. Researchers at Brave demonstrated this in August 2025, showing how Perplexity’s Comet could be hijacked by “prompt injections” embedded in web content. When triggered, the browser ignored security boundaries and executed cross-site actions as if they came from the user.

 

A few weeks later, LayerX uncovered an even more alarming flaw they called CometJacking. With one crafted link, an attacker could force the AI browser to consult its own “memory,” access connected services such as Gmail or Google Calendar, and secretly send that information to an external server. No malware, no phishing: just a malicious URL that looked entirely legitimate.

 

As Brave’s engineers explained, once the AI starts acting “on your behalf,” it can no longer reliably tell which instructions came from you and which came from a webpage. That’s why they and others, including OpenAI’s own security officers, call prompt injection an unsolved frontier problem.

 

The consequences are already visible. Proof-of-concept attacks show AI browsers stealing emails, copying calendar data, and sending encoded payloads to attacker-controlled servers, all within standard browsing sessions that look legitimate to corporate monitoring tools. To a firewall or SIEM, it’s indistinguishable from normal user behaviour.

 

ChatGPT’s Atlas: The surveillance browser

OpenAI’s ChatGPT Atlas takes the concept even further. Built on Chromium, Atlas merges browsing and conversation. Its “Agent Mode” can navigate pages, fill forms, and carry out multi-step actions. Its “Memories” feature records browsing activity to make responses more personal.

 

OpenAI says these options can be disabled, but as privacy experts point out, very few users change default settings. Even with protections like “Logged-out Mode” and “Watch Mode,” Atlas still processes everything you view, every site you visit, and how long you stay there. As privacy experts warned, this means the AI gains full visibility into your online behaviour, effectively turning your browser into a continuous data collection engine.

 

Under UK and EU GDPR, tools that track or infer detailed behavioural data require clear consent, purpose limitation, and strict data minimisation. The “memory” features in these browsers directly undermine those principles. Even if OpenAI or Perplexity promise that data isn’t used for training, it’s still analysed and stored for personalisation. This creates detailed behavioural profiles that could expose sensitive or confidential information.

 

Perplexity Comet: When one link can steal everything

If Atlas is the surveillance problem, Perplexity’s Comet is the exploitation problem. In 2025, researchers at Brave and LayerX independently uncovered vulnerabilities showing that Comet could be hijacked through indirect prompt injections and “weaponised URLs.”

 

The CometJacking attack was especially alarming. By clicking a single link, a user could unknowingly authorise the browser’s AI to access its memory, retrieve emails or calendar data, encode the results, and send them to a remote server. No warning, no download, no obvious red flag. To a network monitor, it would look like an ordinary user browsing the web.

 

Comet’s developers patched parts of the vulnerability, but later analysis showed similar attacks still worked in slightly modified form. The core issue remains: AI agents inside browsers can’t reliably separate trusted instructions from hostile content.

 

Opera’s Neon and the agentic risks

Opera was one of the first established browsers to enter the AI race. Its Neon browser introduced built-in “agentic browsing” tools designed to navigate the web for you, allowing users to summarise and perform tasks directly within websites.

 

However, cybersecurity experts warn that Neon shares the same underlying weaknesses as other AI browsers. Brave’s engineering team described the problem as systemic, explaining that once an AI agent acts with a user’s full privileges, the same-origin policy—a cornerstone of web security—no longer applies. This means a malicious webpage can effectively instruct the assistant to perform cross-domain actions, bypassing the protections that have kept browsers secure for decades.

 

So far, Opera has not published any detailed technical documentation showing how Neon mitigates prompt injection attacks or limits the data its AI assistant can access or store. As privacy researchers note, Neon reflects a broader industry trend: browser vendors rushing to add AI functionality before proving that it can be deployed safely.

 

Gemini-enabled Chrome: even Google isn’t immune

Google’s Gemini platform brings generative AI directly into Gmail, Docs, Sheets, and other Workspace apps, allowing it to read, analyse, and act on an organisation’s internal data. This deep integration makes it a powerful productivity tool, but also a major privacy and cybersecurity risk. Because Gemini sits inside Google Workspace, it effectively has access to everything an employee can see: emails, documents, spreadsheets, and even shared drives. That level of visibility means that a single breach, insider misuse, or misconfigured permission could expose vast amounts of sensitive data. Even when Google promises that Workspace content isn’t used to train models without consent, the system still processes and stores contextual data, creating potential exposure through inference or cached outputs.

 

Security experts warn that Gemini’s design dramatically expands the attack surface for organisations. The AI layer sits above multiple connected apps, so any vulnerability in its language model or API could provide attackers with access across an entire corporate environment. Because the system analyses content to provide “contextual assistance,” it can inadvertently surface confidential details, share internal data with unintended recipients, or generate text that leaks sensitive information. The lack of external transparency about how prompts, logs, and anonymised data are stored or audited only increases regulatory risk under GDPR and other privacy frameworks. Gemini introduces a single, powerful point of failure. For now, privacy teams should treat it as a high-risk tool and restrict its use around confidential or regulated information.

 

While Google’s security infrastructure is far more mature, experts caution that the issue isn’t patch management, it’s architecture. Despite his conflict of interest, Mozilla’s Brian Grinstead makes an important point that even the best models “don’t yet know how to separate trusted content from untrusted content.” That means a prompt hidden in a blog post or product review could still hijack an AI browser’s agentic layer, causing it to perform actions the user never intended.

 

A compliance risk disguised as convenience

For compliance and data-protection teams, the risks go far beyond data loss. An AI browser that reads, remembers, and acts on sensitive information creates a chain of potential violations:

 

  • Unlawful processing under GDPR if personal or client data is used without explicit consent.
  • Breach of security of processing (Article 32) when a prompt injection or AI misfire causes data exfiltration.
  • Loss of accountability when no clear audit trail exists for what the AI agent did or why.

 

Even OpenAI’s own CISO admits prompt injection is an “unsolved frontier problem.” In compliance terms, that means these tools are inherently high-risk, and no organisation should deploy them without a Data Protection Impact Assessment that concludes otherwise.

 

Everyone should treat AI browsers as high-risk technologies

 

What experts advise: Do not use

Security researchers across Brave, McAfee, and Reality Defender agree on one point: AI browsers are not ready for sensitive use. Privacy teams should block or isolate these tools and wait for the technology to mature. Until developers can build clear separations between user intent and webpage content, and until data-protection controls catch up, AI browsers belong in a sandbox—not on corporate networks.

 

  • Do not connect AI browsers to work emails, calendars, or document drives.
  • Avoid using them on sites requiring authentication or containing client or personal data.
  • Disable memory and agent features entirely if you must test them.
  • Treat them like experimental tools, not production-ready software.