A Chrome extension with a shiny “Featured” badge quietly scraped ChatGPT and DeepSeek conversations from roughly 900,000 browsers, batched them every 30 minutes, and shipped them off to attacker‑controlled domains.
That’s the entire news part of the ChatGPT extension privacy story.
Everything else is the part people are getting wrong.
TL;DR
- The problem isn’t “a few bad extensions”; it’s a browser model that lets any extension become a full‑time keylogger for your AI use.
- Uninstalling malware helps, but unless you isolate AI work, lock down extension policy, and treat prompts like credentials, prompt poaching is a feature, not a bug.
- For most companies, the bigger risk isn’t advertising; it’s leaked IP and credentials that can be reused, sold, or used to train competing models.
How extensions grabbed ChatGPT prompts at scale
Security researchers at OX Security found two Chrome extensions impersonating an AI sidebar product called AITOPIA. Together they had about 900,000 installs. Microsoft’s Defender team later confirmed matching telemetry across 20,000+ enterprise tenants.
What did these extensions actually do?
- Injected content scripts into ChatGPT/DeepSeek pages
- Scraped the DOM to capture prompts and responses
- Stored them locally in Base64‑encoded JSON
- Every ~30 minutes, exfiltrated the bundle over HTTPS to domains like deepaichats[.]com and chatsaigpt[.]com
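The pipeline is simple enough to sketch in a few lines. This is a hedged reconstruction in Python rather than extension JavaScript, and the payload field names (`conversations`, `prompt`, `response`) are illustrative assumptions, not the real schema:

```python
import base64
import json

def encode_batch(conversations):
    """Bundle captured chats the way the extensions reportedly did:
    plain JSON, Base64-encoded. Field names are assumptions."""
    payload = json.dumps({"conversations": conversations})
    return base64.b64encode(payload.encode("utf-8")).decode("ascii")

def decode_batch(blob):
    """Base64 is encoding, not encryption; anyone holding the blob
    (including anyone inspecting the traffic) can reverse it."""
    return json.loads(base64.b64decode(blob))

# In the real campaign, a blob like this was POSTed over HTTPS to the
# attacker-controlled domains roughly every 30 minutes.
blob = encode_batch([{"prompt": "debug this stack trace", "response": "..."}])
assert decode_batch(blob)["conversations"][0]["prompt"] == "debug this stack trace"
```

The Base64 step is the entire “obfuscation”: it defeats casual inspection of local storage and nothing else.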
OX notes the extensions promised only “anonymous, non‑identifiable analytics data” while quietly copying full conversation content.
This is not a sophisticated exploit.
It is a copy‑paste browser tutorial with an ad‑tech business model.
You do not need a 0‑day if the user willingly installs a keylogger and Chrome helpfully wires it into every tab.
Why the browser + AI‑wrapper economy made prompt‑poaching trivial
Most coverage frames this as a malware story: bad guys ship malicious extensions, Google removes them, install count stops at 900,000, everyone claps.
Except that’s not the interesting failure.
The interesting failure is that the winning AI UX is “random web app in a tab” and the winning monetization strategy for small devs is “free extension, harvest data later.” Prompt poaching is what you get when these two curves intersect.
1. The browser permission model is a loaded gun
Chrome’s extension model has three properties that are great for developers and terrible for ChatGPT extension privacy:
- “Read and change all your data on all websites” is a normal permission, shown with the same UI as legitimate cases like Dark Reader.
- Once granted, that power is always on: the extension runs in the background, injects into any matching site, and can phone home whenever it likes.
- There is no per‑site isolation for AI work; chat.openai.com is just another page in the ocean.
The math is instructive.
If even 1% of Chrome users install “helpful AI” extensions, and 1% of those are malicious or get bought by someone who is, you have millions of browsers quietly exporting whatever people type into their LLMs.
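Spelled out with hedged numbers (the user base and both rates below are rough assumptions for illustration, not figures from the report):

```python
# Rough, assumed inputs: Chrome's user base is on the order of billions,
# and the 1%-of-1% rates come from the thought experiment above.
chrome_users = 3_400_000_000   # assumed order of magnitude, not a reported figure
ai_extension_rate = 0.01       # fraction installing "helpful AI" extensions
malicious_rate = 0.01          # fraction of those that are, or become, hostile

compromised = chrome_users * ai_extension_rate * malicious_rate
print(int(compromised))
```

Hundreds of thousands of browsers per 1%‑of‑1% slice, and campaigns overlap: this one alone accounted for roughly 900,000 installs, so a handful of concurrent campaigns is all it takes to reach millions.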
Not a single security control has to fail. The system is working as designed.
2. The AI‑wrapper economy incentivizes data extraction
We’ve already written about AI wrappers: thin UI layers over someone else’s model, racing for users while paying all the API costs.
If you’re that developer, your economics are brutal:
- You pay for API calls.
- You compete with the model vendor’s own UI.
- Users won’t pay you much, if anything.
There is one obvious asset you control: user prompts.
Prompts are:
- High‑signal behavioral data (what people actually care about)
- Packed with proprietary code, strategy, and domain expertise
- Extremely valuable for ad‑tech, lead generation, and training your own model later
So you have a lot of wrappers and “AI sidebars” whose only viable business model is “collect data and figure out monetization later.” OX’s extensions simply skipped the wait.
It’s not that a few devs went rogue.
It’s that the wrapper business model is quietly telling every ambitious extension author: “The prompts are the product.”
3. “Security” tools are being built on the same sand
Look at recent incidents around leaked Google API keys. The pattern is the same:
- Developers paste secrets into web UIs.
- Browser extensions and frontends handle them as plain text.
- Those values end up everywhere: logs, third‑party analytics, misconfigured scripts.
We already know we need prompt‑layer security for agents calling tools and APIs.
What the extension campaign shows is that the browser is itself a prompt layer, and currently, it’s the least governed one.
ChatGPT extension privacy checklist: find, remove, and audit extensions
So: could your ChatGPT prompts have been stolen?
If you’ve ever had AI‑branded Chrome extensions installed, the honest answer is “possibly,” and for most people, “you’ll never know exactly what was taken.” The better question is: how quickly can you close the tap and limit the damage?
Step 1: Inventory and triage your extensions
In Chrome or Edge:
- Go to chrome://extensions (or edge://extensions).
- Toggle Developer mode on.
- For each extension, click Details.
Ask three questions for each:
- Does it say “Can read and change all your data on all websites”?
- Does its purpose require that level of access? (Dark mode, maybe. “AI Prompt Helper,” almost certainly not.)
- Do you recognize the developer and business model?
Quick rules:
- Remove any AI‑themed extension you don’t absolutely need, especially ones that mention ChatGPT, Claude, Gemini, Grok, or DeepSeek in the name.
- Prefer open‑source, audited projects over opaque “productivity boosters.”
- If an extension won’t let you restrict access to specific sites when it clearly could, treat that as a red flag, not a UX quirk.
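The permission triage above can be partially automated. A minimal sketch, assuming Chrome’s on‑disk layout of one `manifest.json` per installed extension version; the extensions directory path varies by OS and profile, so it is passed in:

```python
import json
from pathlib import Path

# Host patterns that amount to "read and change all your data on all
# websites" (covers Manifest V2 "permissions" and MV3 "host_permissions").
BROAD_PATTERNS = {"<all_urls>", "*://*/*", "http://*/*", "https://*/*"}

def flag_broad_extensions(extensions_dir):
    """Yield (manifest_path, matched_patterns) for overly broad manifests."""
    for manifest in Path(extensions_dir).rglob("manifest.json"):
        try:
            data = json.loads(manifest.read_text(encoding="utf-8"))
        except (OSError, json.JSONDecodeError):
            continue  # unreadable or malformed manifest; skip it
        granted = set(data.get("permissions", []))
        granted |= set(data.get("host_permissions", []))
        hits = granted & BROAD_PATTERNS
        if hits:
            yield str(manifest), sorted(hits)
```

Anything this flags is not necessarily malicious, but it holds exactly the access the malicious extensions used; apply the three questions above to each hit.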
Step 2: Look for past compromise signals
You probably can’t reconstruct everything that happened, but you can look for:
- Weird outbound domains in your DNS logs or router history (check against the C2 list from OX’s report).
- Extensions with IDs matching those in OX’s IOCs (see their report; they list exact IDs).
- Enterprise telemetry: Microsoft reports seeing this campaign in >20,000 tenants; if you’re on Microsoft 365, your security team may already have an alert template.
If any hits show up, assume:
- Anything you pasted into ChatGPT / DeepSeek on that browser is now out of your control.
- That includes credentials, API keys, snippets of proprietary code, and strategy docs summarized via copy‑paste.
Rotate keys and passwords accordingly.
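Checking exported logs against the indicators can be scripted. A minimal sketch: the two domains below are the ones named in this article, and `IOC_DOMAINS` should be extended with the full list from OX’s report before you rely on it:

```python
# Defanged domains from the campaign, re-fanged for matching.
# Extend this set with the complete IOC list from OX's report.
IOC_DOMAINS = {"deepaichats.com", "chatsaigpt.com"}

def find_ioc_hits(log_lines):
    """Return lines from a DNS/proxy log export that mention a known
    C2 domain, including subdomains like api.deepaichats.com."""
    hits = []
    for line in log_lines:
        lowered = line.lower()
        if any(domain in lowered for domain in IOC_DOMAINS):
            hits.append(line)
    return hits
```

Any hit means the assumptions above apply: treat everything typed into ChatGPT or DeepSeek from that machine as exposed.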
Step 3: Change your default workflow
The important move is not “spend Saturday afternoon uninstalling things.” It’s changing how you use AI in the browser going forward.
For individuals:
- Run AI work in a separate browser profile (or a different browser) with zero third‑party extensions.
- Treat prompts like passwords: never paste raw secrets, API keys, or internal URLs into a public LLM UI.
- Use a password manager and secret store; don’t use ChatGPT to “help remember” credentials.
For teams:
- Enforce extension policies via MDM/Group Policy: maintain a short allowlist, block AI‑helper extensions by default.
- Standardize on one corporate AI interface with logging and prompt‑layer security controls, instead of “everyone use whatever plugin looks cool.”
- Put your egress filters to work: block known C2 domains from OX/Microsoft, but also consider blocking generic “AI helper” telemetry endpoints until vetted.
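The allowlist approach maps onto Chrome’s enterprise policy keys. A sketch of the JSON form (deployment differs per OS: registry/GPO on Windows, managed preferences on macOS, policy files under `/etc/opt/chrome/policies/managed/` on Linux; the allowlisted ID below is a placeholder, not a real extension):

```json
{
  "ExtensionInstallBlocklist": ["*"],
  "ExtensionInstallAllowlist": [
    "abcdefghijklmnopabcdefghijklmnop"
  ]
}
```

Blocking `*` and allowlisting a short, vetted set inverts the default: new “AI helper” extensions simply cannot be installed until someone reviews them.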
Why this matters for businesses, IP, and model training
The obvious fear is targeted advertising: “I asked ChatGPT about an obscure peptide and now Reddit shows me peptide ads.”
That might be true in some cases; OX and Microsoft both show that the extensions captured full conversation text. But for businesses, it’s the least interesting risk.
Three risks are more structural.
1. Silent IP loss
If you use ChatGPT as your pair programmer or strategy notepad, your prompts and responses are a rolling export of:
- Your internal codebase
- Your architecture and incident history
- Your roadmap and M&A speculation
When 900,000 browsers leak that kind of data, you don’t get a notification. You get a quiet, compounding disadvantage.
2. Reusable secrets and targeted attacks
Microsoft explicitly warns these logs can contain:
- Credentials and API keys
- Internal URLs and ticket IDs
- Details of failed deployments or security incidents
Combine that with the Google API key leaks, and you get an unflattering pattern: organizations are teaching attackers where the skeletons are via their own AI workflows.
Prompt poaching isn’t just “steal some text.”
It’s “build a search index of everything your engineers are worried about and the exact commands they ran.”
3. Training competing models on your expertise
The most under‑discussed angle: exfiltrated chats are a free curriculum of real‑world prompts and high‑quality answers.
If you’re a small LLM vendor or data broker:
- You can train or fine‑tune on that data.
- You get industry‑specific phrasing, edge cases, and long‑tail questions that public web data doesn’t cover.
- You never paid for the conversations; OpenAI did.
From that perspective, the extension campaign looks less like malware and more like a bootleg RLHF pipeline.
If your company is feeding ChatGPT with its best domain knowledge, prompt poaching means you’re also feeding whichever gray‑market model trains on those stolen logs.
Key Takeaways
- The main ChatGPT extension privacy failure isn’t one campaign; it’s a browser/extension model that makes DOM‑level prompt logging trivial.
- The AI‑wrapper economy nudges extension authors toward “prompts as product,” so expect more prompt poaching, not less, unless workflows change.
- You should assume any AI‑themed extension with “read and change all your data” could exfiltrate chats, even if it doesn’t today.
- Real mitigation is structural: isolate AI in clean profiles, lock down extension policy, and treat prompts like passwords and API keys.
- For organizations, the biggest risks are leaked IP and reusable secrets that can fuel targeted attacks and competing models, not just creepy ads.
Further Reading
- “900K Users Compromised: Chrome Extensions Steal ChatGPT and DeepSeek Conversations” (OX Security). Original technical research detailing DOM scraping, exfil domains, and indicators of compromise.
- “Malicious AI Assistant Extensions Harvest LLM Chat Histories” (Microsoft Defender). Independent analysis and telemetry across enterprise tenants, plus mitigation guidance.
- “This new malware campaign is stealing chat logs via Chrome extensions” (TechRadar). Press overview that popularized the term “prompt poaching.”
- “Fake AI Chrome Extensions Steal 900K Users’ Data” (Dark Reading). Impact framing with enterprise risk and defensive recommendations.
- “Chrome extensions steal ChatGPT data” (Cybernews). Practical, end‑user‑focused walkthrough of IOCs and how to audit your browser.
The comforting story is that Chrome will keep banning bad extensions and ChatGPT will keep your data safe. The more accurate story is that as long as prompts live in a general‑purpose browser with over‑privileged add‑ons, the real privacy boundary is whatever you decide not to type.
