Anthropic now has a public help page describing identity verification for Claude. The page says some users may be asked for a physical government-issued photo ID and may also need a live selfie. That part is verified. The bigger claim, that Claude now broadly requires passport-style checks for general access, is not.
I started out expecting this to be another internet panic with one screenshot and a lot of extrapolation. The help page changed that. Anthropic is clearly building a real verification flow, with a named vendor, accepted document types, retention rules, and a process for reviewing verification records during appeals. What’s still unclear is scope.
That distinction matters. A limited gate is not the same thing as a universal login requirement. But it still marks a shift: high-value AI access is starting to look less like using a website and more like entering a managed service where identity, policy, and access controls travel together.
What Claude’s identity verification actually requires
Here’s the part Anthropic has confirmed in its help center.
Users who hit a verification prompt may need:
- a physical government-issued photo ID
- a phone or computer camera
- a live selfie in some cases
- about five minutes
Accepted IDs include passports, driver’s licenses, state or provincial ID cards, and national identity cards. Anthropic says it does not accept photocopies, screenshots, scans, mobile IDs, non-government IDs, or temporary paper IDs.
That last detail is easy to miss, but it tells you this is not a lightweight checkbox. Anthropic is asking for original physical documents, held up to a camera, plus liveness-style capture in at least some flows. In plain English: this is closer to financial-services onboarding than “click to confirm you’re human.”
Anthropic also names its vendor: Persona. The company says Persona collects and holds the ID and selfie, Anthropic is the data controller, and Anthropic can view verification records in Persona “when needed,” such as during appeals. Anthropic says it does not copy or store those images on its own systems. That is verified by the help page, and it’s more specific than the usual trust-us privacy paragraph.
What is not confirmed is where this prompt appears. Anthropic’s wording is narrow: verification is being rolled out “for a few use cases,” for “certain capabilities,” and as part of “routine platform integrity checks” or “other safety and compliance measures.” That sounds selective, not product-wide.
A useful comparison table:
| Question | Confirmed by Anthropic? | Still unclear? |
|---|---|---|
| Is there a verification flow? | Yes | No |
| Does it involve government ID? | Yes | No |
| Can it include a selfie? | Yes | No |
| Is it required for every Claude user? | No public evidence | Yes |
| Is it tied to specific features or risk tiers? | Wording suggests yes | Yes; exact triggers unknown |
Why AI companies are adding identity verification now
Anthropic’s official reason is straightforward: prevent abuse, enforce usage policies, and comply with legal obligations. That is verified. The more interesting question is why this is showing up now in consumer AI products at all.
The simple answer is that frontier models are no longer being treated like ordinary software. They are becoming trust-managed infrastructure.
Once a provider believes some capabilities create outsized legal, safety, fraud, or policy risk, anonymous access starts to look expensive. Identity checks help with:
- banning repeat abusers who just create new accounts
- gating sensitive or high-risk features
- satisfying compliance demands from enterprise and government customers
- showing regulators that “we know who used what”
None of this requires a conspiracy. It’s just the logic of expensive, centralized systems under pressure. If your product can write code, automate workflows, generate realistic content, and possibly touch regulated domains, executives start reaching for the same controls every other risk-heavy platform uses.
The release notes are revealing mostly because of what they don’t say. Anthropic’s recent Claude app updates mention product and admin changes, but do not announce a broad identity-verification rollout. The Transparency Hub also does not describe a major new user verification policy. So the strongest supported reading is: Anthropic has built the gate, published the workflow, and is using it in some cases, but has not publicly framed this as a platform-wide change.
That’s a small rollout with a big precedent. The first time a major AI lab says, in effect, “some capabilities require government-backed identity,” the product category changes. The model is still a chatbot on the surface. Operationally, it starts to resemble a regulated utility.
The privacy trade-offs of government ID and selfie checks
Anthropic deserves some credit for being more concrete than usual. It explicitly says Persona stores the ID and selfie, not Anthropic, and that the data is used only to confirm identity. That is the company’s stated policy. It is plausible, but readers should keep the distinction straight: this is a vendor-controlled document pipeline, not a zero-risk system.
The privacy problem is not just “a company sees your ID.” It’s that government ID verification creates a durable link between account activity and real-world identity. Once that link exists, the blast radius of mistakes, breaches, subpoenas, and policy changes gets larger.
There are a few obvious risks:
- Data concentration. A verification vendor holding passports, license images, and selfies is a more attractive target than an email-password table.
- Function creep. Today the stated use is identity confirmation. Tomorrow the temptation is stronger fraud scoring, account recovery shortcuts, or broader risk screening.
- False matches and access failures. Face-based checks fail unevenly, and when they fail, the user often has to prove they are themselves to a machine that has already decided otherwise. We’ve covered that dynamic before in facial recognition misidentification.
- Legal exposure. Anthropic says data stays between the user, Persona, and Anthropic except where legally required. “Legally required” is normal language. It is also where abstract privacy promises meet concrete state power.
A lot of companies talk as if outsourcing storage solves the trust problem. It doesn’t. It changes the trust boundary. That can be an improvement. It is not the same thing as making the risk disappear.
This is also part of a broader pattern. AI products increasingly ask for browser access, extensions, work data, or identity signals in exchange for convenience. We saw a softer version of this in ChatGPT Extension Privacy: the feature works, but the permission surface quietly expands.
Why the identity verification precedent matters more than the rollout size
The loudest online reaction has been “go local.” That response is emotionally understandable and analytically incomplete.
Local models are not a perfect substitute for Claude. They still lag on convenience, reliability, and often capability at the top end. But identity-gated cloud AI changes the fallback math for power users and builders. If access to premium capabilities can be conditioned on identity verification, then local inference stops being a hobbyist preference and starts looking like resilience planning.
That matters in at least three ways.
First, users may decide that some tasks are worth keeping off identity-linked platforms entirely. Sensitive drafting, exploratory research, controversial topics, and personal material all look different when a government ID check sits in the background.
Second, builders get a reminder that centralized AI dependencies are policy dependencies. If your product flow assumes any user can always reach a cloud model with an email and a card, you now have another failure mode. This is one reason local and open-weight fallback stacks keep getting more attractive, despite their rough edges; a minimal sketch of that fallback pattern follows below. We’ve seen the same “great demo, messy trust boundary” pattern in OpenClaw Security Concerns, just from a different angle.
Third, the market learns from precedent. If one top lab normalizes ID plus selfie checks for premium or sensitive use cases, others can copy it with much less backlash. The second company gets to say: everyone serious already does this.
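To make that failure mode concrete, here is a minimal sketch of the fallback pattern in Python. It assumes the Anthropic Messages HTTP API on the cloud side and an OpenAI-compatible local server such as Ollama on the fallback side; the endpoint URLs, the local model name, and the choice to treat a 401/403 as a possible access gate are illustrative assumptions, not documented Claude behavior.

```python
# Sketch: route requests to a cloud frontier model, but keep working
# against a local open-weight model if the cloud call is blocked
# (for example, by an account-level verification gate).
# Assumptions: ANTHROPIC_API_KEY is set, and an OpenAI-compatible
# server (e.g. Ollama) is listening on localhost.
import os
import requests

CLOUD_URL = "https://api.anthropic.com/v1/messages"
LOCAL_URL = "http://localhost:11434/v1/chat/completions"  # assumed local server
LOCAL_MODEL = "llama3.1:8b"  # illustrative; use whatever model you have pulled


def ask_cloud(prompt: str) -> str:
    """Call the Anthropic Messages API and return the text reply."""
    resp = requests.post(
        CLOUD_URL,
        headers={
            "x-api-key": os.environ["ANTHROPIC_API_KEY"],
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
        json={
            "model": "claude-sonnet-4-5",  # substitute a current model name
            "max_tokens": 512,
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["content"][0]["text"]


def ask_local(prompt: str) -> str:
    """Call an OpenAI-compatible local endpoint and return the text reply."""
    resp = requests.post(
        LOCAL_URL,
        json={
            "model": LOCAL_MODEL,
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


def ask(prompt: str) -> str:
    """Try the cloud model first; on auth/policy-style errors, go local."""
    try:
        return ask_cloud(prompt)
    except requests.HTTPError as err:
        # 401/403 can mean anything from a bad key to a new access gate.
        # The architectural point: the caller keeps working either way.
        if err.response is not None and err.response.status_code in (401, 403):
            return ask_local(prompt)
        raise


if __name__ == "__main__":
    print(ask("Summarize the trade-offs of identity-gated AI access."))
```

The specific status codes are not the point. The point is that a policy change on the cloud side degrades your product instead of breaking it.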
That’s the real story here. Not that every Claude user suddenly needs a passport. The verified evidence does not show that. The story is that AI access is inching toward a world where identity is part of the product.
What users should do right now
For now, the practical move is not panic. It’s inventory.
If you use Claude heavily, ask four concrete questions:
- Which workflows truly require a cloud frontier model?
- Which ones can move to local or open-weight alternatives?
- What data would you be uncomfortable tying to a verified identity?
- What happens if your account hits a verification gate unexpectedly?
If Anthropic prompts you, read the request carefully. The current help page supports the claim that identity verification may involve a passport, driver’s license, or national ID, plus a live selfie. It does not support the stronger claim that this is now universal across Claude.
That difference is the whole ballgame. Limited verification is still verification. A partial gate is still a gate. And once users accept that the best AI tools may require government-backed identity, the industry won’t be eager to unlearn it.
Key Takeaways
- Anthropic has confirmed that some Claude users may face identity verification using a physical government ID and, in some cases, a live selfie.
- There is no verified public evidence that this is a universal requirement for all Claude access.
- The important shift is structural: AI services are starting to behave more like trust-managed infrastructure than anonymous web apps.
- Outsourcing ID handling to Persona changes the trust boundary, but it does not erase privacy, breach, or subpoena risk.
- Even a partial rollout strengthens the case for local and open-weight fallbacks when access, privacy, or policy stability matter.
Further Reading
- Identity verification on Claude | Claude Help Center. Anthropic’s primary documentation on required IDs, selfie checks, Persona, and data handling.
- Claude Apps Release Notes | Anthropic Docs. Recent official product updates, useful for checking what Anthropic has and has not publicly announced.
- Transparency Hub | Anthropic. Anthropic’s public transparency and safety disclosures, with no obvious broad consumer verification announcement.
- Anthropic Employment Privacy Policy PDF. Shows how Anthropic discusses government ID use in employment contexts, a useful contrast to product-access verification.
The cloud AI market spent two years selling intelligence as abundant and frictionless. Identity verification is what it looks like when that story runs into risk, regulation, and control.
