Anthropic has started rolling out identity verification on Claude. That means some users may now be asked to prove who they are before accessing certain capabilities, or as part of what the company calls routine integrity, safety and compliance checks.
In practice, that means showing a government-issued photo ID and taking a selfie so the verification system can compare your face with the document. Anthropic says the process is handled by Persona, an external verification partner, and that the data is used to confirm identity rather than to train models.
That will sound sensible to some people and intrusive to others. For UK readers, the useful question is not whether identity checks are always good or always bad. It is whether this specific prompt is real, whether the feature is worth the data trade-off, and what you should check before handing over a passport or driving licence to an AI platform.
This also lands at a moment when more AI tools are asking for wider access to our devices, files and accounts. As we noted in our guide to AI tools that overstep instructions, convenience can make people click through permissions faster than they otherwise would. Adding identity checks raises that threshold again.
What Anthropic says is happening
Anthropic’s support page says identity verification is being introduced for “a few use cases”, though it does not list all of them publicly. The company says some people may see the prompt when using certain capabilities, during platform integrity checks, or as part of broader safety and compliance measures.
The company also says it chose Persona because of its privacy and security controls. According to Anthropic, Persona handles the ID and selfie check, images are encrypted in transit and at rest, and the verification data is used to confirm identity rather than for unrelated purposes. Anthropic’s wider privacy policy says it can use prompts and outputs to improve services unless users opt out, but the company states separately that verification data is not used for model training.
That distinction matters. A lot of people are now fairly used to AI firms collecting chat data. Handing over biometric-style verification data feels different, because it is much harder to change later if something goes wrong.
Why ordinary users might care
For some people, this may never appear. For others, it could become a sudden roadblock in the middle of using a chatbot for work, study or everyday admin.
If you rely on Claude to help draft notes, summarise documents or organise tasks, an ID check may feel like an annoying extra hoop. But for many users the bigger issue will be trust. Plenty of people are comfortable sharing text with a chatbot, yet far less happy about uploading identity documents to keep using one.
That does not automatically make the check unreasonable. AI companies are clearly under pressure to prevent fraud, abuse and account misuse. But from a consumer point of view, a new safeguard can still create a new privacy risk. That is especially true when the company says the check may be tied only to certain features rather than the whole service, because many people will reasonably want to know exactly what they are handing over their data for.
It also fits a wider pattern in consumer AI. The tools become more powerful, and in return the companies ask for more access, more data or more proof of identity. We have already seen similar trade-offs around shopping bots, work assistants and voice tools. In our earlier look at ChatGPT’s safety labels, the core message was simple: more safeguards can help, but they do not remove the need to understand what a system is really doing with your information.
What UK users should check before saying yes
First, make sure the request is genuine. If you are being asked for ID, it should come through Claude’s own interface or official support flow, not through a random email, message or third-party site. If in doubt, stop and verify through Anthropic’s help pages.
Second, ask whether the feature is worth it. If the identity check is only gating one capability you rarely use, the simplest answer may be not to proceed. A passport or driving licence is high-value personal data. You do not have to hand it over just because a service asks.
Third, read the privacy explanation before you continue. Pay attention to who processes the check, what is stored, how long it is retained, and what route of appeal exists if something goes wrong. Anthropic says Persona handles the verification, and that users can contact the company for help if verification fails or if an account is banned after the check.
Fourth, keep a practical limit on what you trust the account with afterwards. Even if the identity check is legitimate, that does not make the chatbot itself more accurate, safer in every context, or a better judge of sensitive personal decisions. Verification proves who you are, not whether the bot is right.
The bigger picture
Identity checks may become more common as AI companies try to limit abuse and satisfy regulators. That may be understandable, but it changes the feel of consumer AI. A casual chatbot starts to look more like a regulated service, where access depends on documents, compliance checks and platform rules that users cannot fully inspect.
For UK readers, the sensible stance is calm rather than panicked. Do not assume every identity check is sinister. But do treat it as a meaningful moment. If an AI tool wants your photo ID and a live selfie, pause, read, confirm the request is real, and decide whether the feature is actually worth that level of personal data.
Convenience matters, but so does being able to walk away.
Sources:
Anthropic Support, Identity verification on Claude
Anthropic, Privacy Policy
Engadget, Anthropic will ask Claude users to verify their identities ‘for a few use cases’
