Tinder and Zoom want ‘proof of humanity’ badges: what UK users should check before scanning their eyes

Tinder and Zoom are the latest big platforms to test a new kind of AI-era trust signal: a badge meant to show that the person on the other end is a real human, not a bot or a deepfake.

According to the BBC, both services are linking up with World ID, a system from Sam Altman-backed World that can verify people through an iris scan and then issue a reusable “proof of humanity” credential. World’s own announcements say Tinder users in the US will be able to add a verified-human badge to their profile, while Zoom is integrating a higher-assurance check designed to help confirm that the person in a meeting really is the person they claim to be.

For ordinary UK readers, this is not really a story about futuristic gadgets. It is about a very practical internet problem. More people are now running into AI-written scam chats, fake dating profiles, cloned voices and convincing video impersonations. If the web is getting harder to trust, companies were always going to look for stronger ways to prove that a real person is present.

Why Tinder and Zoom think this matters

On Tinder, the pitch is easy to understand. Fake profiles waste people’s time at best and can feed romance scams at worst. The BBC notes that Tinder already asks for video selfies in some cases, and says World ID would be an extra optional layer rather than a full replacement. A visible badge may help users feel more confident that a match is not just an AI-generated profile with a polished script behind it.

On Zoom, the problem is slightly different. Here the worry is less about random bots and more about impersonation. World says its Zoom integration is built around a three-way check: the original image captured when someone verified with an Orb, a liveness-checked selfie taken on the user’s device, and the video feed seen in the meeting. If all three match, other participants can see a verified-human badge.

That sounds heavy-duty, but there is a reason for it. Deepfake video fraud is no longer theoretical. Companies have already reported losses after staff were fooled by realistic fake video calls. In that context, a stronger identity check for sensitive meetings will sound sensible to some businesses.

What UK users should like, and what they should question

There is a real benefit here. A badge that is hard to fake could make some online spaces less exhausting. Dating apps full of bots are miserable. Work calls where nobody is quite sure who is genuine are worse. If better verification reduces scams and impersonation, many people will welcome it.

But the method matters. In this case, the verification is not just a password or selfie. It starts with an iris scan, either through World’s app flow or one of its Orb devices, which creates a World ID that then lives on your phone. World says the system is privacy-preserving and does not require people to hand over ordinary personal details such as a name or address. Even so, many users will quite reasonably feel that scanning their eyes is a bigger step than ticking a box or uploading a standard photo.

That does not make the idea automatically bad. It does mean people should slow down before treating “verified human” as a simple, no-cost upgrade. As we noted recently when Claude introduced photo ID checks for some users, identity systems often arrive wrapped in convenience language. The important questions come afterwards: what exactly are you proving, who keeps the credential, what happens if you lose access, and can you still use the service comfortably if you opt out?

The bigger risk is social pressure

The most interesting part of this story may be what happens next if these badges spread. Even when a verification scheme is technically optional, it can stop feeling optional once platforms start rewarding it. World says verified Tinder users get a badge and free Boosts. That is a small sign of how this could work: features, visibility or trust may slowly tilt towards people willing to verify themselves more deeply.

For some users, that trade-off will feel worth it. For others, it will feel like a privacy tax for basic participation. That tension is likely to show up across more services as AI impersonation gets better. The same debate is already bubbling up around voice cloning, facial analysis and other systems that promise safety while asking for more intimate data in return. We have seen similar concerns in areas like human-sounding voice AI, where realism makes tools more useful but also makes trust harder.

A calm rule of thumb

If proof-of-humanity badges come to more services in the UK, the sensible response is neither blind trust nor instant panic.

Check whether the verification is optional, what the badge actually confirms, whether the platform collects more of your biometric or identity data than you expected, and what the practical downsides are if you say no. On work tools, ask whether the badge is reserved for high-risk meetings or being normalised far more broadly. On dating apps, remember that a verified badge may reduce one kind of risk without telling you whether someone is honest, safe or worth your time.

The direction of travel is clear enough. As AI makes fake profiles, fake voices and fake faces cheaper, internet platforms will push harder for proof that a human is really there. That may help, but it also shifts more of online trust towards identity checks that feel unusually personal. The question for users is not just whether the badge works. It is whether the bargain behind it feels fair.
Sources:
BBC News, Tinder and Zoom offer ‘proof of humanity’ eye-scans to combat AI
World, The new World ID and the partners bringing proof of human to the internet
World, World ID for business: Zoom and Docusign integrate proof of human