Claude gift card fraud: what UK chatbot users should check now

AI chatbots are quickly becoming ordinary household tools: used for school questions, work admin, meal planning, health queries, family calendars and the odd difficult email. That makes them feel friendly and familiar. It also means the accounts behind them are starting to look more valuable to fraudsters.

The Guardian reports that some Claude users have spotted unauthorised gift-card purchases linked to their accounts, with payments appearing on bank or credit card statements as Anthropic charges. In one case described by the paper, two $200 payments were spotted before another attempted payment was stopped. Other users reported similar mystery charges in different currencies.

Anthropic told the Guardian it is adding protections against fraudulent gift-card purchases, cancelling subscriptions and issuing refunds where it identifies scam purchases. The company also said there was no evidence that compromised card details originated from Anthropic.

For UK readers, the useful lesson is broader than one company or one feature. If an AI account has a saved card, email access, personal history or family use, it needs the same level of housekeeping you would apply to a shopping, streaming or cloud-storage account.

Why an AI account can be attractive to fraudsters

A chatbot login may not look as sensitive as online banking, but it can still be useful to someone trying to make money or cause trouble. It may have a payment card attached. It may include a subscription that can be upgraded, gifted or reused. It may contain personal details in chat history, especially if people have used it to draft letters, plan travel, organise family life or ask private questions.

That does not mean everyone should panic or stop using AI tools. It does mean chatbot accounts should not be treated as throwaway experiments. Many people signed up for AI services quickly during the boom, sometimes using reused passwords, shared family devices or cards that renew automatically. Those shortcuts are exactly what fraudsters tend to look for.

We have previously written about why AI safety settings and permissions matter at home and work. Payment security belongs in the same conversation: the more useful a tool becomes, the more carefully its account needs to be managed.

What UK users should check first

Start with your bank or card statement. Look for AI-related charges you do not recognise, including payments in dollars or euros if the service bills internationally. Do not rely only on app notifications; log in to your bank or card provider directly and check recent transactions.

If you see a payment you did not authorise, contact your bank or credit card provider quickly. Ask them to block further suspicious payments, replace the affected card if needed and explain the process for disputing or reclaiming the charge. In the UK, card providers usually have established routes for reporting fraud, but speed matters.

Next, check the AI account itself. Change the password, especially if you have used it anywhere else. Turn on two-step verification if the service offers it. Review active sessions, connected devices and any account emails about purchases, gifts, password changes or new sign-ins. If the service has a support page, use the official help centre rather than links in unexpected emails.

It is also worth checking your email account. If vouchers, receipts or reset links arrive there, email security becomes part of the chain. Use a strong unique password, enable two-step verification and review forwarding rules or logged-in devices if anything looks odd.

Be careful with support emails and search results

Fraud problems often create a second risk: fake support. If you search for help in a hurry, you may find adverts, lookalike pages or forum posts that point to unofficial numbers and forms. Go through the AI company’s own website or app where possible. Avoid giving card details, one-time codes or remote access to anyone claiming they can “fix” the problem.

The same applies to emails that appear to come from an AI service. A real-looking receipt or “gift” notification can still be used to push you into clicking a malicious link. Open a fresh browser window, type the service address yourself, and check the account from there.

The practical takeaway

This story is a reminder that AI is no longer just software you play with for answers. It is becoming part of daily digital life, and that brings ordinary digital-life risks: subscriptions, saved cards, account recovery, phishing and customer support delays.

If you pay for ChatGPT, Claude, Gemini, Perplexity or any other AI assistant, take ten minutes today to check the basics: unique password, two-step verification where available, recent transactions, official support links and any saved payment methods you no longer need. If a family member uses the same account or card, make sure they know what a genuine receipt looks like and what to do if a charge appears.

AI tools can still be useful, but trust should come with controls. The friendlier and more capable these services become, the more important it is to keep the boring account settings in good order.