There is a new kind of AI side hustle creeping into view, and it sounds deceptively tidy. Record a few voice clips. Film a short walk. Capture the background noise in a cafe. Let an app use snippets of your phone calls. Get paid.
According to The Guardian’s reporting, thousands of people around the world are now selling slices of their everyday life to help train AI systems. The jobs can be tiny, the payments can feel useful, and the pitch is simple: if tech companies are going to build smarter systems anyway, why not get a little money out of it?
That is not an absurd question, especially in a cost-of-living squeeze. But for ordinary UK readers, the more important question is what, exactly, is being sold. In many of these gigs, you are not just renting out a bit of spare time. You may be licensing your voice, your face, your habits, your surroundings, and the small details that make you recognisably you.
What is actually happening?
The Guardian describes a growing marketplace where people upload videos, photos, voice clips and other personal data for AI training. Some record city noise. Some provide multilingual speech. Some share ordinary footage of walking, cooking or going about daily routines. AI companies want more material from real life rather than endless recycled internet text.
This is not just a niche app-store curiosity either. Last week, TechCrunch reported that DoorDash is launching a stand-alone Tasks app in the US that pays couriers to complete activities such as filming everyday tasks or recording themselves speaking another language. In its own announcement, DoorDash said this data helps AI and robotic systems understand the physical world and that it plans to expand into more task types and countries over time.
Why companies want this so badly
From the companies’ point of view, the attraction is obvious. Real voices sound more natural than synthetic samples. Real homes, streets, kitchens and shops are messier than staged demo footage. Real human behaviour helps AI systems learn how people actually speak, move and make decisions.
Paying contributors for licensed material also looks cleaner than scraping everything from the open web and arguing about it later. But that does not automatically make the deal fair. As with ChatGPT’s new safety labels, the real question is whether an ordinary person can tell what is being taken, where it goes, and what control they still have afterwards.
Where the risk starts to outweigh the quick cash
The awkward part is that the payment is usually immediate, while the downside may surface much later. The Guardian reports that some marketplaces ask users to grant broad rights that can be worldwide, exclusive, irrevocable and royalty-free, with permission to create derivative works. In plain English, that can mean a short clip recorded today helps power a commercial AI product for years, with no extra payment and little realistic chance of pulling it back.
That matters more when the data involves your identity. A face can be copied. A voice can be cloned. A body, accent, gait or mannerism can become useful training material precisely because it is distinctive. The Guardian also notes legal concerns that biometric patterns are difficult to anonymise in any robust sense, even when names and locations are stripped out.
That does not mean every contributor will suffer some dramatic nightmare scenario. But it does mean the risk is not theoretical. Once personal likeness data is out in circulation, the line between a licensed training clip and an unwanted fake can get blurry very quickly. We have already seen how uncomfortable that territory becomes in our recent pieces on AI fakes and image rights and on sexualised AI avatar accounts on TikTok.
What UK readers should look for before saying yes
The sensible response here is not moral panic. For some people, a small AI-data gig may feel worth it. But it should be treated more like signing a licence than doing a normal errand job. The UK Information Commissioner’s Office says trust in AI and biometric technologies depends on organisations being transparent about the personal information they use, using it fairly, and putting proper governance and technical protections in place. That is a useful everyday benchmark.
Before agreeing to any app or platform that wants your recordings, photos or voice, check:
- What exactly are you uploading: harmless ambient sound, or something that clearly identifies you or other people?
- Is the permission narrow and time-limited, or broad and hard to revoke?
- Are you being paid once, while the company keeps using the data indefinitely?
- Can the material be reused by partners or turned into derivative products?
- Would you still feel comfortable if the clip resurfaced years later in a context you did not expect?
If children, relatives, co-workers or private conversations appear in the material, the bar for caution should be even higher. It is one thing to film a supermarket shelf. It is another to hand over something intimate or difficult to replace.
The calm takeaway
AI companies are increasingly buying access to reality: how people sound, move, speak, shop, travel and live. That can create real short-term earning opportunities, and it is understandable why some people take them. But for most ordinary readers, the safest default is to assume that a quick payment for personal training data buys the company a much longer relationship with your likeness than it first appears. If you would not be happy for a voice clip, a face scan or a slice of private life to keep circulating long after the money is spent, it is probably not the sort of side hustle to accept casually.
Sources:
The Guardian — Thousands of people are selling their identities to train AI – but at what cost?
TechCrunch — DoorDash launches a new ‘Tasks’ app that pays couriers to submit videos to train AI
DoorDash — Introducing DoorDash Tasks
ICO — Preventing harm, promoting trust: our AI and biometrics strategy
