Spotify says it is trying to get a better grip on music that appears under the wrong artist name, including cases that seem to involve AI-made tracks. That may sound like an industry problem, but it matters to listeners too. If you use streaming apps every day, a familiar artist page, plausible cover art and a few confident recommendations can make a fake track look real at a glance.
That is the worry behind a new Guardian report about musicians finding songs on Spotify that they did not make. The paper describes jazz composer Jason Moran discovering an EP under his name that sounded nothing like his work. Spotify told the Guardian it is using detection systems, human review and takedown processes, and the company recently launched a beta feature called Artist Profile Protection so some artists can review releases before they appear on their profile.
That is useful progress, but it does not fully solve the everyday trust problem. If fake or wrongly attached music can appear before it is spotted, listeners can still be misled, artists can still be embarrassed, and recommendation systems can still help the wrong track spread.
Why this matters outside the music industry
For ordinary listeners, the bigger issue is not just royalty fraud or platform policy. It is that AI is making familiar online signals less reliable. We are used to treating an official-looking profile, a polished image and a neat title as proof that something is genuine. Increasingly, that is not enough.
We have already seen similar problems in other corners of the internet. On social apps, people are being trained to look harder at labels and disclosure, as in our piece on what UK shoppers should do when AI-made ads are not clearly labelled. In voice tools, realism is improving so quickly that sounding human is no longer a guarantee of trustworthiness, which is why we recently suggested a few checks before relying on more natural-sounding AI assistants.
Music is joining the same pattern. A streaming page is still useful, but a track can no longer be assumed to be authentic just because it is neatly packaged under a familiar name.
What Spotify says it is doing
Spotify’s new Artist Profile Protection feature is optional and currently in beta. Where available, it lets artists approve or decline eligible releases before they appear on their profile. Spotify says that if an artist does not approve a release, it will not be listed under that artist on Spotify, even though it may still appear elsewhere.
The company also has a published policy on music that impersonates another artist’s voice. It says it will remove songs that clone a real artist’s voice without permission, whether or not the uploader openly says it is an AI version. That is reassuring as far as it goes, but Spotify also says it often needs a claim from the artist or someone acting for them before it can judge whether the use was authorised.
In practice, that means bad or misleading content may still sit on the platform until somebody notices and reports it. For a major artist, that is annoying. For smaller musicians, the problem can drag on much longer, because they may have less time, less management support and fewer fans raising the alarm.
What UK listeners should check before you share or stream
If a track looks surprising, pause for a quick check before you post it, add it to a playlist or send it to friends as though it is definitely genuine.
- Check whether other official channels mention it. If an artist has a website, Bandcamp page, Instagram account or label page, see whether the release appears there too.
- Be wary of odd artwork or style shifts. A sudden change does not prove something is fake, but an unfamiliar visual style and music that sounds nothing like the artist are both obvious warning signs.
- Look at the release details. Labels, credits and release dates can sometimes reveal a mismatch or a distributor you do not recognise.
- Do not assume the platform has already checked it. Recommendation systems can surface things that still need human review.
- If it feels wrong, search before sharing. A quick search can show whether fans or the artist have already flagged the track as suspicious.
The practical takeaway
This story is really about digital common sense. AI does not just create new content, it also weakens some of the shortcuts people use to decide what is real. That applies to music, shopping, news footage, celebrity videos and, increasingly, everyday search results as well.
So the sensible response is not panic and it is not to stop using Spotify. It is to lower the amount of trust you place in surface-level cues. If a song appears under a real artist’s name, treat that as a useful clue, not final proof. A short extra check is often enough.
That may feel slightly tedious, but it is becoming part of modern media hygiene, much like checking whether a message is a scam or whether an image has been heavily edited. The more convincing AI outputs get, the more valuable that small habit becomes.
For listeners, the cost is a few extra seconds. For artists, the stakes are much higher: reputation, income and control over their own identity. That is why this is worth paying attention to now, before fake releases become so common that nobody is surprised by them.
