A banned advert for an AI editing app might sound like a small corner-of-the-internet story. It is not. The UK’s Advertising Standards Authority has ruled that a YouTube ad for PixVideo – AI Video Maker was irresponsible, offensive and harmful because it implied the app could digitally remove a woman’s clothing.
That matters because it turns a familiar AI-sales promise, "erase anything", into something much darker. When tools are marketed in a way that suggests people can expose somebody's body without consent, the problem is not just bad taste. It is a reminder that image-editing AI can slide very quickly from convenience into humiliation, abuse and fear.
For ordinary people in the UK, the ruling is less about one app than about how everyday consumer apps are presented to millions of people, and the kind of behaviour those adverts quietly normalise.
What the regulator actually banned
According to BBC reporting, the advert showed a before-and-after image of a young woman. In the “before” image, red scribble covered part of her midriff. In the “after” image, more of her bare skin was visible, alongside text saying “Erase anything”.
Eight people complained to the ASA. The regulator said that even if PixVideo did not allow users to create explicit images in that exact way, the ad still gave viewers the impression that it did. In its view, that meant the advert condoned digitally altering and exposing women’s bodies without their consent.
The company behind the app, Saeta Tech, said it understood why the ad caused offence and that the app blocks nude or sexually explicit content. It has agreed not to show the advert again and says it has paused advertising while it carries out an internal review.
Why this matters beyond one advert
The bigger issue is not only what one app can or cannot do. It is how casually this kind of capability is being teased in consumer marketing. Once an advert hints that AI can “remove” clothing from a real person, it nudges a dangerous idea into the mainstream.
That should worry anyone who uses social media, dating apps, school group chats or even a normal family photo-sharing service. You do not need to be a celebrity for this to matter.
We have already seen the same anxiety surface around more technical-looking AI updates. When platforms add new powers, labels and safeguards matter: people need plain signals about where the limits are supposed to be, not marketing that blurs them. That is why clear safety labels on AI systems matter so much in normal life.
The real-life risk is emotional, not just technical
Stories like this are sometimes framed as abstract debates about policy, moderation or platform rules. But the human effect is much simpler. If people believe there are easy apps for making fake intimate images, that can change how safe it feels to post photos at all.
For women and girls in particular, that pressure can be exhausting. A holiday photo, gym selfie or normal group picture may stop feeling harmless if there is a fear it could be manipulated and shared. Even when a fake is crude, the damage can still be real: shame, panic, gossip, reputational harm and the horrible feeling that your image is no longer yours.
That is one reason the ASA’s decision matters. It tells advertisers they cannot shrug this off as cheeky creativity. If the implication of the ad is that a woman’s body can be digitally uncovered for entertainment, regulators will treat that seriously.
The law is starting to catch up
The BBC notes that the UK government announced in December that it would make it illegal to create and supply AI tools designed to make it appear that someone’s clothing has been removed. Those offences build on existing rules around sexually explicit deepfakes and intimate image abuse.
That shift is overdue. If a normal person could be harmed by a tool, or by the way it is promoted, regulators and lawmakers are right to step in early.
What ordinary users should take from this
The practical takeaway is not to panic, and it is not to assume every editing app is sinister. Plenty of AI tools do ordinary, harmless jobs. But consumers should be wary of apps that are sold with vague promises to “erase anything”, “reveal hidden details” or perform other magic tricks on real people’s bodies.
We saw a different version of the same problem with Google's abandoned AI health feature, where a seemingly clever idea raised obvious trust questions once you looked closely. The pattern keeps repeating: AI products often sound clever first, and only later does someone ask whether they are respectful, safe or genuinely useful.
A small ruling with a bigger signal
The banned PixVideo advert is not the biggest AI story in scale. But it may be one of the clearest. It shows that the public, regulators and lawmakers are getting less willing to treat non-consensual image manipulation as edgy marketing for a new generation of apps.
That is a good thing. AI image tools are not going away, and some of them are genuinely helpful. But if this technology is going to sit inside everyday apps used by ordinary people, the baseline has to be simple: convenience cannot come at the cost of dignity.
Sources: BBC News reporting on the ASA ruling against the PixVideo advert, accessed 18 March 2026; ASA statement as quoted by the BBC.
