AI was supposed to cut the boring bits of office work. Instead, many workers say it is creating a new kind of admin: reading through polished-looking drafts that still need checking, fixing and rewriting before they are safe to send on.
The problem now has a neat name: ‘workslop’. The Guardian uses it to describe AI-generated work that looks impressive at first glance but turns out to be thin, inaccurate or missing important context. BetterUp and Stanford researchers, whose survey helped popularise the term, found that 40% of US desk workers had received this kind of output in the previous month, and that the average incident took around two hours to resolve.
For UK readers, this matters because the problem is not limited to Silicon Valley or giant companies with custom AI systems. It shows up in ordinary jobs the moment chatbots start drafting emails, meeting notes, reports, slide decks or customer replies. The output may arrive faster, but the thinking does not disappear. In many cases it just moves further down the chain, onto the person who has to check whether any of it actually makes sense.
Why AI work can feel helpful and annoying at the same time
This is why executives and workers often describe the same tool so differently. Leaders may see a faster first draft and count that as a productivity win. Staff often experience the messier part: cleaning up generic wording, correcting invented details, stripping out overconfident claims and reshaping text so it fits what the company actually means.
That gap matters. If an AI assistant produces three pages in seconds, it can look like time has been saved. But if someone else then spends half an hour checking facts, fixing tone and removing errors, the gain is a lot less clear. As we noted in our recent piece on practical AI habits at work, these tools tend to help more when they are used for narrow, well-defined tasks rather than as a replacement for judgment.
What ‘workslop’ usually looks like
Sometimes it is obvious: an email that sounds strangely formal, a summary that misses the real point of the meeting, or a report that repeats broad clichés instead of giving a clear answer. Sometimes it is subtler. The draft looks tidy, the grammar is fine and the layout is polished, so it gets waved through even though key details are wrong or the recommendation is too vague to be useful.
That is what makes workslop frustrating. It does not always look broken. It looks nearly done. And nearly done can be more dangerous than obviously bad, because it invites people to trust it too quickly. We have already seen similar issues beyond the office, including research suggesting that people can lean on AI answers too readily when the output sounds confident.
Four sensible checks before you pass an AI draft on
1. Ask what job the tool was supposed to do.
AI is usually strongest at speeding up repetitive structure, not deciding what matters. If the real task required judgment, nuance or accountability, treat the draft as raw material rather than finished work.
2. Check the parts that would be embarrassing if wrong.
Names, dates, figures, quotes, policy wording and customer-facing promises deserve a manual check. These are exactly the details that can create extra work later if they slip through.
3. Watch for smooth but empty language.
A common warning sign is text that sounds busy without saying much. If a paragraph feels polished but vague, it may be hiding the fact that the tool has not actually solved the problem.
4. Notice when AI is creating extra review work for someone else.
If a chatbot helps you move faster but leaves a colleague to untangle the output, the organisation has not really saved time. It has just shifted the burden.
The real issue is management, not magic
None of this means AI is useless at work. It can still help with outlines, formatting, summarising a long document, or getting a rough version of routine text onto the page. The useful lesson is narrower and more practical: speed on the first draft is not the same thing as value at the end.
That is especially worth remembering in a shaky job market, where workers may feel pressure to use AI more often simply to look efficient. If managers reward volume, people will naturally reach for tools that generate more volume. But volume is exactly what makes workslop spread.
A calmer rule for UK workplaces is to judge AI by the amount of trustworthy work it removes, not the amount of text it produces. If the tool cuts admin without creating confusion, great. If it produces glossy drafts that need rescue, it may be adding friction rather than reducing it.
The point is not to reject AI on principle. It is to stop mistaking output for progress. When a draft arrives instantly, the most useful question is still the boring one: has this actually saved anyone time?
Sources:
The Guardian, Bosses say AI boosts productivity, workers say they’re drowning in ‘workslop’
BetterUp, Workslop is the new busywork and it’s costing millions
Deloitte, AI ROI: The paradox of rising investment and elusive returns
