A striking new MIT Technology Review report says some tech workers in China are being encouraged to document their jobs so AI agents can copy parts of how they work. In some cases, that means writing detailed manuals of day-to-day tasks. In others, it means turning chat histories, files and routines into something a software agent can imitate.
The examples in the piece are Chinese, but the underlying question is much wider and very current in the UK too. More employers now want staff to use AI for notes, drafting, admin and routine tasks. The UK government is also openly pushing wider workplace adoption, with free AI training and a goal of helping millions of workers build practical AI skills. That can be useful. It can also become uncomfortable very quickly when “show us how you work” starts to blur into “help us build a cheaper version of you”.
Why this story matters beyond the tech sector
It is easy to hear “AI agents” and assume this only affects software engineers. It does not. Plenty of office jobs include repeatable tasks that can be mapped out, templated and partly automated: answering common customer queries, drafting routine emails, summarising meetings, updating spreadsheets, preparing reports or moving information between systems.
That does not mean a worker can be fully replaced by a chatbot tomorrow. The more realistic short-term pattern is messier. A company asks staff to write playbooks, document prompts, label examples, correct AI mistakes and explain the little bits of judgement that make their work run smoothly. Over time, more of that process gets built into software owned by the employer, while the human role shifts towards checking, supervising and fixing what the system gets wrong.
We have already seen versions of that elsewhere. Earlier this year, the Guardian spoke to workers who said they were asked to help train AI systems that later reduced the value of their work, cut their fees or changed the shape of their role. That fits with a broader ManyHands theme: when companies talk about AI “efficiency”, workers often need to ask harder questions about who keeps the benefit, who carries the risk and what happens if quality slips. We covered part of that pressure recently in our guide to what UK workers should ask when a company blames AI for job cuts.
What UK workers should check if this starts happening at work
First, ask what the AI system is actually for. Is it meant to remove dull admin and free you up for better work, or is it being positioned as a direct stand-in for tasks your team currently handles? Those are not the same thing. A tool that drafts meeting notes is one thing. A plan to capture your judgement, tone, workflows and decisions so other people can run them more cheaply is another.
Second, pay attention to what material is being fed into the system. The MIT Technology Review report describes tools that can draw on work chats and files. In a UK workplace, that should raise practical questions about privacy, access and control. Are your messages, writing style and decision patterns being used only inside one team, or absorbed into a broader internal system? If outputs are wrong, biased or misleading, who is accountable for fixing the consequences?
Third, be realistic about quality. Employers may hope an AI agent can bottle up a person’s know-how, but real work often includes tacit judgement that is hard to reduce to a checklist. People notice tone, context, exceptions, office politics, customer mood and the small warning signs that a process is going off track. When that subtle work gets flattened into rigid instructions, the result can look efficient while quietly becoming worse. That is part of why so much AI-generated office output still needs close human checking, as we noted in our recent piece on AI “workslop” at work.
This is also about bargaining power
One reason this story lands so hard is that it changes the balance of power around knowledge. Many jobs depend on accumulated know-how: how to calm an angry customer, spot a bad brief, phrase a sensitive email, or notice when a number does not look right. Once that know-how is formalised into a company system, it can become easier to standardise, monitor and reassign, even if the AI is not truly capable of replacing a good worker on its own.
That does not make every documentation exercise sinister. Good documentation can protect teams, help new starters and reduce burnout when one person is carrying too much invisible process knowledge. But workers should not be naive about the incentives. If a company asks you to map your role in minute detail, it is fair to ask how that material will be used, whether it will affect staffing plans, and whether humans will still have the authority to override the system.
A sensible way to think about it
For most UK readers, the useful takeaway is not “refuse all AI” or “assume replacement is immediate”. It is to separate helpful automation from extraction. If AI is taking repetitive admin off your plate, that can be a genuine win. If it is mainly turning your experience into a tool that weakens your leverage while leaving you responsible for the mistakes, that is a different bargain.
Workplace AI is moving from novelty to management practice. As that happens, staff should watch not just what the tool can do today, but what the organisation is learning about them through the process. When your workflow becomes training data, the real product may not be convenience. It may be a more controllable version of your job.
Sources:
MIT Technology Review, Chinese tech workers are starting to train their AI doubles and pushing back
The Guardian, Keen bosses, strange mistakes and a looming threat: workers on training AI to do their jobs
UK government, Free AI training for all as programme expands to provide 10 million workers with key AI skills by 2030
