Would you work for an AI boss? What UK workers should check before software starts setting tasks and schedules

The latest poll on workplace AI asked a question that still sounds slightly absurd: would you accept an AI as your direct supervisor? According to Quinnipiac University, 15% of Americans said yes, while 80% said no. That is still a firm rejection overall, but it tells us something important. The idea of software assigning tasks and setting schedules no longer feels like pure science fiction. Enough people can now imagine it that pollsters are asking about it seriously.

For ordinary UK workers, the useful point is not that a shiny robot manager is about to appear in the office tomorrow. It is that bits of management are already being handed to software in quieter ways: shift allocation, queue assignment, expense approvals, productivity flags and automatic workflow decisions. You may never be told your boss is AI. You may simply notice that more decisions arrive through a system, with less room for context or discretion.

That is why this matters. We have already looked at how AI can be genuinely useful at work when it is applied to clear, boring tasks. The same principle applies here. If employers use software to handle repetitive admin well, staff may barely notice. If they use it clumsily to monitor, rank or pressure people, everyone notices very quickly.

What an “AI boss” usually looks like in real life

The TechCrunch report linked the poll to a wider shift already underway. Companies are experimenting with AI systems that handle parts of approval workflows, triage work, screen requests or act as a first layer before a human manager steps in.

In practice, an AI boss is unlikely to look like a humanoid supervisor delivering pep talks. It is more likely to be software that decides which jobs land in your queue, which schedules look “optimal”, or whose performance gets escalated. For some workers, that could mean faster answers to minor admin questions. For others, it could mean feeling managed by rules they cannot see and scores they do not fully understand.

There is nothing automatically sinister about that. Plenty of human managers are overloaded, inconsistent or slow. A system that clears simple approvals faster or helps distribute work more evenly could be useful. The problem starts when automation moves from helping managers to replacing judgement without being honest about it.

What UK workers should actually ask

Acas guidance on monitoring performance is helpful here. Acas says employers should explain how employees will be monitored, consult staff before introducing monitoring, and create a clear policy. It also warns that too much monitoring can damage trust, cause stress and reduce productivity, and in some circumstances may breach workers' legal and human rights.

That points to a few straightforward questions workers should be able to ask.

What is the system actually measuring? Is it tracking output, time online, messages answered, calls handled or location data? “AI” is often used as a vague label for ordinary monitoring dressed up as innovation.

Where is the human judgement? If the system assigns tasks, flags underperformance or suggests schedule changes, who checks whether the result makes sense in context? Caring responsibilities, disability, training time and temporary workload spikes do not always show up neatly in a dashboard.

Can decisions be challenged? If a worker thinks the software has got something wrong, there should be a clear route to question it. An automated score feels a lot less clever when it cannot cope with real life.

Is this helping people work better, or just watching them more closely? Those are not the same thing. A tool that reduces admin may free people up. A tool that measures everything and explains nothing may simply make work more anxious.

The wider worry

The same Quinnipiac poll found that 70% of Americans think advances in AI will reduce job opportunities. That helps explain why the “AI boss” question lands so awkwardly. It is not really about whether anyone wants a robot manager in the abstract. It is about whether workplace software will be used to squeeze more out of fewer people, with less transparency and less room to push back.

That anxiety is not irrational. We have already seen companies use AI as part of a wider story about restructuring, and workers are right to ask harder questions when employers blame AI for job cuts. But it is also worth separating legitimate concern from lazy hype. Not every workflow tool is a machine tyrant in the making. Sometimes it is just software doing tedious admin.

There is a similar lesson in our earlier piece on what to check before giving AI tools more access. Once a system gets deeper into your workflow, the boring governance questions matter more than the flashy demo. Who can see the data? What happens when the system gets it wrong? Workplace AI should be judged the same way.

The sensible middle ground

Most people do not want to be managed by a chatbot, and the poll shows that clearly. But that does not mean workers should ignore the trend until a company openly announces an “AI supervisor”. By then, the change may already be happening one workflow at a time.

The calm response is not panic or blanket rejection. It is to ask better questions early. If software is going to help assign work, assess performance or shape schedules, employees deserve to know what it is doing, why it is being used and where a real person still takes responsibility. If an employer can answer those questions clearly, the tool may turn out to be mostly boring admin. If it cannot, that is the real warning sign.


Sources:
TechCrunch — 15% of Americans say they’d be willing to work for an AI boss, according to new poll
Quinnipiac University — The Age Of Artificial Intelligence poll release, 30 March 2026
Acas — Monitoring performance