A fresh MIT Technology Review report says AI health tools are moving out of the “interesting experiment” phase and into mainstream consumer products. Microsoft has launched Copilot Health, Amazon has widened access to its Health AI tool, and the big general-purpose chatbots are increasingly being used for symptom questions, test-result explanations and appointment prep.
Most of these products are aimed at US users first, not the UK. Even so, the direction of travel is obvious. If AI can read notes, pull in wearable data and answer health questions in one place, similar ideas will keep showing up in apps, patient portals and devices here too. So this is worth thinking about before the marketing catches up with you.
The appeal is easy to understand. Plenty of people have stared at a blood-test result they do not fully understand, forgotten half their questions in a GP appointment, or fallen into a late-night spiral of symptom searching. An AI tool that promises to pull things together, explain them in plain English and help you prepare for a real consultation sounds useful because, in some cases, it probably is.
But health is one of those areas where “helpful most of the time” is not a high enough bar on its own. The same MIT Technology Review piece points out that researchers are worried these systems are being pushed into public use before enough independent testing has happened. In other words, the companies building them may be doing internal checks, but ordinary patients still have to live with the consequences when the tool sounds confident and gets something important wrong.
What these tools may be genuinely good at
Used carefully, AI health tools can do a few sensible things. They may help you organise notes before an appointment, turn a pile of measurements into a simpler summary, or suggest questions you want to ask a clinician. That is closer to admin support than diagnosis, and it is a much safer lane.
That is also why this trend is bigger than the question of AI scribes in GP appointments. Tools inside consultations are one part of the picture, but there is also a growing push to bring more of your personal health information into consumer AI products at home. We have already seen that tension in our piece on AI health coaches that want access to your records. Convenience is real. So is the privacy trade-off.
The problem starts when an organisational tool quietly turns into a quasi-doctor. Microsoft says Copilot Health is not a substitute for professional medical advice. NHS 111 online makes a similar distinction from the public-health side: it can tell you what to do next, but it does not give a diagnosis. That difference matters. A tool that helps you prepare is one thing. A tool that makes you feel medically reassured when you should be getting checked is something else.
Questions worth asking before you trust an AI health answer
You do not need a technical background to use common sense here. A few calm questions go a long way.
- What is this tool actually doing? Is it summarising information you already have, helping you prepare questions, or trying to tell you what condition you might have?
- Where is the information coming from? Does it show reliable sources and citations, or just give you a polished answer with no clear basis?
- What data is it using about you? Wearables, records, medication lists and test results can all add context, but that data can also contain errors or go out of date.
- Who gets to keep that data? Can you disconnect records and delete information easily, or are you making a long-term privacy trade without realising it?
- What happens if it is wrong? Does the app nudge you toward proper care, or does it subtly encourage you to stay inside the app?
Those are not paranoid questions. They are basic consumer questions for a product operating in a sensitive area.
One practical rule: never let the bot outrank the red flags
If an AI tool tells you not to worry, but your body is telling you something serious may be wrong, trust the red flags first. NHS guidance is very clear that 111 online is not for emergencies, and that chest pain with symptoms such as pressure, spreading pain, sweating, sickness, light-headedness or shortness of breath means you should call 999 straight away. That principle holds beyond chest pain too: urgent symptoms are not the moment for a reassuring chatbot answer.
There is a softer version of the same problem as well. Even when something is not an emergency, AI can sound neat and complete while missing the messy human context. That is one reason we keep coming back to overconfident chatbot behaviour on ManyHands. In our earlier piece on AI tools that side with you too quickly, the risk was bad personal advice. In health, the stakes are higher.
The calm takeaway
AI health tools are not automatically bad, and they are not automatically snake oil either. They may become genuinely useful for prep, explanation and admin. They may even help some people ask better questions and get more out of a rushed appointment. But that is not the same as earning full trust.
For UK readers, the safest mindset is simple: treat these tools as assistants, not authorities. Let them help you organise, not diagnose. Let them help you think of questions, not settle arguments with your symptoms. And if a company wants your records, your wearable data and your confidence all at once, slow down and ask what you are getting back.
If that sounds cautious, good. Health is one of the few areas where a little healthy scepticism is still a feature, not a bug.
Sources:
MIT Technology Review — There are more AI health tools than ever—but how well do they work?
Microsoft AI — Introducing Copilot Health
NHS 111 online — Get help for your symptoms
NHS — Chest pain
