Google has quietly removed an AI-powered search feature that pulled together health advice from people online, and for most readers that looks less like a missed opportunity than a sensible reality check.
The feature, called What People Suggest, was designed to organise comments and experiences from other internet users into neat themes. In theory, that sounds helpful. If you have arthritis, migraines or another long-term condition, hearing how other people cope can be reassuring. Many people do want practical, lived experience alongside official medical guidance.
But there is a big difference between finding a patient forum and having an AI package up advice from strangers so it feels easy, tidy and authoritative. That extra layer matters. Once a search engine starts summarising, grouping and presenting health suggestions in a polished way, it can make weak advice look more trustworthy than it really is.
According to The Guardian, Google has now scrapped the feature after launching it on mobile in the US last year. The company says the decision was part of a broader simplification of the search page rather than a safety issue. Even so, the timing is hard to separate from the wider pressure on AI health tools, especially after criticism of inaccurate health information in Google’s AI Overviews earlier this year.
Why this matters in real life
This is not just a Silicon Valley product tweak. It goes straight to a question millions of people now face: when an AI tool gives you health information, how much confidence should you place in it?
For ordinary UK readers, the practical answer is still: less confidence than the polished design invites.
People often search for symptoms late at night, while worried, tired or trying to avoid bothering anyone. In that state, a quick summary can feel like a gift. If it appears at the top of the page, inside Google, many users will naturally assume it has already been filtered for quality. That is exactly why health features need a higher bar than ordinary lifestyle tips or shopping advice.
Lived experience can be valuable. It can tell you what questions to ask, what side effects to watch for, or what daily routines other people found useful. But it can also be contradictory, anecdotal or completely wrong for your situation. The problem is not that patients talk to each other online. The problem is when AI turns that messy conversation into something that looks almost clinical.
AI can still help — just not as your stand-in clinician
None of this means AI has no place in health information. It can be useful for translating medical jargon, helping you prepare questions before a GP appointment, summarising trusted leaflets in plainer English, or pointing you towards official guidance faster.
Used carefully, it can reduce some of the friction that makes healthcare information hard to navigate. But the safer role is closer to a reading assistant than a decision-maker.
That mirrors the wider pattern we are seeing across consumer AI. The tools work best when they save time around the edges, not when they invite you to hand over judgement. We have already looked at why clearer AI safety labels matter at home and at work, and health searches are a perfect example of why those limits are needed.
What UK readers should do instead
If you use Google, ChatGPT or any other AI tool to look up a symptom or condition, it helps to treat the result as a starting point, not an answer.
A few simple habits make a real difference:
- Check the source underneath the summary. Is it the NHS, a major charity, a hospital, a regulator, or just a forum post?
- Separate background reading from advice. “Other people say this helped them” is not the same as “this is safe for you”.
- Be extra cautious with medicines, supplements, children’s health, pregnancy and mental health. These are areas where bad suggestions can do real harm.
- Use AI to prepare better questions. For example: “What should I ask my GP about this symptom?” is safer than “What treatment should I start?”
- When something feels urgent, skip the chatbot. Use NHS 111, your GP, a pharmacist or emergency services as appropriate.
That may sound obvious, but good design can blur even the clearest boundaries. The calmer and smoother an AI interface becomes, the easier it is to forget that it is still remixing text, not examining you.
A useful reminder for the whole AI industry
Google removing this feature may end up being important beyond health search itself. It is a reminder that not every seemingly clever AI idea becomes a good product once it meets real people and real risk.
There is a strong temptation in tech to assume that if users want something in messy form, they will want the AI-compressed version even more. Sometimes that is true. It can work for travel planning, shopping comparisons or admin help. But health is different. When people are scared, in pain or making decisions about treatment, speed and neatness are not enough.
For ManyHands readers, the lesson is reassuringly simple. AI can still be useful in everyday life, including around health information. Just keep it in the right lane. Let it help you find, translate and organise information. Do not let it borrow more authority than it has earned.
In that sense, Google quietly backing away from AI-organised advice from strangers is not a failure of progress. It may be one of the more grown-up AI decisions we have seen lately.
Sources:
The Guardian — Google scraps AI search feature that crowdsourced amateur medical advice
Google Blog — 6 health AI updates we shared at The Check Up
