Google is making Fitbit’s AI health coach more ambitious by letting it read medical records alongside the data already gathered from a wearable. On paper, that sounds like exactly the kind of upgrade people have been promised for years: less generic advice, more useful nudges, and a clearer sense of what your numbers might mean in everyday life.
But this is also the sort of AI development that deserves a slow breath before the applause. Health data is not like a playlist, a shopping basket or a work calendar. It is among the most personal information most people will ever share, and once a company invites you to connect lab results, medications and appointment history to an AI coach, the obvious benefits sit right beside some equally obvious questions.
According to Google, Fitbit users in the US preview will soon be able to link medical records directly in the app. That would let the coach answer more specific questions by combining clinical information with wearable data such as sleep, activity or glucose readings. Google says the data will not be used for ads and will remain under the user’s control. It is also careful to say Fitbit is not diagnosing or treating illness.
Why this sounds appealing
The attraction is easy to understand. Plenty of health apps are good at collecting numbers but weak at turning them into something genuinely helpful. If your watch spots a restless night, a higher heart rate or a pattern in your exercise, it can tell you that something happened. It is far weaker at explaining whether that matters in the context of the statin you started last month, the blood test you had two weeks ago or the lifestyle changes your GP suggested.
That is where linked records could make AI feel more useful. Instead of spitting out generic tips about cholesterol or sleep, a coach could explain trends using the details you have already been given by clinicians. In theory, that means fewer vague wellness clichés and more relevant prompts, reminders and follow-up questions.
There is also a practical appeal for people managing long-term conditions. If an app can help organise information before an appointment, show patterns over time, or turn medical jargon into plainer English, that could be genuinely valuable. Used carefully, AI can be a decent translator and organiser. As we wrote recently, the most useful way to think about AI right now is as a helper, not a substitute. Health is a very good example of that rule.
Why UK readers should still be cautious
The first practical point is simple: this is a US preview, not a UK service rollout. So for many ManyHands readers, there is no need to rush into anything today. But it does show where consumer health tech is heading, and the same questions will follow these tools wherever they spread next.
The big one is privacy. Even if a company promises not to use health records for advertising, users still need to trust how securely the data is stored, who can access it, how sharing works, and what happens if they later change their mind. Health information can reveal an enormous amount about someone’s life, and not all of it is obvious at first glance. Medications, test results and visit history can expose everything from chronic conditions to fertility treatment to mental health concerns.
There is also the risk of overconfidence. A smoother, more personalised answer can feel more authoritative than it really is. That matters because wellness tools often live in a grey area: they are not presented as medical devices, yet they increasingly talk in ways that sound tailored, informed and clinical. We have already seen why clearer boundaries matter in ChatGPT’s new safety labels, and the same lesson applies here. Personalisation can make an AI seem wiser without actually making it responsible for your care.
What this could be good for
Used sensibly, a tool like this could help with a few realistic jobs:
- summarising trends in plain English before a GP or consultant appointment
- helping you remember what changed after a new medication or routine
- showing how sleep, exercise or glucose patterns line up over time
- turning confusing lab terminology into simpler background reading
Those are useful support tasks. None of them require pretending the app is a clinician.
What it should not replace
It should not replace a proper conversation with a doctor, pharmacist or nurse. It should not be the place where you decide whether to ignore symptoms, change medication or self-diagnose something serious. And it definitely should not become another app that quietly nudges people to hand over deeply sensitive information just because the interface feels friendly.
That caution is especially important because AI health products often arrive wrapped in a reassuring promise: more personalised care, more insight, more empowerment. Some of that may be true. But “more personalised” is not the same thing as “clinically reliable”, and “more data” is not the same thing as “better judgement”.
The sensible takeaway
For ordinary UK readers, Fitbit’s latest move is best seen as an early glimpse of a broader trend. Consumer AI is pushing closer to the parts of life that used to feel more clearly off-limits: money, relationships, work decisions and health. Sometimes that will produce genuinely helpful tools. Sometimes it will mostly produce a shinier sales pitch.
For now, the right posture is neither panic nor blind enthusiasm. If these features reach the UK, they may be worth trying for organisation, question-planning and clearer explanations. Just keep the relationship in the right order. Let the app help you read your health information. Do not let it become the thing you trust more than the professionals who are actually accountable for your care.
Sources:
The Verge — Fitbit’s AI health coach will soon be able to read your medical records
Google Blog — How Google is using AI to improve health for everyone
