A new Guardian report says two in five Australian GPs now use AI scribes to record patient notes. That figure is from Australia, not the UK, but it is still a useful glimpse of where things may be heading. NHS England has already published guidance on the use of AI-enabled ambient scribing products in health and care settings, which is a pretty clear sign that this is no longer a far-off idea.
If you have not come across the term before, an AI scribe is software that listens during a consultation, turns the conversation into a transcript, and can help draft notes, summaries or letters. In theory, that means less frantic typing from the clinician and more eye contact with the person in front of them. You can see why doctors under pressure might be interested.
That interest is not automatically a bad thing. There are situations where an AI scribe could be genuinely helpful. If a GP spends less time staring at a keyboard, the appointment may feel more human. Better notes could also reduce admin mistakes later on. But this is still sensitive personal information, and ordinary patients are entitled to more than a vague “it just helps with paperwork” explanation.
What a good introduction should sound like
If a practice wants to use an AI scribe, the starting point should be plain English. You should be told clearly that the tool is being used, what it does, and whether it is recording audio, creating a transcript, or producing a summary. This should feel like a real choice, not a poster in the waiting room that treats silence as consent.
That matters because “consent” only means something if you can actually say no without feeling awkward or penalised. Medical Protection, writing for UK clinicians, says patients should give informed consent before AI tools share their personal data with third parties, and that the consent should be documented. It also points back to ICO guidance that says consent must be freely given, specific, informed and easy to withdraw.
So the first practical question is simple: am I being asked, or merely informed? Those are not the same thing.
Questions worth asking before you agree
You do not need a confrontation here. A few calm questions can tell you a lot.
- Is this tool recording the audio, or just producing live notes? There is a difference between temporary processing and a stored recording.
- Where does the information go? Is it staying inside approved NHS or practice systems, or being handled by an outside supplier?
- Who can access it? Ask who sees the transcript, summary or recording, and whether it is used for anything beyond your care.
- Who checks the final note? The clinician should still review and approve anything that goes into your record.
- Can I say no, or ask for it to be turned off? A genuine opt-out matters, especially if you are discussing something sensitive.
NHS England’s guidance is aimed at organisations rather than patients, and it applies to England rather than the whole UK, but the principles are useful. It says providers should carry out proper safety and data-protection assessments, be transparent about how information is used, and make sure staff review outputs before further action is taken. That last point is important. An AI scribe should be a drafting tool, not the final authority on what happened in the room.
Why this matters more in some appointments than others
There is another reason patients may want to pause and think. Note-taking is not only admin. It is also part of how clinicians process what they are hearing. Researchers quoted by the Guardian pointed out that writing and summarising can help with reflection, prioritising and understanding context. AI can turn words into tidy text, but that is not the same thing as noticing hesitation, embarrassment, fear or the fact that someone says “fine” while obviously not looking fine.
That may matter most in consultations about mental health, grief, family problems, sexual health, domestic abuse, addiction, or anything else where tone and trust are doing a lot of work. In those moments, it is reasonable to ask whether the tool can be paused or skipped. You are not being difficult. You are recognising that some conversations need more than efficient transcription.
The wider lesson is one we keep seeing across consumer AI. The more natural and helpful a system feels, the easier it is to forget to ask basic questions about trust, data and oversight. We saw a version of that in our piece on voice AI that sounds more human than before, and again when looking at AI tools that still need proper guardrails and supervision. A clinical setting is more sensitive than either of those examples, not less.
The calm takeaway
None of this means you should panic if a GP mentions an AI scribe. Used well, it could reduce admin and leave more room for actual conversation. The CQC (Care Quality Commission) has said AI can bring benefits to general practice, but only when it sits inside proper governance, safety checks and clinical oversight. That is the key idea to hold on to.
If your practice wants to use one, you do not need to become a technical expert on the spot. Just ask what the tool is doing, where the information goes, whether you can decline, and who checks the output before it becomes part of your record. Clear answers are a fair expectation when the subject is your health.
Sources:
The Guardian — Two in five Australian GPs use AI scribes to record patient notes
NHS England — Guidance on the use of AI-enabled ambient scribing products in health and care settings
CQC — GP mythbuster 109: Use of artificial intelligence in GP services
Medical Protection — Common medicolegal dilemmas healthcare professionals are facing with the use of AI
