Talking AI toys are starting to look like one of those ideas that sound brilliant in a shop demo and turn out to be much less straightforward in real life. For tired parents, the pitch is easy to see: a toy that chats, tells stories, encourages language and keeps a child engaged for a while. But new research from the University of Cambridge suggests families should slow down before treating these gadgets as clever little learning companions.
Researchers studying young children playing with an AI toy called Gabbo found that the toy sometimes misunderstood what children were saying, handled emotions badly and struggled with the kind of pretend play that matters a lot in the early years. In one example, a child said, “I’m sad,” and the toy breezily replied that it was a happy little bot and wanted to keep the fun going. In another, a child said, “I love you,” and got a stiff, guardrail-heavy response that sounded more like a software warning than a conversation.
None of that makes these toys dangerous, but it does mean the technology is not nearly as emotionally aware as the marketing can make it seem.
What has happened?
The Cambridge project is being described as the first systematic study of how generative AI toys might affect children under five. The researchers observed 14 children aged three to five interacting with Gabbo, a soft toy powered by a voice chatbot. They also spoke to parents, early-years practitioners and children’s charities.
The big concern was not physical safety. It was what the researchers called psychological safety. Very young children are still learning what friendship, turn-taking, comfort and imagination actually are. If a toy behaves as if it is a friend but then misreads feelings, ignores interruptions or derails pretend play, that could be confusing rather than helpful.
The report also raised privacy questions. Parents in the study worried about what information these toys might collect, where recordings might go, and how clearly any of that is explained before purchase. That matters because a toy in a child's bedroom or playroom can end up hearing much more than a normal gadget would.
The researchers are now calling for tighter rules, clearer labelling and new safety standards for AI toys aimed at young children. That feels sensible. We have labels for food, age ratings for games and safety tests for physical toys. It is not unreasonable to expect something similar when a product is effectively combining a toy with a chatbot.
Why this matters in everyday family life
For most UK households, the question is not whether an AI toy is futuristic. It is whether it is useful, safe and worth the money. Gabbo costs around £80, and it will not be the last product like it. More versions will arrive, and they will probably sound smoother, friendlier and more educational with each release.
That creates a familiar risk: parents may buy based on the promise rather than the reality. An AI toy might look like a reading buddy, a language helper or a harmless bit of fun. But if it cannot reliably understand a small child’s voice, if it flattens emotional moments into canned replies, or if it nudges children towards seeing a machine as a trusted friend, then adults need to keep the technology in perspective.
There is a wider pattern here too. As we wrote in our earlier look at ChatGPT’s new safety labels, AI products are becoming more capable faster than most people can reasonably assess the risks. The answer is not to reject every new tool, but to ask better questions before bringing one into daily life.
What parents should check before buying
If you are considering an AI toy for a child, a few practical questions are worth asking.
- Does it need the internet all the time? If so, that is not just a toy. It is an always-connected service.
- What data does it collect? Look for a plain-English privacy policy, not just vague language about improving the service.
- Can you mute it, limit it or turn it off easily? Good parental controls matter more than slick marketing.
- Where will it be used? Shared family spaces are much safer than bedrooms, especially with younger children.
- What is the toy actually good at? Story prompts or simple back-and-forth chat may be fine. Emotional support is a very different claim.
It is also worth watching how your child uses it. Are they laughing and inventing games, or are they becoming frustrated because the toy is not really listening? Are they treating it like a prop in play, or like a relationship they are relying on? Those are not small differences when children are still learning how conversations and emotions work.
So should families avoid them entirely?
Not necessarily. There may well be sensible uses for AI in children’s products, especially for storytelling, language games or guided activities with an adult nearby. But that is quite different from selling a toy as a child’s friend, companion or emotional confidant.
Treat AI toys more like experimental consumer tech than proven early-years tools. If you buy one, stay involved. Keep it in a shared room, listen to how it responds, and be ready to explain when it gets something wrong. Do not assume a fluent voice means genuine understanding.
That matters beyond toys. AI is getting better at sounding warm and confident. Sometimes that fluency reflects something useful; sometimes it is merely convincing. For young children, the difference is hard to spot.
So yes, this is one of those stories where the boring answer is probably the right one: better standards, clearer labels and a bit more scepticism before handing a chatbot-shaped toy to a preschooler. That is not anti-technology. It is simply what responsible consumer tech should look like.
