The primary targets of toy manufacturers are young children, who are developmentally immature and easily influenced, the advocacy group said.
Toys embedded with artificial intelligence chatbots undermine children’s healthy development and pose unprecedented risks, according to a new advisory published by advocacy group Fairplay, which warned parents against buying AI toys for their children during this holiday season.
The Nov. 20 advisory was endorsed by more than 150 child development and digital safety experts and organizations.
“AI toys are chatbots that are embedded in everyday children’s toys, like plushies, dolls, action figures, or kids’ robots, and use artificial intelligence technology designed to communicate like a trusted friend and mimic human characteristics and emotions,” Fairplay stated.
“Examples include Miko, Gabbo/Grem/Grok (from Curio Interactive), Smart Teddy, Folotoy, Roybi, and Loona Robot Dog (from Keyi Technology). Top toy maker Mattel also plans to sell AI toys. They are marketed to children as young as infants.”
Harmful AI interactions with children have drawn scrutiny from lawmakers, especially after the much-publicized lawsuit against Character.AI that accused the company of triggering suicidal thoughts in children and causing the death of a 14-year-old.
“The serious harms that AI chatbots have inflicted on children are well-documented, including fostering obsessive use, having explicit sexual conversations, and encouraging unsafe behaviors, violence against others, and self-harm,” Fairplay stated.
The toy manufacturers’ primary targets are young children, who are even less developmentally equipped to protect themselves than older children and teens, according to the advocacy group.
A one-page advisory released by the advocacy group briefly outlines five main reasons parents should not buy AI toys for their children.

Chief among them is that AI toys are typically powered by the same chatbot technology that has already harmed children. These toys also prey on children’s trust, disrupt healthy relationships and the ability to build resilience, invade family privacy by collecting sensitive data, and displace key creative and learning activities, according to the advisory.
“Testing by U.S. [Public Interest Research Group] has already found instances of AI toys telling children where to find knives, teaching them how to light a match, and even engaging them in sexually explicit conversations,” Fairplay stated.