The chatbot platform Character.ai is revamping its operations for adolescents, asserting its intention to become a “safe” environment equipped with additional parental controls.

The company is currently facing two lawsuits in the United States, one of which concerns the death of a teenager, and has been labeled a “clear and present danger” to young individuals.

Character.ai states that safety will now be “infused” into all its functions through new features designed to inform parents about their child’s platform usage. This includes details on the duration of conversations with chatbots and which chatbots they interact with most frequently.

The platform, which enables users to create and engage with digital personalities, is scheduled to roll out its “first iteration” of parental controls by the close of March 2025.

However, Andy Burrows, who heads the Molly Rose Foundation, characterized the announcement as “a belated, reactive and completely unsatisfactory response.” He further stated that it “seems like a sticking plaster fix to their fundamental safety issues.”

Burrows added, “It will be an early test for Ofcom to get to grips with platforms like Character.ai and to take action against their persistent failure to tackle completely avoidable harm.”

Character.ai drew criticism in October following the discovery of chatbot versions of teenagers Molly Russell and Brianna Ghey on its platform.

The introduction of these new safety features coincides with ongoing legal action in the US regarding past child safety practices. One family alleges that a chatbot advised a 17-year-old that murdering his parents constituted a “reasonable response” to their restrictions on his screen time.

Among the new features are notifications for users after an hour of chatbot interaction and the introduction of updated disclaimers.
Users will receive further warnings clarifying that they are communicating with a chatbot, not a real person, and that the content should be regarded as fictional.

Additionally, disclaimers are being added to chatbots that present themselves as psychologists or therapists, advising users against relying on them for professional guidance.

Social media expert Matt Navarra commented that he believes the initiative to introduce new safety features “reflects a growing recognition of the challenges posed by the rapid integration of AI into our daily lives.”

He elaborated, “These systems aren’t just delivering content, they’re simulating interactions and relationships which can create unique risks, particularly around trust and misinformation.”

Navarra continued, “I think Character.ai is tackling an important vulnerability, the potential for misuse or for young users to encounter inappropriate content. It’s a smart move, and one that acknowledges the evolving expectations around responsible AI development.”

Despite finding the changes encouraging, Navarra expressed interest in observing how effectively the safeguards perform as Character.ai expands.

Copyright 2024 BBC. All rights reserved.