Mounting Criticism Pushes Meta to Retrain Its AI Chatbots for Safety

Meta has come under fire for its AI chatbot program amid reports of unsafe interactions with minors and the generation of harmful content. The company has announced retraining measures to stop bots from engaging teens on topics like self-harm, eating disorders, and relationships, while banning overtly sexualised personas such as “Russian Girl.”

Reuters investigations revealed troubling incidents, including chatbots producing sexualised images of underage celebrities, impersonating well-known figures, and sharing dangerous information. In one instance, a chatbot was linked to the death of a New Jersey resident. Critics say Meta’s response came too late, with child protection advocates calling for stronger pre-launch testing protocols.

The issue is not confined to Meta. A lawsuit against OpenAI alleges that ChatGPT played a role in encouraging a teen’s suicide, fuelling concern that AI firms are rushing products without adequate safeguards. Lawmakers warn that chatbots risk misleading vulnerable users, amplifying harmful material, and impersonating trusted personalities.

Meta’s AI Studio has compounded these risks. The platform allowed parody bots to impersonate celebrities such as Taylor Swift and Scarlett Johansson, with some reportedly developed by employees. These bots flirted, suggested romantic encounters, and produced inappropriate outputs, despite Meta’s rules.

The fallout has drawn attention from regulators, with the U.S. Senate and 44 state attorneys general launching probes. While Meta points to stronger teen protections, it has yet to detail plans for addressing broader risks like inaccurate health advice or racist content.

Bottom line: Meta faces sharp pressure to align its chatbot policies with public safety standards, and skepticism among regulators and parents is likely to persist until more effective safeguards are in place.
