AI chatbots have become confidants, tutors, and sometimes, a problem. Now, OpenAI and Meta are attempting a course correction, reshaping how their chatbots interact with teens in crisis.
Let’s face it: adolescence is messy and emotional, and teens are often desperate for companionship. Enter AI chatbots, trained to respond and engage. But recent tragedies, including the alleged role ChatGPT played in the suicide of 16-year-old Adam Raine, laid bare a brutal gap in safeguards.
A RAND Corporation study didn’t hold back: popular chatbots, including ChatGPT, Gemini, and Claude, showed inconsistent, even dangerous, responses to suicide queries. It urged more refinement and external oversight. Cue a high-stakes wake-up call.
The New Moves — What’s Changing
OpenAI: Parental Controls and Emotional Alerts
OpenAI is rolling out a suite of measures this fall. Parents will finally be able to link their accounts to their teens’, disable memory and chat-history features, and receive alerts if their teen triggers a flag for “acute distress.” The chatbot will also reroute emotionally intense conversations to more “capable” models trained for crisis response, even for adult users.
One expert notes, albeit cautiously, “it’s encouraging… but these are incremental steps,” underscoring how little accountability currently exists without independent safety benchmarks.
Meta: Blocking Conversations, Redirecting Help
Meta has responded by barring its bots from engaging teens on topics like self-harm, suicide, disordered eating, and romantic content unsuitable for minors. Instead, its chatbots now redirect teens to expert resources. This follows leaked documents suggesting some past AI interactions strayed into inappropriate territory.
A Meta spokesperson put it simply: they built teen protections from the jump and are now “adding more guardrails as an extra precaution.”
Voices That Feel Real
One youth counselor who’s worked with both teens and tech says, “AI can feel like a friend when there’s no one else around, but that doesn’t make it one. These changes are overdue.” And a concerned parent adds quietly, “It’s good they’re doing something, but I’d rather have seen preventive design instead of reactive PR.”
Why It’s a Big Deal, But Still Not Enough
This is a pivot away from tech titans gambling on good intent alone. Redirecting crisis chats and sending alerts are solid first steps, but these systems can still fail unpredictably. That’s why RAND’s call for clinical testing, enforceable standards, and third-party safety measures remains key.
OpenAI and Meta are scrambling to reconcile innovation with responsibility, trying to build a safer AI playground for teens. But until independent oversight and testing become standard, not optional, it’s still too early to exhale.