The Verge
US · 1 hr ago
ChatGPT’s ‘Trusted Contact’ will alert loved ones of safety concerns
OpenAI is launching an optional safety feature for ChatGPT that allows adult users to assign an emergency contact for mental health and safety concerns. Friends, family members, or caregivers designated as a "Trusted Contact" will be notified if OpenAI detects that a person may have discussed topics like self-harm or suicide with the chatbot.
"Trusted Contact is designed around a simple, expert-validated premise: when someone may be in crisis, connecting with someone they know and trust can make …