ChatGPT’s ‘Trusted Contact’ will alert loved ones of safety concerns
Jess Weatherbed is a news writer focused on creative industries, computing, and internet culture. Jess started her career at TechRadar, covering news and hardware reviews.
OpenAI is launching an optional safety feature for ChatGPT that allows adult users to assign an emergency contact for mental health and safety concerns. Friends, family members, or caregivers designated as a “Trusted Contact” will be notified if OpenAI detects that a person may have discussed topics like self-harm or suicide with the chatbot.
“Trusted Contact is designed around a simple, expert-validated premise: when someone may be in crisis, connecting with someone they know and trust can make a meaningful difference,” OpenAI said in its announcement. “It offers another layer of support alongside the localized helplines already available in ChatGPT.”
The Trusted Contact feature is opt-in. Any adult ChatGPT user can enable it by adding contact details for a fellow adult (18+ globally or 19+ in South Korea) in their ChatGPT account settings. The Trusted Contact must accept the invitation within a week of receiving the request. Users can remove or edit their chosen contact in the settings, and the Trusted Contact can also choose to remove themselves at any time.
OpenAI says the notification is “intentionally limited” and won’t share chat details or transcripts with the Trusted Contact. If OpenAI’s automated systems detect that a user is talking about harming themselves, ChatGPT will encourage the user to reach out to their Trusted Contact for help and warn them that the contact may be notified. A “small team of specially trained people” will then review the situation, according to OpenAI, and if the conversation is determined to indicate serious safety concerns, ChatGPT will send the Trusted Contact a brief email, text message, or in-app notification.
This builds on the emergency contact feature that was introduced alongside ChatGPT’s parental controls in September, after a 16-year-old took his own life following months of confiding in ChatGPT. Meta has also introduced a similar feature that alerts parents if their kids “repeatedly” search for self-harm topics on Instagram.
