

OpenAI has rolled out a new safety feature, Trusted Contact, in ChatGPT. It allows adult users to add a trusted person who can be alerted if the system detects serious self-harm risk in conversations.
The update comes amid growing scrutiny over how AI chatbots handle mental health-related interactions. Legal cases and reports have raised questions about chatbot responses in sensitive situations. OpenAI says it has improved its safety systems and review process.
The feature is optional and available globally to users aged 18 and above (19 and above in South Korea). It introduces a structured safety link between the AI and real-world support.
Key design elements include (a sketch of this flow follows the list):
User selects a trusted person such as a family member, friend, or caregiver
The selected person receives an invitation and must accept within 7 days
Users can change or remove the contact anytime from settings
Once active, the contact becomes part of the emergency response flow
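The flow can be pictured as a small state machine. The sketch below is purely illustrative; the class, field, and status names are assumptions made for clarity and do not reflect OpenAI's actual implementation or API.

```python
# Illustrative sketch only: a hypothetical model of the invitation
# lifecycle described above. None of these names come from OpenAI.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from enum import Enum


class ContactStatus(Enum):
    INVITED = "invited"    # invitation sent, awaiting acceptance
    ACTIVE = "active"      # accepted; part of the emergency response flow
    EXPIRED = "expired"    # not accepted within the 7-day window
    REMOVED = "removed"    # removed by the user from settings


@dataclass
class TrustedContact:
    name: str
    invited_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    status: ContactStatus = ContactStatus.INVITED

    ACCEPTANCE_WINDOW = timedelta(days=7)  # per the feature description

    def accept(self, now: datetime) -> None:
        # The invitation must be accepted within 7 days of being sent.
        if now - self.invited_at > self.ACCEPTANCE_WINDOW:
            self.status = ContactStatus.EXPIRED
        else:
            self.status = ContactStatus.ACTIVE

    def remove(self) -> None:
        # Users can change or remove the contact anytime from settings.
        self.status = ContactStatus.REMOVED
```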
If ChatGPT detects high-risk self-harm signals, the system first responds within the chat with supportive messages. It may suggest helplines or encourage reaching out to real people. If the risk appears serious, the case is escalated to OpenAI’s trained safety team for review.
Only after this confirmation is the trusted contact alerted. The alert does not include chat history; it states only that concerning self-harm signals were detected and shares support resources.
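Put together, the escalation logic reads as a three-step pipeline: in-chat support first, human review second, and a contact alert (without chat history) only after confirmation. Here is a minimal sketch of that pipeline; the function names and risk levels are hypothetical, assumed only to make the described flow concrete.

```python
# Illustrative sketch only: the escalation flow as described in the
# article. Names and risk tiers are assumptions, not OpenAI's system.
from enum import Enum


class Risk(Enum):
    NONE = 0
    ELEVATED = 1   # respond in-chat with supportive messages and helplines
    SERIOUS = 2    # escalate to the trained human safety team for review


def human_safety_review() -> bool:
    # Placeholder: in the real system a trained reviewer confirms the
    # risk (OpenAI says serious cases are reviewed within an hour).
    return True


def handle_message(detected_risk: Risk, contact_active: bool) -> list[str]:
    actions: list[str] = []
    if detected_risk is Risk.NONE:
        return actions

    # Step 1: the system always responds within the chat first.
    actions.append("show supportive in-chat response and helpline resources")

    if detected_risk is Risk.SERIOUS:
        # Step 2: human review; automation alone never alerts anyone.
        confirmed = human_safety_review()
        if confirmed and contact_active:
            # Step 3: the alert shares no chat history, only a notice
            # that concerning signals were detected, plus resources.
            actions.append("notify trusted contact (no chat history shared)")
    return actions
```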
Other tech companies are moving in the same direction. Meta has added Instagram alerts when teens repeatedly search for self-harm-related terms. Google has upgraded Gemini AI to surface crisis support options during distress signals.
The pattern is consistent across platforms: the focus is shifting from content removal toward early intervention and real-time support triggers.
OpenAI says all serious cases are reviewed by trained staff within an hour. However, the system has limits. Automated detection may miss context or misread conversations. False positives are also possible.
The system depends on scanning sensitive conversations for risk detection. This raises questions about the extent to which personal chat monitoring is acceptable. Trust becomes central to how users respond to such features.
Trusted Contact changes the role of AI inside sensitive conversations. It pushes ChatGPT from a reactive tool to a preventive safety layer. The system responds to queries and triggers real-world action when risk appears.
This creates a structural trade-off:
Improved early intervention in crises
Increased dependence on automated detection systems
Ongoing tension between privacy and monitoring
Risk of both missed signals and unnecessary alerts
The broader industry shows similar movement, with AI platforms gradually being connected to external safety networks. The next challenge is measuring the accuracy of these tools and building the public trust needed for wider adoption.