ChatGPT’s ‘Trusted Contact’ will notify loved ones of safety concerns


OpenAI is developing a ChatGPT safety feature that lets adult users designate someone to be contacted in a mental health emergency. Friends, family members, or caregivers designated as “Trusted Contacts” can be notified if OpenAI detects that a user may have discussed topics such as self-harm or suicide with the chatbot.

“Trusted Contacts is built around a simple, expert-backed insight: when someone is struggling, connecting with someone they know and trust can make a real difference,” OpenAI said in its announcement. “It adds another layer of support alongside the crisis hotline numbers ChatGPT already surfaces.”

The Trusted Contact feature is opt-in. Any adult ChatGPT user can activate it by adding another adult’s contact details (18+ worldwide, or 19+ in South Korea) in their ChatGPT account settings. The contact must accept the invitation within one week of receiving it. Users can remove or change their designated contact at any time, and a contact can likewise opt out at any time.

OpenAI says the information shared is “deliberately limited,” and that conversation content and chat logs are not shared with the contact. If OpenAI’s systems detect that a user is discussing self-harm, ChatGPT will encourage the user to reach out to their Trusted Contact for help and can notify that contact. A “small team of specially trained people” will review the situation, according to OpenAI, and if the conversation is judged to indicate a serious safety concern, ChatGPT will send the contact a brief email, text message, or in-app notification.

This builds on the emergency contact feature that was introduced alongside ChatGPT’s parental controls in September, following the death of a 16-year-old boy who took his own life after months of using ChatGPT. Meta has also introduced a similar feature that alerts parents if their children repeatedly search for self-harm content on Instagram.
