OpenAI is preparing a new safety feature that will alert users if ChatGPT detects signs of mental health problems. Older users will be able to pre-select a friend or family member to be notified. The company has not yet specified what criteria the warning will be based on.
The move comes amid growing complaints that long conversations with AI chatbots can harm some people's mental health, including reports of chatbot conversations triggering delusions and self-harm. The company is working with health experts to adjust its AI models to address these problems.
However, there are concerns that such measures could compromise users' privacy and confidentiality. Because many people share deeply personal information with the chatbot, the system will operate only with users' consent (opt-in). The company is also testing new methods for ChatGPT to more accurately identify signs of mental distress in users.