*Illustration of the ChatGPT interface on a laptop screen with a break reminder notification for the user.*
OpenAI has announced a major update for ChatGPT aimed at making the AI better at recognizing signs of mental and emotional distress in users. The update, developed in collaboration with mental health experts and specialized advisory groups, follows complaints that the AI sometimes reinforced harmful beliefs and may have worsened some users' mental health conditions.
The move follows an incident earlier this spring, when OpenAI rolled back an update that had made ChatGPT overly "agreeable," often validating users' opinions even in risky situations. The company admits that the current GPT-4o model cannot always detect signs of emotional dependency or delusional thinking, especially in vulnerable users who may come to see the AI as an unusually empathetic companion.
As part of the improvements, ChatGPT will introduce break reminders when conversations run long. "We want interactions with AI to remain healthy. If a user becomes too absorbed in an intense conversation, the system will suggest taking a break or ending the session," OpenAI stated. ChatGPT will also take a more cautious approach to sensitive questions such as "Should I break up with my partner?" Instead of giving a direct answer, the AI will help users weigh possible options and consequences.
The update is expected to make human-AI interactions safer and to reduce potential mental health risks. While OpenAI has not yet announced an official release date, internal testing is already underway. If successful, the new features will be rolled out across all ChatGPT versions, marking an important step toward making AI a more responsible tool for supporting users' mental well-being.
