OpenAI, which is expected to launch its GPT-5 AI model this week, is making updates to ChatGPT that it says will improve the chatbot's ability to detect signs of mental or emotional distress. To that end, OpenAI is working with experts and advisory groups to refine how ChatGPT responds in these situations, allowing it to present "evidence-based resources when needed."
In recent months, multiple reports have highlighted stories from people who say their loved ones experienced mental health crises in situations where the chatbot seemed to amplify their delusions. OpenAI rolled back an update in April that made ChatGPT too agreeable, even in potentially harmful situations. At the time, the company said the chatbot's "sycophantic interactions can be uncomfortable, unsettling, and cause distress."
OpenAI acknowledges that its GPT-4o model “fell short in recognizing signs of delusion or emotional dependency” in some instances. “We also know that AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress,” OpenAI says.
As part of efforts to promote “healthy use” of ChatGPT, which now reaches nearly 700 million weekly users, OpenAI is also rolling out reminders to take a break if you’ve been chatting with the AI chatbot for a while. During “long sessions,” ChatGPT will display a notification that says, “You’ve been chatting a while — is this a good time for a break?” with options to “keep chatting” or end the conversation.
OpenAI notes that it will continue tweaking "when and how" the reminders show up. Several online platforms, such as YouTube, Instagram, TikTok, and even Xbox, have launched similar notifications in recent years. The Google-backed Character.AI platform has also introduced safety features that inform parents which bots their kids are talking to, after lawsuits accused its chatbots of promoting self-harm.
Another tweak, rolling out "soon," will make ChatGPT less decisive in "high-stakes" situations. That means if you ask ChatGPT a question like "Should I break up with my boyfriend?" the chatbot will walk you through potential choices instead of giving you an answer.