GPT-5 Enhances Mental Health Responses, Cuts Harmful Outputs by 65%
OpenAI’s GPT-5 Delivers Breakthrough in Mental Health Support
OpenAI has unveiled GPT-5, its latest AI model, with enhancements specifically targeting mental health conversations. According to the company's internal data, approximately 1 million users each week discuss suicide-related topics with ChatGPT—highlighting an urgent need for safer, more responsible AI interactions.
Key Improvements in GPT-5
The upgraded model was developed in collaboration with 300 mental health professionals from 60 countries, focusing on three critical areas:
- Severe mental health symptoms (e.g., hallucinations, mania).
- Suicidal and self-harm tendencies.
- Emotional dependency on AI.

Performance Metrics
- A 65% reduction in unsafe responses compared to GPT-4o.
- 91% compliance rate in suicide-related tests (up from 77%).
- 52% fewer inappropriate answers as rated by experts.
- Specialized improvements include:
  - 65% drop in inappropriate psychosis/mania responses.
  - 80% decline in harmful answers regarding AI emotional dependency.
Why This Matters
The advancements underscore OpenAI's commitment to AI safety, particularly in high-risk conversations. With mental health crises escalating globally, GPT-5's safer responses could make it a critical resource for users seeking immediate support.
Key Points
- GPT-5 significantly reduces harmful outputs in sensitive conversations.
- Suicide-related response compliance jumps to 91%.
- Developed with input from 300 international mental health experts.

