Why OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude struggle with suicide-related queries, according to a new study

A recent study reveals inconsistencies in how AI chatbots such as ChatGPT, Gemini, and Claude respond to suicide-related queries, raising concerns about their safety for vulnerable users. While the chatbots generally decline to answer the highest-risk questions, their varied responses to less direct prompts highlight the need for clearer safety standards and stronger ethical guidelines for AI developers working in mental health support.