Gemini is making it faster for distressed users to reach mental health resources
Robert Hart
is a London-based reporter at The Verge covering all things AI and a Senior Tarbell Fellow. Previously, he wrote about health, science and tech for Forbes.
Google says it has updated Gemini to better direct users to get mental health resources during moments of crisis. The change comes as the tech giant faces a wrongful death lawsuit alleging its chatbot “coached” a man to die by suicide, the latest in a string of lawsuits alleging tangible harm from AI products.
When a conversation indicates a user is in a potential crisis related to suicide or self-harm, Gemini already launches a “Help is available” module that directs users to mental health crisis resources, like a suicide hotline or crisis text line. Google says the update — really more of a redesign — will streamline this into a “one-touch” interface that will make it easier for users to get help quickly.
The help module also contains more empathetic responses designed “to encourage people to seek help,” Google says. Once activated, “the option to reach out for professional help will remain clearly available” for the remainder of the conversation.
Google says it engaged with clinical experts for the redesign and is committed to supporting users in crisis. It also announced $30 million in funding globally over the next three years “to help global hotlines.”
Like other leading chatbot providers, Google stressed that Gemini “is not a substitute for professional clinical care, therapy, or crisis support,” but acknowledged many people are using it for health information, including during moments of crisis.
The update comes amid broader scrutiny over how adequate the industry’s safeguards actually are. Reports and investigations, including our probe into the provision of crisis resources, frequently flag cases where chatbots fail vulnerable users, for example by helping them hide eating disorders or plan shootings. Google often fares better than many rivals in these tests but is not perfect. Other AI companies, including OpenAI and Anthropic, have also taken steps to improve their detection and support of vulnerable users.