Scientists warn that seeking help from AI in health matters can be dangerous

By: 600011 On: Aug 19, 2025, 1:47 PM


A 60-year-old man who asked ChatGPT what he could use in place of table salt was hospitalized with severe psychiatric symptoms. When he asked what could replace sodium chloride, common table salt, ChatGPT suggested sodium bromide. The man then consumed it, and his health deteriorated. Bromide is toxic to humans and should not be ingested. Sodium bromide, which closely resembles table salt in appearance, is used for water purification, as an anticonvulsant for animals, and in film photography.

Three doctors at the University of Washington in Seattle described the case as evidence that current AI tools are not always reliable in medical matters. ChatGPT and other AI systems can provide scientifically inaccurate information, which can ultimately lead to the spread of misinformation, say doctors Audrey Eichenberger, Stephen Thielke and Adam Van Buskirk. AI tools are valuable for bridging the gap between scientists and the public, the doctors note, but they also have the potential to generate misinformation and present information out of context. ChatGPT's maker, OpenAI, recently announced changes to the system intended to improve its accuracy on health-related questions.