Study Reveals Trust in AI Medical Advice Despite Risks

A recent study indicates that people are increasingly relying on artificial intelligence for medical advice, even when that guidance is inaccurate. Researchers at the Massachusetts Institute of Technology conducted an investigation in which 300 participants assessed medical responses generated by a physician, an online health platform, and an AI model (ChatGPT). The findings, published in the New England Journal of Medicine, reveal a concerning trend: participants rated AI-generated responses as more accurate and trustworthy than those from human practitioners.

Both medical experts and laypersons struggled to distinguish between responses written by AI and those from qualified doctors. The study took a troubling turn when participants were shown low-accuracy AI responses without being told that the information was flawed. Despite the inaccuracies, participants judged these AI-generated suggestions valid and complete, demonstrating a disturbing readiness to act on potentially harmful advice.

Researchers noted, “Participants not only found these low-accuracy AI-generated responses to be valid, trustworthy, and complete/satisfactory, but also indicated a high tendency to follow the potentially harmful medical advice and incorrectly seek unnecessary medical attention as a result of the response provided.” This finding highlights a significant risk associated with the growing reliance on AI for health-related decisions.

The implications of such misplaced trust are evident in several documented cases of individuals receiving harmful medical guidance from AI. In one instance, a 35-year-old Moroccan man sought emergency care after a chatbot erroneously advised him to wrap rubber bands around his hemorrhoid. In another alarming incident, a 60-year-old man suffered severe health complications after following a suggestion from ChatGPT to consume sodium bromide, a chemical typically used for pool sanitation, in place of table salt. He was hospitalized for three weeks with paranoia and hallucinations, as reported in a case study published in the Annals of Internal Medicine: Clinical Cases.

Dr. Darren Lebl, research service chief of spine surgery at the Hospital for Special Surgery in New York, emphasized the dangers of unverified medical advice from AI programs. He stated, “What they’re getting out of those AI programs is not necessarily a real, scientific recommendation with an actual publication behind it. About a quarter of them were made up.”

A survey conducted by Censuswide found that approximately 40 percent of respondents trust medical advice from AI chatbots such as ChatGPT. The figure underscores how readily people now turn to technology for health information, raising critical questions about the accuracy and reliability of AI-generated medical guidance.

As reliance on artificial intelligence continues to rise, clear guidelines and public education about its limitations become increasingly urgent. While AI has the potential to enhance healthcare delivery, the risks of misinformation must be addressed to prevent harm to patients.