Google is facing criticism for failing to display health disclaimers prominently alongside its AI-generated medical advice. Concerns have emerged that the company’s AI Overviews, the summaries shown above search results, may mislead users about the reliability of that information. While Google asserts that the overviews encourage people to seek professional medical advice, critics argue that the warnings are not sufficiently visible.
When users query health-related topics, Google’s AI Overviews suggest seeking expert advice, but the accompanying disclaimers are not displayed prominently. According to a report by the Guardian, warnings appear only after users click a button labeled “Show more” to expand the response. Even then, the disclaimers are displayed in a smaller, lighter font at the bottom of the expanded content. The disclaimer reads: “This is for informational purposes only. For medical advice or a diagnosis, consult a professional. AI responses may include mistakes.”
Despite acknowledging that disclaimers do not initially appear with medical advice, Google maintains that AI Overviews frequently emphasize the need for professional consultation within the summaries. A spokesperson stated, “AI Overviews encourage people to seek professional medical advice, and mention seeking medical attention directly when appropriate.”
Concerns voiced by AI experts and patient advocates highlight the potential dangers of this approach. Pat Pataranutaporn, an assistant professor at the Massachusetts Institute of Technology (MIT), noted that the lack of visible disclaimers creates critical risks. He explained, “Even the most advanced AI models today still hallucinate misinformation or exhibit sycophantic behaviour, prioritising user satisfaction over accuracy. This can be genuinely dangerous in healthcare contexts.”
Gina Neff, a professor of responsible AI at Queen Mary University of London, emphasized that the design of AI Overviews prioritizes speed over accuracy, leading to potentially harmful mistakes in health information. She pointed to the Guardian’s investigation, which revealed the risk posed by misleading health information in these AI Overviews. Neff stated: “Google makes people click through before they find any disclaimer. People reading quickly may think the information they get from AI Overviews is better than what it is, but we know it can make serious mistakes.”
Following these findings, Google temporarily removed AI Overviews for some medical searches. Sonali Sharma, a researcher at Stanford University’s Center for Artificial Intelligence in Medicine and Imaging (AIMI), highlighted how the information is presented. She remarked: “These Google AI Overviews appear at the very top of the search page and often provide what feels like a complete answer to a user’s question. For many people, this single summary creates a sense of reassurance that discourages further searching.”
Sharma also pointed out that AI Overviews often contain a mix of accurate and inaccurate information, making it challenging for users to discern what is reliable unless they are already knowledgeable about the topic.
A Google spokesperson reiterated that AI Overviews do encourage seeking professional medical advice and that disclaimers are included in the content. Nevertheless, Tom Bishop, head of patient information at Anthony Nolan, a blood cancer charity, called for immediate changes. He stated, “We know misinformation is a real problem, but when it comes to health misinformation, it’s potentially really dangerous. That disclaimer needs to be much more prominent to make people step back and think.”
Bishop advocated for the disclaimer to be displayed at the top of the overview in a font size similar to the main content, rather than in a smaller font that is easy to overlook. This emphasis on visibility could help users recognize the importance of consulting their medical team before acting on AI-generated information.
