Unfortunately, a recent study has revealed serious obstacles that chatbot users encounter when looking for health guidance. The study found wide disparities in the quality of information, and users often cannot tell the difference between sound and misleading recommendations. These findings call into question the ability of these digital tools to meet users’ health needs.
The study highlights the fact that users frequently receive answers from chatbots that combine helpful suggestions with harmful recommendations.
Mahdi, a key researcher involved in the study, elaborated on the findings: “The responses they received [from the chatbots] frequently combined good and poor recommendations.” Such variation can quickly make users doubt the reliability of their health-related choices, especially when they are seeking immediate medical guidance.
The study also identifies an important underlying issue: existing evaluation methods for chatbots simply aren’t enough. For example, they miss the nuances of how people actually engage with these systems. Mahdi noted, “Current evaluation methods for [chatbots] do not reflect the complexity of interacting with human users.” This gap highlights a pressing need for more refined approaches to assessing how effective these technologies are in real-world scenarios.
Considering these challenges, the researchers encourage taking a step back before relying on chatbot recommendations in health care settings. Mahdi commented on this point, stating, “We would recommend relying on trusted sources of information for health care decisions.” The message is a reminder that health information should always be validated through trusted medical resources, not just an automated tool.
Given the rapid pace at which these technologies are emerging, the research urges stronger standards for chatbot testing and deployment. Ensuring these tools deliver accurate and meaningful responses will be critical to their successful adoption in health care.