Special

Do not rely on AI chatbots for drug information

According to a paper published in the journal BMJ Quality & Safety, AI chatbot responses can be complex and difficult to understand, and may require degree-level education to read.

Jaipur | Oct 13, 2024 / 07:15 pm

Patrika Desk

Artificial Intelligence (AI)-driven search engines and chatbots cannot always provide accurate and safe information on medicines, and patients should not rely on them, a study warned on Friday. Researchers from Belgium and Germany found that many answers were incorrect or potentially harmful.
The paper, published in the journal BMJ Quality & Safety, stated that AI chatbot responses can be complex and difficult to understand, and may require degree-level education to read.

In 2023, search engines underwent a significant transformation with the introduction of AI-driven chatbots. New versions offered advanced search results, comprehensive answers, and a new type of interactive experience. The team from Friedrich-Alexander-Universität Erlangen-Nürnberg said that chatbots, trained on vast datasets drawn from across the internet, can answer any health-related question, but they are also capable of generating incorrect information and harmful or useless content.
“In this cross-sectional study, we found that AI-driven chatbots provide comprehensive and accurate answers to patients’ questions,” they wrote.

“However, the chatbot responses were quite difficult to read, and the answers often lacked information or contained errors, which could pose a risk to patient and medication safety,” they added.
For the study, the researchers analysed the readability, completeness, and accuracy of chatbot responses to questions about the 50 most frequently prescribed medications in the USA in 2020, using Bing Copilot, a search engine with AI-powered chatbot features.
Only half of the 10 questions were answered with the highest completeness. Moreover, chatbot statements did not match the reference data in 26% of answers, and were fully inconsistent in 3% of cases.

Almost 42% of chatbot responses were found to be moderately or severely harmful, and 22% were potentially life-threatening or likely to cause serious harm.
The team noted that a major flaw was the chatbot’s inability to understand the underlying intent behind the patient’s question.

Researchers said, “Despite their capabilities, it is still important for patients to consult their healthcare professionals, as chatbots cannot always generate error-free information.”

