Study: ChatGPT fails to answer 75% of medication-related inquiries


A recent study revealed that ChatGPT provided inaccurate answers to around 75% of medication-related questions, potentially posing risks to patients.

The study found that the widely used AI chatbot ChatGPT failed to provide correct answers to nearly 75% of questions about medication use. The researchers warned that some of the responses could harm patients if followed.

Pharmacists at Long Island University in the United States posed 39 drug-related questions to ChatGPT and found that only 10 of the responses could be considered accurate by medical standards.

The answers to the remaining questions were either inaccurate or incomplete. In light of these findings, healthcare professionals and patients should exercise caution when using ChatGPT as a source of medical information.
