Artificial intelligence

ChatGPT does poorly at open-ended diagnosis: study

May 28, 2025


[Photo: Troy Zada, left, and Dr. Sirisha Rambhatla]

WATERLOO, Ont. – A team led by researchers at the University of Waterloo found in a simulated study that ChatGPT-4o, the well-known large language model (LLM) created by OpenAI, answered open-ended diagnostic questions incorrectly nearly two-thirds of the time.

“People should be very cautious,” said Troy Zada (pictured left), a doctoral student at the University of Waterloo. “LLMs continue to improve, but right now there is still a high risk of misinformation.”

The study used almost 100 questions from a multiple-choice medical licensing examination. The questions were reworded as open-ended prompts, resembling the way real users might describe their symptoms and concerns to ChatGPT.
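
For illustration, the sketch below shows one way such an open-ended query could be posed programmatically. It is a minimal sketch assuming the OpenAI Python SDK; the prompt is a hypothetical paraphrase of the rash case described later in this article, not the study's actual EvalPrompt material.

```python
# A minimal sketch, assuming the OpenAI Python SDK (v1.x): posing an
# open-ended symptom vignette to GPT-4o, the model evaluated in the study.
# The prompt is a hypothetical paraphrase, not the study's actual material.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "I have a rash on my wrists and hands. I work on a farm on weekends, "
    "study mortuary science, raise homing pigeons, and recently switched "
    "to a cheaper laundry detergent. What is the most likely cause of my rash?"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)

# In the study, medical students graded answers like this one for correctness.
print(response.choices[0].message.content)
```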

Medical students who assessed the responses found that just 37 percent were correct. About two-thirds of the answers, whether factually right or wrong, were also deemed unclear by expert and non-expert assessors.

One question involved a man with a rash on his wrists and hands. The man was said to work on a farm every weekend, study mortuary science, raise homing pigeons, and use a new laundry detergent to save money.

ChatGPT incorrectly said the most likely cause of the rash was a type of skin inflammation triggered by the new detergent. The correct diagnosis? The rash was caused by the latex gloves the man wore as a mortuary science student.

“It’s very important for people to be aware of the potential for LLMs to misinform,” said Zada, who was supervised on the paper by Dr. Sirisha Rambhatla (pictured right), an assistant professor of management science and engineering at Waterloo.

“The danger is that people trying to self-diagnose will get reassuring news and dismiss a serious problem or be told something is very bad when it’s really nothing to worry about.”

Although the model didn’t get any questions spectacularly or ridiculously wrong – and performed significantly better than a previous version of ChatGPT also tested by the researchers – the study concluded that LLMs just aren’t accurate enough to rely on for any medical advice yet.

“Subtle inaccuracies are especially concerning,” added Rambhatla, director of the Critical ML Lab at Waterloo. “Obvious mistakes are easy to identify, but nuances are key for accurate diagnosis.”

It is unclear how many Canadians turn to LLMs for help with a medical diagnosis, but a recent study found that one in 10 Australians have used ChatGPT to help diagnose their medical conditions.

“If you use LLMs for self-diagnosis, as we suspect people increasingly do, don’t blindly accept the results,” Zada said. “Going to a human healthcare practitioner is still ideal.”

The study team also included researchers in law and psychiatry at the University of Toronto and St. Michael’s Hospital in Toronto.

The study, “Medical Misinformation in AI-Assisted Self-Diagnosis: Development of a Method (EvalPrompt) for Analyzing Large Language Models”, appeared in JMIR Formative Research.

Source: University of Waterloo media relations
