AI Gets It Mostly Right on Allergy Myths

— ChatGPT sometimes gave opposite answers, and clinicians themselves didn't always agree

ANAHEIM, Calif. -- ChatGPT did a fairly good job of sorting through notable allergy myths, but the AI software sometimes gave conflicting answers, and allergists themselves didn't always agree on how accurate its verdicts were.

Researchers asked ChatGPT whether 10 supposed allergy myths were true or false, then asked 24 allergists to evaluate ChatGPT's responses.

Overall, ChatGPT produced an accurate response in 70% of cases and a somewhat accurate response 21% of the time, reported Ricardo Estrada-Mendizabal, MD, of the Tecnologico de Monterrey in Mexico, at the American College of Allergy, Asthma & Immunology (ACAAI) annual meeting.

It's important to understand how ChatGPT answers questions, noted Matthew DeCamp, MD, PhD, of the University of Colorado Anschutz Medical Campus in Aurora, who wasn't involved in the study.

"ChatGPT doesn't necessarily analyze or even reflect the real evidence base behind any queries. It's a predictive large language model, not a systematic review or meta-analysis tool," DeCamp told MedPage Today. "It can be tempting to think of it in those ways because of how it answers. But that's not how it works."

It's important to keep in mind both the evidence base underlying AI responses and the actual risks posed by wrong answers, DeCamp added.

"In this study, I'm more concerned about the myth regarding seafood allergy and contrast dye than I am about hypoallergenic animals, for example," he said. "Likewise, if the evidence base behind a particular topic is shaky, I don't think we can expect ChatGPT to be accurate."

In his presentation, Estrada-Mendizabal noted that "some pioneering studies have evaluated [ChatGPT's] accuracy for medical questions, showing a mixed universe of results."

Recent reports have shown ChatGPT bungling questions about retinal disease, creating a fake ophthalmic dataset when prompted, and being "spectacularly and surprisingly wrong" when used as a clinical decision-support tool.

To evaluate ChatGPT's potential use in allergy education, Estrada-Mendizabal and colleagues identified 10 supposed allergy myths:

  • Hypoallergenic animals are better for people with asthma and allergies
  • Using antihistamines and/or steroids at the first sign of an allergic reaction can prevent food anaphylaxis
  • Delaying the introduction of allergenic foods prevents the development of food allergies
  • Egg allergy is a contraindication for the influenza vaccine
  • A positive allergy test means a clinical allergy
  • Immunoglobulin E (IgE)-based allergy tests are useful for identifying food triggers in childhood eczema
  • Allergy to seafood is a contraindication for receiving contrast dye
  • You cannot have an allergic reaction on the first known oral exposure to a food allergen
  • Breastfeeding is key in preventing food allergies
  • Oral immunotherapy is a cure for food allergies

Then the researchers twice asked ChatGPT whether each myth was true or false. Most of the 24 allergists who assessed ChatGPT's responses were from Mexico (88%); two were from the U.S. and one was from Argentina.
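The abstract does not describe the technical setup behind the queries, but for readers curious how such repeated true-or-false prompts could be posed programmatically, a minimal sketch follows. It assumes the OpenAI Python client and the gpt-3.5-turbo model; the prompt wording, model choice, and use of the API rather than the ChatGPT web interface are illustrative assumptions, not details from the study.

```python
# Hypothetical sketch only: the study does not specify how ChatGPT was queried.
# Assumes the OpenAI Python client ("pip install openai") and an OPENAI_API_KEY
# environment variable; the model and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()

MYTHS = [
    "Hypoallergenic animals are better for people with asthma and allergies",
    "Allergy to seafood is a contraindication for receiving contrast dye",
    # ...the remaining statements from the list above
]

def ask_true_or_false(statement: str) -> str:
    """Ask the model whether a statement is true or false and return its reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": f"True or false: {statement}? Answer 'true' or 'false', then explain briefly.",
        }],
    )
    return response.choices[0].message.content

# Asking each question more than once, as the researchers did, is what surfaces
# the kind of inconsistent true/false answers described below.
for myth in MYTHS:
    answers = [ask_true_or_false(myth) for _ in range(2)]
    print(myth, *answers, sep="\n---\n")
```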

In three cases -- iodine allergy is not a contraindication for contrast media, breastfeeding is key in preventing food allergies, and oral immunotherapy is a cure for food allergies -- ChatGPT contradicted itself, calling the statement true in one query and false in the other. Investigators tried a third time and got answers of false, true, and false, respectively.

ChatGPT's response to whether a positive allergy test means a clinical allergy was deemed the most accurate, with 96% of allergists rating the answer accurate. Its response on whether hypoallergenic animals are better for people with asthma and allergies was rated the least accurate, with 17% of allergists calling it inaccurate and another 21% calling it somewhat inaccurate.

Only one response -- about using antihistamines and/or steroids at the first sign of an allergic reaction to prevent food anaphylaxis -- was deemed accurate or somewhat accurate by all 24 allergists.

ChatGPT said the IgE-based allergy tests myth was true, and 58% of the allergists said this was accurate. ChatGPT also said breastfeeding can play a role in preventing food allergies, and 83% of the allergists deemed this accurate.

The 24 allergists were equally divided between those with 10 years or less of clinical practice and those with more, and about half (54%) said they had used natural language models like ChatGPT in the past.

Nearly half (46%) of the allergists said they were very likely to use AI technology like ChatGPT for patient education, and 16% said they were not likely to do so.

The study had limitations, Estrada-Mendizabal acknowledged. The sample was skewed because the surveyed allergists were predominantly from Mexico, he said. In addition, the version of ChatGPT used had a knowledge cutoff of September 2021.

Randy Dotinga is a freelance medical and science journalist based in San Diego.

Disclosures

No study funding information or author disclosures were provided.

DeCamp reported grant funding to his institution from the NIH to examine the use of AI-based prognostic algorithms in palliative care, and from the Greenwall Foundation to examine how patients experience patient-facing chatbots in health systems.

Primary Source

American College of Allergy, Asthma & Immunology

Estrada-Mendizabal RJ "Dear ChatGPT, should I get a hypoallergenic dog?: Accuracy of ChatGPT for common allergy myths" ACAAI 2023; Abstract 8197.