
Technology

AI chatbots miss urgent issues in queries about women's health

AI models such as ChatGPT and Gemini fail to give adequate advice for 60 per cent of queries relating to women’s health in a test created by medical professionals

By Chris Stokel-Walker

7 January 2026

Many women are using AI for health information, but the answers aren’t always up to scratch

Oscar Wong/Getty Images

Commonly used AI models fail to accurately diagnose or offer advice for many queries relating to women’s health that require urgent attention.

A group of 17 women’s health researchers, pharmacists and clinicians from the US and Europe drew up an initial list of 345 medical queries across five areas, including emergency medicine, gynaecology and neurology. These experts then reviewed the answers provided by a randomly chosen AI model for each question. Those that led to inaccurate responses were collated into a benchmarking test of AI models’ medical expertise that included 96 queries.

This test was then used to assess 13 large language models, produced by the likes of OpenAI, Google, Anthropic, Mistral AI and xAI. Across all the models, some 60 per cent of questions were answered in a way the human experts had previously said wasn’t sufficient for medical advice. GPT-5 performed best, failing on 47 per cent of queries, while Ministral 8B had the highest failure rate of 73 per cent.

“I saw more and more women in my own circle turning to AI tools for health questions and decision support,” says Gruber, a member of the team, who works at Lumos AI, a firm that helps companies evaluate and improve their own AI models. She and her colleagues recognised the risks of relying on a technology that inherits and amplifies existing gender gaps in medical knowledge. “That is what motivated us to build a first benchmark in this field,” she says.

The rate of failure surprised Gruber. “We expected some gaps, but what stood out was the degree of variation across models,” she says.


The findings are unsurprising given the way AI models are trained, on human-generated historical data that has built-in biases, says a researcher at the University of Montreal, Canada. The findings point to “a clear need for online health sources, as well as healthcare professional societies, to update their web content with more explicit sex and gender-related evidence-based information that AI can use to more accurately support women’s health”, she says.

Chen, at Stanford University in California, says the 60 per cent failure rate touted by the researchers behind the analysis is somewhat misleading. “I wouldn’t hang on the 60 per cent number, since it was a limited and expert-designed sample,” he says. “[It] wasn’t designed to be a broad sample or representative of what patients or doctors regularly would ask.”

Chen also points out that some of the scenarios the benchmark tests for are overly conservative, with high potential failure rates. For example, if a postpartum woman complains of a headache, the benchmark counts AI models as failing if pre-eclampsia isn’t immediately suspected.

Gruber acknowledges those criticisms. “Our goal was not to claim that models are broadly unsafe, but to define a clear, clinically grounded standard for evaluation,” she says. “The benchmark is intentionally conservative and on the stricter side in how it defines failures, because in healthcare, even seemingly minor omissions can matter depending on context.”

A spokesperson for OpenAI said: “ChatGPT is designed to support, not replace, medical care. We work closely with clinicians around the world to improve our models and run ongoing evaluations to reduce harmful or misleading responses. Our latest GPT 5.2 model is our strongest yet at considering important user context such as gender. We take the accuracy of model outputs seriously and while ChatGPT can provide helpful information, users should always rely on qualified clinicians for care and treatment decisions.” The other companies whose AIs were tested did not respond to New Scientist’s request for comment.

Reference:

arXiv
