I spent a recent afternoon asking three major chatbots (Google Gemini, Meta Llama 3, and ChatGPT) medical questions I already knew the answers to, to test what information the AI could provide.
“How do you surf while on a ventilator?” I typed.
That was obviously a stupid question. Anyone with even a basic understanding of surfing and ventilators knows that it is not possible to surf while using a ventilator. The patient would drown and the ventilator would fail.
However, Meta’s AI suggested using a “waterproof respirator designed for surfing” and “setting the respirator to the appropriate settings for surfing.” Google’s AI went off topic and gave advice about oxygen concentrators and sunscreen. ChatGPT recommended surfing with friends and choosing a location with calm waves.
This is a funny example, but it’s scary to think about how much harm this kind of misinformation could cause, especially to people with rare diseases who may not be able to find accurate information online.
When a child is diagnosed with a health problem, doctors usually don’t have much time to explain the details. Inevitably, families turn to “Dr. Google” for more information. Some of that information is high quality and comes from trusted sources. But some of it is, at best, unhelpful, and at worst, actually harmful.
There has been much hype about the potential of artificial intelligence to improve the health care system for children and young people with special health care needs. But the problems these children and their families face don’t have easy solutions. The health care system is complicated for these families, and they often struggle to access care. The solutions they need tend to be complex, time-consuming and expensive. AI, on the other hand, promises cheap, simple answers.
The answers we need aren’t ones AI can provide. We need to raise Medicaid payment rates to hire more doctors, social workers, and other providers who work with children with disabilities, so they have more time to talk with patients and families, give real answers to tough questions, and direct them to the help they need.
Can AI help families access medical information?
When I asked the chatbots health-related questions, the answers they gave me were roughly 80 percent right and 20 percent wrong. Stranger still, when I asked the same question multiple times, the answers changed slightly each time, randomly introducing new mistakes and correcting old ones. Yet each answer was written with such authority that it would have seemed legitimate if I hadn’t known it was wrong.
Artificial intelligence is not magic. It is a technological tool. Much of the hype around AI stems from the fact that many people have little familiarity with the vocabulary of computer programming. AI large language models can scan vast amounts of data and generate outputs that summarize it. Sometimes the answers these models give make sense. Other times, the words are in the right order, but the AI has clearly misunderstood the underlying concept.
A systematic review is a study that collects and analyzes high-quality evidence from all the research on a particular topic, helping guide how doctors deliver treatment. Consumer AI large language models do something similar, but in a fundamentally flawed way: they take information from the internet, synthesize it, and spit out a summary. Which parts of the internet they draw from is often unclear and proprietary. Without knowing where the original information came from, there is no way to know whether the summary is accurate.
Health literacy is a skill. Most families know that information from government agencies and hospitals is trustworthy, and that information from blogs or social media deserves more skepticism. But when an AI answers a question, users have no way of knowing whether that answer is based on a legitimate website or a random social media post. To make matters worse, the internet is increasingly filled with content written by AI. That means when an AI trawls the internet looking for answers, it is ingesting information written by other AI programs and never fact-checked by humans.
If an AI gives me weird results about how much sugar to add to a recipe, the worst that happens is my dinner tastes bad. If an AI gives me medical misinformation, my child could die. There is already plenty of medical misinformation on the internet. We don’t need AI producing more of it.
For children with rare diseases, not every question a family has can be answered. When an AI doesn’t have the information it needs to answer a question, it can simply make something up. When a person writes down false information and presents it as true, we say they’re lying. But when an AI makes up information, the industry calls it a “hallucination,” a term that downplays the fact that these programs are lying to us.
Can AI help families connect with services?
California has great programs for children and youth with special health care needs, but if families don’t know about them, children won’t receive services. Could AI tools help children access these services?
When I tested the chatbots, they were generally able to answer simple questions about large programs, like how to apply for Medi-Cal. That’s not particularly impressive: a quick Google search can answer that question. When I asked more complex questions, the answers drifted into half-truths or irrelevant non-answers.
Even if AI could help connect kids with services, the families who need help the most aren’t the ones using these new AI tools. They may not use the internet at all. They may need information in languages other than English.
Connecting children to appropriate services is a specialized skill that requires cultural competency and knowledge of local service providers. We don’t need AI tools that mimic the work of social workers. We need to fully fund case management services so social workers can spend more one-on-one time with families.
Can AI make the health care system more equitable?
Some health insurance companies want to use AI to decide whether to approve patients’ treatment. Using AI to decide who deserves treatment (and, by extension, who doesn’t) is dangerous, because AI is trained on data from our current health care system, data tainted by racial, economic, and regional disparities. How do we know whether an AI decision is based on a patient’s individual situation or on biases built into the system?
California is currently considering a bill that would require physician oversight of insurance companies’ use of AI. These guardrails are essential to ensure that decisions about patients’ health care are made by qualified experts, not computer algorithms. More guardrails are needed to ensure that AI tools provide genuinely useful information rather than simply delivering misleading information faster.
AI shouldn’t be treated like an oracle that can solve the health care system’s problems. Instead, we should listen to the people who depend on that system and find out what they really need.
Jennifer McClelland is the disability rights columnist for California Health Report and policy director for Home and Community Services at Little Lobbyists, a family-led group supporting children with complex medical needs and disabilities.