Software designers are putting AI-powered chatbots through professional testing. The bots can offer medical advice and diagnose medical conditions, but their accuracy remains questionable.
This spring, Google rolled out an “AI Overview” feature that surfaces answers from the company’s chatbot above regular search results, including for health-related questions. While it might seem like a good idea in theory, the health advice the software offers has been a source of problems.
During the bot’s first week online, one user reported that Google’s AI got it wrong, surfacing potentially fatal information on what to do after a rattlesnake bite. Another search turned up Google advice recommending “eating one or more pebbles a day” for vitamins and minerals, guidance taken from a satirical article.
Google has since said it has limited the inclusion of satire and humor sites in its summaries and removed some of the trending search results.
“The majority of our AI Overviews provide high-quality information with links to dig deeper on the web,” a Google spokesperson told CBS News. “We’ve always had strong quality and safety guardrails in place, including disclaimers for health-related searches reminding people that it’s important to seek expert advice. We’ve continually improved when and how we surface AI Overviews to ensure the information is high-quality and trustworthy.”
CBS News Confirmed found that these improvements didn’t prevent all health misinformation: In late June, a question about feeding solid foods to babies under 6 months of age still returned a tip on how to do so, even though the American Academy of Pediatrics says babies shouldn’t start solid foods until they’re at least 6 months old.
The summaries also surfaced debunked claims about detoxing and drinking raw milk.
Despite the quirks and apparent errors, many healthcare leaders say they remain optimistic about AI chatbots and how they will change the industry.
“People are going to have access to the information they need,” said Dr. Nigam Shah, chief data scientist at Stanford Health Care. “In the short term, I’m a little pessimistic. I think we’re getting a little ahead of ourselves. But in the long term, I think these technologies are going to benefit us a lot.”
Chatbot advocates are quick to point out that doctors’ diagnoses aren’t always correct: Estimates vary, but a 2022 study by the Department of Health and Human Services found that up to 2% of patients who visit emergency departments each year could be harmed by a medical provider’s misdiagnosis.
Shah compared the use of chatbots to the early days of Google itself.
“When Google search came out, there was this panic that people would self-diagnose and go crazy, but that didn’t happen,” Shah said. “It’s the same thing. We’re going to go through phases where new search tools that aren’t fully formed yet will make mistakes, and some of them will be bad, but overall it’s good to have information when you have no other choice.”
The World Health Organization is one organization dipping its toes into the world of AI. Its chatbot, Sarah, pulls information from the WHO site and trusted partners to make its answers less likely to be factually incorrect. When asked how to reduce the risk of a heart attack, Sarah offered tips on managing stress, getting enough sleep, and focusing on a healthy lifestyle.
With ongoing advances in design and monitoring, such bots are likely to continue to improve.
But if you turn to an AI chatbot for health advice today, be aware of the warning that comes with Google’s version: “quality of information may vary.”