Large companies are moving rapidly to harness the promise of artificial intelligence in healthcare, while doctors and experts are trying to safely integrate the technology into patient care.
“Healthcare is probably the most impactful utility of generative AI that will exist,” said Kimberly Powell, vice president of healthcare at AI hardware giant Nvidia (NVDA), at the company’s AI Summit in June. Nvidia has partnered with Roche’s Genentech (RHHBY) to improve drug discovery, among other investments in healthcare companies.
Other tech names such as Amazon (AMZN), Google (GOOG, GOOGL), Intel (INTC) and Microsoft (MSFT) have also highlighted the potential of AI in healthcare and entered into partnerships aimed at improving AI models.
Growing demand for more efficient healthcare operations has pushed tech companies to develop AI applications that help with everything from scheduling appointments and developing drugs to billing and interpreting scans.
According to Precedence Research, the overall healthcare AI market is expected to reach $188 billion by 2030, up from $11 billion in 2021. The clinical software market alone is expected to grow by $2.76 billion between 2023 and 2028, according to Technavio.
Practitioners, for their part, are preparing for a potential technological revolution.
Sneha Jain, a senior research scientist in Stanford University’s division of cardiovascular medicine, said AI has the potential to become as integrated into the healthcare system as the internet, while stressing the importance of using the technology responsibly.
“People are going to err on the side of caution because the oath of doctors and healthcare providers is ‘first, do no harm,’” Jain told Yahoo Finance. “So how do we make sure that we ‘first, do no harm’ while actually advancing the way AI is used in healthcare?”
Potential patients appear wary: A recent Deloitte Consumer Health Care survey found that 30% of respondents said they “don’t trust the information” provided by generative AI for healthcare, up from 23% a year ago.
The fear: “Garbage in, garbage out”
Patients appear to have reason to doubt AI’s current capabilities.
A study published in May evaluating large multimodal models (LMMs), which interpret media such as images and videos, found that models like OpenAI’s GPT-4V performed worse than random guessing when asked medical diagnosis questions.
“These results highlight the urgent need for robust evaluation methodologies to ensure the reliability and accuracy of LMMs in medical imaging,” the study authors wrote.
Dr. Safwan Halabi, vice chair of imaging informatics at Lurie Children’s Hospital in Chicago, highlighted the trust issues surrounding these models, comparing the deployment of untested AI in healthcare to putting self-driving cars on the road without proper driving tests.
Halabi was particularly concerned about bias. If a model were trained on health data from Caucasians in Northern California, he said, it might not be able to provide appropriate and accurate care to African Americans on Chicago’s South Side.
“Data is only as good as its source, so you have to worry about what’s going on at the front end and what’s going on at the back end,” Halabi told Yahoo Finance. He added that “good medicine is slow medicine” and stressed the importance of safety testing before putting technology into practice.
But others, like Dr. John Halamka, president of the Mayo Clinic Platform technology initiative, have highlighted AI’s potential to leverage the expertise of millions of doctors.
“Wouldn’t you like to see the experience of millions and millions of patient journeys captured in a predictive algorithm so that a clinician could say, ‘Well, yes, I have my training and experience, but can I leverage patients from the past to help me treat patients of the future in the best possible way?’” Halamka told Yahoo Finance.
Creating safeguards
Following an executive order signed by President Biden last year to develop policies that would advance the technology while managing risks, Halamka and other researchers have advocated for standardized, agreed-upon principles for evaluating AI models for bias, among other concerns.
“AI, whether predictive or generative, is not magic, it’s mathematics,” Halamka said. “We recognize the need to build a public-private collaboration, bringing together government, academia and industry to create guardrails and guidelines so that anyone dealing with healthcare and AI has a way to measure: Is it right? Is it appropriate? Is it valid? Is it effective? Is it safe?”
They also stressed the need to create a national network of assurance labs, spaces to test the validity and ethics of AI models. Jain is at the center of these discussions, as she is launching such a lab at Stanford.
“The regulatory environment, the cultural environment and the technical environment are conducive to this kind of progress in how we think about AI assurance and safety, so it’s an exciting time,” she said.
However, training these models also poses privacy concerns, given the sensitive nature of a patient’s medical data, including their full legal name and date of birth.
Until rules around the technology are formalized, Lurie Children’s Hospital has instituted its own regulations regarding the use of AI in its practice and has been careful not to disclose patient information online.
“My prediction is [that AI] is here to stay, and we’re going to see it more and more often,” Halabi said. “But we won’t really notice it, because it’s going to happen in the background or as part of the entire care process, without being explicitly announced or disclosed.”