A new study published in Nature presents a systematic analysis of the ethical landscape surrounding the use of large-scale language models in medicine and healthcare. The study found that while LLMs offer many benefits in terms of data analysis, insight-based decision support, and information accessibility, fairness, bias, and misinformation remain major concerns in the healthcare field.
In fact, the use of artificial intelligence and LLMs in healthcare has grown exponentially, especially with the rapid development of the technology over the last two years. While the launch of ChatGPT has catalyzed much of this effort, research on LLMs and the broader incorporation of AI into industry use cases has been under way for decades.
Technology experts, privacy advocates, and industry leaders have expressed concern about the rapid pace of this effort: regulators simply cannot keep up with the growth. As a result, organizations and leaders are working to develop frameworks to guide industry use cases and navigate ethical nuances. For example, the Coalition for Health AI (CHAI) aims to "develop 'guidelines and guardrails' to advance quality healthcare by facilitating the adoption of trustworthy, fair, and transparent health AI systems." Another example is the Trustworthy & Responsible AI Network (TRAIN), an initiative between Microsoft and European organizations that aims to operationalize ethical AI principles and build a network for sharing best practices on the technology. The massive investment and resources devoted to such efforts show how important the challenge has become.
The reasons for this emphasis are well founded, especially in the context of healthcare use cases. AI in healthcare holds great promise for streamlining workflows, supporting insight-driven decisions, enabling new forms of interoperability, and making more efficient use of resources and time. However, viewed on the larger timeline, work on these applications is still at a relatively early stage. When it comes to data fidelity, LLMs are generally only as effective as the datasets and algorithms used to train them, so innovators must ensure that training data and methods are of the highest quality. The data must also be relevant, up to date, unbiased, and backed by legitimate references so that the system can continue to learn as paradigms evolve and new data emerge. Even when all these criteria are met, AI systems still frequently produce hallucinations: content that is confidently asserted as true but is in fact inaccurate. For end users who lack a more reliable source of information, such hallucinations may prove harmful, a risk of particular concern in healthcare.
Therefore, increased attention to ethical AI and the development of AI guidelines are crucial to nurturing this transformative technology, and will ultimately be paramount to unlocking the potential and value of AI in a safe and sustainable manner.