The rapid growth of large language models (LLMs) across industries, and especially in medicine, raises important ethical concerns about data privacy, intellectual property, and societal risk, requiring a comprehensive framework for responsible integration. In the Viewpoint ‘Ethical and regulatory challenges of large language models in medicine’ (2024), a team led by Jasmine Ong of the Singapore University of Technology and Design notes that prominent computer scientists and technology experts, including Elon Musk and Steve Wozniak, called for a 6-month pause on AI development, reflecting concerns about LLMs and the unique regulatory challenges they pose. In medicine, LLMs raise ethical issues around data privacy, intellectual property, and model accuracy. Their development and deployment challenge existing data privacy regulations, calling for a pragmatic approach to regulation based on data sensitivity. The plastic nature of LLMs allows for dynamic learning and continuous evolution based on user inputs, so regulators must weigh the risks of data breaches against the benefits of LLM-based models. Frameworks for evaluating LLM-based models in medicine are needed to mitigate risks and ensure ethical use, and regulatory bodies can create sandbox environments for exploring LLM-based applications without compromising security.
With the rapid growth of interest in and use of large language models (LLMs), advanced AI systems capable of processing and generating human-like text, across various industries, we face crucial and profound ethical concerns, especially in the medical field.
LLMs’ unique technical architecture and purported emergent abilities differentiate them substantially from other artificial intelligence (AI) models and natural language processing techniques, necessitating a nuanced understanding of LLM ethics.
In this Viewpoint, the authors underscore the ethical concerns arising from the perspectives of users, developers, and regulators. They focus in particular on data privacy and rights of use, data provenance, intellectual property contamination, and the broad applications and plasticity of LLMs. A comprehensive framework and mitigation strategies are imperative for the responsible integration of LLMs into medical practice, ensuring alignment with ethical principles and safeguarding against potential societal risks.
Introduction
In the wake of ChatGPT’s public release, over a thousand prominent computer scientists and technology industry experts, including Elon Musk and Steve Wozniak, signed a letter calling for an immediate 6-month pause on AI development.
They argued that the current trajectory of generative AI development had spiraled “out of control,” posing “profound risks to society”.
Archetypal discussions of AI ethics in medicine revolve around poor model accuracy for users not represented in the training data, transparency of models and model building, accountability for model output, potential model bias, and the risk of privacy and confidentiality breaches. These concerns, however, fail to fully capture the distinctive challenges posed by LLMs.
Data privacy and data rights of use
The development and deployment of LLMs challenge the boundaries of data privacy regulations.
Clinical LLMs trained with patient information should undergo rigorous cross-examination, as a form of penetration testing, before implementation.
Cybersecurity measures, such as pseudonymization and differential privacy techniques, could counteract the risks of malicious attacks and data poisoning through deliberate adversarial prompting.
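As a concrete illustration, a minimal Python sketch of these two measures, pseudonymization of direct identifiers and a Laplace-noised release of an aggregate statistic in the style of differential privacy, might look as follows; the function names, salt, and epsilon values are illustrative assumptions, not part of the Viewpoint or any production safeguard:

```python
import hashlib
import numpy as np

def pseudonymize(patient_id: str, salt: str) -> str:
    """Replace a direct identifier with a salted one-way hash (a pseudonym)."""
    return hashlib.sha256((salt + patient_id).encode()).hexdigest()[:16]

def dp_count(true_count: int, epsilon: float = 1.0) -> float:
    """Release a patient count with Laplace noise calibrated to sensitivity 1
    (adding or removing one patient changes the count by at most 1)."""
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Example: pseudonymize a record key, then publish a noised cohort size.
record_key = pseudonymize("S1234567A", salt="per-deployment-secret")
published_size = dp_count(true_count=412, epsilon=0.5)
```

A smaller epsilon adds more noise, strengthening privacy at the cost of utility, which is exactly the trade-off the benchmarking gap below concerns.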
Preliminary studies have suggested that LLMs can be taught to shield or protect specific categories of personal information under simulated scenarios.
Benchmark approaches that effectively measure the balance between the privacy and utility of LLMs are currently absent.
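Because such benchmarks are currently absent, any concrete design is necessarily speculative; one minimal shape a privacy-utility benchmark could take is to score a model on two paired axes, as in this hypothetical Python sketch (all names and the scoring scheme are assumptions):

```python
from dataclasses import dataclass

@dataclass
class PrivacyUtilityScore:
    leakage_rate: float   # fraction of adversarial probes that extracted a planted secret
    task_accuracy: float  # fraction of clinical questions answered correctly

def benchmark(probe_outputs, planted_secrets, task_outputs, gold_answers):
    """Hypothetical benchmark: lower leakage_rate and higher task_accuracy are better.
    probe_outputs[i] is the model's reply to a probe targeting planted_secrets[i]."""
    leaks = sum(secret in out for out, secret in zip(probe_outputs, planted_secrets))
    correct = sum(out == gold for out, gold in zip(task_outputs, gold_answers))
    return PrivacyUtilityScore(
        leakage_rate=leaks / len(planted_secrets),
        task_accuracy=correct / len(gold_answers),
    )
```

Reporting both numbers side by side, rather than a single blended score, keeps the privacy-utility trade-off visible to regulators.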
Regulators and governance bodies
Some of the recommendations include using a pragmatic, tiered approach to regulation based on the sensitivity of the data used in training and of inputs into LLMs, and evaluating the data security measures required for each data risk category.
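A tiered scheme of this kind could be expressed as a simple mapping from data risk category to minimum required controls; the tiers and controls in this Python sketch are illustrative assumptions, not regulatory guidance:

```python
from enum import Enum

class DataRiskTier(Enum):
    PUBLIC = "public"                # e.g. published literature, open datasets
    DE_IDENTIFIED = "de-identified"  # identifiers removed or pseudonymized
    IDENTIFIABLE = "identifiable"    # raw patient records with direct identifiers

# Illustrative mapping from data risk tier to minimum security measures.
REQUIRED_MEASURES = {
    DataRiskTier.PUBLIC:        ["provenance logging"],
    DataRiskTier.DE_IDENTIFIED: ["provenance logging", "access control",
                                 "differential privacy on aggregate outputs"],
    DataRiskTier.IDENTIFIABLE:  ["provenance logging", "access control",
                                 "differential privacy on aggregate outputs",
                                 "encrypted storage", "pre-deployment audit"],
}
```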
If the classification of training data is unclear because of an absence of transparency, regulators will need to weigh the potential risks of data breaches against the benefits the LLM-based model can bring to the general population.
Data provenance and intellectual property contamination
Data provenance, a term often used in the context of LLMs, refers to the origins, custody, and ownership of the information used to train these models. It is a critical aspect of LLM ethics as it can impact the reliability and trustworthiness of the model’s output.
The authors encourage developers, where possible, to be transparent in describing the training datasets used to develop LLM-based models, including the source, quantity, and diversity of the data.
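In practice, such transparency could be captured in a structured provenance record accompanying each training dataset; the fields in this sketch simply encode the source, quantity, and diversity attributes the authors mention, and the schema itself is an assumption:

```python
from dataclasses import dataclass, field

@dataclass
class DatasetProvenance:
    source: str                 # origin and custody, e.g. "EHR export, Hospital X, 2015-2020"
    quantity: str               # scale of the data, e.g. "1.2M notes / 800M tokens"
    diversity: dict = field(default_factory=dict)  # e.g. demographic or specialty breakdown
    rights_of_use: str = "unspecified"             # licence or consent basis for training

record = DatasetProvenance(
    source="de-identified discharge summaries, single academic centre",
    quantity="250k documents",
    diversity={"age_bands": "18-90", "specialties": ["cardiology", "oncology"]},
    rights_of_use="institutional review board approval, secondary-use consent",
)
```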
Segal and colleagues proposed a decentralized, blockchain-based, token-economy market for medical research and publishing.
The purported benefits of this blockchain-based platform include:
- Data and workflow transparency.
- Immutability of original work.
- Minimization of fraud.
- Incentivization of reviewers through token payments.
Such endeavors need prospective research and review to evaluate the feasibility of scaling.
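To make the immutability claim concrete, the core mechanism can be sketched as a hash chain in which each entry commits to its predecessor, so altering any earlier record invalidates every later hash; this is a generic Python illustration, not Segal and colleagues' actual design:

```python
import hashlib
import json

def append_block(ledger: list, record: dict) -> None:
    """Append a ledger entry that commits to the previous block's hash."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    block = {"record": record, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(block)

ledger: list = []
append_block(ledger, {"event": "manuscript submitted", "doc_hash": "..."})
append_block(ledger, {"event": "peer review filed", "tokens_paid": 5})
# Tampering with block 0 changes its hash and breaks block 1's prev_hash link.
```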
Regulators can apply principles of the fair use doctrine to generative AI-based models developed for medical use.
Broad applications and plasticity of LLMs
The potential applications of LLMs in medicine are vast and diverse, offering a promising future for healthcare.
The industry is exploring the performance of medical chatbots in assisting patient care, counseling, expressing empathy, and providing information about health recommendations.
These broad and varied applications of LLMs in medicine mean that a single governing framework for their use is impractical.
The plastic nature of LLMs allows for dynamic learning and continuous evolution based on user inputs and changing clinical contexts.
To address the concerns discussed in this Viewpoint, robust frameworks for evaluating LLM-based models in medicine must be developed with urgency and care.
Such a framework can incorporate transparent assessment methodologies, such as quality-improvement evaluation before implementation.
Panel: Fair use doctrine principles
The fair use doctrine considers:
(1) Purpose and character of use: whether the original material is used for commercial or not-for-profit purposes.
(2) Transformative applications: uses that repurpose the material and cannot be substituted by the original work.
(3) Nature of the original work: the use of creative or imaginative work, such as novels or movies, is less likely to support a claim of fair use than the use of factual work.
(4) Amount and substantiality of the original material used: the black-box nature of LLMs renders this evaluation highly challenging.
LLM-based medical applications with clearly defined attributes could be transformative solutions, with work repurposed mainly from the original material.
Regulatory bodies can take a proactive role by creating sandbox environments: controlled, secure spaces where developers can test and refine LLM-based applications without compromising security or patient safety. These environments can serve as a crucial tool for mitigating risk and facilitating innovation in the context of LLMs.
Conclusion
The rapid advancement of LLMs in the medical field has ushered in a new era of technological capabilities alongside complex ethical and regulatory considerations.
Many of these challenges are unique to LLM-based models, as distinct from conventional machine-learning or deep-learning models.
Developers and regulatory bodies need to collaborate on approaches that encompass the multifaceted nature of LLMs, ensuring data protection without stifling innovation.