
WHO Releases Guidance on the Ethical Use of LMMs in Healthcare AI

Written by: Jayati Dubey

January 22, 2024


The guidance recognises potential risks of LMMs, such as generating false or biased information, with potential implications for health decisions.

The World Health Organization (WHO) has issued guidance addressing the ethical and governance challenges associated with Large Multi-Modal Models (LMMs), a rapidly advancing form of generative artificial intelligence (AI) technology.

Given LMMs' wide-ranging applications across the healthcare sector, the guidance provides over 40 recommendations for governments, technology companies, and healthcare providers to ensure the responsible and beneficial deployment of these models.

Understanding Large Multi-Modal Models (LMMs)

Large Multi-Modal Models, a type of generative AI, have gained unprecedented popularity for their ability to accept various data inputs, such as text, videos, and images, and generate diverse outputs.

Notable platforms, including ChatGPT, Bard, and Bert, surged into public consciousness in 2023. LMMs mimic human communication and can perform tasks for which they were not explicitly programmed.

Applications of LMMs in Healthcare

The WHO guidance delineates various applications of LMMs in healthcare. These include aiding in diagnosis and clinical care by responding to patients' written queries and providing diagnostic support. LMMs also contribute to patient-guided use, assisting individuals in investigating symptoms and comprehending treatment options.

Moreover, they are crucial in handling clerical and administrative tasks, such as documenting and summarising patient visits within electronic health records.

Additionally, LMMs extend to medical and nursing education, offering trainees simulated patient encounters, and they support scientific research and drug development, for example by helping to identify new compounds.

While LMMs present opportunities for transformative improvements in healthcare, the guidance also acknowledges associated risks, including the potential to generate false, inaccurate, biased, or incomplete information that could influence health decisions.

Potential Risks & Considerations

LMMs in healthcare raise concerns about quality and bias: they may be trained on subpar or biased data, which can compromise the accuracy and fairness of their outputs. Ensuring the integrity of training data is therefore pivotal to mitigating these risks.

Apart from quality concerns, issues regarding the accessibility and affordability of the best-performing LMMs may pose hurdles for health systems. Overcoming these challenges is crucial to ensure that the benefits of LMMs are widely accessible and do not exacerbate existing healthcare disparities.

Additionally, cybersecurity risks must be addressed proactively to safeguard patient information and maintain trust in healthcare algorithms, recognising that LMMs, like other AI forms, are susceptible to potential breaches.

Key Recommendations for Governments & Regulators

The WHO guidance emphasises the crucial role of governments in setting standards for developing and deploying LMMs. Key recommendations include:

1. Investment in Public Infrastructure: Governments should invest in or provide not-for-profit or public infrastructure, including computing power and public datasets, accessible to developers in various sectors. This infrastructure should adhere to ethical principles and values.

2. Legal Frameworks: Laws, policies, and regulations should ensure that LMMs used in healthcare meet ethical obligations and human rights standards, addressing aspects such as dignity, autonomy, and privacy.

3. Regulatory Oversight: Governments should assign regulatory agencies to assess and approve LMMs intended for use in healthcare. The regulatory process should include mandatory post-release auditing and impact assessments by independent third parties.

Key Recommendations for LMM Developers

Developers play a crucial role in ensuring the responsible design and deployment of LMMs. Key recommendations for developers include:

1. Inclusive Design: LMMs should be designed in collaboration with potential users, stakeholders, medical providers, researchers, healthcare professionals, and patients. Inclusive, transparent design processes should encourage ethical discussions and input.

2. Task Accuracy and Reliability: LMMs should be designed to perform well-defined tasks with the necessary accuracy and reliability to improve healthcare capacity and advance patient interests. Developers should be able to predict and understand potential secondary outcomes.

Global Collaboration for Ethical AI Development

WHO's guidance underscores the importance of global collaboration in effectively regulating the development and use of AI technologies, particularly LMMs.

The engagement of various stakeholders, including governments, technology companies, healthcare providers, patients, and civil society, is deemed essential at all development and deployment stages.

As the healthcare landscape continues to evolve with the integration of AI technologies, WHO's comprehensive guidance aims to ensure the ethical use of LMMs, contributing to improved health outcomes and addressing persisting health inequities globally.

The recommendations provided serve as a foundation for developing transparent, accountable, and ethically sound AI applications in healthcare.

