ASHENEWS reports that the World Health Organization (WHO) has released new guidelines on the ethical use of artificial intelligence in healthcare.
The report, titled “Ethics and Governance of Large Multi-modal Models (LMMs)”, seeks to promote the appropriate use of AI in healthcare delivery.
LMMs are a type of fast-growing generative artificial intelligence (AI) technology with applications across health care.
The guideline outlines five broad applications of LMMs for health, namely:
- Diagnosis and clinical care, such as responding to patients’ written queries;
- Patient-guided use, such as for investigating symptoms and treatment;
- Clerical and administrative tasks, such as documenting and summarizing patient visits within electronic health records;
- Medical and nursing education, including providing trainees with simulated patient encounters, and;
- Scientific research and drug development, including the identification of new compounds.
The health body expressed concerns that while LMMs are starting to be used for specific health-related purposes, there are also documented risks of producing false, inaccurate, biased, or incomplete statements.
This, it said, could harm people who use such information to make health decisions.
The guidance also details broader risks to health systems, such as accessibility and affordability of the best-performing LMMs.
LMMs can also encourage ‘automation bias’ among healthcare professionals and patients, whereby errors that would otherwise have been caught, or difficult choices, are improperly delegated to an LMM.
Like other forms of AI, LMMs are also vulnerable to cybersecurity risks that could endanger patient information, the trustworthiness of these algorithms, and the provision of health care more broadly.
To create safe and effective LMMs, WHO underlines the need to engage various stakeholders: governments, technology companies, healthcare providers, patients, and civil society, at all stages of the development and deployment of such technologies, including their oversight and regulation.
The guideline recommends that governments create policies to ensure the ethical use of LMMs, and that such applications be certified by a regulatory body before they are approved.
It further recommends that developers of such applications be able to predict and understand potential secondary outcomes, and that they draw on well-rounded input from the various stakeholders concerned in the development of such applications.
“Generative AI technologies have the potential to improve health care but only if those who develop, regulate, and use these technologies identify and fully account for the associated risks,” said Dr Jeremy Farrar, WHO Chief Scientist.