- Newsletter May 2020 | No. 187
Artificial intelligence in healthcare and its challenges
AI is already applied in a number of ways in healthcare organization, medical research, clinical care, patient-facing applications, and public health projects. With respect to healthcare organization, the IBM Watson Care Manager system aims to improve cost efficiency by matching individuals with a care provider that meets their needs within a set care budget. In medical research, AI is used in literature searches as well as in speeding up the process of drug discovery. For instance, the canSAR database combines genetic and clinical data from patients with information from scientific research and uses AI to make predictions about new targets for cancer drugs. In clinical care, AI can analyze clinical data, research publications, and professional guidelines and thus help inform decisions about treatment. In patient-facing applications, AI can offer personalized health assessments, as in the app Ada Health Companion, where AI operates a chatbot that combines information about symptoms from the user with other information to offer possible diagnoses. In public health projects, AI has been used to predict adverse drug reactions and has been suggested as a tool for the early detection of infectious disease outbreaks.
The application of AI, however, faces a number of challenges. Technological challenges related to digital data, such as inconsistencies in data availability and quality, restrict its potential use. AI also faces ethical and social challenges. Clinical practice often involves complex judgments, a need for compassion, the reading of social cues, and contextual knowledge, and these are challenges that AI cannot currently overcome. Because the logic by which AI operates can be too complex for humans to understand, the values of transparency and accountability are at stake. This complexity could also make errors hard to detect, with serious implications for reliability and safety. There are further concerns about AI reinforcing biases already present in the datasets used for training, such as discrimination based on gender, ethnicity, disability, or age. On the side of the public, there is a potential lack of trust that these technologies are developed in the public interest, as well as concern that AI may restrict individual autonomy on the basis of calculations about risk. A further concern relates to data privacy, security, and the malicious use of AI, for example for covert surveillance or screening. On the side of healthcare professionals, AI may challenge professional autonomy and the authority derived from expertise, and it may undermine the ethical obligations of healthcare professionals towards individual patients.
AI technologies have the potential to help address important health challenges, but they may be limited by the quality of available health data. The application of AI should perhaps also be limited where it comes into conflict with key ethical and social principles. MS
Nuffield Council on Bioethics/UK 2020: Bioethics Briefing Note: Artificial intelligence (AI) in healthcare and research. https://www.nuffieldbioethics.org/assets/pdfs/Artificial-Intelligence-AI-in-healthcare-and-research.pdf