OpenAlex · Updated hourly · Last updated: 13 May 2026, 03:17

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Digital Transformation Needs Trustworthy Artificial Intelligence

2023 · 2 citations · Mayo Clinic Proceedings: Digital Health · Open Access

Citations: 2 · Authors: 1 · Year: 2023
Abstract

As the Editor-in-Chief, Francisco Lopez-Jimenez, very impressively pointed out in his first editorial [1], there is no way to stop the digital transformation, whether we like it or not, much like the steam engine or the electric current. And the comparison is apt: data is today's oil, and artificial intelligence (AI) is the new electricity, because AI is now nearly everywhere. When AI successes such as the current Chat Generative Pretrained Transformer 4 (ChatGPT-4) are discussed in daily newspapers, it is safe to say that we are not only living in a new AI spring but already in an AI summer. ChatGPT is, in fact, a good example: it shows what modern machine learning methods are capable of, but it also shows their limitations very clearly [2]. After the initial enthusiasm, disillusionment sets in, and the question arises as to how these machines can be used trustworthily in medicine. And that brings us to our topic. Perhaps the most important topic for AI in medicine, but also in many other areas of application, is trust. The new work by Farah et al [3] shows very impressively and convincingly that trust in AI-based medical devices depends on transparency (interpretability and explainability of the results) and ethics (in the sense of trustworthiness and regulation) [4]. After identifying the 3 main evaluation criteria for AI-based medical devices according to the Health Technology Assessment guidelines, the authors provide a set of tools and methods that help us understand how and why machine learning algorithms work and what predictions they make.

This is of particular interest now and in the future because digital transformation (with AI as a vehicle to get there) is expected to change medicine permanently: to help doctors diagnose and treat diseases, but also to facilitate workflows in daily practice, such as the time-consuming but mandatory medical documentation. Ideally, by reducing such routine tasks, the time freed up should be used for the things that, for now, only human experts and not machines can do:

- Creativity and intuition. Although AI is able to generate art, music, or even writing, it is to date not able to create anything comparable to what humans can; AI simply lacks the intuition and creativity of a human mind [5].
- Contextual decision making. AI can make decisions on the basis of factual data quickly and in parallel; however, it lacks the ability to consider the wider context, social norms, ethical considerations, and genuinely human personal values, which are often essential in complex decision making [6].
- Empathy and emotional intelligence. Although AI is technically able to recognize emotions using sensors [7], it can neither experience nor express emotions, nor can it interpret them in any human-like manner. Empathy and emotional intelligence are essential in many professions, such as counseling or social work, and in medicine.
- Dexterity. Although AI-driven robots and cyber-physical systems can perform many physical tasks, they cannot match the dexterity and flexibility of human hands, particularly in fine and delicate tasks such as surgery (the operative robots we have are, by the technical definition, not robots but manipulators; the knowledge comes from the human surgeon) [8].
- Social understanding. AI may technically understand language and recognize faces, but it lacks the ability to comprehend social contexts and nuances, such as cultural norms, body language, and context-dependent meanings. This ability is crucial in many professions, such as diplomacy, law, or teaching, and in medicine.
- Ethical and moral judgment. AI can follow ethical rules and guidelines, but it cannot make ethical or moral judgments or weigh different values and interests in complex situations that require critical thinking and reflection. This capacity is particularly important in professions such as law and politics, and in medicine.
- Adaptability. Although AI can learn from large data sets (ChatGPT is a good example) and identify patterns, it lacks the flexibility to adapt quickly to new contexts or situations. Human experts can apply their expertise to completely novel situations, use their intuition, and adjust their approach to the specific needs at hand. Humans are awesome: they have common sense (in German: Hausverstand).

AI can definitely help find new solutions to the most pressing challenges facing our health care system, but for all its benefits, the widespread adoption of AI technologies also holds large potential for novel and unforeseen threats [9]. It is therefore essential that AI be developed with these potential threats in mind, and that the safety, retraceability, transparency, explicability, validity, and verifiability of AI in every medical application be ensured. And here we come full circle. To ensure this interpretability in a broader sense, Farah et al [3] note that metrics and methods for "explainable AI" need to be combined with ethical and legal analyses, and that acceptable standards for explainability are always context-dependent and vary with the risks of the clinical scenario. Raising awareness of these concepts is crucial to their widespread adoption and to answering the associated ethical questions. We must succeed in this in the future.

We must collectively take a synergistic, international approach to human-centered AI that enables humans to keep control of AI by aligning it with human intelligence, human values, and ethical and legal requirements, so as to ensure secure and safe human-AI interactions; robustness and trust are the necessary ingredients [10]. This can be done with the human-in-the-loop approach, which interactively includes the human in the AI pipeline.

The author declares that there are no competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. This work does not raise any ethical issues. The author acknowledges his funder, the Austrian Science Fund (FWF), project P-32554, Explainable Artificial Intelligence.

References

1. Lopez-Jimenez F. Digital health in the 21st century: the best is yet to come. Mayo Clin Proc Digital Health. 2023;1:52-53. https://doi.org/10.1016/j.mcpdig.2023.03.001
2. Thorp H.H. ChatGPT is fun, but not an author. Science. 2023;379:313. https://doi.org/10.1126/science.adg7879
3. Farah L., Murris J.M., Borget I., Guilloux A., Martelli N.M., Katsahian S.I.M. Assessment of performance, interpretability, and explainability in AI-based health technologies: what healthcare stakeholders need to know. Mayo Clin Proc Digital Health. 2023;1:120-138.
4. Müller H., Holzinger A., Plass M., Brcic L., Stumptner C., Zatloukal K. Explainability and causability for artificial intelligence-supported medical image analysis in the context of the European in vitro diagnostic regulation. New Biotechnol. 2022;70:67-72. https://doi.org/10.1016/j.nbt.2022.05.002
5. Alfaro-LeFevre R. Critical Thinking, Clinical Reasoning, and Clinical Judgment: A Practical Approach. 7th ed. Elsevier Saunders; 2013.
6. Schoonderwoerd T.A.J., Jorritsma W., Neerincx M.A., Van Den Bosch K. Human-centered XAI: developing design patterns for explanations of clinical decision support systems. Int J Hum-Comput Stud. 2021;154:102684. https://doi.org/10.1016/j.ijhcs.2021.102684
7. Minsky M. The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind. 1st ed. Simon & Schuster; 2007.
8. O'Sullivan S., Leonard S., Holzinger A., et al. Anatomy 101 for AI-driven robotics: explanatory, ethical and legal frameworks for development of cadaveric skills training standards in autonomous robotic surgery/autopsy. Int J Med Robot. 2020;16:1-13. https://doi.org/10.1002/rcs.2020
9. Holzinger A., Weippl E., Tjoa A.M., Kieseberg P. Digital transformation for sustainable development goals (SDGs): a security, safety and privacy perspective on AI. In: Lecture Notes in Computer Science, LNCS 12844. Springer; 2021:1-20.
10. Holzinger A. The next frontier: AI we can really trust. In: Kamp M., ed. Proceedings of the ECML PKDD 2021, CCIS 1524. Springer; 2021:427-440.
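The human-in-the-loop idea mentioned in the editorial can be illustrated with a minimal sketch. Everything here is hypothetical and not from the article: the function names (`model_predict`, `human_review`), the toy confidence rule, and the 0.9 threshold are stand-ins showing one common pattern, namely routing low-confidence model outputs to a human expert and keeping the expert's decisions as feedback for later retraining.

```python
# Minimal, hypothetical human-in-the-loop triage sketch (illustrative only):
# confident model outputs are accepted automatically; uncertain ones are
# routed to a human expert, whose labels are collected as training feedback.

def model_predict(case: str):
    """Stand-in for an ML model: returns (label, confidence)."""
    # Toy rule for this sketch: longer records yield higher confidence.
    confidence = 0.95 if len(case) > 10 else 0.60
    return "benign", confidence

def human_review(case: str, suggested_label: str) -> str:
    """Stand-in for an expert reviewer; here it simply confirms."""
    return suggested_label

def triage(cases, threshold=0.9):
    decisions = []   # (case, label, decided_by)
    feedback = []    # expert-labelled cases, usable for retraining
    for case in cases:
        label, conf = model_predict(case)
        if conf >= threshold:
            decisions.append((case, label, "auto"))
        else:
            label = human_review(case, label)
            decisions.append((case, label, "human"))
            feedback.append((case, label))
    return decisions, feedback

decisions, feedback = triage(["long patient record", "short"])
```

In this sketch the first case is decided automatically and the second is escalated to the human reviewer; the interesting design question in practice is how the threshold is chosen relative to the clinical risk, which is exactly the context dependence the editorial emphasizes.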


Topics

Artificial Intelligence in Healthcare and Education · Explainable Artificial Intelligence (XAI)