This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Artificial intelligence hallucinations in anaesthesia: Causes, consequences and countermeasures
10
Citations
3
Authors
2024
Year
Abstract
Artificial intelligence (AI) hallucinations occur when large language models (such as chatbots) or computer vision systems generate outputs containing non-existent patterns, leading to inaccurate results. Also known as AI confabulations or delusions, these outputs defy the expectation of an appropriate response from an AI tool because they are unrelated to the input or lack any underlying pattern, much as human hallucinations do. Addressing such issues with generative AI presents significant challenges despite ongoing efforts to resolve them.[1,2]

CAUSES OF AI HALLUCINATIONS

Various causes of AI hallucinations have been identified, including:

- Insufficient or biased training data: An AI model designed to assist anaesthesiologists in administering anaesthesia may be trained predominantly on data from patients of a certain demographic, such as adults of average weight. When faced with a paediatric or an obese patient, the model may hallucinate dosage recommendations that are inaccurate or unsafe, as it lacks sufficient exposure to diverse patient populations.[3]
- Model complexity: A highly complex AI system tasked with monitoring vital signs during surgery may exhibit hallucinatory responses when it encounters unusual physiological patterns. This complexity underscores the need for simpler models to avoid such hallucinations.[4]
- Lack of explainability (black box): An AI algorithm designed to predict anaesthesia induction times may produce unexpectedly long or short estimates without providing clear explanations for its predictions. When anaesthesiologists cannot understand or verify the AI system's reasoning, there is a risk of blindly following its recommendations, potentially leading to errors or patient harm. This highlights the urgent need for explainable AI in anaesthesia.[5]

MULTIFACETED THREAT OF AI HALLUCINATIONS IN ANAESTHESIA

An AI hallucination occurs when an AI system produces demonstrably incorrect or misleading outputs that appear confident and plausible despite being factually flawed. The possible impacts of AI hallucinations on anaesthesia domains are varied[6-9] [Table 1].

Table 1: Examples of AI hallucinations' possible impact on anaesthesia domains
- Misdiagnosis and mistreatment: Hallucinations can lead to misinterpretation of patient data, resulting in unnecessary interventions or delayed treatments.
- Medication errors: AI-driven systems may recommend incorrect drug dosages, impacting patient safety.
- Communication and documentation: Misinterpreted verbal commands or procedure details can hinder accurate documentation and patient safety.
- Research skewing: AI-driven analysis of anaesthesia data for research could be skewed by hallucinations, leading to misleading conclusions.
- Legal and ethical concerns:
  - Liability: Who is responsible for errors caused by AI hallucinations? This remains a complex question with no clear answer; depending on the specific circumstances, potential targets include the AI developer, the healthcare provider or the hospital.
  - Informed consent: How can patients be adequately informed about the risks of AI hallucinations in anaesthesia, given the technical complexity involved and the dynamic nature of AI outputs? Striking a balance between transparency and patient anxiety is crucial.
  - Bias: AI algorithms can perpetuate societal biases, leading to discriminatory outcomes in health care.
Imagine an AI system trained on biased data; it might recommend different treatments based on a patient's race or socioeconomic background.[10-12]

STRATEGIES TO MITIGATE AI HALLUCINATIONS

Various mitigation strategies need to be followed to limit the impact of AI hallucinations on health care [Figure 1].

Figure 1: Impact of AI hallucination on health care and mitigation strategies. AI = artificial intelligence

- High-quality, diverse training data: Utilising diverse datasets improves AI model accuracy and reduces hallucination risks. For example, research by Jones et al.[13] demonstrated how incorporating various demographic factors and medical histories in training data significantly improved the accuracy of an AI-driven diagnostic tool for skin cancer detection (a toy sketch illustrating this point follows the abstract below).
- Explainable AI: Developing transparent AI models aids in identifying and rectifying hallucinations. For instance, the explainable nature of a deep learning model used in financial fraud detection allowed analysts to trace erroneous predictions back to specific data points, enabling targeted adjustments to the model's training data and architecture.[14]
- Human oversight and collaboration: Human involvement reduces hallucination risks, especially in sensitive domains such as health care. Collaborative efforts between AI systems and human experts have effectively reduced hallucination risks.[15]
- Continuous monitoring and evaluation: Regular evaluation detects and addresses hallucinations promptly. Continuous monitoring of an AI-powered recommendation system, combined with real-time analysis of user feedback, allows swift identification and correction of hallucinated product suggestions, improving user satisfaction and trust.[16]
- Algorithmic auditing and regulatory frameworks: Establishing robust auditing mechanisms and regulatory frameworks ensures the accountability and reliability of AI systems.[17]

To conclude, AI hallucinations in anaesthesia pose risks of misdiagnosis, medication errors and skewed research outcomes. Prioritising diverse training data, embracing explainable AI, maintaining human oversight, and enforcing continuous monitoring and regulatory frameworks are crucial to mitigating these risks and fostering trust in AI technologies in health care.

Financial support and sponsorship: Nil.

Conflicts of interest: There are no conflicts of interest.
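Purely as a toy illustration of the "diverse training data", "continuous monitoring" and "human oversight" points above (this sketch is not part of the article and has no clinical validity; every number in it is invented), the Python snippet below fits a dose model to adult weights only and wraps it in a simple training-range check that flags out-of-range inputs, such as a paediatric weight, for clinician review.

```python
# Illustrative sketch only -- not from the article; all numbers are invented
# and carry no clinical meaning.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: adult body weights (kg) and doses (mg),
# roughly proportional to weight with some noise -- adults only.
adult_weights = rng.uniform(55, 95, size=200)
adult_doses = adult_weights * 1.0 + rng.normal(0.0, 2.0, size=200)

# Fit a cubic polynomial: it tracks the data well inside the adult range,
# but outside that range its behaviour is unconstrained by any observation.
model = np.poly1d(np.polyfit(adult_weights, adult_doses, deg=3))

TRAIN_MIN, TRAIN_MAX = adult_weights.min(), adult_weights.max()

def recommend_dose(weight_kg: float) -> dict:
    """Return the model's suggestion plus a flag when the input lies outside
    the range seen during training, so that a clinician reviews the case
    rather than the output being trusted blindly."""
    out_of_range = not (TRAIN_MIN <= weight_kg <= TRAIN_MAX)
    return {
        "weight_kg": weight_kg,
        "suggested_dose_mg": round(float(model(weight_kg)), 1),
        "needs_human_review": out_of_range,
    }

print(recommend_dose(70.0))  # within the training range: plausible suggestion
print(recommend_dose(18.0))  # paediatric weight: extrapolation, flagged for review
```

The out-of-range flag is, of course, only a crude stand-in for the systematic monitoring, auditing and human collaboration the article calls for, but it shows where such safeguards attach in practice.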
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,402 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,270 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,702 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,507 citations