OpenAlex · Updated hourly · Last updated: 09.04.2026, 18:48

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Trainee Focus debate: Artificial intelligence will have a negative impact on emergency medicine

2024 · 2 citations · Emergency Medicine Australasia · Open Access
Open full text at publisher

Citations: 2

Authors: 1

Year: 2024

Abstract

In the current environment, it is difficult to conceptualise how artificial intelligence (AI) will impact positively on EM. In considering potential negative implications of narrow AI, the discussion below will repeatedly reference the ‘black box’ concept: that AI operates according to opaque or non-interpretable machine learning, which is impossible for clinicians and patients to understand.1, 2 The utility of AI in EM will be limited by the impact of bias on algorithms, and may threaten the nature of the clinician–patient relationship. Bias prejudices all healthcare providers, influencing assumptions and decisions that impact patient care. Unfortunately, AI will not overcome bias: AI will learn from available data sets, and therefore previously acknowledged limitations of research, including sampling bias and gender bias, risk being amplified, as algorithms derived from existing data will lack external validity for under-represented populations.1, 3 The ‘black box’ nature of AI will ensure it is difficult, if not impossible, to appreciate how algorithms are informed.1, 3 Although ED clinicians already use clinical decision tools to aid rational investigation and treatment decisions, research pertaining to the development of these tools is transparent, allowing clinicians to review and consider the applicability to specific populations. AI will arguably limit, if not completely remove, the ability to do this. Qualitative research has highlighted clinician concern regarding the loss of human interaction and rapport with patients, as well as the loss of clinical intuition or clinician gestalt; presently, these cannot be replicated with AI.4, 5 Practising EM without these elements risks fundamentally altering patient–clinician interactions in ways not yet quantified.

AI will alter medical training and education, posing challenges in terms of an ever-widening curriculum, deskilling and potentially jeopardising the integrity of training programs. Stewart et al. recently demonstrated that specific education about AI, including practicalities of how it works, applications in medicine and limitations, is lacking in medical education for university students.6 Incorporating a comprehensive curriculum covering the relevant facets of AI into training will demand more time from students and trainees, alongside clinical work and exam preparation. From a systems perspective, many Australasian ED clinicians are familiar with the experience of information technology (IT) platforms unexpectedly going offline, either for maintenance or, less commonly, because of cyber incidents, as in Victoria in 2019.7 Trainees, in addition to incorporating AI, must maintain a minimum skill set to function using non-IT systems; again, this can be anticipated to increase the volume of study and preparation for trainees. An overreliance on clinical decision support provided by AI may contribute to deskilling of healthcare workers.3, 4 Particularly for junior doctors, uncritically accepting input from AI may be associated with patient harm where an algorithm is inappropriately applied, or incorrect.4 The newly revised FACEM training program incorporates a series of training requirements, including research; audit or guideline review; and clinical teaching and morbidity and mortality presentations. The ability of AI models to generate outputs based on a series of prompts allows AI to aid individuals in the completion of writing tasks, raising plagiarism and academic integrity concerns.8 The potential for trainees to utilise AI to assist with completion of training requirements, and how ACEM will maintain the integrity of these assessments, is a subject requiring discussion.

AI challenges the four core ethical principles of emergency care: justice, beneficence, non-maleficence and autonomy.
Autonomous decision-making may be weakened by AI.1 Broad application of AI algorithms will impact shared decision-making, where neither patient nor clinician understands how decisions are reached; further, such algorithms are potentially incapable of considering a patient's beliefs and values, which are often personal and individual.1, 3, 4 Beneficence will be compromised because of the ‘black box’ nature of AI, as decisions taken by AI are opaque; weighing the risks and benefits for an individual patient becomes impossible under these circumstances.1 Non-maleficence will be endangered where decisions are not transparent, or where healthcare providers lose control of decisions made by AI.1 AI may cause harm, as outlined above, through the application of algorithms lacking external validity to under-researched populations, or through amplification of an error in a decision tool; who is morally accountable in these circumstances remains unclear.1 Arguably, insufficient consideration has been given to justice and equity of access to AI.3 This will impact minority groups for which less data are available to train AI, and potentially further exacerbate existing metropolitan and rural inequity.4 Further, data sets trained for resource-rich contexts will likely not apply to resource-limited contexts; this risks causing harm through inappropriate use, or further exacerbating existing inequalities through not utilising AI in resource-limited settings.2

Threats to patient safety, questions around confidentiality and consent, and protection from cyber hazards pose major issues for the implementation of AI in EM.
Patient safety is a leading legal concern; AI may generate harmful decisions for some populations, particularly where bias (as discussed above) influences algorithms.1 Existing frameworks fail to delineate accountability for such harm, with AI system designers, clinicians or regulators potentially implicated.1, 4 Liability for medical malpractice could be apportioned to clinicians for following algorithmic advice that leads to patient harm; conversely, scenarios where clinicians are liable for not using AI could arise.2 Adequate safeguards surrounding consent are yet to be established. It remains unclear how patients can consent to sharing data with AI systems without entirely understanding how, or by whom, data will be used.1-3 Presently, health data are predominantly generated and managed within the public domain, for example, in the public healthcare system, whereas AI development occurs within the private sector.3 However, data are increasingly collected by health applications through wearable devices operating outside existing health privacy legislation.2 Perceptions relating to what, and how much, data are required to develop AI will vary depending on individual perspective, with the assumption that the private sector will perceive more data to be better.3 Questions remain regarding patient confidentiality: who subsequently owns these data, who can access and use them, and who profits from this use.2 Finally, AI systems pose a cyber security risk.1, 2 This may take the form of theft of healthcare information, or disruption of healthcare delivery, including through manipulation of algorithms to deliver inappropriate interventions or treatments.2 Thus, EM clinicians risk operating in an environment where legal liability for using, or not using, AI remains undefined; where they are unable to ensure confidentiality and safeguarding of patient data and/or patient consent for use of these data; and where the systems used are vulnerable to cyber-attack.
Debating the role of AI in EM is highly nuanced; the above is not an exhaustive list of the challenges AI poses, and each discussion point warrants its own in-depth analysis. However, the brief review of the literature conducted for the purpose of this debate highlights a multitude of reasons why AI will not positively impact EM in the current climate.

AH is a section editor for Emergency Medicine Australasia.

Related works

Authors

Institutions

Topics

Artificial Intelligence in Healthcare and Education · Cardiac, Anesthesia and Surgical Outcomes · Autopsy Techniques and Outcomes