This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Prioritising Trust: Podiatrists' Preference for AI in Supportive Over Diagnostic Roles in Healthcare: A Qualitative Analysis (Preprint)
Citations: 0
Authors: 4
Year: 2024
Abstract
BACKGROUND: As artificial intelligence (AI) evolves, its roles have expanded from assisting with routine tasks to making complex decisions that were once the exclusive domain of human experts. This shift is pronounced in healthcare, where AI aids in tasks ranging from image recognition in radiology to personalised treatment planning, demonstrating that AI may at times surpass human accuracy and efficiency. Despite AI's accuracy in some critical tasks, its adoption in healthcare remains a challenge, in part because of scepticism about relying on AI's decisions.

OBJECTIVE: Our study aims to identify and explore more effective and acceptable ways of integrating AI into a broader spectrum of healthcare tasks.

METHODS: Our study comprises two qualitative phases exploring podiatrists' views on AI in healthcare. Initially, we interviewed nine podiatrists (7 female, 2 male; mean age 41, SD=12) to capture their sentiments regarding the use and role of AI in their work. Subsequently, a focus group with five podiatrists (4 female, 1 male; mean age 54, SD=10) delved into AI's supportive and diagnostic roles based on the interviews. All interviews were recorded, transcribed verbatim, and analysed using Atlas.ti and QDA-Miner, employing both thematic analysis for broad patterns and framework analysis for structured insights, per established guidelines.

RESULTS: Our research unveiled 9 themes and 3 subthemes, clarifying podiatrists' nuanced views on AI in healthcare. Key insights overlapping across the two phases include a preference for using AI in supportive roles (such as triage) because of its efficiency and process-optimisation capabilities. There is a discernible hesitancy towards leveraging AI for diagnostic purposes, driven by concerns about its accuracy and the essential nature of human expertise. In both phases, the need for transparency and explainability in AI systems emerged as a critical factor for fostering trust.

CONCLUSIONS: The findings highlight a complex view among podiatrists: openness to AI in supportive roles alongside caution about its diagnostic use. This result is consistent with a careful introduction of AI into healthcare in roles (such as triage) where there is initial trust, as opposed to roles that ask AI for a complete diagnosis. Such strategic adoption can mitigate initial resistance, gradually building the confidence to explore AI's capabilities in more nuanced tasks, including diagnostics, where scepticism is currently more pronounced. Adopting AI stepwise could thus enhance trust and acceptance across a broader range of healthcare tasks, aligning technology integration with professional comfort and patient-care standards.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,436 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,311 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,753 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,523 citations