This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Medical Practitioners’ Perceptions of Artificial Intelligence in Healthcare: A Mixed-Methods Study
Citations: 0
Authors: 4
Year: 2025
Abstract
Background: Artificial Intelligence (AI) is increasingly integrated into healthcare systems to enhance diagnostic accuracy, streamline workflows, and improve patient outcomes. While the technological capabilities of AI are advancing rapidly, the attitudes and preparedness of medical practitioners remain underexplored, particularly in developing healthcare systems. Existing research has predominantly focused on technical applications, with limited attention to end-user perceptions.

Objective: This study aimed to assess medical practitioners' perceptions of AI in clinical practice, focusing on familiarity, perceived benefits, barriers, and ethical concerns. The goal was to identify factors influencing acceptance and readiness for AI adoption in healthcare.

Methods: A convergent mixed-methods design was employed. Quantitative data were collected via a structured survey (n = 342), and qualitative insights were obtained through semi-structured interviews (n = 38). Descriptive statistics, chi-square tests, logistic regression, and MANOVA were used for quantitative analysis, while thematic analysis was applied to qualitative transcripts.

Results: A majority (82.1%) of respondents were familiar with AI, and 54.3% perceived it as "very useful." Radiologists and younger practitioners (<30 years) demonstrated the highest confidence and acceptance (p < 0.001). Key barriers included limited training (37.0%) and data privacy concerns (43.5%). Thematic analysis highlighted the need for structured AI education and ethical governance frameworks.

Conclusion: Medical practitioners generally hold favorable attitudes toward AI, yet substantial barriers remain. These findings underscore the importance of targeted training, interdisciplinary collaboration, and policy development to ensure ethical and effective AI integration in clinical practice.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,402 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,270 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,702 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,507 citations