This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Transforming urological care: Physician and patient insights on artificial intelligence integration
Citations: 0
Authors: 5
Year: 2026
Abstract
Background: The integration of artificial intelligence (AI) into urological practice offers promising advancements in diagnostic, prognostic, and therapeutic decision-making, with applications spanning the field of urology. Because both patients and physicians have started adapting to AI in urology, this study examined their understanding of AI and their converging and diverging perspectives.

Materials and methods: An Institutional Review Board-approved survey was created to assess awareness and perspectives on AI within the urological community across the United States. Surveys were distributed via email to patients and urologists. Percentages were used to quantify the comparisons, and statistical tests, as appropriate, were applied for significance, with p < 0.05 considered statistically significant.

Results: Among 380 participants (199 physicians and 181 patients), both groups shared a baseline unfamiliarity with AI in general (59.3% vs. 65.8%; p = 0.2) and with AI in healthcare (71.4% vs. 76.8%; p = 0.2). The majority of each group expressed optimism about AI's clinical utility (61.3% vs. 74.6%; p = 0.006) but held mixed trust in its accuracy, with physicians showing greater mistrust (49.7% vs. 34.2%; p = 0.001). Ethical and privacy concerns were notable, with more physicians than patients emphasizing ethical issues (38.7% vs. 22.1%; p = 0.001) and privacy risks (50.8% vs. 42.0%; p = 0.2). Both groups favored shared accountability for AI-driven outcomes (71.3% vs. 63.3%; p = 0.2) and human oversight (78.4% vs. 66.9%; p = 0.01), underscoring the need for cautious AI integration in urological practice.

Conclusions: This study revealed alignment between physicians and patients regarding the potential of AI in urological practice, along with shared concerns about its reliability, ethics, and oversight.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,644 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,550 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,061 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,850 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations