This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Evaluating Chain-of-Thought reasoning in large language models for thyroid ultrasound interpretation: a dual-information approach
Citations: 0
Authors: 12
Year: 2026
Abstract
Grok-3 excelled in qualitative tasks, while Gemini-2.5 Pro and DeepSeek-R1 showed strengths in quantitative analysis. CoT-enabled LLMs offered interpretable reasoning with promise for clinical decision support.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,418 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,288 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,726 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,516 citations