OpenAlex · Updated hourly · Last updated: 07.04.2026, 06:17

Thilo Hagendorff

99 works · 1,674 citations

University of Stuttgart · DE

Relevant works

Most-cited publications in Health & MedTech

To explain or not to explain?—Artificial intelligence explainability in clinical decision support systems

2022 · 191 cit. · PLOS Digital Health

Co-Design of a Trustworthy AI System in Healthcare: Deep Learning Based Skin Lesion Classifier

2021 · 64 cit. · Frontiers in Human Dynamics

On Assessing Trustworthy AI in Healthcare. Machine Learning as a Supportive Tool to Recognize Cardiac Arrest in Emergency Calls

2021 · 51 cit. · Frontiers in Human Dynamics

Lessons Learned from Assessing Trustworthy AI in Practice

2023 · 24 cit. · Digital Society

Publisher Correction to: The Ethics of AI Ethics: An Evaluation of Guidelines

2020 · 16 cit. · Minds and Machines

Triage 4.0: On Death Algorithms and Technological Selection. Is Today’s Data-Driven Medical System Still Compatible with the Constitution?

2021 · 10 cit. · Journal of European CME

How to Assess Trustworthy AI in Practice

2022 · 10 cit. · arXiv (Cornell University)

Human-Like Intuitive Behavior and Reasoning Biases Emerged in Language Models -- and Disappeared in GPT-4

2023 · 5 cit. · arXiv (Cornell University)

Artificial Intelligence Governance and Ethics: Global Perspectives

2019 · 5 cit. · arXiv (Cornell University)

Why we need biased AI -- How including cognitive and ethical machine biases can enhance AI systems

2022 · 1 cit. · arXiv (Cornell University)

Emergently Misaligned Language Models Show Behavioral Self-Awareness That Shifts With Subsequent Realignment

2026 · 0 cit. · arXiv (Cornell University)

Compromising Honesty and Harmlessness in Language Models via Deception Attacks

2025 · 0 cit. · arXiv (Cornell University)

Beyond Chains of Thought: Benchmarking Latent-Space Reasoning Abilities in Large Language Models

2025 · 0 cit. · arXiv (Cornell University)