OpenAlex · Updated hourly · Last updated: 21.04.2026, 09:32

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Explainable Conversations: Enabling Transparency in Large Language Model Responses

2026 · 0 citations · Zenodo (CERN European Organization for Nuclear Research) · Open Access
Open full text at publisher

Citations: 0 · Authors: 1 · Year: 2026

Abstract

Conversational AI systems powered by large language models increasingly handle high-stakes enterprise tasks, yet their reasoning processes remain opaque to users. This opacity creates barriers to trust, limits adoption in regulated industries, and complicates compliance auditing. We introduce the Explainable Conversations Framework (X-LLM), a three-layer architectural approach that embeds transparency throughout conversational AI systems rather than treating explainability as an afterthought.

X-LLM integrates model-level mechanisms (citation frameworks, reasoning traces, confidence calibration), interaction-level design patterns (progressive disclosure interfaces, adaptive explanation depth), and system-level infrastructure (audit logging, governance controls, evaluation harnesses). We formalize the Cognitive Transparency Index (CTI), a composite metric combining factual traceability, reasoning clarity, and user interpretability into a unified transparency assessment.

Through a validation study using demonstration data from the AgentArch benchmark [24], we demonstrate how X-LLM principles guide practical implementation decisions and improve system trustworthiness. We position our framework against existing explainability approaches and RAG architectures, identifying where X-LLM provides novel contributions and where it synthesizes established patterns. The framework offers a structured methodology for organizations building conversational AI systems that must balance sophisticated capabilities with regulatory requirements and user comprehension needs.
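The abstract describes the CTI as a composite of three sub-scores but does not give the aggregation rule. A minimal sketch of how such a composite metric might be computed, assuming sub-scores normalized to [0, 1] and a simple weighted average (the names, weights, and formula here are illustrative assumptions, not the paper's actual definition):

```python
from dataclasses import dataclass

@dataclass
class CTIScores:
    """Hypothetical sub-scores for one system, each normalized to [0, 1]."""
    factual_traceability: float
    reasoning_clarity: float
    user_interpretability: float

def cognitive_transparency_index(s: CTIScores,
                                 weights=(1/3, 1/3, 1/3)) -> float:
    """Combine the three sub-scores into a single CTI value.

    A plain weighted average is assumed here; the paper's actual
    aggregation rule may differ.
    """
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    components = (s.factual_traceability,
                  s.reasoning_clarity,
                  s.user_interpretability)
    return sum(w * c for w, c in zip(weights, components))

# Example: a system strong on traceability but weaker on clarity.
cti = cognitive_transparency_index(CTIScores(0.9, 0.6, 0.75))  # -> 0.75
```

Non-uniform weights would let an organization emphasize, say, factual traceability for compliance-heavy deployments.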

Topics

Explainable Artificial Intelligence (XAI) · Ethics and Social Impacts of AI · Artificial Intelligence in Healthcare and Education