This is an overview page with metadata for this scientific work. The full article is available from the publisher.
From Black-Box to Glass-Box: A User-Centric Framework for Explainable Visualization with LLMs
Citations: 0
Authors: 2
Year: 2025
Abstract
Large Language Models (LLMs) have revolutionized Artificial Intelligence and Data Science and hold great potential in domains such as E-Learning, Business, and Data Analytics. However, relying on LLM-generated output remains challenging, especially in sensitive domains such as healthcare and finance. This paper presents an enhanced framework for generating explainable visualizations with LLMs and introduces an approach that makes LLM outputs more interpretable to users, strengthening their trust in the validity of the output. The work builds on the capabilities of the existing tool LIDA [7], proposing modifications to its core components and the addition of new ones, with the goal of improving the explainability and reliability of the produced outputs. The proposed system refines how goals are identified and explored and how visualizations are assessed, addressing limitations in explainability. Key contributions include the integration of a new scoring mechanism that makes LLM-generated evaluations transparent across multiple dimensions, reinforcing user trust in the generated outputs. The study connects the growing field of explainable Artificial Intelligence with data exploration and analytics by offering a modular and extensible approach to visualization generation that emphasizes user-centric design and interpretability. The findings have implications for researchers, data analysts, and developers seeking to bridge the gap between complex data and actionable insights.
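The abstract's key contribution is a scoring mechanism that reports LLM-generated visualization evaluations across multiple dimensions together with human-readable rationales. The full article is not reproduced on this page, so the sketch below is purely illustrative: the dimension names, score range, and class structure are assumptions, not the authors' actual design.

```python
from dataclasses import dataclass, field

@dataclass
class DimensionScore:
    """One evaluation dimension with its score and a plain-language rationale."""
    dimension: str   # e.g. "readability" or "faithfulness" (hypothetical names)
    score: float     # normalized to [0, 1]
    rationale: str   # explanation shown to the user for transparency

@dataclass
class VisualizationEvaluation:
    """Collects per-dimension scores and aggregates them into an overall score."""
    scores: list[DimensionScore] = field(default_factory=list)

    def add(self, dimension: str, score: float, rationale: str) -> None:
        # Clamp to [0, 1] so a malformed LLM score cannot skew the aggregate.
        clamped = max(0.0, min(1.0, score))
        self.scores.append(DimensionScore(dimension, clamped, rationale))

    def overall(self) -> float:
        # Unweighted mean; a real system might weight dimensions differently.
        if not self.scores:
            return 0.0
        return sum(s.score for s in self.scores) / len(self.scores)

    def report(self) -> str:
        # One line per dimension plus the aggregate, surfacing the rationale
        # alongside each number rather than a single opaque score.
        lines = [f"{s.dimension}: {s.score:.2f} - {s.rationale}" for s in self.scores]
        lines.append(f"overall: {self.overall():.2f}")
        return "\n".join(lines)

# Example: scoring a hypothetical LLM-generated chart on two dimensions.
ev = VisualizationEvaluation()
ev.add("readability", 0.9, "clear axis labels and legend")
ev.add("faithfulness", 0.7, "aggregation matches the stated goal, minor binning issue")
print(ev.report())
```

The point of the structure is that each number travels with its rationale, so the user sees why a visualization scored as it did rather than only a single opaque value.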
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,576 citations
Generative Adversarial Nets
2023 · 19,892 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,300 citations
"Why Should I Trust You?"
2016 · 14,396 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,164 citations