This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
PRECISE - Comment passer d'explicabilité à explication : proposition d'un processus d'explication des prédictions d'une IA (PRECISE - How to move from explainability to explanation: proposal of a process for explaining an AI's predictions)
Citations: 0
Authors: 8
Year: 2026
Abstract
In the case of decisions made by AI, this article explores how to shift from a technology-centric to a human-centric approach to explainability, providing understandable and contextualized explanations to end users, particularly in the workplace. The article highlights the importance of developing explainability mechanisms tailored to user profiles, needs, and contexts, adopting a collaborative, dynamic, and contextualized approach. It proposes a structured process, inspired by the CRISP-DM model, comprising the following steps: context analysis, needs analysis and gathering, modeling, implementation, integration, and validation. This process, called "PRECISE", aims to ensure that explanations are relevant, understandable, and actionable, while avoiding bias and building trust. The approach encourages close collaboration between technical stakeholders and professional users. An illustrative example is given for the case of fraud detection (see the sketch after this abstract).

The article identifies several purposes that explanations serve:

* Building trust [4]: Explainability makes AI decision-making processes understandable, which promotes user trust. Without clear explanations, users may doubt or mistrust the automatic decisions made by AI when they have access to them.⁵
* Ensuring transparency and accountability [14]: In critical environments such as nuclear power stations, the military, cybersecurity, and medicine, it is crucial to understand how and why decisions are made, particularly in order to justify or control them.
* Facilitating adoption and acceptance [6, 34]: Tailoring explanations to users' needs and co-designing them with users enables better adoption of the tool, avoiding mistrust or suspicion. Adoption improves users' decision-making abilities, and explanations also make it possible to quantify the intuitions of business teams.
* Supporting the co-construction of meaning: Explanations are not merely technical; they must be contextualized and cooperative, enabling users to understand, control, and adjust AI in their activities. They must also enable the potential discovery of new knowledge [30].
* Facilitating compliance with regulations [18] and ethical requirements regarding the use of AI.
* Ensuring greater accountability by enabling the identification of the causes of automated decisions. AI systems should be subject to explanation standards similar to those currently applied to humans [18].

AI provides support at times, but it remains on the sidelines at other times and must not erase human autonomy [8, 46]. The system and its explanations must not only be flexible (adapted to the situation) but also adjustable: able to learn from their mistakes, from the situation, and from the user in order to evolve and improve.

⁴ Often considered "black box" methods.
⁵ In some use cases (Waze, for example), users sometimes do not have access to the "decisions" made by AI and therefore have no opportunity to question them.
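As a concrete illustration of the shift the abstract describes, from raw explainability output to a user-adapted explanation, the following minimal Python sketch may help. It is not taken from the paper: the feature names, attribution weights, user profiles, and wording are all invented for illustration, and the attributions merely stand in for what a model-agnostic explainer such as LIME or SHAP might produce for a flagged transaction.

```python
# Hypothetical feature attributions for one flagged transaction, as a
# model-agnostic explainer (e.g. LIME or SHAP) might return them.
# All names and weights here are invented for illustration.
ATTRIBUTIONS = {
    "amount_vs_history": 0.42,    # amount far above the cardholder's usual spend
    "merchant_country": 0.31,     # merchant located in an unusual country
    "night_time_purchase": 0.12,  # purchase made late at night
    "card_age_days": -0.08,       # an older card slightly lowers the fraud score
}

# Plain-language wording per feature, for non-technical users.
FEATURE_LABELS = {
    "amount_vs_history": "an amount much higher than usual",
    "merchant_country": "a merchant in an unusual country",
    "night_time_purchase": "a purchase made late at night",
    "card_age_days": "the age of the card",
}

def explain(attributions, profile, top_k=2):
    """Turn raw attributions (explainability) into a contextualized
    explanation (explanation) adapted to the user's profile."""
    # Rank features by the magnitude of their contribution.
    ranked = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = []
    for feature, weight in ranked[:top_k]:
        if profile == "fraud_analyst":
            # Analysts get the technical signal as-is.
            lines.append(f"{feature}: contribution {weight:+.2f} to the fraud score")
        else:
            # Customer advisors get a plain-language rendering instead.
            lines.append(f"The transaction looks unusual because of "
                         f"{FEATURE_LABELS[feature]}")
    return lines

if __name__ == "__main__":
    for profile in ("fraud_analyst", "customer_advisor"):
        print(f"--- {profile} ---")
        for line in explain(ATTRIBUTIONS, profile):
            print(line)
```

The point of the sketch is the separation it makes visible: the explainer's output stays fixed, while the rendering step, which decides what each user profile actually sees, is the part that a process like PRECISE would design together with the users themselves.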
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,576 citations
Generative Adversarial Nets
2014 · 19,892 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,300 citations
"Why Should I Trust You?"
2016 · 14.396 Zit.
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,164 citations