This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Knowledge, Mental Models, and Trust Regarding AI for Baggage Screening
Citations: 0 · Authors: 3 · Year: 2025
Abstract
AI-based automated prohibited item detection (APID) is increasingly being introduced at airports to support security officers (screeners) in cabin baggage screening. The literature shows that a good mental model of the AI and appropriate trust are necessary for successful human-AI interaction. We tested whether providing screeners with factual knowledge about APID increases the quality of their mental model and their trust. Thirty-three screeners were randomly assigned to receive either basic or detailed information about the APID. They were then tested on their knowledge and on the goodness of their mental model by being asked to explain X-ray images showing correct and incorrect suggestions from the APID. Finally, they completed a questionnaire on trust in the APID. Although screeners who received detailed information scored higher on the knowledge test about the APID, we did not find an improvement in the goodness of their mental models. We conclude that further interventions might be necessary, e.g., discussing examples of APID decisions or providing opportunities to gain experience and train with the APID through computer-based or on-the-job training, which has shown promising first results in studies on other AI systems. Moreover, screeners who received detailed information did not report more trust in the APID. However, in the group that received detailed information, trust correlated positively with affinity for technology interaction, suggesting that detailed information can increase the trust of screeners with a high affinity for technology interaction, whereas it can lower trust in the APID among screeners with a low affinity for technology interaction.
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,604 citations
Generative Adversarial Nets
2023 · 19,893 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,303 citations
"Why Should I Trust You?"
2016 · 14,432 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,167 citations