This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Trust in human–AI collaboration in finance: a bibliometric–systematic literature review
0
Citations
3
Authors
2026
Year
Abstract
Artificial intelligence (AI) is becoming deeply embedded in financial services, including credit scoring, robo-advisory, trading, compliance, and reporting. In these contexts, failures of trust in human–AI collaboration do not merely affect technology adoption but raise fiduciary, reputational, and systemic concerns. Yet, despite its centrality, trust remains conceptually fragmented and inconsistently operationalized across the literature. To address this fragmentation, this study presents a Bibliometric–Systematic Literature Review of trust in human–AI collaboration in finance. Accordingly, we first stabilize trust in the financial context as a latent evaluative belief under uncertainty by specifying the trustor, trustee, and object of trust, and by distinguishing trust from other constructs. Then, following a PRISMA-guided Scopus retrieval (June 9, 2025), 430 records were screened, yielding a final corpus of 114 finance-specific publications published between 2018 and 2025. Using bibliographic coupling, weighted Leiden clustering, and centrality metrics, the review maps the intellectual structure of the field and identifies six research clusters: (i) AI governance in finance, (ii) eXplainable AI for finance, (iii) anthropomorphism in financial AI agents, (iv) user-interface design for human–AI interaction in finance, (v) robo-advisors for financial decision-making, and (vi) infrastructural trust technologies. Across these clusters, trust is variously framed as cognitive, affective, procedural, or infrastructural, with limited integration between analytical levels. Building on the bibliometric mapping and qualitative synthesis, the study develops a multi-level socio-technical framework that organizes how trust is discussed in the literature at the micro-level (user perceptions and calibration), meso-level (organizational design and corporate AI governance), and macro-level (regulatory and infrastructural).
The micro–meso–macro framework is operationalized through eight analytically distinct propositions that synthesize recurrent patterns regarding trust calibration, overreliance, transparency-as-assurance, accountability, and systemic trust vulnerability. Four finance-oriented use cases illustrate how trust is treated as a distributed property of individuals, organizations, and institutions rather than as a feature of AI systems alone. By consolidating a fragmented body of work and clarifying its conceptual structure, this review provides a bounded, governance-oriented foundation for future empirical, experimental, and longitudinal research on trust in human–AI collaboration in finance.
Related works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,995 cit.
Generative Adversarial Nets
2023 · 19,896 cit.
Visualizing and Understanding Convolutional Networks
2014 · 15,374 cit.
"Why Should I Trust You?"
2016 · 14,750 cit.
Generative adversarial networks
2020 · 13,352 cit.