This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Human-AI collaboration in legal services: empirical insights on task-technology fit and generative AI adoption by legal professionals
1 citation · 3 authors · 2025
Abstract
Purpose — This study investigates the use of generative artificial intelligence (GenAI) in the legal profession, focusing on its fit with the tasks legal practitioners perform and its impact on performance and adoption.

Design/methodology/approach — The study uses a mixed methods approach, combining a survey of 279 legal professionals with qualitative insights from open-ended responses. The quantitative part applies partial least squares structural equation modeling (PLS-SEM), providing statistical evidence on the relationships between Task Characteristics, Technology Characteristics, Task-Technology Fit (TTF), Utilization and Performance Impact. The qualitative analysis explores participants' experiences, perceptions and concerns through thematic and sentiment analyses, providing deeper contextual insight.

Findings — The study highlights variability in the alignment between legal tasks and GenAI capabilities. GenAI fits data-intensive tasks such as research but struggles with tasks requiring complex human judgment. A strong TTF improves both performance and adoption. Familiarity improves outcomes but does not increase utilization: legal practitioners use GenAI selectively even when they are highly familiar with its capabilities. Participants' comments highlight both opportunities and challenges, including efficiency gains alongside concerns over data security, trust and output quality. Despite these challenges, most respondents expressed a positive sentiment.

Originality/value — By extending TTF theory to GenAI in the legal domain and integrating quantitative and qualitative evidence, the study identifies where GenAI adds value and where professional oversight remains essential. It offers practical recommendations, including deploying GenAI in the areas where it is most suitable and promoting responsible use through targeted training, professional development and confidence-building initiatives that also address the associated risks.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,539 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,426 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,921 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,586 citations