This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Does ChatGPT Understand the Law? A Case Study on Road Homicide in Italy
Citations: 2
Authors: 2
Year: 2025
Abstract
This manuscript proposes a structured methodology and a replicable framework for empirically assessing the argumentative and interpretative capabilities of Large Language Models (LLMs) in the field of justice. As a demonstrative case, the framework is applied to the specific offence of vehicular homicide under Italian law, using GPT-4o as the tested model. The evaluation is structured in two complementary phases. The first phase investigates the model’s conceptual understanding in isolation: 60 legal concepts were tested through targeted prompts eliciting definitions, legal nuances, and illustrative examples. The responses were scored to assess the model’s abstract comprehension of core legal notions. The second phase evaluates the model’s ability to recognize, interpret, and apply these same legal concepts within real judicial reasoning. The model was provided with complete rulings from the Italian Court of Cassation and asked to summarize the decisions, identify the ratio decidendi, and reconstruct the legal reasoning underlying the Court’s conclusions. Outputs from both phases were assessed by a legal expert, who evaluated coherence, conceptual depth, and interpretative accuracy. The results indicate a correlation between the model’s prior conceptual grounding and its ability to understand and replicate complex judicial reasoning. These findings underscore the importance of expert oversight in any forensic or judicial application of LLMs. Beyond the specific case of vehicular homicide, the study proposes a generalizable framework for evaluating the legal reasoning capabilities of AI systems across different domains.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,402 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,270 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,702 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,507 citations