This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Evaluating strengths, limitations, and future directions of ChatGPT in psychological analysis within case conceptualization: A qualitative analysis
Citations: 0 · Authors: 6 · Year: 2026
Abstract
This exploratory qualitative study investigates ChatGPT-4’s capacity to apply the LIBET case formulation model by analyzing its feedback on anonymized interview transcripts. The study aimed to assess whether ChatGPT-4’s outputs reflected accurate identification and interpretation of two key psychological constructs—life themes and semi-adaptive plans—while adhering to theoretical principles, and to explore recurring errors and limitations in its clinical reasoning. Ten non-clinical participants underwent semi-structured interviews, and a custom-configured version of ChatGPT-4 was provided with structured instructions and theoretical material. Reflexive thematic analysis revealed four overarching themes: (1) limitations in abstraction and interpretative barriers, (2) consistent structure and content organization, (3) hypothesis-driven reasoning with cautious language, and (4) partial adherence to LIBET theory through appropriate terminology. While ChatGPT’s structured reasoning and alignment with theoretical vocabulary suggest its potential as a reflective support tool—particularly in training or supervision—it also showed difficulties in distinguishing emotional vulnerabilities from coping strategies, and in interpreting abstract, relational constructs such as life themes. Findings support the importance of improving prompt design, expanding training on psychological constructs, and developing rigorous validation pipelines. Future research should address these limitations before deploying LLMs as assistive tools in clinical reasoning and decision-making.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,439 cit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,315 cit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,756 cit.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 cit.
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,526 cit.