This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Prompting for Meaning: Exploring Generative AI Tools for Qualitative Data Analysis in Leadership Research
Citations: 0
Authors: 5
Year: 2025
Abstract
As generative AI (GenAI) tools rapidly evolve and become more accessible, their application in leadership education and research demands critical reflection and experimentation. The current practitioner‐focused study presents two use cases exploring how GenAI tools—including retrieval‐augmented generation platforms like NotebookLM and large language models like ChatGPT and Claude—can support qualitative data analysis in leadership contexts. The first case analyzes open‐ended responses from 237 participants about their “best” and “worst” bosses, while the second examines semi‐structured interviews from a phenomenological study of leadership educators. These methods were piloted with graduate students through a three‐way comparison methodology: students conducted AI‐assisted analysis, compared findings with expert human coding, and examined peer variations in analytical approaches. The comparative analysis reveals key differences across AI tools regarding transparency, analytic depth, usability, and ethical implications, highlighting both affordances and limitations, including variable output quality, learning curves, and the need for methodological rigor. Student outcomes demonstrate that AI tools can effectively support various phases of qualitative methodology while requiring human oversight for interpretive depth, bias detection, and validation of outputs. GenAI can be a helpful analytical partner in leadership research when integrated thoughtfully through pedagogical frameworks emphasizing human–AI collaboration rather than replacement, preparing emerging researchers to leverage technological capabilities while maintaining—and at times enhancing—the interpretive richness essential to qualitative inquiry in leadership studies.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,402 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,270 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,702 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,507 citations