This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Harnessing generative AI to drive responsible business research and accelerate social impact
Citations: 0
Authors: 3
Year: 2025
Abstract
Purpose
This paper examines how generative artificial intelligence (GenAI) can strengthen the evaluation of responsible business research by identifying work with high potential for social impact in business. It focuses on ChatSDG+RR7, a GenAI tool grounded in the United Nations sustainable development goals (SDGs) and Responsible Research in Business and Management (RRBM)'s seven principles of responsible research. The study explores how AI can support the peer review process in selecting and promoting research that advances meaningful societal outcomes, addressing the question of whether AI can effectively assist peer review.
Design/methodology/approach
ChatSDG+RR7 was used in the peer review process for the RRBM Honor Roll to evaluate submissions based on their alignment with responsible research standards. The study used a comparative design to examine the reliability and rigor of AI-only, human-only and AI–human collaborative evaluations of responsible business research.
Findings
ChatSDG+RR7 enhanced the human-only peer review process by increasing consistency, reducing bias and improving efficiency, and it delivered more standards-based and comprehensive assessments. AI assistance identified and promoted responsible research focused on advancing social impact in business more effectively than human-only evaluation.
Originality/value
This study offers new insights into how AI can strengthen peer review by assessing the substantiveness of a paper's social impact focus. It introduces a novel AI tool that enhances the visibility of responsible research and supports scholars and institutions in aligning academic work with meaningful societal and global challenges.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,436 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,311 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,753 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,523 citations