This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Negative performance feedback from algorithms or humans? Effect of medical researchers' algorithm aversion on scientific misconduct
Citations: 5
Authors: 4
Year: 2024
Abstract
Institutions are increasingly employing algorithms, rather than traditional human managers, to provide performance feedback to individuals by tracking productivity, conducting performance appraisals, and developing improvement plans. However, this shift has provoked considerable debate over the effectiveness and fairness of algorithmic feedback. This study investigates the effects of negative performance feedback (NPF) on the attitudes, cognition, and behavior of medical researchers, comparing NPF from algorithms with NPF from humans. Two scenario-based experimental studies were conducted with a total sample of 660 medical researchers (algorithm group: N1 = 411; human group: N2 = 249). Study 1 analyzes the differences in scientific misconduct, moral disengagement, and algorithmic attitudes between the two sources of NPF. The findings reveal that NPF from algorithms elicits higher levels of moral disengagement, scientific misconduct, and negative attitudes towards algorithms than NPF from humans. Study 2, grounded in trait activation theory, investigates how NPF from algorithms triggers individuals' egoism and algorithm aversion, potentially leading to moral disengagement and scientific misconduct. Results indicate that algorithm aversion triggers individuals' egoism, and their interaction enhances moral disengagement, which in turn leads to increased scientific misconduct among researchers. This relationship is also moderated by algorithmic transparency. The study concludes that while algorithms can streamline performance evaluations, they pose significant risks of scientific misconduct among researchers if not properly designed. These findings extend our understanding of NPF by highlighting the emotional and cognitive challenges algorithms face in decision-making processes, while also underscoring the importance of balancing technological efficiency with moral considerations to promote a healthy research environment.
Moreover, managerial implications include integrating human oversight into algorithmic NPF processes and enhancing transparency and fairness to mitigate negative impacts on medical researchers' attitudes and behaviors.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,439 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,315 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,756 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,526 citations