This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
AI tools vs AI text: Detecting AI-generated writing in foot and ankle surgery
9 citations · 2 authors · 2024
Abstract
Artificial intelligence (AI) has gained traction in scientific research, but concerns about plagiarism and fraud have surfaced. This study explores the capacity of AI detection tools to distinguish AI-generated from human-generated text in the foot and ankle surgery literature. Six publicly available AI detection tools were employed to analyze 12 abstracts, 6 AI-generated and 6 human-generated. Copyleaks demonstrated the highest raw accuracy (83%). Overall, the tools exhibited 63% accuracy with a 25% false positive rate. GPTZero, retested after three months, showed increased sensitivity (24.5%) in identifying AI-generated content. To assess countermeasures, the AI-generated abstracts were reworded using ChatGPT 3.5; rewording led to a 54.83% decrease in detected AI content. These findings highlight the challenges of reliably detecting AI-generated content in the scientific literature, emphasizing the need for robust countermeasures and continued vigilance against potentially fraudulent research. The study sheds light on the evolving landscape of AI detection technologies and underscores the urgency of adapting journal policies to safeguard against emerging threats.
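As a rough illustration of how the reported metrics relate (the specific counts below are hypothetical, chosen only to approximate the abstract's pooled 63% accuracy and 25% false positive rate across 6 tools × 12 abstracts; they are not the study's raw data):

```python
def detection_metrics(tp, fn, fp, tn):
    """Accuracy and false positive rate for a binary AI-text detector.

    tp: AI-generated abstracts correctly flagged as AI
    fn: AI-generated abstracts missed (labeled human)
    fp: human-written abstracts wrongly flagged as AI
    tn: human-written abstracts correctly labeled human
    """
    total = tp + fn + fp + tn
    accuracy = (tp + tn) / total
    fpr = fp / (fp + tn)  # share of human abstracts wrongly flagged
    return accuracy, fpr

# Hypothetical pooled counts over 72 decisions (6 tools x 12 abstracts):
# 18 AI abstracts flagged, 18 missed; 9 human abstracts wrongly flagged,
# 27 correctly passed -> accuracy 62.5% (~63%), false positive rate 25%.
acc, fpr = detection_metrics(tp=18, fn=18, fp=9, tn=27)
print(acc, fpr)  # 0.625 0.25
```

Note that with 72 pooled decisions the reported 63% cannot be hit exactly (0.63 × 72 is not an integer), which is consistent with the figure being rounded.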
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,439 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,315 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,756 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,526 citations