Collaborative approaches to integrating large language models in academic writing
2 citations · 2 authors · 2024
Abstract
We read with interest the article “Comparing letters written by humans and ChatGPT: A preliminary study” by Matsubara, in which the author investigated the potential of ChatGPT, a large language model (LLM), in composing letters to the editor and explored its readability compared with human-written counterparts.1 This study raises critical questions about the role of artificial intelligence (AI) in academic writing, particularly in correspondence that relies on individual voice and nuanced expression. As non-native English speakers actively exploring the applications of AI to enhance scientific writing, we recognize the value of Matsubara's findings and would like to offer suggestions to broaden the research scope, including ways to make AI tools more practically applicable for diverse scholarly audiences.

First, while the study provides valuable insights from Japanese professors as evaluators, its findings may benefit from a broader range of perspectives. The study involved a Japanese author whose work was evaluated by Japanese professors, which may have introduced shared linguistic and cultural backgrounds that influenced the “readability” assessments, leading to certain biases. When the author's and evaluator's native languages align, subtle language features or familiar phrasing may be more readily understood, potentially making the text appear more readable than it might to speakers of other languages. Including native English speakers and non-native speakers from different linguistic backgrounds could yield a more comprehensive understanding of how various audiences perceive AI-generated versus human-written text. Therefore, adding diversity to the evaluators' language backgrounds would increase the generalizability of these findings.

Second, we see great potential in LLMs as a supplementary tool for non-native English speakers in crafting initial drafts.
Rather than viewing LLMs as a complete replacement for human writing, it may be more productive to position them as collaborative tools that assist in generating foundational drafts.2 For instance, non-native authors might use LLMs to help overcome initial language barriers or create well-structured sentences, which they could then refine to align with their own voice and intent. This hybrid approach, in which a human drafts content that AI structures, or AI generates text that a human then refines, was not included in Matsubara's study, which evaluated four types of letters.1 Adding such collaborative, iterative methods would offer a more realistic and practical perspective, as many authors find that combining human and AI input enhances quality and readability. Including this approach could make the study more comprehensive and better illustrate the potential of AI as a supportive tool in partnership with human authors.

In a previous letter on this subject, several key aspects of integrating LLMs in scientific writing were discussed, with emphasis on the importance of educating users about the strengths and limitations of LLMs.2 This includes understanding potential pitfalls, such as “hallucinations,” in which LLMs can produce credible-sounding but fabricated information. Additionally, the potential of LLMs to bridge language gaps for non-native English speakers was highlighted, while stressing that human oversight remains essential for final validation. Moreover, transparent ethical guidelines are necessary to ensure readers comprehend the contributions of LLMs to any scholarly text. These considerations align well with Matsubara's findings, suggesting that LLMs can best support human authors by enabling non-native speakers, enhancing rather than replacing the human voice, and adhering to ethical standards in academic writing.

Lastly, as LLMs continue to evolve, the ethical considerations surrounding their use in academic writing will only grow in importance.
Matsubara's study highlights the need for guidelines on the appropriate use of LLMs, particularly regarding the transparency of their involvement in the writing process. Further research could explore the best ways to disclose the role of LLMs when used in drafting manuscripts, ensuring that readers are fully informed of any assistance provided by these technologies. As an example, Park discusses how the Korean Journal of Radiology has implemented specific guidelines to encourage responsible use of LLMs in manuscript preparation, including the prohibition of authorship assignment to LLMs and the requirement for transparent reporting of their use in generating scientific content.3 Such policies illustrate how academic journals are actively integrating LLMs while safeguarding research integrity and ethics.

In conclusion, Matsubara's study provides a timely and important contribution to the discourse on the potential role of LLMs in academic writing. By considering additional evaluative perspectives, positioning LLMs as collaborative tools, and addressing ethical guidelines, we may further optimize the balance between leveraging the strengths of LLMs and preserving the personal touch that is central to scholarly correspondence.

Shunsuke Koga drafted the manuscript. Wei Du edited and reviewed the manuscript. The authors have no conflicts of interest. Not applicable.