This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Pulmonary and Immune Dysfunction in Pediatric Long COVID: A Case Study Evaluating the Utility of ChatGPT-4 for Analyzing Scientific Articles
Citations: 0
Authors: 24
Year: 2025
Abstract
Coronavirus disease 2019 (COVID-19) in adults is well characterized and associated with multisystem dysfunction. A subset of patients develop post-acute sequelae of SARS-CoV-2 infection (PASC, or long COVID), marked by persistent and fluctuating organ system abnormalities. In children, distinct clinical and pathophysiological features of COVID-19 and long COVID are increasingly recognized, though knowledge remains limited relative to adults. The exponential expansion of the COVID-19 literature has made comprehensive appraisal by individual researchers increasingly unfeasible, highlighting the need for new approaches to evidence synthesis. Large language models (LLMs) such as the Generative Pre-trained Transformer (GPT) can process vast amounts of text, offering potential utility in this domain. Earlier versions of GPT, however, have been prone to generating fabricated references or misrepresentations of primary data. To evaluate the potential of more advanced models, we systematically applied GPT-4 to summarize studies on pediatric long COVID published between January 2022 and January 2025. Articles were identified in PubMed, and full-text PDFs were retrieved from publishers. GPT-4-generated summaries were cross-checked against the results sections of the original reports to ensure accuracy before incorporation into a structured review framework. This methodology demonstrates how LLMs may augment traditional literature review by improving efficiency and coverage in rapidly evolving fields, provided that outputs are subjected to rigorous human verification.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,485 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,371 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,827 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,549 citations
Authors
- Susanna R. Var
- Nicole Maeser
- Jeffrey Blake
- Elise Zahs
- Nayan Deep
- Zoey Vasilakos
- Jennifer McKay
- S. Johnson
- Phoebe Strell
- Allison Chang
- Holly Korthas
- Venkatramana D. Krishna
- Manojkumar Narayanan
- Tuhinur Arju
- Dilmareth E. Natera-Rodriguez
- Alex Roman
- S. Schulz
- Anala V. Shetty
- Mayuresh Vernekar
- Madison A. Waldron
- Kennedy Person
- Maxim C.-J. Cheeran
- Ling Li
- Walter C. Low