This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Abstract 2740: Applications of large language models to CAR-T cell therapy clinical data using Google cloud computing.
Citations: 0
Authors: 34
Year: 2026
Abstract
Large Language Models (LLMs) are being widely adopted in the medical field for their ability to analyze and summarize large amounts of text data. These models enable clinicians and researchers to extract meaningful insights from complex datasets and may assist with decision making. Here we present our workflows and applications of LLMs for the interpretation and summarization of clinical data related to CAR-T cell therapy. Using an LLM (Gemini 2.5 Pro) in the Google Cloud Platform (GCP) environment, we developed two applications to analyze CAR-T cell therapy clinical data: 1) extracting and summarizing CRS and ICANS event-related data to streamline the compliance team's workflow, and 2) identifying features available at the time of CAR-T infusion that can classify patients into high- or low-monitoring needs 14 days post CAR-T. Patient data (vitals, labs, hematology notes, and EKGs) were extracted from the electronic medical record (EMR) into SQL tables in GCP using Google BigQuery. For each application, relevant data fields were retrieved, formatted into JSON objects, and embedded in the LLM prompt for context-aware processing. Both applications underwent iterative prompt engineering after the LLM output was analyzed against ground-truth data in the EMR. For the application extracting and summarizing CRS and ICANS events, the LLM was optimized on Mayo Clinic Rochester data. Compared with the IEC compliance database, the LLM achieved 100% accuracy and a 100% F1 score for CRS events, and 96% accuracy and an 82% F1 score for ICANS events. The match rates for CRS and ICANS grades were 83% and 89%, respectively. We applied the same LLM to Mayo Clinic Arizona (MCA) and Mayo Clinic Florida (MCF) cases, where clinical notes and toxicity flowsheets are documented differently, and achieved accuracies of 93% (MCA) and 93% (MCF) with F1 scores of 96% (MCA) and 96% (MCF) for CRS, and accuracies of 81% (MCA) and 82% (MCF) with F1 scores of 76% (MCA) and 75% (MCF) for ICANS.
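The abstract describes retrieving relevant EMR fields, formatting them into JSON objects, and embedding them in the LLM prompt for context-aware processing. A minimal sketch of that JSON-embedding step is shown below; the field names, record contents, and function name are hypothetical, and in the described workflow the records would come from BigQuery tables and the assembled prompt would be sent to Gemini 2.5 Pro (those cloud calls are omitted here):

```python
import json


def build_event_prompt(patient_records, instructions):
    """Embed structured EMR fields as a JSON context block in an LLM prompt.

    patient_records: list of dicts holding the fields relevant to the task
    (vitals, labs, note excerpts). All field names here are illustrative.
    """
    context = json.dumps(patient_records, indent=2, default=str)
    return (
        f"{instructions}\n\n"
        "Patient data (JSON):\n"
        f"{context}\n\n"
        "Answer strictly based on the JSON data above."
    )


# Toy records standing in for rows retrieved from BigQuery.
records = [
    {"patient_id": "P001", "temp_c": 39.2,
     "note": "Day +2: fever and hypotension after CAR-T infusion."},
]
prompt = build_event_prompt(
    records,
    "Identify and grade any CRS or ICANS events for each patient.",
)
```

Keeping the structured fields as JSON (rather than free text) makes the context machine-checkable and easy to regenerate during the iterative prompt-engineering rounds the abstract describes.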
The LLM was able to capture events missed by manual review. For discrepancies requiring final adjudication by the compliance team, the LLM will be updated to flag them for review. For the application addressing monitoring needs 14 days post CAR-T, the LLM identified 5 categories predictive of high or low monitoring needs after CAR-T infusion (disease status, inflammatory and tumor burden markers, hematologic status, renal function, and performance status). Compared with data extracted by the LLM from the EMR, our model achieved a sensitivity of 85.7%, a specificity of 23.8%, and an F1 score of 65.5% for our first cohort. For our second cohort, whose demographics were statistically similar to cohort 1, and using the same 5 categories, the LLM achieved a sensitivity of 83.3%, a specificity of 28.6%, and an F1 score of 65.4%. Our workflow and applications of LLMs provide examples and guidance to others interested in applying LLMs for clinical and research purposes.

Citation Format: Emmanuel Contreras Guzman, Matthew Jankowski, Andre De Menezes Silva Corraes, Malvika Gupta, Monica L. Shaw, Madiha Iqbal, Talal Hilal, Saurabh Chhabra, Ricardo Daniel Parrondo, Jody K. Mclean, Kim R. Riester, Kayla Joseph, Melinda Tan, Holly Ross, Cleyonia Barnett, Sylvia Carter, Semy Girmay, Rachel Wolan, Milana Ramsey, Christian Downhour, Kristy Morgan, Shae Sibley, Erica Rushing, Lucy Holmes, Allison Burgstahler, Stephen M. Ansell, Hassan Alkhateeb, Matthew Hathcock, Ramona Bruno, Allison C. Rosenthal, Hemant Murthy, Patrick B. Johnston, Jonas Paludo, Yi Lin. Applications of large language models to CAR-T cell therapy clinical data using Google cloud computing [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2026; Part 1 (Regular Abstracts); 2026 Apr 17-22; San Diego, CA. Philadelphia (PA): AACR; Cancer Res 2026;86(7 Suppl):Abstract nr 2740.
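The abstract reports accuracy, F1, sensitivity, and specificity for LLM output compared against ground-truth data. As a minimal, self-contained sketch of how such metrics are computed from binary event labels (the labels below are toy values, not data from the study):

```python
def classification_metrics(pred, truth):
    """Accuracy, sensitivity (recall), specificity, and F1 for binary
    event labels (1 = event present, 0 = event absent)."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    accuracy = (tp + tn) / len(truth)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "f1": f1}


# Toy example: LLM-extracted CRS flags vs. a compliance-database reference.
llm_flags = [1, 1, 0, 0, 1, 0]
reference = [1, 0, 0, 0, 1, 1]
m = classification_metrics(llm_flags, reference)
```

Note that with imbalanced event rates, accuracy and F1 can diverge sharply, which is why the abstract reports both alongside sensitivity and specificity.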
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,493 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,377 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,835 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,555 citations
Authors
- Emmanuel Contreras Guzman
- Matthew Jankowski
- Andre de Menezes Silva Corraes
- Malvika Gupta
- Monica Shaw
- Madiha Iqbal
- Talal Hilal
- Saurabh Chhabra
- Ricardo Parrondo
- Jody McLean
- Kim R. Riester
- Kayla Joseph
- Melinda Tan
- Holly Ross
- Cleyonia Barnett
- Sylvia L. Carter
- Semy Girmay
- Rachel M. Wolan
- Milana Ramsey
- Christian Downhour
- Kristy Morgan
- Shae Sibley
- Erica C. Rushing
- Lucy Holmes
- Allison R. Burgstahler
- Stephen M. Ansell
- Hassan B. Alkhateeb
- Matthew Hathcock
- Ramona L. Bruno
- Allison Rosenthal
- Hemant S. Murthy
- Patrick B. Johnston
- J. Paludo
- Yi Lin