This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Combining machine learning models and rule engines in clinical decision systems: Exploring optimal aggregation methods for vaccine hesitancy prediction
Citations: 6
Authors: 3
Year: 2025
Abstract
BACKGROUND: With the increasing application of artificial intelligence (AI) technologies in the healthcare sector and the emergence of new solutions, such as large language models, there is a growing need to combine medical knowledge, often expressed as clinical rules, with advances in machine learning (ML) that offer higher prediction accuracy at the expense of decision-making transparency.

PURPOSE: This study investigates the efficacy of various aggregation methods that combine the decisions of an AI model and a clinical rule-based (RB) engine in predicting vaccine hesitancy, with the goal of maximizing the effectiveness of patient incentive programs. This is the first study of a parallel ensemble of rules and machine learning in a clinical context that proposes RB confidence-led fusion of ML and RB inference.

METHODS: A clinical decision system for predicting vaccine hesitancy is developed from a differentially private set of longitudinal health records of 974,000 US patients and clinical rules obtained from the current literature. Various approaches based on possibility theory were explored to maximize classification accuracy, capture rates, and hurdle rates while ensuring trustworthiness in clinical interventions.

RESULTS: Our findings reveal that the hybrid approach outperforms the individual ML models and RB systems when both transparency and accuracy are critical. An RB confidence-led approach emerged as the most effective method: when the two classifiers disagree, the aggregation relies on the RB result when the RB engine has high confidence (expressed as more than the median degree of membership to the vaccination-hesitancy output function) and on the ML prediction when the RB engine exhibits lower confidence.

CONCLUSIONS: Implementing such an aggregation method preserves the accuracy and capture rates of a clinical decision system while potentially improving acceptance among healthcare providers.
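The RB confidence-led fusion rule described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function and parameter names are hypothetical, and the median membership threshold is assumed to be computed over the cohort's RB outputs as the abstract suggests.

```python
from statistics import median

def fuse_predictions(ml_pred: int, rb_pred: int,
                     rb_membership: float,
                     median_membership: float) -> int:
    """RB-confidence-led fusion of a machine-learning prediction and a
    rule-based (RB) prediction (hypothetical sketch).

    When the two classifiers agree, the shared class is returned. On a
    mismatch, the RB result wins if its degree of membership to the
    vaccine-hesitancy output function exceeds the median membership
    observed across the cohort; otherwise the ML prediction wins.
    """
    if ml_pred == rb_pred:
        return ml_pred
    # Mismatch: defer to the more confident source.
    return rb_pred if rb_membership > median_membership else ml_pred

# Example: the confidence threshold is the median RB membership over a
# (toy) cohort of previously scored patients.
cohort_memberships = [0.15, 0.40, 0.55, 0.70, 0.90]
threshold = median(cohort_memberships)  # 0.55

print(fuse_predictions(1, 0, rb_membership=0.85, median_membership=threshold))  # RB confident -> 0
print(fuse_predictions(1, 0, rb_membership=0.20, median_membership=threshold))  # RB unsure -> 1
```

The design choice mirrored here is that disagreements are resolved by the transparent RB engine only when it is demonstrably confident, which is what the abstract credits for preserving accuracy while improving clinician acceptance.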
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,652 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,567 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,083 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,856 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations