Dies ist eine Übersichtsseite mit Metadaten zu dieser wissenschaftlichen Arbeit. Der vollständige Artikel ist beim Verlag verfügbar.
Mind the Gap: Evaluating Domain-Specific Language Models Beyond General Benchmarks in Exercise Health AI (Preprint)
0
Zitationen
4
Autoren
2025
Jahr
Abstract
<sec> <title>UNSTRUCTURED</title> Standard benchmarks used for general large language models (LLMs) are inadequate for evaluating specialized models in the high-stakes exercise health domain, where safety, personalization, and deep domain knowledge are paramount. Current evaluation practices, including expert scoring, knowledge-based Q&A, and user feedback, offer limited insights into real-world applicability and clinical robustness. This discussion highlights these critical evaluation gaps and argues for the necessity of a multi-faceted, domain-specific evaluation framework. We propose incorporating strategies such as contextualized scenario simulations, comparative benchmarking against professional tools, structured cross-disciplinary audits, iterative real-world feedback loops, and rigorous adversarial safety testing. Developing and adopting such tailored, comprehensive evaluation methods is crucial for ensuring the reliability, safety, and effectiveness of LLMs, thereby fostering trust and enabling their responsible integration into exercise health practice to benefit athletes and patients. </sec>
Ähnliche Arbeiten
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20.702 Zit.
Generative Adversarial Nets
2023 · 19.895 Zit.
Visualizing and Understanding Convolutional Networks
2014 · 15.323 Zit.
"Why Should I Trust You?"
2016 · 14.544 Zit.
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13.195 Zit.