This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Model Hemorrhage and the Robustness Limits of Large Language Models
Citations: 0 · Authors: 7 · Year: 2025
Abstract
Large language models (LLMs) demonstrate strong performance across natural language processing tasks, yet undergo significant performance degradation when modified for deployment through quantization, pruning, or decoding strategy adjustments. We define this phenomenon as model hemorrhage - performance decline caused by parameter alterations and architectural changes. Through systematic analysis of various LLM frameworks, we identify key vulnerability patterns: layer expansion frequently disrupts attention mechanisms, compression techniques induce information loss cascades, and decoding adjustments amplify prediction divergences. Our investigation reveals transformer architectures exhibit inherent robustness thresholds that determine hemorrhage severity across modification types. We propose three mitigation strategies: gradient-aware pruning preserves critical weight pathways, dynamic quantization scaling maintains activation integrity, and decoding calibration aligns generation trajectories with original model distributions. This work establishes foundational metrics for evaluating model stability during adaptation, providing practical guidelines for maintaining performance while enabling efficient LLM deployment. Our findings advance understanding of neural network resilience under architectural transformations, particularly for large-scale language models.
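The abstract names gradient-aware pruning as one mitigation strategy but does not spell out its criterion. Below is a minimal, hypothetical PyTorch sketch assuming a first-order saliency score |w · ∂L/∂w| (a common Taylor-expansion proxy for weight importance); the function name, the `sparsity` parameter, and the score itself are illustrative assumptions, not the authors' published method.

```python
import torch
import torch.nn as nn

def gradient_aware_prune(model: nn.Module, loss: torch.Tensor, sparsity: float = 0.5) -> None:
    """Zero the `sparsity` fraction of weights with the lowest |w * dL/dw| saliency,
    keeping the weight pathways the loss depends on most (illustrative criterion)."""
    loss.backward()  # populate .grad on every parameter
    with torch.no_grad():
        for module in model.modules():
            if isinstance(module, nn.Linear) and module.weight.grad is not None:
                w = module.weight
                saliency = (w * module.weight.grad).abs()
                k = int(sparsity * saliency.numel())
                if k == 0:
                    continue
                # the k-th smallest saliency value serves as the pruning threshold
                threshold = saliency.flatten().kthvalue(k).values
                w.mul_((saliency > threshold).to(w.dtype))  # zero low-saliency weights

# Usage on a toy model:
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))
loss = nn.functional.cross_entropy(model(x), y)
gradient_aware_prune(model, loss, sparsity=0.5)
```

Magnitude-only pruning would drop small weights even when the loss is sensitive to them; weighting by the gradient, as sketched here, is one standard way to approximate that sensitivity in a single backward pass.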
Related Works
Rethinking the Inception Architecture for Computer Vision
2016 · 30,684 citations
MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018 · 24,960 citations
CBAM: Convolutional Block Attention Module
2018 · 21,777 citations
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2020 · 21,493 citations
Xception: Deep Learning with Depthwise Separable Convolutions
2017 · 18,690 citations