This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Significance Weighting in Large Language Models and RAG: Cross-Architecture Behavioral Evidence
Citations: 0
Authors: 1
Year: 2026
Abstract
Large language models and RAG systems routinely operate in environments where authority is contested, identities overlap, and probabilistic inference preserves multiple plausible readings without determining which distinctions should govern analysis or action. This paper reports empirical evidence that explicit significance weighting, formalized as an S-vector with dimensions for identity stability (Sr), operational consequence (Sc), and temporal relevance (Su), among others, produces systematic and convergent effects on reasoning behavior across architecturally distinct language model and RAG systems. We tested significance-guided reasoning using structured scenarios requiring resolution of contested authority where inference alone proved insufficient. Testing covered two system classes: four frontier conversational models and three retrieval-augmented generation systems (OpenAI GPT-5.2, Google Gemini 3.0, Anthropic Claude Sonnet 4.5, xAI Grok 4.1, NotebookLM, Claude Projects, Perplexity). All systems were evaluated under controlled conditions comparing inference-only responses with significance-weighted responses using identical scenario content. Where reasoning traces were available, S-vector application reduced reasoning effort by 40-60% while improving completion rates. In retrieval-augmented systems, significance weighting addressed a distinct failure mode: not knowledge insufficiency, but domain collision among equally well-sourced competing truths. All three RAG systems produced identical priority orderings under significance criteria that they could not generate through inference alone, demonstrating that the framework enables principle-based resolution of cross-domain authority conflicts without narrative synthesis or institutional defaulting.
All seven systems tested, spanning two architectural classes and four organizations, converged on identical operational priority orderings under significance criteria, a consensus none could generate through inference or retrieval alone. These findings establish behavioral validity for significance weighting as a governance mechanism for semantic ambiguity in large language models. The observed effects emerged at the prompt level without architectural modification, suggesting immediate production applicability, while validating the theoretical framework for deeper integration. The results position significance weighting as a missing control layer in contemporary language model systems, one that becomes essential as retrieval breadth expands and operational deployment requires resolution of contested claims under conditions of genuine ambiguity.
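To make the abstract's mechanism concrete, the S-vector and its use for priority ordering can be sketched minimally. The dimension names (Sr, Sc, Su) come from the abstract; everything else below, including the value range, the weighted-sum aggregation, and the example claims, is an illustrative assumption, not the paper's actual formalization.

```python
from dataclasses import dataclass

@dataclass
class SVector:
    """Significance vector per the abstract: identity stability (Sr),
    operational consequence (Sc), temporal relevance (Su).
    Values in [0, 1] are an assumption of this sketch."""
    sr: float  # identity stability
    sc: float  # operational consequence
    su: float  # temporal relevance

    def score(self, weights=(1.0, 1.0, 1.0)) -> float:
        # Hypothetical aggregation: a plain weighted sum. The paper
        # does not specify how the dimensions combine.
        wr, wc, wu = weights
        return wr * self.sr + wc * self.sc + wu * self.su

# Hypothetical contested claims from different retrieval domains,
# each equally well-sourced but assigned different significance.
claims = {
    "regulatory_ruling": SVector(sr=0.9, sc=0.8, su=0.6),
    "press_release":     SVector(sr=0.4, sc=0.3, su=0.9),
    "archived_policy":   SVector(sr=0.8, sc=0.5, su=0.2),
}

# A significance-weighted priority ordering, rather than leaving the
# conflict to inference or retrieval rank alone.
ordering = sorted(claims, key=lambda k: claims[k].score(), reverse=True)
print(ordering)  # → ['regulatory_ruling', 'press_release', 'archived_policy']
```

In this toy setup the regulatory ruling wins because it scores highest on identity stability and operational consequence, even though the press release is the most temporally recent, which mirrors the kind of cross-domain resolution the abstract describes.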
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,626 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,532 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,046 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,843 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations