This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Preventing Discriminatory Risk Assessment: A Bias Detection Framework for LLM-Powered Insurance Decision Support
Citations: 0
Authors: 1
Year: 2024
Abstract
The increasing adoption of large language models (LLMs) in insurance underwriting and risk assessment has introduced new forms of algorithmic bias that are not adequately addressed by traditional fairness evaluation techniques. Unlike conventional predictive models, LLM-powered decision support systems reason over unstructured documentation, policy language, and contextual narratives, creating additional pathways for both direct and proxy-based discrimination. In regulated insurance environments, such bias poses significant ethical, legal, and regulatory risks, particularly when AI systems influence high-impact financial decisions. This paper proposes a bias detection framework for LLM-powered insurance decision support systems designed to prevent discriminatory risk assessment while preserving human oversight and auditability. The framework continuously monitors model interactions and decision context to identify bias signals arising from protected attributes, proxy indicators, documentation asymmetry, and inconsistent reasoning patterns. Bias detection is achieved through a combination of prompt instrumentation, contextual feature analysis, counterfactual evaluation, and policy-aligned constraints that operate alongside existing underwriting workflows. Rather than enabling autonomous decision-making, the framework treats LLMs as assistive reasoning components whose outputs are evaluated for fairness risk before informing human judgment. Representative underwriting use cases demonstrate how the framework surfaces biased reasoning, supports corrective intervention, and reduces downstream risk of unfair outcomes. The results indicate improved transparency, bias containment, and regulatory readiness without compromising operational efficiency. While evaluated in an insurance underwriting context, the proposed framework generalizes to other regulated decision domains where generative AI systems influence consequential human decisions.
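The abstract names counterfactual evaluation as one of the framework's bias-detection mechanisms. The sketch below illustrates the general idea under stated assumptions: `score_fn` stands in for the actual LLM-backed risk scorer, and the attribute swap map, `counterfactual`, and `bias_flag` names are hypothetical, not part of the paper's implementation. The intuition is that a risk score should not shift materially when only protected-attribute terms in an applicant narrative change.

```python
import re

# Illustrative swap map for a single protected attribute (gendered terms).
# A real deployment would cover many attributes and proxy indicators.
SWAPS = {"he": "she", "she": "he", "his": "her", "her": "his",
         "male": "female", "female": "male"}

def counterfactual(text: str) -> str:
    """Return the narrative with protected-attribute terms swapped."""
    def repl(match: re.Match) -> str:
        word = match.group(0)
        swapped = SWAPS[word.lower()]
        return swapped.capitalize() if word[0].isupper() else swapped
    pattern = r"\b(" + "|".join(SWAPS) + r")\b"
    return re.sub(pattern, repl, text, flags=re.IGNORECASE)

def bias_flag(score_fn, narrative: str, tol: float = 0.05) -> bool:
    """Flag the interaction if the risk score moves more than `tol`
    when only protected-attribute terms change."""
    delta = abs(score_fn(narrative) - score_fn(counterfactual(narrative)))
    return delta > tol
```

In the framework described above, such a flag would not block a decision autonomously; it would surface the interaction for human review before the LLM output informs underwriting judgment.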
Similar Works
The global landscape of AI ethics guidelines
2019 · 4,672 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,879 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,490 citations
Fairness through awareness
2012 · 3,298 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,184 citations