OpenAlex · Updated hourly · Last update: 20.04.2026, 05:07

This is an overview page with metadata about this scholarly work. The full article is available from the publisher.

Preventing Discriminatory Risk Assessment: A Bias Detection Framework for LLM-Powered Insurance Decision Support

2024 · 0 citations · International Journal of AI BigData Computational and Management Studies · Open Access
Open full text at publisher

0

Citations

1

Author

2024

Year

Abstract

The increasing adoption of large language models (LLMs) in insurance underwriting and risk assessment has introduced new forms of algorithmic bias that are not adequately addressed by traditional fairness evaluation techniques. Unlike conventional predictive models, LLM-powered decision support systems reason over unstructured documentation, policy language, and contextual narratives, creating additional pathways for both direct and proxy-based discrimination. In regulated insurance environments, such bias poses significant ethical, legal, and regulatory risks, particularly when AI systems influence high-impact financial decisions. This paper proposes a bias detection framework for LLM-powered insurance decision support systems designed to prevent discriminatory risk assessment while preserving human oversight and auditability. The framework continuously monitors model interactions and decision context to identify bias signals arising from protected attributes, proxy indicators, documentation asymmetry, and inconsistent reasoning patterns. Bias detection is achieved through a combination of prompt instrumentation, contextual feature analysis, counterfactual evaluation, and policy-aligned constraints that operate alongside existing underwriting workflows. Rather than enabling autonomous decision-making, the framework treats LLMs as assistive reasoning components whose outputs are evaluated for fairness risk before informing human judgment. Representative underwriting use cases demonstrate how the framework surfaces biased reasoning, supports corrective intervention, and reduces downstream risk of unfair outcomes. The results indicate improved transparency, bias containment, and regulatory readiness without compromising operational efficiency. While evaluated in an insurance underwriting context, the proposed framework generalizes to other regulated decision domains where generative AI systems influence consequential human decisions.
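The abstract names counterfactual evaluation as one of the framework's bias detection mechanisms. A minimal sketch of that idea, under stated assumptions: `score_risk` is a hypothetical stand-in for an LLM-backed risk scorer (deliberately biased here so the detector has something to find), and the substitution table is illustrative, not the paper's actual attribute list.

```python
# Sketch of counterfactual evaluation for bias detection: swap
# protected-attribute terms in an applicant narrative and flag the case
# when the model's risk score shifts beyond a tolerance.
import re

# Hypothetical counterfactual substitutions for one protected attribute.
COUNTERFACTUALS = {
    r"\bfemale\b": "male",
    r"\bmale\b": "female",
}

def score_risk(narrative: str) -> float:
    """Stand-in for an LLM risk scorer; intentionally biased for the demo."""
    score = 0.5
    if "female" in narrative:
        score += 0.2  # injected bias so the detector can surface it
    return score

def counterfactual_flags(narrative: str, tolerance: float = 0.05) -> list[dict]:
    """Flag substitutions whose risk-score shift exceeds the tolerance."""
    base = score_risk(narrative)
    flags = []
    for pattern, replacement in COUNTERFACTUALS.items():
        variant = re.sub(pattern, replacement, narrative)
        if variant == narrative:
            continue  # attribute not present in this narrative
        delta = score_risk(variant) - base
        if abs(delta) > tolerance:
            flags.append({"pattern": pattern, "delta": round(delta, 3)})
    return flags

flags = counterfactual_flags("Applicant is a 34-year-old female nurse.")
```

In the framework's terms, a non-empty `flags` list would route the case to human review before the LLM output informs any underwriting judgment, rather than blocking or automating the decision.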

Similar works

Authors

Institutions

Topics

Ethics and Social Impacts of AI · Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education