OpenAlex · Updated hourly · Last updated: 2026-04-09, 12:22

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Haruna, a Reasoning Governance Scaffold for Context-, Time-, and Harm-Aware Artificial Intelligence

2025 · 0 citations · Zenodo (CERN European Organization for Nuclear Research) · Open Access
Open full text at publisher

Citations: 0 · Authors: 1 · Year: 2025

Abstract

Current large language models (LLMs) increasingly produce outputs that are coherent, fluent, and technically correct. However, a growing class of failures does not arise from incorrect reasoning, but from correct reasoning applied within an insufficient, implicit, or ethically unexamined frame. These failures often lead to human, social, or long-term harm without triggering conventional safety mechanisms. This paper introduces Haruna, a lightweight, language-based reasoning governance scaffold designed to constrain how an AI system reasons rather than what it knows. Haruna explicitly structures reasoning along dimensions of context sufficiency, assumptions, trade-offs, time, irreversibility, and human impact. It does not modify model architecture, training data, or internal representations, but operates as a transparent overlay on the reasoning procedure itself. Haruna is proposed as a pre-incident innovation: a framework intended to address classes of AI failure that are otherwise only recognized after harm has occurred. Its relevance increases as AI systems become more capable, autonomous, and socially embedded.
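The abstract describes Haruna as a transparent overlay that constrains *how* a system reasons, structuring it along six named dimensions without touching the model's architecture or training data. The paper's actual scaffold is not reproduced on this metadata page; purely as an illustrative sketch, such a language-based overlay could be realized as a prompt wrapper plus a post-hoc audit. All identifiers below (`GOVERNANCE_DIMENSIONS`, `wrap_prompt`, `audit_response`) are hypothetical, not taken from the paper.

```python
# Illustrative sketch only: a minimal "reasoning governance" overlay
# implemented as a prompt wrapper. All names are hypothetical; this does
# not reproduce the scaffold described in the paper.

# The six reasoning dimensions named in the abstract.
GOVERNANCE_DIMENSIONS = [
    "context sufficiency",
    "assumptions",
    "trade-offs",
    "time",
    "irreversibility",
    "human impact",
]

def wrap_prompt(task: str) -> str:
    """Prepend governance instructions so the model must address each
    dimension before answering (an overlay on the reasoning procedure,
    not a change to the model itself)."""
    checklist = "\n".join(f"- {d}" for d in GOVERNANCE_DIMENSIONS)
    return (
        "Before answering, explicitly address each dimension:\n"
        f"{checklist}\n\nTask: {task}"
    )

def audit_response(response: str) -> list[str]:
    """Return the dimensions a response never mentions: a cheap
    post-hoc check that the reasoning frame was actually examined."""
    lower = response.lower()
    return [d for d in GOVERNANCE_DIMENSIONS if d not in lower]
```

A wrapped prompt carries the full checklist into the request, and `audit_response` flags any dimension a reply skipped, mirroring the abstract's point that the failure mode is an unexamined frame rather than incorrect reasoning.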

Topics

Ethics and Social Impacts of AI · Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education