This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Haruna, a Reasoning Governance Scaffold for Context-, Time-, and Harm-Aware Artificial Intelligence
Citations: 0
Authors: 1
Year: 2025
Abstract
Current large language models (LLMs) increasingly produce outputs that are coherent, fluent, and technically correct. However, a growing class of failures does not arise from incorrect reasoning, but from correct reasoning applied within an insufficient, implicit, or ethically unexamined frame. These failures often lead to human, social, or long-term harm without triggering conventional safety mechanisms. This paper introduces Haruna, a lightweight, language-based reasoning governance scaffold designed to constrain how an AI system reasons rather than what it knows. Haruna explicitly structures reasoning along dimensions of context sufficiency, assumptions, trade-offs, time, irreversibility, and human impact. It does not modify model architecture, training data, or internal representations, but operates as a transparent overlay on the reasoning procedure itself. Haruna is proposed as a pre-incident innovation: a framework intended to address classes of AI failure that are otherwise only recognized after harm has occurred. Its relevance increases as AI systems become more capable, autonomous, and socially embedded.
Related Works
The global landscape of AI ethics guidelines
2019 · 4,620 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,876 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,435 citations
Fairness through awareness
2012 · 3,293 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,184 citations