OpenAlex · Updated hourly · Last updated: 13.05.2026, 05:07

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

An Empirical Study of Structured Prompt Engineering in Large Language Models

2026 · 0 citations · Zenodo (CERN European Organization for Nuclear Research) · Open Access
Open full text at the publisher

Citations: 0
Authors: 2
Year: 2026

Abstract

Large Language Models (LLMs) have significantly transformed artificial intelligence by enabling advanced reasoning, natural language understanding, and content generation. While architectural advancements and large-scale datasets contribute to their effectiveness, prompt engineering has emerged as a critical factor influencing output quality and reliability. This study presents a structured empirical evaluation of five prompting strategies: unstructured, role-based, chain-of-thought (CoT), instructional, and constraint-based prompting. The findings indicate that structured prompting techniques improve accuracy by up to 35% in reasoning-intensive tasks and significantly reduce hallucinations.

Key contributions:
- Development of a structured evaluation framework for prompt strategies.
- Comparative analysis across multiple task domains.
- Quantitative assessment of reasoning and safety improvements.
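The abstract names five prompting strategies but does not reproduce the paper's actual prompts. The sketch below illustrates what each strategy typically looks like when applied to the same question; the question text, wording, and function names are hypothetical, not taken from the study.

```python
# Hypothetical prompt templates illustrating the five strategies named in
# the abstract. The example question and all wording are assumptions for
# illustration; the paper's actual prompts are not given here.

QUESTION = "If a train travels 60 km in 45 minutes, what is its average speed in km/h?"

def unstructured(q):
    # Unstructured: the raw question is sent with no added framing.
    return q

def role_based(q):
    # Role-based: assign the model a persona before stating the task.
    return f"You are an expert math tutor. {q}"

def chain_of_thought(q):
    # Chain-of-thought (CoT): elicit explicit intermediate reasoning steps.
    return f"{q}\nLet's think step by step."

def instructional(q):
    # Instructional: give explicit task instructions and an output format.
    return ("Solve the following problem. Show your work, then state the "
            f"final answer on its own line.\n\nProblem: {q}")

def constraint_based(q):
    # Constraint-based: bound the answer to the given data, which is one
    # common way such prompts aim to reduce hallucinations.
    return (f"{q}\nUse only the numbers given in the question. If the "
            "question cannot be answered from the given data, reply "
            "'insufficient data'.")

if __name__ == "__main__":
    for build in (unstructured, role_based, chain_of_thought,
                  instructional, constraint_based):
        print(f"--- {build.__name__} ---")
        print(build(QUESTION))
        print()
```

Each function returns a string that would be sent as the prompt; comparing model outputs across these five variants on the same task set is the kind of controlled comparison the abstract describes.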

Topics

Topic Modeling · Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education