This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Generative AI in public administration: evaluating a fine-tuned large language model for policy briefing notes
Citations: 0
Authors: 3
Year: 2026
Abstract
Recent literature shows that whilst Generative Artificial Intelligence may not be the cream of the crop for policy analysis, its potential for specific writing and synthesis tasks is increasingly evident. Large Language Models (LLMs) have demonstrated notable competencies for policy work, such as drafting policy and interpreting complex legislation. In this article, we leveraged model fine-tuning to customize a base LLM for policy briefing notes in the Canadian context. We answer the question: can fine-tuning a base model on past policy data make it better than a general-purpose foundation model? The study used the fine-tuning capabilities of the Python programming language and Google's compute resources to train a policy-specialized model, which was then deployed and tested by human evaluators. Results suggest this could be a non-negligible technique for public organizations seeking models capable of producing specialized policy briefing notes.
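The abstract does not detail the fine-tuning pipeline, but the usual first step of such work, converting archived documents into supervised prompt/completion pairs, can be sketched as follows. This is a minimal illustration, not the authors' method; the function name, field names, and prompt template are all assumptions.

```python
import json

def build_finetune_records(past_notes):
    """Turn archived briefing notes into prompt/completion training pairs.

    `past_notes` is a list of dicts with hypothetical "topic" and "text"
    fields; a real pipeline would also clean, deduplicate, and redact data.
    """
    records = []
    for note in past_notes:
        records.append({
            "prompt": f"Write a policy briefing note on: {note['topic']}",
            "completion": note["text"],
        })
    return records

# Example: serialize to JSONL, a common input format for fine-tuning tools.
past_notes = [
    {"topic": "Housing affordability", "text": "Issue: Rising housing costs..."},
]
jsonl = "\n".join(json.dumps(r) for r in build_finetune_records(past_notes))
print(jsonl)
```

Each JSONL line pairs an instruction-style prompt with a reference completion, the shape expected by most supervised fine-tuning toolchains.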
Similar works
Techniques to Identify Themes (2003) · 5,405 citations
Answering the Call for a Standard Reliability Measure for Coding Data (2007) · 4,111 citations
Basic Content Analysis (1990) · 4,045 citations
Text as Data: The Promise and Pitfalls of Automatic Content Analysis Methods for Political Texts (2013) · 3,109 citations