Abstract
This essay reframes administrative decision-making in the generative AI era by identifying how epistemic constraints, rather than traditional information constraints, shape administrative rationality. We introduce the concept of epistemic boundedness: the inability to verify the veracity and foundations of available information. Large language models (LLMs) exemplify this challenge through their opaque reasoning processes and their tendency to produce plausible but inaccurate outputs. We propose sociotechnical strategies to mitigate these constraints, including retrieval-augmented generation (RAG) and institutionalized verification procedures for AI-generated content. By implementing these complementary strategies, government agencies can leverage LLMs’ capabilities while preserving the integrity and accountability of administrative decision-making processes.
