On Governing Generative AI: Taming the Chimera
K.A. Taipale *
AI-Generated Abstract (lightly edited):
Generative AI systems do not think, know, or intend. They simulate coherence, perform understanding, and produce language that appears meaningful without grounding in truth or experience. They invite belief not through comprehension but through the familiarity of their performance. These plausibility machines conjure illusion by design rather than earning trust through reason.
This monograph argues that the central risk of generative AI is not sentient rebellion, but epistemic seduction: the subtle displacement of judgment by simulation and the erosion of knowledge norms by systems that perform truth without possessing it.
Drawing on law, philosophy, cognitive science, and systems architecture, Taming the Chimera proposes a framework for governing these systems—not as minds to be aligned, but as infrastructures to be inspected, structured, and held explicitly accountable. Accountability emerges as the foundational principle, ensuring transparency in assigning responsibility for epistemic integrity, system reliability, and societal impacts. Governance is thus reframed as an enabler of innovation: a means to embed epistemic friction, expose illusion, and cultivate trustworthiness at scale through clearly defined institutional and technical accountability mechanisms.
This work unpacks the performative nature of generative outputs, critiques the limits of current alignment discourse, and introduces the concept of credibility architecture: institutional structures and technical features specifically designed to surface epistemic instability, disclose drift, and sustain accountability in generative systems.
Rather than fixate on fantastical futures, it confronts the urgent present: governing simulation responsibly and accountably in the public interest. Governance, in this view, is not a constraint, but a foundation—an architecture of accountable reliability for an age of synthetic fluency.