On Governing Generative AI: Taming the Chimera

K. A. Taipale *

Table of Contents

Preface

Prolegomenon – A New Monster

Reframes the Frankenstein myth by introducing the Chimera as a more precise metaphor for generative AI. Highlights generative AI’s simulation rather than intentional action, setting the central theme of epistemic illusion and the urgent governance stakes associated with these plausibility machines.

Chapter 1 – Governing Generative AI: The Rise of Plausibility Machines

Introduces generative AI explicitly as plausibility machines—systems that simulate coherent outputs without genuine comprehension. Repositions hallucination as an inherent property, not a defect. Explores epistemic friction and reframes governance as infrastructural and epistemic rather than behavioral.

Chapter 2 – Architectural Agency: Speaking Without Saying

Explores how generative AI models function as editorial infrastructures rather than neutral tools. Defines architectural agency and its implications for responsibility, emphasizing embedded decision-making, curation, and epistemic integrity.

Chapter 3 – Felicity and Illusion: How Generative Models Perform Credibility

Employs speech-act theory to analyze generative AI, demonstrating how these systems produce outputs that appear credible without actual grounding or intentionality. Highlights the substrate factors—training data and regime, architectural decisions, and reinforcement—that shape these credibility illusions.

Chapter 4 – Beyond Skynet: Sentience as Distraction

Critiques popular narratives focused on artificial general intelligence (AGI) and sentient rebellion. Argues these stories obscure the real epistemic and infrastructural risks posed by generative AI, including the false impressions of agency and understanding.

Chapter 5 – Drift: Coherence Without Continuity

Introduces epistemic drift, the tendency of generative AI to shift positions unpredictably across contexts. Examines instability in sustained reasoning and domain-specific tasks, offering strategies like contradiction detection and epistemic scaffolding to maintain coherence.

Chapter 6 – Embodied Interface: Smiling Puppets

Addresses the emerging risks of embodied generative AI systems—robots, avatars, and agents simulating human presence and emotion. Warns of the potential for deceptive realism and affective trust, proposing constraints and transparency to maintain epistemic clarity.

Chapter 7 – The Alignment Illusion: Surface Control Masquerading as Safety

Reevaluates alignment discourse, critiquing its practice as a superficial form of behavioral control—important, but insufficient for epistemic safety. Frames alignment as rhetorical compliance and argues for deeper, architectural solutions.

Chapter 8 – Credibility Architectures: Toward Epistemic Governance

Proposes credibility architectures as a structured, architectural approach to epistemic governance. Outlines design principles and regulatory mechanisms that foreground uncertainty, enforce transparency, and institutionalize accountability in generative AI systems.

Epilogue – Taming the Chimera

Reiterates the central argument, emphasizing the importance of managing illusion rather than fearing rebellion. Calls for governance approaches grounded in friction, transparency, and thoughtful regulation to protect democratic epistemic integrity.

Postscript on Control

Appendices

  1. Glossary
  2. Technical Primer