Dan Davies argues in The Unaccountability Machine that modern institutional systems become complex enough that no individual meaningfully understands the whole, so when something goes wrong, accountability diffuses through layers of legitimate partial responsibility until it disappears. Not through malice. Through opacity plus incentive layering.

The AI version of this is already visible. Model training pipelines are opaque. Decision boundaries are unclear. Responsibility for model behavior is fragmented across research, infrastructure, product, policy, and legal in ways that make it genuinely difficult to identify who owns a specific output characteristic. When a model produces a harmful output, the org chart doesn’t have a clear answer for whose decision that was.

Davies is right that this is a systemic risk. Where I think the AI case goes further: Davies focuses on systems losing coherence over time, with accountability eroding through accumulated complexity. In AI there's an additional layer of fragility that Davies doesn't fully address: automation compresses decision velocity beyond human governance capacity.

Institutional unaccountability is slow. A corporation makes a bad decision; the harm unfolds over months or years; the accountability diffusion happens on a human timescale. AI systems can make millions of decisions in the time it takes a governance process to notice that anything has happened. The unaccountability isn't just structural; it's temporal. The reaction-time gap between AI decision velocity and human oversight capacity creates a different category of risk from the one Davies maps.
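To make the timescale gap concrete, here is a back-of-envelope calculation. The throughput and review-cycle figures are illustrative assumptions, not measurements of any real system:

```python
# Illustrative arithmetic: how many automated decisions accumulate before
# a human governance cycle can even register that something is wrong.
# Both rates below are assumptions chosen for illustration.

decisions_per_second = 100   # assumed throughput of an automated system
review_cycle_days = 90       # assumed quarterly governance review

seconds_per_cycle = review_cycle_days * 24 * 60 * 60
decisions_per_cycle = decisions_per_second * seconds_per_cycle

print(f"{decisions_per_cycle:,} decisions per review cycle")
# → 777,600,000 decisions per review cycle
```

Even at a modest assumed rate, hundreds of millions of decisions land between one human checkpoint and the next, which is the sense in which the gap is structural rather than a matter of reviewers working faster.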

The design implication: AI governance modeled on institutional accountability frameworks will be too slow by a structural margin. The feedback loops need to be redesigned for the timescale at which AI systems actually operate. That's not only a policy problem. It's a systems architecture problem, and it requires designers fluent in both.
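One shape such a redesigned feedback loop can take is an automated circuit breaker: a check that runs inside the decision stream itself and halts the system at machine speed, with human review happening after the halt rather than gating every decision. This is a minimal sketch, not anyone's production design; all class names, thresholds, and window sizes are hypothetical:

```python
# Minimal sketch of a machine-timescale feedback loop: a circuit breaker
# that monitors a sliding window of decision outcomes and trips (halting
# the automated system) as soon as the flagged-decision rate exceeds a
# threshold. Names and numbers are illustrative assumptions.
from collections import deque


class CircuitBreaker:
    def __init__(self, window_size: int = 1000, max_flag_rate: float = 0.02):
        # Sliding window of the most recent decision outcomes.
        self.window = deque(maxlen=window_size)
        self.max_flag_rate = max_flag_rate  # assumed trip threshold
        self.tripped = False

    def record(self, flagged: bool) -> bool:
        """Record one decision outcome.

        Returns True while the system may keep serving decisions,
        False once the breaker has tripped.
        """
        self.window.append(flagged)
        # Only evaluate once the window is full, to avoid noisy early trips.
        if len(self.window) == self.window.maxlen:
            rate = sum(self.window) / len(self.window)
            if rate > self.max_flag_rate:
                # Halt first, at machine speed; humans review afterwards.
                self.tripped = True
        return not self.tripped
```

The design choice worth noticing is the ordering: the halt happens inside the loop, at decision velocity, and the slow human process inherits a stopped system instead of a running one. That inversion is what an institutional-style framework, where review precedes action, cannot provide at this timescale.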
