The most underappreciated implication of Davies’s unaccountability framework for AI development is that organizational structure is a risk factor, not just a management question.
How a team is structured determines what information flows where, who has authority over which decisions, and where accountability is formally located. In AI development, those structural decisions directly determine which failure modes get caught and which propagate undetected.
I’ve seen this specifically in design organizations adopting AI. When AI capability sits entirely in an engineering or data science function, and design interacts with it only through product requirements, the feedback loop between user-facing failure and model behavior is broken. Designers observe failures. They don’t have a direct path to the people who can address them. The information decays as it crosses organizational boundaries.
The governance implication: AI safety and quality are not problems you can solve with policy documents and review boards sitting outside the development process. They require structural integration — feedback paths that connect user-facing observation to model behavior, accountability that’s located at the level of decisions rather than distributed across functions, and information flows designed to surface the failures that matter rather than the ones that are easiest to measure.
Davies frames this as an institutional problem. In AI it’s also a velocity problem. The organizational structures inherited from slower-moving development processes are too slow for the feedback loops that AI system quality requires. Redesigning those structures is governance work that most organizations haven’t started.