Daugherty and Wilson’s framework in Human + Machine is useful for a specific reason: it forces explicit accounting of which cognitive functions AI should amplify and which should remain human-controlled. Not as a moral question — not “what should humans retain for dignity reasons” — but as a performance question: where does the human-machine combination actually outperform either alone?
Their answer, which I think is right as a starting point: AI holds the advantage in pattern recognition at scale, consistency, and processing speed; humans hold it in contextual judgment, value-laden decisions, and novel situations that don’t fit historical patterns. The optimal division routes each function to its comparative advantage.
What the framework understates: the division of labor is not static. When AI handles pattern recognition consistently, human pattern recognition skill atrophies. When AI surfaces risk signals reliably, human risk detection becomes less practiced. The comparative advantage calculation shifts over time as the human half of the system changes in response to what the AI half is doing.
This has direct implications for how you design the division. If you design a system that optimally divides labor at deployment, you may have a suboptimal division two years later because the human capabilities you were relying on have degraded from non-use. A static division-of-labor analysis is wrong for any system where human capability is use-dependent.
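The dynamic can be made concrete with a toy model. This is my sketch, not anything from the book: assume (hypothetically) that a skill decays toward a floor when unexercised and recovers toward a ceiling when practiced, with illustrative rates rather than empirical ones. The point is only the qualitative shape: hold everything constant except how often the human exercises the skill, and the "human half" of the system that the division was designed around is no longer the same system two years later.

```python
# Toy model of use-dependent skill (illustrative rates, not empirical).
# Skill decays exponentially toward a floor when idle and recovers
# toward a ceiling when practiced.

def skill_after(years, practice_rate, decay=0.3, learn=0.5,
                floor=0.2, ceiling=1.0, start=0.9, steps_per_year=12):
    """Simulate skill level given how often the human exercises it.

    practice_rate: fraction of decision opportunities still routed to
    the human (1.0 = pre-automation baseline, 0.0 = fully automated).
    All other parameters are made-up constants for illustration.
    """
    s = start
    dt = 1.0 / steps_per_year
    for _ in range(int(years * steps_per_year)):
        # practiced fraction pulls skill up; idle fraction lets it decay
        s += dt * (practice_rate * learn * (ceiling - s)
                   - (1 - practice_rate) * decay * (s - floor))
    return s

# The division was calibrated against a human at ~0.9 skill. After two
# years with the AI handling most of the pattern recognition, the human
# the division assumes no longer exists.
print(skill_after(2, practice_rate=1.0))  # frequent practice: skill holds
print(skill_after(2, practice_rate=0.1))  # rare practice: skill erodes
```

Under these assumed rates, routing only a tenth of the decisions to the human drops skill substantially within two years, which is exactly the scenario where the original comparative-advantage calculation silently stops holding.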
In margin systems, we designed the division so AI surfaces concentration risk patterns and humans make liquidation decisions. That division made sense with a team that had spent years developing risk intuition. Whether it still makes sense after two years of that intuition being exercised less frequently is a question the original design didn’t ask. It should have.
