Ethan Mollick’s most useful practical contribution is the framing of AI as collaborator rather than tool. It’s operationally productive — treating AI interaction as dialogue rather than command changes how people engage, produces better outputs, and builds more accurate mental models of system capability. I’ve applied it in workshop design, in how I structure prompt iteration, in how I coach designers working with AI for the first time.

Where I push back: Mollick is optimistic about humans’ natural ability to maintain appropriate epistemic distance from AI outputs. His collaborator framing assumes people will treat AI like a capable junior colleague — valuable, but requiring oversight, capable of being wrong, not the final word.

What I’ve observed: people often treat AI like an expert. Not because they’ve decided to. Because the outputs are fluent, confident, and arrive so quickly that independent verification stops feeling worth the effort. The authority transfer isn’t a conscious choice. It’s a cognitive default that the interface design does almost nothing to interrupt.

This matters because the collaborator framing, without countermeasures for authority transfer, produces a specific failure mode: users who are engaged with the AI, iterating with it, treating it as a dialogue partner, while systematically over-weighting its outputs relative to their own judgment. The collaboration is real. The epistemic dependency is also real. You can have both simultaneously, and the collaboration framing can obscure the dependency.

The design question Mollick doesn’t fully answer: how do you build systems that support genuine collaboration without triggering the authority transfer reflex? My working answer involves structuring interactions so users commit to their own assessment before seeing the model’s — but that introduces friction that most products won’t accept. The tension is real and I don’t think it’s been resolved.
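The commit-before-reveal structure can be sketched as a thin wrapper that refuses to show the model's output until the user's own assessment is on record. This is a minimal sketch of the pattern, not any real product's API — the names (`commit_then_reveal`, `Exchange`) are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Exchange:
    """One commit-before-reveal interaction, with both views preserved."""
    question: str
    user_assessment: str   # recorded first, before any model output is shown
    model_output: str      # only fetched/revealed after the user commits

def commit_then_reveal(question, get_user_assessment, get_model_output):
    """Enforce ordering: the user's independent judgment is captured
    before the model's answer is retrieved or displayed.
    The two callables stand in for a UI prompt and a model call."""
    # Step 1: the user commits to their own assessment, with nothing to anchor on.
    user_view = get_user_assessment(question)
    # Step 2: only then is the model's output fetched and revealed.
    model_view = get_model_output(question)
    return Exchange(question, user_view, model_view)
```

The ordering is the entire intervention: because `get_model_output` isn't called until after the user's answer exists, there is no way for the model's fluent output to anchor the user's judgment. The friction cost mentioned above is also visible here — the user cannot skip straight to the answer.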
