I used to believe that better AI UX meant smoother AI UX. Fewer interruptions, more seamless handoffs, a less visible seam between what the user intended and what the system produced. Smoothness as a proxy for quality.
I’ve changed my mind about this, and I want to be precise about where and why.
There’s a category of friction that is pure waste — confirmation dialogs that nobody reads, warnings that appear regardless of context, forced pauses that don’t give the user anything useful to do. Eliminating that friction is unambiguously correct.
There’s a different category of friction that is doing epistemic work. It’s the moment where a user has to stop, evaluate, commit. Where the system creates a pause not because it can’t proceed but because the decision at this point requires human judgment, and removing the pause would remove the judgment with it.
The insight from building QA automation: I originally designed the escalation path as a failure mode. When the model couldn’t decide, it escalated to a human. We framed this as a limitation to be minimized. What we observed was that the escalation moments were often the most valuable ones — the cases that exposed genuine ambiguity in design standards, that forced conversations that should have happened earlier, that generated the shared definitions that made the automated cases more reliable over time.
The friction was doing alignment work. Removing it would have removed the work.
I now think about friction in AI interfaces in terms of a question: is this pause asking the user to do something they’re capable of doing that the system shouldn’t do for them? If yes, the friction is probably warranted and possibly important. If the pause is asking the user to evaluate something they have no basis to evaluate, or to make a decision they have no information to make, that friction is waste.
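That question can be sketched as a tiny routing rule. This is purely illustrative; the names, fields, and threshold are hypothetical, not taken from the QA system described above:

```python
# Hypothetical sketch of the "is this friction warranted?" heuristic.
# All names (ReviewItem, user_has_context, threshold) are illustrative.
from dataclasses import dataclass


@dataclass
class ReviewItem:
    confidence: float       # system's confidence in its own judgment
    user_has_context: bool  # does the user have a real basis to evaluate this?


def should_pause(item: ReviewItem, threshold: float = 0.8) -> bool:
    """Pause only when the decision genuinely requires human judgment.

    Low confidence alone isn't enough: if the user has no basis to
    evaluate the case, the pause is waste, not epistemic work.
    """
    return item.confidence < threshold and item.user_has_context


# Ambiguous case the user can actually judge: pause is doing work.
should_pause(ReviewItem(confidence=0.5, user_has_context=True))   # True
# Ambiguous case the user can't judge: pausing would be waste.
should_pause(ReviewItem(confidence=0.5, user_has_context=False))  # False
```

The point of the sketch is the conjunction: uncertainty on the system's side is necessary but not sufficient. The second condition is the one that separates friction doing alignment work from friction that merely offloads a decision nobody is equipped to make.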
The design question isn’t “how do we reduce friction.” It’s “which friction is doing work that matters, and how do we design the rest away while preserving it.”