Earlier this year I started building an AI maturity framework for a design organization: a leveled progression from tool user to systems thinker, tied to IC levels, intended to create shared expectations and reduce the anxiety and role confusion that AI adoption was producing.

It was useful. It was also wrong in a way that took me several months to see clearly.

The framework assumed linear progression. IC2 develops basic tool fluency. IC3 develops workflow integration. IC4 develops evaluation capability. And so on. The implicit model was that AI capability is a single dimension you move along, and the job is to clarify the waypoints.

What I actually observed: designers don’t mature evenly across AI capability dimensions. Someone who was excellent at prompting for generative output was often weak at evaluating whether the output was reliable. Someone who understood model limitations conceptually struggled to integrate that understanding into real workflow decisions. Someone who was sophisticated about automation boundaries had never thought seriously about what happened when those boundaries were wrong.

AI capability isn’t ladder-shaped. It’s multidimensional, and the dimensions aren’t strongly correlated. A framework that implies linear progression papers over that structure and gives people a false sense of location. “I’m at level 4” is not an accurate description of a multidimensional capability profile.

The second failure was structural, not conceptual. The framework assumed that if clarity existed, adoption would follow. But designers are evaluated on shipping. Without explicit time allocation for experimentation and explicit leadership reinforcement that this mattered, the framework became a document people had read rather than a system they were developing within. The incentive structure was never redesigned to support the capability development the framework described.

What I’d build now is different. Not a ladder. A capability profile across four dimensions: accuracy of mental model of the underlying system, quality of failure anticipation, calibration of confidence in model outputs, and clarity about system boundaries. Each dimension has observable markers. None implies the others. The goal isn’t to move people up a ladder — it’s to give them an accurate picture of where they actually are, which turns out to be the prerequisite for developing in any direction.
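A profile like that is easy to represent directly. The sketch below is a hypothetical illustration, not an artifact from the actual framework: the four dimension names come from the paragraph above, while the 0–4 scoring scale and the field names are assumptions made for the example. The key design choice is that the profile reports each dimension separately and never collapses them into a single level.

```python
from dataclasses import dataclass

@dataclass
class CapabilityProfile:
    """Hypothetical four-dimension AI capability profile (scale 0-4 is assumed)."""
    mental_model_accuracy: int    # accuracy of mental model of the underlying system
    failure_anticipation: int     # quality of failure anticipation
    confidence_calibration: int   # calibration of confidence in model outputs
    boundary_clarity: int         # clarity about system boundaries

    def summary(self) -> dict:
        # Report every dimension on its own; deliberately no aggregate
        # "level", because the dimensions aren't assumed to correlate.
        return vars(self)

# Example: strong prompting-adjacent mental model, weak evaluation --
# exactly the uneven profile a single ladder level would hide.
profile = CapabilityProfile(
    mental_model_accuracy=3,
    failure_anticipation=1,
    confidence_calibration=1,
    boundary_clarity=2,
)
print(profile.summary())
```

Compare this with "I'm at level 4": the dataclass makes the uncorrelated dimensions visible instead of averaging them away.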
