When I built the AI maturity framework, I assumed the primary barrier to adoption was skill anxiety. Designers were worried they weren’t capable enough. If I could clarify what capability looked like at each level, the anxiety would ease and adoption would follow.
I was solving the wrong problem.
Skill anxiety was real, but it wasn’t the primary barrier. The primary barrier was incentive misalignment — and I didn’t see it clearly until I watched the framework stall out in organizations where leadership support was nominal rather than structural.
The pattern: an AI working group launches, a framework gets developed, there’s genuine enthusiasm in early workshops, and then three months later adoption is spotty and the framework has become a document rather than a practice. Not because people don’t believe in it. Because the day-to-day incentive structure hasn’t changed. Designers are still evaluated primarily on shipping velocity and output quality. Time spent developing AI fluency, running experiments, documenting failure patterns — that time has no explicit value in the system. So it gets deprioritized, and the framework becomes aspirational rather than operational.
This is a structural problem, not a skills problem. And it requires a structural solution: explicit time allocation, evaluation criteria that include capability development, leadership behavior that models AI engagement rather than just endorsing it, and feedback loops that make the organizational benefit of capability development visible.
The harder realization: frameworks that don’t account for the incentive structure they’re being deployed into will consistently underperform their design. Building the framework is the easy part. Redesigning the organizational conditions that determine whether the framework takes hold is the actual work. I underestimated that by a significant margin.