Two years of writing about AI and design. Six months of building. A clearer picture than I had at the start. Also a longer list of things I’m uncertain about.

The question I keep returning to: how do you design for human growth in a system that’s optimized for task completion? Those objectives are not naturally aligned. A system optimized for task completion will offload as much as possible to the model. A system optimized for human growth will preserve more for the human. The tension is real and most product decisions resolve it implicitly in favor of task completion, because that’s what the metrics capture.

I don’t have a clean answer to how you build for both simultaneously. My instinct is that the resolution is contextual — in some use cases, task completion is the right primary objective and growth is secondary; in learning contexts, the priority should invert. The design challenge is building systems that can distinguish which context they’re operating in and adjust accordingly. Current systems mostly can’t.

The position I’m most uncertain about: whether interaction-layer design can meaningfully shape alignment outcomes at the scale and diversity of deployment that frontier models experience. I believe it matters. I’m less sure how much it matters relative to training-time decisions I have no influence over.

The thing I’ve gotten most wrong across this period: underestimating how much organizational and incentive factors determine whether good design work has any impact. The best-designed capability framework, the most rigorous failure choreography, the most careful calibration work — all of it can be overridden by an evaluation culture that doesn’t reward the behaviors the design is trying to produce. The design problem and the organizational design problem are not separable, and I spent too long treating them as if they were.

That’s where I am. More specific than two years ago. Still getting things wrong. Expecting to update.