The first version of the AI literacy framework I built was essentially a capability ladder organized around tool fluency. Learn to prompt. Learn to evaluate outputs. Learn to integrate AI into a workflow. Learn to design AI systems. Each level added a layer of tool sophistication.
The framework produced designers who were more comfortable with tools but hadn't substantially changed how they reasoned about model outputs. I could see it in how they talked about model behavior: still treating outputs as either right or wrong, still surprised by inconsistency, still uncertain how to handle cases where the model was confidently wrong about something they couldn't independently verify.
The tool fluency was real. The epistemic development wasn’t happening.
What I should have been teaching from the beginning: how to hold probabilistic beliefs about system outputs. How to reason about confidence as a property of the evaluator, not just the model. How to notice when you’re deferring to model authority rather than exercising independent judgment. How to design for your own uncertainty, not just the model’s.
These are not AI-specific skills. They're epistemological habits that are useful in any complex-system domain, and especially critical in AI contexts, where the system's apparent confidence is decoupled from its actual reliability.
The reason I didn’t teach this originally: it’s harder to operationalize than tool training. “Here’s how to write a better prompt” has a clear curriculum. “Here’s how to develop calibrated uncertainty about model outputs” doesn’t fit neatly into a two-hour workshop. But the tool training without the epistemic foundation produces a specific kind of dangerous user — someone who is fluent with the tools and systematically over-trusts them.
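For what it's worth, "calibrated uncertainty" can be operationalized more concretely than it sounds. A minimal sketch of one possible exercise (the data and function here are hypothetical illustrations, not materials from the actual curriculum): before checking each model answer, the evaluator records how confident they are that it's correct; afterwards, stated confidence is compared against actual accuracy, bucketed by confidence level.

```python
def calibration_report(judgments, n_bins=4):
    """judgments: list of (stated_confidence in [0, 1], was_correct bool).

    Returns one (mean_confidence, accuracy, count) tuple per non-empty
    confidence bucket. Overconfidence shows up as mean_confidence > accuracy.
    """
    bins = [[] for _ in range(n_bins)]
    for conf, correct in judgments:
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0
        bins[idx].append((conf, correct))
    report = []
    for bucket in bins:
        if not bucket:
            continue
        mean_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(1 for _, ok in bucket if ok) / len(bucket)
        report.append((mean_conf, accuracy, len(bucket)))
    return report

# Hypothetical example: an evaluator who says "90% sure" but is right
# only half the time at that confidence level.
judgments = [(0.9, True), (0.9, False), (0.9, False), (0.9, True),
             (0.6, True), (0.6, True), (0.3, False)]
for mean_conf, acc, n in calibration_report(judgments):
    print(f"stated ~{mean_conf:.2f} -> actual {acc:.2f} (n={n})")
```

The point of an exercise like this isn't the arithmetic; it's giving learners a record of their own confidence to confront, which is what makes overtrust visible.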
The redesigned curriculum starts with the epistemic posture and treats tool fluency as downstream of it.
