Following from the SoFi experience, I want to make a claim that will be unpopular in most product design conversations: the compliance instinct — the impulse to make uncertainty explicit, to resist simplification that misrepresents system behavior, to insist on conditional precision — is one of the most useful instincts you can bring to AI product design, and most AI designers don’t have it.

The dominant culture in AI product design comes from consumer tech. Ship fast, reduce friction, optimize for engagement, simplify ruthlessly. That culture produces good outcomes in contexts where failures are recoverable and the downside of a bad outcome is small. It produces dangerous outcomes in contexts where users are making consequential decisions based on system outputs.

AI systems are increasingly being deployed in consequential contexts: medical information, financial decisions, legal questions, educational assessments. In those contexts, the design instinct that says “this conditional disclosure is confusing, let’s simplify” is not serving users. It’s serving engagement metrics at the expense of the epistemic accuracy users need to make good decisions.

What compliance culture understands that product culture often doesn’t: the user’s trust in a system is an asset that belongs to the user, not the product. When a product design choice inflates that trust beyond what the system warrants, it’s borrowing against the user’s future wellbeing. The debt comes due when the system fails in a way the user wasn’t calibrated to expect.

I’m not arguing for compliance theater — disclosures nobody reads, warnings that appear uniformly regardless of context, legal language that obscures rather than informs. I’m arguing for the underlying instinct: that accuracy of representation is a design value, not just a legal constraint.
