At SoFi we were redesigning how margin risk was communicated to retail investors — excess equity, liquidation thresholds, concentration requirements. The product instinct was to simplify. Complex conditional disclosures created cognitive load. Users weren’t reading them. Clean the interface up, reduce the noise, make the key number legible.
Compliance pushed back in a review. If we simplified the volatility assumption display, we’d be misrepresenting how the threshold actually behaved. The number wasn’t stable — it was sensitive to concentration and volatility in ways that mattered enormously in adverse conditions and were invisible in normal ones.
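To make the instability concrete: here is a deliberately simplified toy model, not SoFi's actual methodology, of a margined long position where the maintenance requirement scales with hypothetical concentration and volatility multipliers. The point it illustrates is the one compliance raised: the "safe" drawdown cushion a user sees is not a fixed number, because the requirement itself moves with market conditions.

```python
def liquidation_drawdown(portfolio_value, loan, base_rate, conc_mult, vol_mult):
    """Fraction the portfolio can fall before equity breaches maintenance.

    Toy model (assumed, for illustration only):
      equity after a drawdown d:      V*(1-d) - L
      requirement after drawdown:     V*(1-d) * r,  where r = base * conc * vol
    Breach when V*(1-d) - L < V*(1-d)*r, i.e. d > 1 - L / (V*(1-r)).
    Assumes r < 1 and a fully margined single position.
    """
    r = base_rate * conc_mult * vol_mult
    return 1 - loan / (portfolio_value * (1 - r))

# $100k portfolio against a $50k loan, 25% base maintenance rate.
normal = liquidation_drawdown(100_000, 50_000, 0.25, conc_mult=1.0, vol_mult=1.0)
# Same position under stress: concentration penalty and elevated volatility.
stressed = liquidation_drawdown(100_000, 50_000, 0.25, conc_mult=1.2, vol_mult=1.4)

print(f"normal cushion:   {normal:.1%}")    # roughly a third of the portfolio
print(f"stressed cushion: {stressed:.1%}")  # cushion shrinks sharply
```

Under the assumed numbers, a cushion of about 33% in calm markets collapses to roughly 14% under stress, without the user trading at all. A single displayed threshold is accurate only for the conditions it was computed under.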
We ran user sessions to resolve the tension. What we observed was uncomfortable enough that I’ve been thinking about it since.
Users didn’t misinterpret the complex disclosures. They over-trusted the simplified ones.
When the interface felt clean, users read it as stability. When it showed conditional thresholds with explicit dependencies, they slowed down. They asked questions. They made different decisions. The friction wasn’t noise — it was signal. The complexity of the disclosure was accurately representing the complexity of the underlying risk, and users were using that complexity as information.
We simplified anyway, with modifications, because the compliance constraints were real and the UX problems with the original were real. But I came away with a belief I haven’t been able to shake: legibility of uncertainty is a design responsibility. An interface that makes a probabilistic, conditional, context-sensitive situation feel stable and simple is not a well-designed interface. It’s a miscalibration machine.
Every time I see a smooth AI experience that collapses model uncertainty into a clean confident output, I’m back in that review. The question I now ask reflexively: are we making the system feel more reliable than it is, and who bears the cost when the gap between feeling and reality closes suddenly?