On Trust in Systems That Can't Explain Themselves
We extend trust to systems we don't understand every day. The question is whether we're doing it wisely.
I explore how autonomous systems move from code into the physical world — and the trust infrastructure required to make them scale.
15+ years in strategy across Waymo, McKinsey, and the companies building what comes next.
What I think about
What does it take for an autonomous system to earn trust — not just pass a benchmark? The gap between lab performance and real-world reliability is where most systems fail.
When AI leaves the screen and enters the road, the failure modes change. How systems navigate uncertainty in environments that don't forgive errors.
Governance, incentives, verification, safety cases. The systems beneath the system — the infrastructure nobody wants to build but everyone depends on.
On Autonomy, Trust & the Systems Beneath AI
No noise. No hype. Just signal.
I'm interested in how trust, governance, and incentive design shape the future of autonomous systems. If you're thinking about similar problems, I'd like to hear from you.