SYSTEMS THAT FAIL SAFELY.
A decade spent engineering infrastructure where failure meant millions in losses and physical consequences: BHP Billiton, Hatch, across Canada, Australia, and Chile. Now applying that same rigor to AI systems designed to be reliable, interpretable, and steerable.
ONE SNOB. FOUR RULES.
Correct at scale, or silent.
Systems are built to work correctly under load and fail safely when they cannot. No demo-ware. No happy path. If it cannot handle the hard case, it is not shipped.
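What "fail safely" looks like in practice, as a minimal hypothetical sketch: the failure modes are named in the return type, and the service sheds load explicitly instead of returning a plausible wrong answer. QuoteService, Failure, and the limit are illustrative, not production code.

```cpp
// Illustrative sketch only: "correct at scale, or silent" as code.
// All names here are hypothetical.
#include <iostream>
#include <string>
#include <variant>

enum class Failure { Overloaded, Timeout };

// A result is either a value or a named failure mode, never a silent guess.
template <typename T>
using Result = std::variant<T, Failure>;

class QuoteService {
 public:
  Result<double> quote(const std::string& symbol) {
    if (in_flight_ >= kMaxInFlight) {
      return Failure::Overloaded;  // refuse loudly rather than degrade silently
    }
    ++in_flight_;
    double price = 101.25;  // placeholder; symbol would drive the real lookup
    --in_flight_;
    return price;
  }

 private:
  static constexpr int kMaxInFlight = 64;
  int in_flight_ = 0;
};

int main() {
  QuoteService svc;
  auto r = svc.quote("BHP");
  if (std::holds_alternative<double>(r)) {
    std::cout << "quote: " << std::get<double>(r) << '\n';
  } else {
    std::cout << "failed safely: no guess returned\n";
  }
}
```

The point of the shape: the caller cannot ignore the hard case, because the type will not let them.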
Safety is the design, not a department.
Interpretability, steerability, and predictable failure modes belong at the architecture layer — not tacked on at review time. The cheapest place to make an AI system trustworthy is before it runs.
Responsible compute.
Every token has a cost and a carbon footprint. FinOps is not a dashboard — it is a design constraint. The system optimizes its own resource consumption or I rewrite it until it does.
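A minimal sketch of cost as a design constraint, with hypothetical names (TokenBudget, generate): tokens are reserved before they are spent, so an overspend is refused at the call site rather than discovered on a dashboard after the bill arrives.

```cpp
// Illustrative sketch only: the budget is part of the architecture,
// not an observability afterthought. All names are hypothetical.
#include <cstdint>
#include <iostream>
#include <optional>
#include <string>

class TokenBudget {
 public:
  explicit TokenBudget(std::int64_t max_tokens) : remaining_(max_tokens) {}

  // Reserve up front: the budget says no before the spend, not after.
  bool reserve(std::int64_t tokens) {
    if (tokens > remaining_) return false;
    remaining_ -= tokens;
    return true;
  }

  std::int64_t remaining() const { return remaining_; }

 private:
  std::int64_t remaining_;
};

// Hypothetical call site: generation is gated by the budget.
std::optional<std::string> generate(TokenBudget& budget, std::int64_t est_tokens) {
  if (!budget.reserve(est_tokens)) return std::nullopt;  // refuse, do not overspend
  return std::string{"(model output)"};
}

int main() {
  TokenBudget budget(1000);
  if (auto out = generate(budget, 1200); !out) {
    std::cout << "request refused: over budget, " << budget.remaining()
              << " tokens remaining\n";
  }
}
```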
Depth beyond domain.
Engineers have side quests that prove depth. A C++20 reverb engine, a camera, a company from whiteboard to revenue. The side quests are where the primary domain learns new tricks.
A COMPANY WHERE ALL MY INTERESTS CONVERGE.