Most of what gets labeled a "systems issue" or a "tooling gap" is actually a human one — someone who doesn't have the context they need, someone who's been handed a workflow that doesn't match how they actually think, someone who's been asked to change before they've been asked what's working.
The process is downstream of all of that. You can't fix the process without touching the human question first. Most people try anyway.
When I joined JUUL Labs, I had zero experience with medical device compliance. Within months I was running an enterprise-wide regulatory program across four continents — and the department closed the year with zero major audit findings.
At Renew Financial, I walked into a 30,000-loan portfolio without a financial services background and ended up redesigning the prepayment process that was generating most of our customer complaints. At Meta, I came in as a Product Ops leader in an org full of people who'd been inside tech their whole careers, and I built the OKR program that got 80% of a 200-person organization aligned on shared goals.
Coming in without the assumed expertise means I ask the questions the experts stopped asking. I don't know which workarounds are sacred. I don't know which processes are "just how we do it here."
The pattern's been consistent enough that I've stopped treating it as a fluke. I read the system as it actually is, not as everyone's agreed to pretend it is. That turns out to be useful more often than not.
The industries have been different. The work has been the same: read the room, find where the pressure is, build something the humans inside can actually run.
AI is a mirror. It reflects back whatever you bring to it — your clarity, your confusion, your biases, your care for the people on the other end.
I'm not anti-AI. I'm pro-human. That distinction matters, and I don't think it gets said plainly enough.
What that means in practice: AI is one tool in a toolkit, chosen when it's the right fit for the problem, not when it's the most exciting thing in the room. I'm not here to sell anyone on it. I'm here to figure out what's actually right for the people who'll be living inside the system after I'm gone.
My instinct, for a long time, was to defer to whoever had the most authority in the room.
I came up in environments where the senior person was usually right — or at least where everyone behaved as if they were — and I built a habit of holding my strongest calls until someone with more title had weighed in. It took me a while to name that as the pattern it was.
The problem was that I was deferring to seniority, not expertise. Those aren't the same thing.
What I've landed on: I make the best call I can with the information available, and I build a feedback loop into every system I design so the call can be corrected in flight.
That's not certainty. It's the best available answer right now, with a built-in mechanism to fix it when it stops being the best available answer.
That's what operational leadership actually looks like to me — not knowing you're right, but being willing to decide and willing to revise.
Enough formal training to take the craft seriously, enough operator time to know what the frameworks leave out — that's the shape of how I think about the work.
If you want to see what it looks like in practice, How I Work is next. If you'd rather just talk, reach out.