More of What, Exactly?
A team builds a dashboard. The dashboard shows that churn went up 3%. Someone asks what to do about it. Nobody knows, because the dashboard only answers "what" — never "so what," and never "what did we try last time, what did we expect to happen, and did it work."
That feedback loop — what did we decide, what did we expect, what happened, what did we learn — doesn't exist at most analytics organizations. And right now, every exec in tech is saying AI lets us produce more of these dashboards, more of these reports, with fewer people.
More of what, exactly?
I know: a former tech lead and new engineering manager arguing against headcount cuts carries an obvious bias. But my argument isn't about preserving my career or fighting headcount reductions. It's "know why you're doing it."
A tale of two theories
There are two ways I've observed the industry talk about AI and engineering teams since the start of the year.
The first is adoption-driven: get your engineers using AI tools, track token usage, measure output. The theory is that if everyone adopts the tools, productivity follows. Companies are building dashboards to measure how much their teams use AI, which is ironic in ways I'll get to.
The second is headcount-driven: given the productivity boost, you can get the same output with fewer people. Companies like Meta and Block have reduced their staff in the name of AI. Their thesis: engineers augmented by AI produce 10x the output, so you need a tenth of the engineers.
Both approaches share a blind spot. They optimize for output — more code, more dashboards, more reports — without asking whether that output was the right thing to produce in the first place.
The output isn't the problem. What's underneath is.
AI can now generate a dashboard in seconds, auto-summarize a dataset, build a report with a prompt. But if there was no decision infrastructure underneath — no record of what the team decided, what they expected, and whether it worked — you've just made the proxy faster. (AKA, a jet engine on a treadmill.)
The best firms take this knowledge and democratize it into strategic documents like weekly and quarterly business reviews (WBRs and QBRs) and other data-informed artifacts to guide the business. The dashboards were always a proxy for that deeper work. AI is stripping away the proxy, and what's left is either a real decision infrastructure or nothing at all.
Reduce your context at your own peril
AI output is only as good as the context you give it. Generic input produces generic output — the model defaults to the most common pattern in its training data and presents it as if it's the obvious conclusion. Without the context of what your business actually is, the model picks "generic consumer tech" and runs with it.
At most analytics organizations, that context lives in people's heads. The product manager who knows why a metric is defined the way it is. The analyst who remembers that the team tried a particular approach in Q2 and it failed because of a constraint nobody documented. The engineer who can look at a model's output and feel that something is off before they can articulate why.
The strongest version of the cut-first argument says AI can capture context itself — that the tools will systematize what used to live in people's heads. I'm skeptical — not because it's impossible, but because most analytics organizations haven't done the prerequisite work. You can't systematize knowledge you haven't identified. Systematizing context is the trillion-dollar idea.
But the sequencing matters: you have to build the context layer first, then optimize headcount.
If you cut without asking what you're cutting for, you risk not capturing the value you think you're capturing.

Fire those people before that work is done and you haven't saved money. You've deleted the context layer that made everything downstream useful. You now have a very fast, very confident system defaulting to "generic" every time.
How we build
The thing I keep coming back to is that the value isn't in the model. It's not even in the harness around the model, though that matters enormously. The value is in three layers working together.
The context layer — the institutional knowledge, the business logic, the metric definitions, the decision history. Not just facts, but what we do with those facts. This is the layer that most analytics organizations never built, and it's the layer that makes every AI output downstream either specific and useful or generic and dangerous.
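That decision history can be concrete. Here's a minimal sketch, in Python, of what a decision log might look like; every name here is hypothetical, and real retrieval would be far richer than keyword matching:

```python
from dataclasses import dataclass


@dataclass
class DecisionRecord:
    """One entry in a hypothetical decision log: the feedback loop
    (decided -> expected -> happened -> learned) made explicit."""
    decision: str      # what we decided to do
    expectation: str   # what we expected to happen
    outcome: str = ""  # what actually happened (filled in later)
    learning: str = "" # what we took away from it


class DecisionLog:
    """Append-only store for decision records: the queryable part
    of a context layer."""

    def __init__(self) -> None:
        self.records: list[DecisionRecord] = []

    def record(self, decision: str, expectation: str) -> DecisionRecord:
        rec = DecisionRecord(decision, expectation)
        self.records.append(rec)
        return rec

    def close_out(self, rec: DecisionRecord, outcome: str, learning: str) -> None:
        """Finish the loop: what happened, and what we learned."""
        rec.outcome = outcome
        rec.learning = learning

    def relevant(self, keyword: str) -> list[DecisionRecord]:
        """Naive keyword retrieval; a real system would use something richer."""
        return [r for r in self.records if keyword.lower() in r.decision.lower()]
```

The point isn't the storage format. It's that "what we decided, what we expected, what happened, what we learned" becomes something you can query, instead of something that lives in one analyst's head.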
The judgment layer — the human capacity to look at output and know whether it's right. Not because you checked every line, but because you have enough experience and taste to feel when something is off. One of the more interesting things I've read on this comes from Azeem Azhar, who distinguishes between cognitive offloading and cognitive surrender. Offloading is when you take something that isn't critical to reasoning and delegate it — his example is remembering phone numbers. We all used to do that, and we stopped when smartphones made it unnecessary. We didn't lose a critical reasoning skill. We just offloaded it to technology.
Cognitive surrender is different. That's when you truly let a system think for you. And I think that's what we want to protect against in this industry, because it's very powerful when engineers can delegate tasks and focus on the problems that matter. But there's research showing that in a world where more and more code is generated by AI, engineers who never engage with that code start to lose those skills. They lose judgment about what's happening within the systems they're managing. Left unchecked, you end up with an entire group of engineers — or an entire organization — that doesn't understand the systems it's responsible for. That's a huge liability for any business. And if the people who remain after cuts aren't growing, you've traded a headcount problem for a capability problem.
The architecture — the harness, the systems, the feedback loops. One of the most interesting things emerging in AI-built systems is that you can build in feedback loops where the output gets stronger over time. If you build the right infrastructure — testing, ways to capture the context of your environment — you can produce better results than if you just one-shot the model with all the context at once. A better environment produced 64% better results from the same model. That's a mixture of deterministic code, tests, and systems you write (or co-write with AI), combined with the context layer, letting the model help guide you to the best decision.
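To make that loop concrete, here's a toy sketch: a stubbed-out `generate` function stands in for a real model call, and a single deterministic check stands in for a test suite encoding institutional knowledge. All names and behavior here are hypothetical:

```python
def generate(prompt: str) -> str:
    """Stand-in for a model call. In a real harness this would hit your
    model of choice; here it just simulates a model that only applies a
    business rule when the prompt mentions it."""
    if "exclude internal accounts" in prompt:
        return "SELECT count(*) FROM users WHERE churned AND NOT internal"
    return "SELECT count(*) FROM users WHERE churned"


def checks(output: str) -> list[str]:
    """Deterministic checks encoding institutional knowledge: the things
    your team knows that a generic model doesn't."""
    failures = []
    if "NOT internal" not in output:
        failures.append("exclude internal accounts from churn metrics")
    return failures


def run_with_feedback(prompt: str, max_rounds: int = 3) -> str:
    """The loop: generate, run the checks, feed failures back into the
    prompt, and regenerate until the checks pass."""
    output = generate(prompt)
    for _ in range(max_rounds):
        failures = checks(output)
        if not failures:
            return output
        prompt += "\nConstraints: " + "; ".join(failures)
        output = generate(prompt)
    return output
```

One-shotting the stub produces the generic query; the loop catches the violation and converges on the one that respects the business rule. The checks are the part you write deterministically; the model fills in the rest.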
That shift matters, because then as an engineer, you're not just doing what Luca Rossi and I have both experienced — being a factory manager, generating tons of code nonstop that you've never even looked at, just trying to optimize the agents. Instead, you have judgment in the loop. If you can get feedback loops through the work you're producing, that leads to really powerful engineering platforms.
And in a world where you have that, your role starts shifting from factory operator to what I'd call a judgment architect. This is someone thinking about those systems end to end. Not just "what did the AI output?" but "what decision was made downstream of that output, and how do I re-encode that back into the context layer so the next time we encounter this problem, we loop through it faster?" If you're able to capture institutional knowledge, unstructured documents, context — all of it — in a system that does this, you start building really interesting platforms.
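A hedged sketch of that re-encoding step, with hypothetical names: each downstream decision gets folded back into the context that seeds the next prompt.

```python
def build_prompt(question: str, lessons: list[str]) -> str:
    """Assemble a prompt from prior lessons so the next run starts
    where the last one ended."""
    context = "\n".join(f"- {lesson}" for lesson in lessons)
    return f"Known context:\n{context}\n\nQuestion: {question}"


def reencode(lessons: list[str], new_lesson: str) -> list[str]:
    """Fold what the last decision taught us back into the context layer.
    Returns a new list rather than mutating in place."""
    return lessons + [new_lesson]
```

The mechanics are trivial; the discipline of actually capturing the downstream decision is the hard part.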
And that's where I sometimes feel like the conversation around optimizing for AI misses what we're actually optimizing for. The goal isn't to never make a wrong choice, or to never reduce headcount. It's to recover faster from wrong choices, capture what we missed, and do better next time. Engineers who can build systems that do that are the ones worth investing in.
The future of how we think about analytics, product development — all of it — with these AI systems isn't "give every engineer a Claude subscription and count their tokens." That's one way to start, but it can't be the end. We need to think about building systems and tools that reinforce the work we're doing, that allow engineers to learn faster, recover faster, and ship faster in a way that builds more durable outcomes.
That future is exciting to me. It offers a glimpse of what product development and analytics engineering can look like when it isn't just focused on reducing what we have today.