
Comprehension Debt Is Compounding

AI development speeds up writing code but quietly slows down reading it. Comprehension debt is the hidden tax on team velocity — and it's accumulating fast.

There's a metric no sprint board tracks: how long it takes a new engineer to understand what the previous sprint actually built.

That number is getting worse. Not because developers are writing more complex logic — but because they're writing less of it themselves. And the code that AI writes tends to be verbose, contextless, and optimized for correctness over comprehension. Addy Osmani put a name to this problem in a post that's circulating in engineering circles this week: comprehension debt.

The argument cuts through the noise on AI coding tools in a way most takes don't. It's not "AI will replace engineers" or "AI code is buggy." It's quieter and more corrosive than either of those. Osmani writes that when teams adopt AI pair programming at scale, they gain velocity on output while steadily losing fluency in their own codebase. The code ships. Nobody fully understands it. The debt accumulates silently — until it doesn't.

What Comprehension Debt Actually Is

Technical debt is a familiar concept: you cut corners under deadline pressure, and you pay for it later. Comprehension debt is structurally different. You don't incur it by doing something wrong. You incur it by moving fast in a way that disconnects the team from the thing they're building.

When an engineer writes a function from scratch, they model the problem in their head. They make decisions. They understand trade-offs. When an AI generates that same function — or a more elaborate version of it — the engineer reviews output rather than authors intent. The function might be better. The engineer's mental model of it is usually shallower.

Multiply this across a codebase. Multiply it across a team. Multiply it across six months of AI-assisted sprints. The result is a codebase that works but that no single person can hold in their head — and where onboarding a new engineer takes two months instead of two weeks.

This connects directly to something the Lobste.rs community has been wrestling with under a different framing: "Every layer of review makes you 10x slower" argues that added process compounds cost exponentially. Comprehension debt is the inverse problem — when you remove the layer where humans internalize what they're building, speed accrues in the short term and confusion accrues in the long term.

The QCon Signal

The timing matters. InfoQ's QCon London 2026 just ran a track titled "AI Agents Write Your Code. What's Left For Humans?" — which suggests this isn't a fringe concern. It's the question senior engineers are actively debating in rooms where architecture decisions get made.

The answer that's emerging from that conversation isn't "write code without AI." It's more nuanced: teams need to deliberately invest in comprehension as a practice, not assume it happens automatically when engineers do code review. A reviewer who reads AI-generated code but didn't author it, didn't spec it, and barely touched it has technically signed off — but hasn't absorbed it.

Dev.to has a related piece circulating this week that frames the same tension from the other direction: Claude Code as a full dev team, describing an autonomous TDD cycle that runs from feature request to merged PR with minimal human intervention. It's genuinely impressive as a technical demonstration. It's also a perfect example of how a team could ship a feature where nobody on the team deeply understands the implementation. The test passes. The PR is merged. The comprehension didn't transfer.

Why This Is a PM Problem, Not Just an Engineering Problem

Product managers don't write the code, but they own the roadmap that determines how fast code gets written and what gets skipped to hit a deadline.

If your team is running AI-assisted development at full throttle, and comprehension debt is accumulating, here's what that looks like from the PM seat: estimates become less reliable as the codebase becomes harder to reason about. Feature work that touches older AI-generated modules takes longer than expected because no one is quite sure what those modules actually do. Bug investigations balloon. Onboarding new team members becomes expensive. And the team starts to feel like they're maintaining a system that was handed to them, not one they built.

This is the hidden tax on velocity. The sprints feel productive. Output metrics look good. Throughput is up. But the team's collective grasp on the product is weakening, and that shows up in planning accuracy, incident response time, and eventually in churn.

The Changelog's piece on "The mythical agent-month" touches on a related reality: AI agents can produce enormous output volume, but volume isn't the same as understood, maintainable work. Brooks's The Mythical Man-Month showed that adding humans doesn't linearly increase output, because communication overhead scales with team size. The mythical agent-month problem is different: agents don't create communication overhead, they create comprehension gaps.

What Teams Can Actually Do

The solution isn't to slow down AI usage. It's to design explicit comprehension checkpoints into the workflow.

Require authorship briefs, not just code review. When AI generates a significant chunk of code, the engineer who prompts it should write a 3-5 sentence brief explaining what it does and why it was structured that way. This isn't documentation — it's a forcing function for comprehension. If they can't write the brief, they haven't understood the output.
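
One way to make the brief non-optional is a small CI gate. The sketch below is an assumption-laden illustration, not a prescription: it supposes your CI exposes the pull request description in an environment variable (PR_BODY is a placeholder, not a real convention of any CI system) and that briefs live under an "## Authorship brief" heading in the PR description.

```python
# check_brief.py -- minimal sketch of an "authorship brief" CI gate.
# PR_BODY is a hypothetical environment variable; wire it to whatever
# your CI actually provides for the pull request description.
import os
import re
import sys

MIN_SENTENCES = 3  # matches the 3-5 sentence brief described above

def count_sentences(text: str) -> int:
    # Crude sentence count: split on ., !, or ? followed by space or end.
    return len([s for s in re.split(r"[.!?](?:\s+|$)", text) if s.strip()])

def main() -> int:
    body = os.environ.get("PR_BODY", "")
    match = re.search(r"## Authorship brief\s*\n(.*?)(?=\n## |\Z)", body, re.DOTALL)
    if match is None:
        print("FAIL: PR description has no '## Authorship brief' section.")
        return 1
    if count_sentences(match.group(1)) < MIN_SENTENCES:
        print(f"FAIL: authorship brief is under {MIN_SENTENCES} sentences.")
        return 1
    print("OK: authorship brief present.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

The sentence counting is deliberately crude. The point isn't the parser; it's that a merge can't happen until the prompting engineer has put their understanding into words.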

Rotate ownership deliberately. On AI-heavy teams, it's tempting to let whoever prompted the AI "own" the resulting module. Rotate that ownership explicitly. Forcing a second engineer to get fluent with AI-generated code they didn't prompt creates a second layer of comprehension.
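
The mechanical half of this can be automated. If your repo uses GitHub's CODEOWNERS file, a scheduled job can advance each owner through a roster, so the rotation actually happens instead of living in a team agreement nobody enforces. A minimal sketch, assuming one owner per path and a hypothetical roster of handles:

```python
# rotate_owners.py -- minimal sketch of scheduled ownership rotation.
# Run on a monthly schedule (cron, CI) and commit the rewritten file.
# Roster handles are hypothetical; the CODEOWNERS path is GitHub's default.
from pathlib import Path

ROSTER = ["@alice", "@bob", "@carol"]  # hypothetical GitHub handles

def rotate(line: str) -> str:
    parts = line.split()
    # Only rotate simple "path owner" lines whose owner is on the roster;
    # leave comments, blank lines, and multi-owner entries untouched.
    if len(parts) != 2 or parts[1] not in ROSTER:
        return line
    path, owner = parts
    return f"{path} {ROSTER[(ROSTER.index(owner) + 1) % len(ROSTER)]}"

def main() -> None:
    codeowners = Path(".github/CODEOWNERS")
    rotated = [rotate(line) for line in codeowners.read_text().splitlines()]
    codeowners.write_text("\n".join(rotated) + "\n")

if __name__ == "__main__":
    main()
```

The script only moves the review burden; the human half, getting the new owner genuinely fluent in the module, still has to be budgeted.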

Track onboarding time as a health metric. How long does it take a new engineer to make their first meaningful contribution to a given module? That number is a proxy for comprehension debt. If it's growing, something is wrong — even if velocity metrics look fine.
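
This number can be pulled straight from git history. Here's a minimal sketch that measures days from an engineer's first commit anywhere in the repo to their first commit touching a given module. The module path is hypothetical, and "first commit in the module" stands in for "first meaningful contribution", an assumption you'd want to refine with a size threshold or review data.

```python
# onboarding_lag.py -- minimal sketch of a time-to-first-contribution metric.
import subprocess
from datetime import datetime

MODULE = "src/billing/"  # hypothetical module path

def first_commits(paths=()):
    """Map author email -> datetime of their earliest commit.

    --reverse walks history oldest-first, so setdefault keeps the first hit.
    """
    out = subprocess.run(
        ["git", "log", "--reverse", "--format=%aI %ae", "--", *paths],
        capture_output=True, text=True, check=True,
    ).stdout
    firsts = {}
    for line in out.splitlines():
        stamp, author = line.split(" ", 1)
        firsts.setdefault(author, datetime.fromisoformat(stamp))
    return firsts

def main():
    repo_firsts = first_commits()
    module_firsts = first_commits([MODULE])
    for author, first_touch in sorted(module_firsts.items()):
        lag = (first_touch - repo_firsts[author]).days
        print(f"{author}: {lag} days to first {MODULE} commit")

if __name__ == "__main__":
    main()
```

Track that number per quarter. If it keeps climbing for the modules that are mostly AI-generated, you're watching comprehension debt accumulate in real time.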

Budget comprehension time into sprints. This is the uncomfortable one. AI development is fast enough that teams often have capacity to slow down and understand what they're building. They just don't prioritize it. Making comprehension a line item in sprint planning — "two hours this week to walk through the new authentication module as a team" — treats it like the work it is.

The broader lesson from Osmani's framing is that AI development doesn't automatically produce shared understanding. It produces output. Shared understanding is still a human process that requires deliberate investment. The teams that figure out how to ship fast and maintain comprehension at the same time will compound their advantage. The teams that optimize only for output will eventually find themselves maintaining a codebase that nobody fully owns.

That gap is already opening. The question is which side of it your team is on.