engineering-management · team-productivity · product-strategy · tech-leadership

Coding Was Never the Bottleneck

AI coding tools promised faster delivery. InfoQ and dev.to say delivery didn't speed up — because code was never the slow part.

There's a line buried in an InfoQ piece from this week that should make every PM stop scrolling: "AI Coding Assistants Haven't Sped up Delivery Because Coding Was Never the Bottleneck."

Read that again. We've spent the last two years buying subscriptions, onboarding tools, and writing prompts — all to speed up the thing that wasn't actually holding us back.

The Part Everyone Skipped Over

The industry's dominant narrative about AI coding tools has been simple: write code faster, ship faster. Faster feedback loops. Fewer backlogs. Developers unblocked. It's been a compelling enough story that tool adoption exploded across the board, from Copilot seats to full agentic coding setups.

But delivery velocity — the thing that actually matters to PMs and business stakeholders — hasn't meaningfully improved for most teams. And the InfoQ analysis points directly at why: the constraint was never in the IDE.

The real bottlenecks are where they've always been. Requirements ambiguity. Stakeholder alignment. Unclear acceptance criteria. The three-day wait for a design review. The two-week back-and-forth on a spec that should have taken two hours. The post-sprint retrospective where everyone admits they weren't sure what "done" meant.

AI can write a function in seconds. It cannot tell you which function to write.

A Pattern Surfacing Across Sources

This tension is showing up in multiple places simultaneously, which is usually a signal worth following.

Over on dev.to, a post titled "Agile or Winging It" is asking whether most teams practicing "Agile" are actually just improvising with stand-ups. The gap between the ritual and the discipline is enormous — and faster code generation doesn't close it.

Meanwhile, the Lobste.rs community is circulating a piece called "Comprehension Debt — the hidden cost of AI generated code" by Addy Osmani, which puts a name on something teams are quietly experiencing: code that works but that nobody fully understands. When a developer generates code they can't mentally model, they've traded a short-term velocity gain for a long-term maintenance liability. Comprehension debt compounds. It slows the team down in ways that don't show up in sprint velocity metrics until suddenly they do.

And on the human side, there's the dev.to post that's quietly one of the most honest things published this week: "52 Days, 711 Commits, Zero Users". The title is the story. Someone shipped constantly, built technically, and produced nothing of value. The bottleneck was never commits.

What This Means for How You Scope Work

If the bottleneck isn't code, product teams need to honestly audit where time actually disappears in a delivery cycle. Not where they assume it goes — where it actually goes.

A few patterns that show up consistently:

Discovery-to-spec drag. The gap between "we should build this" and "we all agree on what this means" is usually measured in weeks, not days. AI doesn't shrink this. Better facilitation, clearer decision rights, and shorter feedback loops with customers do.

Asynchronous alignment failures. Most sprint slowdowns are caused by decisions that weren't made, not code that wasn't written. When a designer and a PM have different mental models of a feature, no amount of Copilot autocomplete fixes the resulting rework.

Testing and integration bottlenecks. The InfoQ analysis points to QA and integration work as persistent choke points. Teams that used AI to write more unit tests, rather than just more feature code, report more tangible delivery gains.

Review layer accumulation. The Lobste.rs-trending post from apenwarr, "Every layer of review makes you 10x slower", has been circulating for good reason. Adding AI to a process with too many approval gates doesn't speed anything up. The gates are still there.
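The testing pattern above is worth making concrete. One way teams apply it: take an existing function and ask the assistant to enumerate edge cases as tests, rather than asking it to generate the next feature. A minimal sketch, where `parse_discount_code` is a hypothetical example function (not from any of the cited pieces):

```python
def parse_discount_code(code: str) -> int:
    """Return the percentage for a code like 'SAVE15', or 0 if invalid."""
    if not code or not code.startswith("SAVE"):
        return 0
    suffix = code[len("SAVE"):]
    if not suffix.isdigit():
        return 0
    return min(int(suffix), 100)  # cap at 100%

# The kind of edge-case tests an assistant can cheaply generate:
# empty input, wrong prefix, non-numeric suffix, over-cap values.
assert parse_discount_code("SAVE15") == 15
assert parse_discount_code("") == 0
assert parse_discount_code("SALE15") == 0
assert parse_discount_code("SAVExx") == 0
assert parse_discount_code("SAVE250") == 100
```

The point isn't the function; it's that the edge-case enumeration is the part of QA that AI does well, and it targets the choke point instead of adding to it.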

The Uncomfortable Reframe

Faster code generation has, in some cases, made certain slow processes more visible by contrast. When a developer can produce a working feature skeleton in an afternoon, it suddenly becomes painfully obvious that it then sits in review for four days. The constraint doesn't disappear — it gets highlighted.

This is actually useful information if you're willing to look at it honestly. AI adoption is inadvertently running a natural experiment on your team's actual process health. The teams that aren't seeing delivery improvements despite heavy AI usage are learning something important about where their real friction is.

The teams that are seeing improvements tend to share a few characteristics: they had already invested in clear specs before handing anything to a developer (human or AI), they had tight feedback loops with stakeholders, and they had QA embedded in the workflow rather than bolted on at the end.

In other words, the process fundamentals that made those teams fast before AI tools also make them fast with AI tools. The tools amplified existing process quality — they didn't substitute for it.

Where to Actually Invest

If the analysis holds, the highest-leverage investments for most product teams right now aren't in more AI tooling licenses. They're in:

  • Spec quality. Can your team's written specs be handed to anyone — human or AI — and produce the right output? If not, that's where time is going.
  • Decision velocity. How long does it take your team to resolve an open question on a feature? Shorten that, and you'll see delivery improve faster than any tool change.
  • Comprehension culture. Before AI-generated code ships, someone on the team needs to understand it well enough to own it. Osmani's comprehension debt concept is worth building into your definition of done.
  • Reducing approval layers. This one is politically hard, but the evidence is accumulating. Review theater — reviews that exist because they've always existed rather than because they add value — is a significant tax on delivery.
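One test of the spec-quality bullet: a spec precise enough to hand to anyone, human or AI, can often be written as executable acceptance checks. A minimal sketch for a hypothetical feature ("archived projects are hidden from the default list but still searchable"); all names here are illustrative, not from the article:

```python
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    archived: bool = False

def default_list(projects):
    # Acceptance criterion 1: the default view excludes archived projects.
    return [p for p in projects if not p.archived]

def search(projects, term):
    # Acceptance criterion 2: search still matches archived projects.
    return [p for p in projects if term.lower() in p.name.lower()]

projects = [Project("Apollo"), Project("Borealis", archived=True)]
assert [p.name for p in default_list(projects)] == ["Apollo"]
assert [p.name for p in search(projects, "borealis")] == ["Borealis"]
```

If the team can't write the criteria at this level of precision, no generator (human or machine) can produce the right output from the spec, which is exactly where the time goes.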

The most honest thing a PM can tell their leadership team right now: we bought tools that made one part of the process faster, but the process has more parts than that. The good news is that those other parts are addressable. They just require a different kind of investment than another SaaS subscription.

The code was never what was slowing you down.