Boring Tech, Wild Practices
Hillel Wayne argues boring technology plus innovative practices beats the reverse. Why your stack choice and your process choice are not the same decision.
There's a piece quietly circulating on Lobste.rs today that deserves more attention than it's getting. Hillel Wayne posted "Choose Boring Technology and Innovative Practices" — and it complicates something the developer community has argued about for a decade without ever quite resolving.
The canonical version of the argument, traced back to Dan McKinley's 2015 essay, is simple: use boring technology. Postgres over CockroachDB. Redis over your custom in-memory store. MySQL over whatever came out of a YC batch last quarter. Don't burn your "innovation tokens" on infrastructure when the real work is the product.
Wayne agrees with the boring tech half. The part most people skip is the other half: your practices should get the opposite treatment.
The Wrong Thing Gets Frozen
Most engineering teams do this exactly backwards.
They adopt every new framework, database, and runtime that shows up in GitHub Trending — then they run it with the same standup structure, the same sprint planning, the same PR review process they've had since 2018. The stack is exciting. The way the team works is frozen in amber.
Wayne's argument flips that. Your infrastructure choices have enormous compounding costs — migration risk, hiring friction, operational complexity, the on-call burden at 2am when something in your exotic database breaks in a way no Stack Overflow answer has ever addressed. That's where conservatism pays off.
But your practices — how you run retros, how you do code review, how you structure ownership, how you decide what to build — these carry far lower switching costs. You can run a six-week experiment with a new planning approach and roll it back with almost zero damage if it fails. You cannot do that with your data store.
The asymmetry is obvious once you see it. But almost no teams optimize for it.
Why Developers Keep Getting This Backward
Part of it is visibility. Infrastructure choices are public and legible: you can see them in conference talks and job listings. "We use Kubernetes, Kafka, and a service mesh" announces something about a company's engineering culture, for better or worse.
Process choices are invisible from the outside. Nobody's LinkedIn profile says "we do weekly async retrospectives and ship in two-week discovery cycles." Nobody gets hired because of how a team runs its meetings.
So the incentive structure rewards infrastructure novelty and ignores process innovation. InfoQ's coverage of QCon London 2026 has been full of teams talking about tooling and platform choices. The process-level conversations are shorter and rarer.
There's also a comfort mechanism at work. Trying a new database feels like innovation. It produces artifacts: a spike branch, a benchmark document, a migration plan. It looks like work was done. Changing how your team makes decisions is harder to justify, harder to measure, and more personally uncomfortable because it requires people to actually behave differently.
The Relationship to AI Pair Programming
Here's where this connects to something actively reshaping how teams operate in 2026: the gap between AI pair programming tooling and the processes surrounding it.
Most teams that have adopted AI coding tools (Copilot, Cursor, Claude Code, whatever's in your stack) adopted the tool immediately and adapted the process not at all. The standup still asks "what did you work on yesterday?" as if the unit of work is the same. PR review still treats every line as equivalent, even though the cost of generating a line of code collapsed by an order of magnitude. Sprint estimation still prices tasks roughly by complexity, even though AI pair programming has made certain kinds of complexity nearly free.
The Changelog's recent discussion about "the mythical agent-month" gets at this — when autonomous agents start shipping thousands of PRs, what does a sprint even mean? The teams struggling with that question aren't struggling because of their tech stack. They're struggling because their practices never updated.
Wayne's framing gives you the lever. The boring tech side of AI coding is settled: the frontier model providers are your Postgres. You're not going to out-engineer OpenAI or Anthropic on foundation models. Pick one, commit, stop chasing each new release. The innovative practices side is wide open.
What does code review look like when an AI coding tool generates the first draft and a human reviews intent, not syntax? What does estimation look like when the primary variable is no longer "how long will this take to write"? What does ownership look like when your AI coding tool has touched 40% of the codebase?
Those are process questions. They have lower switching costs than your database. You could run an experiment on one of them this sprint.
The Hardest Part of "Boring"
There's a psychological cost that Wayne doesn't fully address, and it's real: boring technology feels like falling behind.
This is especially acute in AI tooling right now. The release cadence is genuinely staggering. Simon Willison has been tracking the churn for years, and his blog reads like a fire hose of new models, new capabilities, new tool releases, new integrations. Every week there's something that seems like it changes the picture.
Most of it doesn't. The underlying capability curve is advancing, but for any given team, the decision to adopt a new model or switch to a new AI coding tool carries real switching costs: updated prompts, new workflows, training, integration work. The boring technology principle says: let the frontier move, wait for things to stabilize, pick one thing that's clearly working and stick with it long enough to actually learn it.
The practice of "writing little proofs in your head" — a separate Lobste.rs thread today on maintaining mental models even when code is generated for you — is itself a practice innovation. It doesn't require any new tooling. It's just a different way of thinking while you work.
That's the pattern. Stack conservatism, practice experimentation.
A Test Worth Running
Before your next sprint planning session, try asking two questions:
- What infrastructure choice are we considering that has a high switching cost and uncertain long-term support? If the honest answer is "we're not sure this technology will exist in three years," that's a sign you may be burning an innovation token you can't afford.
- What practice have we been running on autopilot for over a year? If you can't remember why you do your retro the way you do it, that's a sign you have an innovation token sitting unused.
The boring technology argument is not an argument for stagnation. It's an argument for where you put your energy. Postgres keeps running. Your team keeps learning. The bottleneck, almost always, is the second one.
Sources: Hillel Wayne on boring technology and innovative practices · InfoQ QCon London 2026 · Simon Willison's blog · Writing little proofs in your head · Changelog podcast