engineering-culture · developer-productivity · software-teams · open-source

Rob Pike Was Right All Along

Rob Pike's 5 rules are trending on Lobste.rs. What the resurgence of programming fundamentals means when code generation is essentially free.

Something is happening in the curated technical corners of the internet this week that deserves more than a passing scroll. Rob Pike's 5 Rules of Programming is circulating on Lobste.rs — alongside "To be a better programmer, write little proofs in your head", "Is simple actually good?", and an old post about how little Turbo Pascal actually weighed. Four distinct posts, all pointing in the same direction. That's not coincidence — that's a community working something out.

The timing is uncomfortable in a specific way.

The Rules That Refused to Expire

Pike wrote his five rules at Bell Labs in the late 1980s, when hardware was constrained enough that every cost was visible. Abbreviated:

  1. You can't tell where a program will spend its time. Bottlenecks occur in surprising places.
  2. Measure. Don't tune for speed until you've measured, and even then only if one part of the code dominates.
  3. Fancy algorithms are slow when n is small. n is usually small.
  4. Fancy algorithms are buggier than simple ones.
  5. Data dominates. Choose the right data structures and the algorithms become obvious.
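Rule 3 in miniature — a sketch of my own, not from Pike's original note. When n is a handful of elements, a plain linear scan is as fast as anything fancier and far easier to verify:

```go
package main

import "fmt"

// contains does a plain linear scan. For the small n that dominates
// real workloads (config keys, HTTP headers, a user's tags), this is
// hard to beat: no allocation, no index to build, no invariants to break.
func contains(items []string, target string) bool {
	for _, it := range items {
		if it == target {
			return true
		}
	}
	return false
}

func main() {
	headers := []string{"Accept", "Content-Type", "Authorization"}
	fmt.Println(contains(headers, "Authorization")) // true
	fmt.Println(contains(headers, "X-Trace-Id"))    // false
}
```

The fancy alternative — building a set, a trie, an index — only pays off once n is large enough to measure, which is Rule 1's cue.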

What's striking isn't that these rules are profound — each one, isolated, sounds almost embarrassingly obvious. What's striking is that they keep getting rediscovered. They showed up in the 1980s, got quoted in every systems programming course through the 2000s, and now here they are again in 2026, being upvoted by engineers who have access to AI that can scaffold an entire service in the time it takes to make coffee.

The rules survive because the failure modes they address have never been fixed. We just keep re-encountering them at a higher level of abstraction.

What "Simple" Actually Means

The "Is simple actually good?" post makes a sharper argument than its title implies. It isn't a defense of naive code or a complaint about abstraction. It's an examination of a conflation that has become professionally dangerous: simple and easy are not the same thing.

Easy means low friction to produce. Simple means low cognitive overhead to understand, verify, and modify.

A lot of modern tooling optimizes hard for easy. Scaffolding generators, AI completions, no-code backends — they all reduce the friction to create something. They do very little to ensure that what you've created is understandable at the boundaries. That gap, between easy-to-produce and simple-to-reason-about, is where most production incidents live.

This is why the Turbo Pascal trivia is more than nostalgia. The entire Turbo Pascal 3.0 IDE, compiler, and runtime fit into 39KB. Not because the engineers in 1983 were smarter — but because the constraints forced them to carry the full mental model. You couldn't hide in an abstraction layer you didn't build yourself. The boundary conditions were always visible.

That constraint is essentially gone now. You can ship a production system in 2026 without understanding what it does at n=0.

The Proof Problem

The "write little proofs in your head" post is the most direct of the cluster. Its core argument: the most important programming skill isn't syntax fluency or framework knowledge — it's the habit of informal verification. At every step: does this actually work? Can I show why?

Not Coq-style formal proofs. The informal loop: what does this function do with an empty list? With n=1? With the maximum integer? What happens when this returns null?
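The loop can be made concrete. Here is a hypothetical `maxOf` in Go (the function is my illustration, not from the post), with the edge-case questions answered explicitly in the code rather than assumed:

```go
package main

import (
	"errors"
	"fmt"
)

// maxOf returns the largest element of xs.
// The "little proofs", answered in the signature and body:
//   empty list? -> explicit error, not a silent zero
//   n = 1?      -> the loop body never runs; the single element is the max
//   max int?    -> only comparisons, no arithmetic, so no overflow
func maxOf(xs []int) (int, error) {
	if len(xs) == 0 {
		return 0, errors.New("maxOf: empty slice")
	}
	best := xs[0]
	for _, x := range xs[1:] {
		if x > best {
			best = x
		}
	}
	return best, nil
}

func main() {
	if _, err := maxOf(nil); err != nil {
		fmt.Println("empty:", err)
	}
	m, _ := maxOf([]int{42})
	fmt.Println("n=1:", m)
}
```

Nothing here is clever. The point is that each edge case was asked and answered before the function shipped, instead of being discovered by production traffic.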

Senior engineers do this automatically, invisibly. It's so ingrained they often can't describe it. But it's precisely the discipline that gets skipped when velocity pressure is high and the AI suggestion is already three lines ahead. The autocomplete doesn't wait for you to finish thinking.

There's a specific failure mode this creates: bugs that live in edge cases no test covered, in assumptions nobody stated, in paths the profiler never saw because production traffic hadn't hit them yet. Pike's Rule 1 — you can't tell where a program will spend its time, so measure — is a specific instance of the same principle: your intuitions about runtime behavior are wrong more often than you think.

What This Means for Teams Right Now

None of this is an argument against modern tooling or AI-assisted development. But there's a specific engineering discipline that the current ecosystem does not reinforce: slowing down to verify your own understanding before moving forward.

A few practices that translate Pike's rules into current workflows:

  • Profile before optimizing. Still true, more urgent. AI-generated code is often naive on performance characteristics. Measure before you accept the first working solution.
  • Choose boring data structures. A HashMap and a sorted list will get you further than a clever trie in 95% of real production scenarios. Start there.
  • Read the generated code. Not skim — actually read and reason about it. What does this do with an empty list? What happens when this database call returns null?
  • Resist the abstraction impulse. The "data dominates" rule is especially relevant: getting your data model right is worth more than any algorithmic cleverness on top of it.
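The first bullet translates directly into code. A minimal sketch — the helper names are mine, not from any of the posts — that measures instead of guessing, using Go's `testing.Benchmark`, which works outside of `go test`:

```go
package main

import (
	"fmt"
	"sort"
	"testing"
)

// Two ways to answer "is x in this slice?": the boring linear scan,
// and the "clever" binary search over a pre-sorted copy.
func scanContains(xs []int, x int) bool {
	for _, v := range xs {
		if v == x {
			return true
		}
	}
	return false
}

func binaryContains(sorted []int, x int) bool {
	i := sort.SearchInts(sorted, x)
	return i < len(sorted) && sorted[i] == x
}

func main() {
	small := []int{5, 3, 8, 1, 9, 2, 7, 4, 6, 0} // n = 10: Pike's "n is usually small"
	sorted := append([]int(nil), small...)
	sort.Ints(sorted)

	// Measure both at this n instead of assuming the fancy one wins.
	scan := testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			scanContains(small, 9)
		}
	})
	bin := testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			binaryContains(sorted, 9)
		}
	})
	fmt.Println("linear scan:  ", scan)
	fmt.Println("binary search:", bin)
}
```

At n = 10 the scan is typically competitive with or faster than the binary search — but the real lesson is that the numbers come from a measurement, not an intuition.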

The irony of 2026 is that the fundamentals matter more now, not less — precisely because the tools that generate code don't hold the mental model of correctness. That part is still yours.

The engineers upvoting Rob Pike on Lobste.rs this week seem to know this. The question is whether it registers before or after the incident report.


Sources: Rob Pike's 5 Rules of Programming · To be a better programmer, write little proofs in your head · Is simple actually good? · Things That Turbo Pascal Is Smaller Than