Test first, design second

We always figured out how we were going to test things before, or while, we designed them.

Hey there computer friends. (Trust the computer! The computer is your friend!) You're reading Simpler Machines, a weekly letter about making machines and making them simpler. If you like it, you'll probably also enjoy Photo Newsletter, a weekly letter about photographs.

I'm Nat Bennett, software ronin, and before it was conquered in corporate battle, I worked for a company called Pivotal.

At Pivotal, we built working software, quickly and reliably. When a team started work on a problem, you could be confident that you were going to get something useful within the next three months. Even if it wasn't complete, it would be usable by a collaborating team, or shareable with a friendly customer.

This does not appear to be typical of the industry.

There are a bunch of reasons why this might be, lots of ways that how Pivotal made software differed from how most companies make it. We were deliberately weird, outliers along a lot of axes.

Maybe that, in itself, is the reason. We were committed to radical ideas in software, and "it's possible to reliably ship working software" is possibly the most radical idea in software development. Maybe we just decided that it was possible, and it became so.

But I think there's another reason that's at least as important:

We always figured out how we were going to test things before, or while, we designed them.

This is a slightly subtle point, because I don't mean we always practiced test-driven development, in the sense of always writing a fast automated test before making a change to the behavior of a system. And I don't mean that TDD is necessary to consistently produce working software. It's obviously possible to do good work without it, and it's not always practical to practice strict TDD.

It is always practical, however, to ask, "How will we know this does what we intend it to do?" before beginning a design.

It's almost always practical to work out how you're going to evaluate your software. This might include:

  • Understanding the person who has the problem you're trying to solve, and what the problem is stopping them from doing, in as much detail as you can
  • Building simple prototypes (maybe even with paper!) and playing around with them
  • Setting up a QA or integration environment with real working copies of any systems yours will need to collaborate with
  • Building a "working skeleton" of your system that can be continuously tested with those other systems
  • Setting up pipelines and infrastructure for running unit tests on every commit
  • Making some exploratory, test-and-refactor-only commits to the codebase you'll be working in
  • Participating in your release process
  • Implementing feature flags, a rollback process, and other safeguards that will allow you to test in production (there's a small sketch of this after the list)
  • Writing a scaffold that lets you "shadow" production traffic with your new system, before production actually uses your system's decisions (also sketched below)
  • Identifying the request or call that your software will change, and instrumenting the heck out of that part of the existing system
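
For instance, the feature-flag item above can start very small. Here's a minimal sketch in Python; the flag name, the environment-variable convention, and the pricing functions are all hypothetical stand-ins, not anyone's real system:

```python
import os

def new_pricing_enabled() -> bool:
    # Hypothetical flag: flipping NEW_PRICING_ENABLED off is the "rollback."
    return os.environ.get("NEW_PRICING_ENABLED", "false").lower() == "true"

def legacy_price(order: dict) -> float:
    return order["base_price"]  # stand-in for the known-good path

def new_price(order: dict) -> float:
    return order["base_price"] * 0.9  # stand-in for the code path under test

def price_order(order: dict) -> float:
    # The flag lets you exercise the new path in production while the
    # legacy path stays one environment-variable change away.
    return new_price(order) if new_pricing_enabled() else legacy_price(order)
```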

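The "shadow" scaffold can be similarly modest: serve the old system's answer, run the new one on the side, and record disagreements. Again a sketch, with hypothetical handlers standing in for the two systems:

```python
import logging

logger = logging.getLogger("shadow")

def handle(request, legacy_handler, new_handler):
    result = legacy_handler(request)  # production still depends only on this
    try:
        shadow = new_handler(request)  # the new system sees real traffic...
        if shadow != result:           # ...but its answers are only compared and logged
            logger.warning("shadow mismatch for %r: legacy=%r new=%r",
                           request, result, shadow)
    except Exception:
        logger.exception("shadow handler raised for %r", request)
    return result
```
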
Teams that consistently deliver working software tend to do that work first. They know that doing it early is easier than waiting until the system is designed and built, because:

  • Their system is much simpler early, so understanding why integrations or infrastructure aren't working is easier, because there are fewer variables.
  • Testing work is valuable early, even when they don't know much about what they need to build, whereas anything they design or build that early might have to be thrown away.
  • Figuring out how to test what they're going to build will help them learn what they need to build.

They often design at the same time, but they don't spend weeks or months writing documents without building anything. And they don't spend months writing code without at least a basic idea of how they're going to test, deploy, and validate the end result. Testing guides their design.

This habit doesn't protect a team from all possible software disasters. There are a variety of ways that a software project can go wrong even when the team consistently produces working software. (A thorough history of Pivotal might contain a complete catalogue of them!)

But it does protect the team from disasters that I rarely saw at Pivotal, and have frequently seen outside of it (and since Pivotal's acquisition).

  • The detailed design document that doesn't account for a critical use case. The release cancelled at the last minute because the new system can't do something simple that a system it integrates with requires.
  • The team that spends weeks arguing about a proposed architecture change without building anything.
  • The big shiny rewrite that gets cancelled halfway through.
  • Screaming fights between teams that turn out to be due to small differences in the assumptions they were making about the problem they were trying to solve together.
  • The complicated new system that still, after eighteen months of work, can't handle any customer use cases, because interfaces between its components keep changing, so there's no set of versions that actually works together.
  • The new system that works great, except that it can't be deployed without weeks of work, or updated without downtime.

And, unlike many of Pivotal's practices, the habit of testing first is one that you can try without making too many other changes to the way you work.

Ask, "How will we know that this does what we intend?" before beginning a piece of work, whether it's a line of code or an architectural design. Ask, "How can we build a prototype for this?" before you spend more than a few days working on a design document. If a technical discussion lasts more than a few minutes, ask, "How could we get some more information that would help us make this decision?"

I tend to develop a reputation for being "thoughtful," and a lot of it is that I'm always asking questions like this. They're also nice questions to have on hand because they almost always apply, even if I'm very new to the domain. Figuring out how to test things is often how I learn how they work.

If this resonates with you, please forward it to a friend, or share it on your social media of choice. If this changes how you approach software design (or any other kind!), let me know. Even better: if you think I'm totally wrong, please write in! My inbox is always open at nat-at-this-website, and I'd love to hear from you.