The true meaning of unit testing

You probably already know what “unit testing” means. So do I.

But—what if our definitions are different? Does unit testing mean:

  • Testing a self-contained unit of code with only in-memory objects involved.
  • Or, does it mean automated testing?

I’ve seen both definitions used quite broadly. For example, the Python standard library has a unittest module intended for generic automated testing.
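To see why the "automated testing" reading is natural, here is a minimal sketch of ordinary unittest usage (the function and class names are illustrative, not from any real project) — nothing in the module restricts you to isolated in-memory units; the same TestCase could just as easily make network calls:

```python
import unittest

def parse_port(value):
    """Parse a TCP port number from a string, validating its range."""
    port = int(value)
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

class ParsePortTests(unittest.TestCase):
    def test_valid_port(self):
        self.assertEqual(parse_port("8080"), 8080)

    def test_out_of_range(self):
        with self.assertRaises(ValueError):
            parse_port("70000")

if __name__ == "__main__":
    unittest.main()
```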

So we have two different definitions of unit testing: which one is correct?

Not just unit testing

You could argue that your particular definition is the correct one, and that other programmers should just learn to use the right terminology. But this seems to be a broader problem that applies to other forms of testing.

There’s “functional testing”:

  • It might mean black box testing of the specification of the system, as per Wikipedia.
  • At an old job, in contrast, we used the term differently: testing of interactions with external systems outside the control of our own code.

Or “regression testing”:

  • It might mean verifying software continues to perform correctly, again as per Wikipedia.
  • But at another job it meant tests that interacted with our external API.

Why is it so hard to have a consistent meaning for testing terms?

Testing as a magic formula

Imagine you’re a web developer trying to test an HTTP-based interaction with very simple underlying logic. Your thought process might go like this:

  1. “Unit testing is very important, I should unit test this code—that means I should test each function in isolation.”
  2. “But, oh, it’s quite difficult to test each function individually… I’d have to simulate a whole web framework! Not to mention the logic is either framework logic or pretty trivial, and I really want to be testing the external HTTP interaction.”
  3. “Oh, I know, I’ll just write a test that sends an HTTP request and make assertions about the HTTP response.”
  4. “Hooray! I have unit tested my application.”
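The test from step 3 might look something like this — a sketch, assuming a trivial JSON endpoint (the handler, route, and response body here are all hypothetical, stood up with the standard library so the example is self-contained):

```python
import json
import threading
import unittest
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A deliberately trivial application: the interesting part is the
# HTTP contract, not the logic.
class GreetingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"greeting": "hello"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

class HttpContractTest(unittest.TestCase):
    def setUp(self):
        # Bind to port 0 so the OS picks a free ephemeral port.
        self.server = HTTPServer(("127.0.0.1", 0), GreetingHandler)
        threading.Thread(target=self.server.serve_forever, daemon=True).start()

    def tearDown(self):
        self.server.shutdown()

    def test_greeting_contract(self):
        port = self.server.server_address[1]
        response = urllib.request.urlopen(f"http://127.0.0.1:{port}/greeting")
        self.assertEqual(response.status, 200)
        self.assertEqual(json.loads(response.read()), {"greeting": "hello"})
```

Note that this really does run a (tiny) web server and send a real HTTP request — exactly what the scolding in the next paragraph objects to.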

You go off and share what you’ve learned—and then get scolded for not doing real unit testing, for failing to use the correct magic formula. “This is not unit testing! Where are your mocks? Why are you running a whole web server?”

The problem here is the belief that one particular kind of testing is a magic formula for software quality. “Unit testing is the answer!” “The testing pyramid must be followed!”

When a particular formula proves not quite relevant to our particular project, our practical side kicks in and we tweak the formula until it actually does what we need. The terminology stays the same, however, even as the technique changes. But of course whether or not it’s Authentic Unit Testing™ is irrelevant: what really matters is whether it’s useful testing.

A better terminology

There are no universal criteria for code quality; it can only be judged in the context of a particular project’s goals. Rather than starting with your favorite testing technique, your starting point should be your goals. You can then use your goals to determine, and explain, what kind of testing you need.

For example, imagine you are trying to implement realistic looking ocean waves for a video game. What is the goal of your testing?

“My testing should ensure the waves look real.”

How would you do that? Not with automated tests. You’re going to have to look at the rendered graphics yourself, and then ask some other humans to look too. If you’re going to name this form of testing you might call it “looks-good-to-a-human testing.”

Or consider that simple web application discussed above. You can call that “external HTTP contract testing.”

It’s more cumbersome than “unit testing,” “end-to-end testing,” “automated testing,” or “acceptance testing”—but so much more informative. If you told a colleague about it they would know why you were testing, and they’d have a pretty good idea of how you were doing the testing.

Next time you’re thinking or talking about testing, don’t talk about “unit testing” or “end-to-end testing.” Instead, talk about your goals: what the testing is validating or verifying. Eventually you might reach the point of talking about particular testing techniques. But if you start with your goals you are much more likely both to be understood and to reach for the appropriate tools for your task.




