Testability is one of those feel-good concepts oft bandied about in programming lore but seldom precisely defined, so in my hubris, I thought I’d have a crack at enumerating the properties I’d expect testable code to exhibit:
It can be run through textual commands alone. This property stands in contrast to code that can only be executed by a human moving a mouse in a GUI, speaking into a cell phone microphone, or stepping into the death zone of a Terminator-era sentry gun.
It can be run with relatively little preparatory effort. For one, this means that testable code requires as few collaborating objects as arguments as possible. It also means that these argument objects should be, whenever possible, lightweight domain-specific objects that are relatively easy to instantiate (e.g. data container objects), or, better yet, simple, generic objects (e.g. String and Integer). Case in point: Say you have functionality in the ContactInfoStripper model that strips out phone numbers and other contact details from the body field of a CustomerServiceEnquiry. If ContactInfoStripper accepts a CustomerServiceEnquiry object as a parameter, then you’ve got to prepare one of those bad boys in your tests first, which complicates matters. A better implementation would have ContactInfoStripper accept a more basic value, i.e. a String (see the first code sketch below). Now the ContactInfoStripper interacts not with a CustomerServiceEnquiry object but with its #body field, which happens to be a simple String anyway. In addition to being easier to test, this implementation gives your code greater reuse value, for it assumes less about its dependencies—good engineering 101.
It can be executed quickly. This is important because programmer patience is a finite resource, one that is itself tested when waiting for test output.
For deterministic code, it consistently produces the same output when run with the same arguments.
For non-deterministic code, like randomisers, it ought to accept a seed argument that artificially causes the code to return the same result every time it is run (the second code sketch below shows one way to do this).
It outputs its response in a sufficiently clear format, ideally one that is machine readable. This allows automated tests to be easily connected to verify the functionality’s correctness.
It can be tested with relatively few lines of code, thus saving programmer time and enabling the team to focus on writing new features.
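To make the ContactInfoStripper point concrete, here’s a minimal Ruby sketch. The method name and the phone-number pattern are invented for illustration; the important part is that the argument is a plain String:

```ruby
# Hypothetical sketch: the stripper works on a plain String, so a test
# never needs to build a full CustomerServiceEnquiry record first.
class ContactInfoStripper
  PHONE_PATTERN = /\+?\d[\d\s\-]{6,}\d/ # crude pattern, for illustration only

  def self.strip(text)
    text.gsub(PHONE_PATTERN, "[removed]")
  end
end

# In a test, a bare string is all the setup required:
ContactInfoStripper.strip("Call me on +44 7700 900123 please")
# => "Call me on [removed] please"

# Callers that do hold a CustomerServiceEnquiry simply pass its #body along:
# ContactInfoStripper.strip(enquiry.body)
```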
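And here’s a second sketch, this time of the seed idea for non-deterministic code. The class and its interface are invented for illustration; production callers would simply omit the seed:

```ruby
# Hypothetical sketch: a randomiser that accepts an optional seed so a test
# can pin its output down, while production callers omit the argument.
class WinnerPicker
  def initialize(seed: nil)
    @rng = seed ? Random.new(seed) : Random.new
  end

  def pick(entrants)
    entrants.sample(random: @rng)
  end
end

picker = WinnerPicker.new(seed: 1234)
picker.pick(%w[alice bob carol]) # the same winner on every run with this seed
```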
Now that we have a definition of testability at hand, I’m going to share some of my programming tactics for achieving this goal. As with much in this series, the following is hackish… that’s a cue for the more squeamish and perfectionistic programmers out there to brace themselves. Be advised, too, that the four tactics which follow aren’t intended to be mutually exclusive; the split, and the corresponding titles, are motivated partly by my wish to leave you with evocative, easy-to-remember ideas.
1. Interface Hooks
This is all about manufacturing/surfacing invariants in your codebase, and then writing tests that exploit these shared invariants. Let’s say that every database-backed object in your system has a valid? method which checks whether all data validations pass. A spiffy way to test that all your various objects’ validation logic works would be to instantiate one of each object using factory/fixture data, then use a loop to test the result of valid? on each one. This testing code could be written in as few as five lines, yet might yield hundreds of useful and meaningful tests for as many objects. Thanks to the objects’ consistency from the outside, you can test more objects with fewer tests.
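If you’re using RSpec and FactoryBot, that loop might look roughly like the sketch below; the factory names are placeholders for whatever your app actually defines:

```ruby
# Hedged sketch: one loop exercises the valid? invariant on every class
# for which a factory exists. Swap in your own factory names.
RSpec.describe "model validations" do
  %i[customer order invoice product].each do |factory_name|
    it "builds a valid #{factory_name} from factory data" do
      expect(FactoryBot.build(factory_name)).to be_valid
    end
  end
end
```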
Say you want to test whether the many, mostly static pages of your web application render correctly. I’m thinking about the terms and conditions page, the about page, the pricing pages, etc. If you were thinking in terms of interface hooks, you’d write a test that visits every one of these pages and reports whether the page contains the "h1.title" selector, an invariant CSS marker that you purposefully place in every static page except the error one. Similar to above, a single test would meaningfully verify scores of pages. How many times have you had a static page fail for dumb reasons, like a misspelled variable name? A single, well-thought-out test can police for these errors forever more. That’s a big benefit.
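A hedged Capybara sketch of the same trick, with made-up paths standing in for your real routes:

```ruby
# Hedged sketch: one loop checks that each mostly-static page renders and
# carries the invariant "h1.title" marker.
RSpec.describe "static pages", type: :feature do
  ["/about", "/pricing", "/terms"].each do |path|
    it "renders #{path} with the invariant title marker" do
      visit path
      expect(page).to have_css("h1.title")
    end
  end
end
```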
All these shortcuts are afforded by upfront consistency, which, contrary to popular psychology, is actually a Good Thing. If I were the screenwriter for the Terminator movies, I would scrap this rubbish about knocking off John Connor and instead teleport an over-muscled, under-clothed Austrian back in time to wipe out that one buffoon who proclaimed that "consistency is the hobgoblin of little minds".
2. Backdoors
The main idea behind backdoors is to make it easier to test code which would otherwise be awkward—even nightmarish—to set up, reach, and verify. Example time: Most any commercially orientated website has a pantheon of conversion-tracking pixels, remarketing tags, and tracking scripts sending data out to the big players (Google AdWords, Google Analytics, Facebook, etc.). This stuff is critical to test—often more important than much of the core software—since tainted data can devastate your business intelligence and, by extension, your business. The thorough way to test this functionality would be with a squadron of HTTP integration tests that verify your implementation’s correctness by querying the APIs of your test accounts on Google Analytics, Facebook Ads, etc. But by God, that would be a monstrous amount of coding. Isn’t there an approach that’s less demanding?
Backdoors provide a scruffy, better-than-nothing solution. Here, you would rewrite your application code so that all these tracking requests happen via a newly introduced JavaScript object. The purpose of this object would be to act as a man-in-the-middle that gathers and then relays information about which tracking requests were made to which services and with which data. With this man-in-the-middle object in place, your automated tests need only check that its methods were called with the right arguments. For radically less effort than our initial approach, backdoors give us a convenient way to check for errors like passing the wrong data to tracking platforms or failing to include the right tracking pixels on the right page. (But be advised that these tests won’t be any good at alerting you when you’ve used the third-party service’s API incorrectly. That said, I find such errors to be rare, so I don’t bother testing them in the small applications I earn my keep with.)
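To keep the examples in one language, here’s the man-in-the-middle pattern sketched in Ruby rather than JavaScript. The class name and method signatures are my own invention, but the shape is the same: funnel every tracking call through one object that remembers what it was asked to do.

```ruby
# Hedged sketch: every tracking request is funnelled through one relay
# object that records what it was asked to send and to whom.
class TrackingRelay
  attr_reader :calls

  def initialize
    @calls = []
  end

  def track(service, event, payload = {})
    @calls << { service: service, event: event, payload: payload }
    # ...here the real request to Google Analytics, Facebook, etc. would fire
  end
end

# A test asserts against the relay instead of querying the remote APIs:
relay = TrackingRelay.new
relay.track(:google_analytics, "purchase", value: 49.99)
relay.calls.first[:service] # => :google_analytics
```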
Falling more or less into this category too is dependency injection. Roughly speaking, dependency injection is when you extract a blob of functionality into an external entity (object or function) that then gets passed to the original, now considerably lighter, object as an argument. This aids testability because the programmer can create additional, look-alike external entities that stand in for the originals yet are easier to control and observe in tests. Complex or slow or difficult-to-monitor real-life functionality can thus be switched out for alternative versions more conducive to testing, such as the famous mock.
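A minimal, hypothetical Ruby sketch of dependency injection in this spirit (the mailer classes are invented for the example):

```ruby
# Hedged sketch: the mailer collaborator is injected, so a test can hand
# in a fake that merely records what would have been delivered.
class ReceiptSender
  def initialize(mailer:)
    @mailer = mailer
  end

  def send_receipt(order)
    @mailer.deliver(to: order.email, subject: "Your receipt ##{order.id}")
  end
end

class FakeMailer
  attr_reader :deliveries

  def initialize
    @deliveries = []
  end

  def deliver(**args)
    @deliveries << args
  end
end

# In a test: inject the fake, exercise the object, inspect what was captured.
# sender = ReceiptSender.new(mailer: FakeMailer.new)
```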
3. Maximise Reachability and Operability
Early into my coding career, I discovered that not all implementations of a feature are equally reachable—or operable—by tests. For example, about six years ago I needed to implement a file upload system that let the user select multiple files at once. The go-to, easy-to-use library for web developers back then was a Flash-based uploader (not remotely true anymore, BTW), so I went with the flow and added the Flash to my application. But, as I later discovered, the browser-test tooling for Rails couldn’t execute Flash code, meaning I’d have to jettison any integration tests that went through this application flow. This is how I ended up having absolutely no automated tests for the most critical flow of my web application… In a fit of optimism, I swore to myself that I’d test this flow manually before every freaking deploy. Needless to say, this never happened. The predictable result of my folly was bugs, bugs, and more bugs. It was like the movie set of Beetlejuice in there. Two years later, I bit the bullet and replaced the reclusive Flash code with a JavaScript uploader which was, mercifully, testable within the Rails browser-test harness. And boy oh boy, what a difference to stability this made! It was a revolution not only in code but in my peace of mind. This saga was, for me, an extended lesson in the value of testability as a goal in itself, even when its attainment comes at the cost of ease of initial implementation.
Another aspect of this principle in action is arranging your code so that anything remotely complex can be easily tested within the interactive console. This is desirable for a plethora of reasons. For one, the quick feedback you get from toying around with objects in the console is fantastic during debugging and exploratory coding sessions. For another, anything that’s easy to handle within the interactive console is equally easy to handle in automated tests. What is a unit test, after all, but a short narrative taking place in a freshly instantiated console with a tightly controlled context? Every Rails programmer knows that controllers ought to be kept skinny and models fat. This best practice could have been predicted by the advice in this paragraph, since it’s startlingly difficult to test controllers from the interactive console.
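To make that concrete, here’s a small, hypothetical sketch of the skinny-controller idea in a modern Rails app; the Order model and its line_items association are assumptions made for the sake of the example.

```ruby
# Hedged sketch: the discount rule lives on the model, where it can be
# poked at straight from `rails console` (and therefore from a unit test).
class Order < ApplicationRecord
  def bulk_discount
    line_items.count >= 10 ? 0.1 : 0.0
  end
end

# The controller merely delegates, which keeps it skinny:
class OrdersController < ApplicationController
  def show
    @order = Order.find(params[:id])
    @discount = @order.bulk_discount
  end
end
```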
4. Explanatory Dry Runs
A dry run is supposed to tell you exactly what running some code will do without actually doing it. Instead of sending emails, it lists out in a CSV file which emails will be sent to whom. Instead of paying out royalties, it prints out how much money will be paid to which author. Instead of backing up a folder, it describes which files will be moved and what effects these transfers will have on the backup destination folder. Dry runs are indispensable when developing features of the kind where there’s no going back if an error occurs. You can’t unsend an email, recall a royalty payment, or undelete a clobbered file.
I assume you’re already writing suites of automated tests, so you might reasonably ask what extra good might come from writing functionality for dry runs. Here’s what: They help you explore a messy domain and inform you about what unit tests you ought to write. This is especially helpful when you have code whose output depends on interactions between a myriad of variables. Trying to reason about this kind of complexity by hand is treacherous, even reckless. Think about it—some of the worst bugs happen when programmers haven’t fully grasped the problem. A battery of unit tests asserting an incorrect understanding does nothing but solidify and propagate a misunderstanding. In contrast, dry run functionality, combined with intelligent eyeballing, helps you ensure that the code and its tests better match the lay of the land.
This has all been a bit abstract, so let me provide you with an example from my codebase. My logic for determining which automated emails go to which customers is labyrinthine, for it must please two gods—that of patience and that of time. The code needs to balance making the ideal offer (given a set of previous purchases) against not oversaturating any particular customer with emails and thereby irritating them. To help me think this through, I gave my emailer object dry run functionality that prints a CSV sheet of which customers will receive which emails. The sheet also contains various columns of data about each customer’s history with us, including all the bits and pieces that would affect the decision of what emails to send (e.g. the customer bought a "philosophy" product; they last visited in May 2015 so are probably still in college, etc.).
One implementation tip: insofar as is possible, the code for dry runs should follow the same execution path as the real deal, thereby ensuring that its output matches its bigger brother as closely as possible.
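Putting those last two points together, here’s a hedged sketch of how such a dry-run switch might be wired so that both paths share the selection logic. The class names (OfferEmailer, OfferMailer) and the CSV columns are simplified stand-ins rather than the real thing:

```ruby
# Hedged sketch: the dry run walks exactly the same selection logic as the
# real send; only the final step differs (write a CSV row vs. deliver).
require "csv"

class OfferEmailer
  def initialize(customers, dry_run: false)
    @customers = customers
    @dry_run   = dry_run
  end

  def run
    planned = @customers.map { |customer| [customer, choose_offer(customer)] }
    @dry_run ? write_csv(planned) : deliver_all(planned)
  end

  private

  def choose_offer(customer)
    # ...the labyrinthine selection logic lives here, shared by both paths
  end

  def write_csv(planned)
    CSV.open("dry_run.csv", "w") do |csv|
      csv << ["email", "offer"]
      planned.each { |customer, offer| csv << [customer.email, offer] }
    end
  end

  def deliver_all(planned)
    planned.each { |customer, offer| OfferMailer.deliver(customer, offer) }
  end
end

# OfferEmailer.new(customers, dry_run: true).run # => writes dry_run.csv
```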