Automated software testing part 3: writing tests, and refactoring

Previous: part 2 – Next: part 4

In this post, we will look at the process of writing unit tests for existing code. Along the way, we will refactor the code to make the tests easier and clearer – and to improve the code itself.

The code is in node.js with the express framework. However, the approach we will take is not language-specific. Reading this post can help you even if you have never written any node.js code.

This is a long post and would have been better as a screencast. When I have created that screencast, I’ll add a link to it in this post.

You can also find the code below in the src/unit-test-1 dir of my blog code repository.

Note: The code snippets do a few things that should be done differently in production code. In production, for example, you would use asynchronous fs calls (e.g. fs.readFile instead of fs.readFileSync), add "use strict";, and use modules like log4js and sinon.

Step 1

This is the code we shall write tests for:
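A minimal sketch of such a server.js, assuming express, a server.log file next to the script, and log lines containing “server started” (the exact names and file locations are assumptions):

    // server.js -- a sketch, not necessarily the exact code
    var express = require('express');
    var fs = require('fs');

    var LOG_FILE = __dirname + '/server.log';
    var app = express();

    app.get('/count/start', function (req, res) {
        // count the "server started" lines in the log file
        var contents = fs.readFileSync(LOG_FILE, 'utf8');
        var count = contents.split('\n').filter(function (line) {
            return line.indexOf('server started') !== -1;
        }).length;
        res.send({ count: count });
    });

    // log every server start, then start listening
    fs.appendFileSync(LOG_FILE, 'server started\n');
    app.listen(64001, 'localhost');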

This code starts an HTTP server that listens on localhost:64001 . It writes to a log file whenever it is started. On an HTTP request to http://localhost:64001/count/start , it returns a JSON structure that shows how many times the server has started. Like this:
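For instance, assuming a key called count, after three starts the response would look roughly like:

    { "count": 3 }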

We want to test whether the correct count is returned for our URL – that is, the code in the second argument of app.get('/count/start', function). How can we do that?

If we load server.js in our test file, the server is actually started and the real log file is used. This is not an option. Besides, running the server and testing the HTTP call is a high-level test – let’s test at a lower level. To do this, we need to separate the count start function from the server itself.

Step 2

We’ll create a count.js  file to put the count start functionality in. It looks like this:
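Something along these lines (a sketch; the log file path and the matching text are assumptions):

    // count.js -- sketch
    var fs = require('fs');

    var LOG_FILE = __dirname + '/server.log';

    // express request handler: count the "server started" lines and send the result
    function countStart(req, res) {
        var contents = fs.readFileSync(LOG_FILE, 'utf8');
        var count = contents.split('\n').filter(function (line) {
            return line.indexOf('server started') !== -1;
        }).length;
        res.send({ count: count });
    }

    exports.countStart = countStart;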

The app.get()  line in server.js  needs to be changed as well:
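Roughly:

    // server.js -- the route now points at the new module (sketch)
    var count = require('./count');

    app.get('/count/start', count.countStart);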

Excellent. Now we can create count.spec.js to test the new countStart function. Let’s give it a shot. We’ll use the mocha framework for testing, and the should module to check actual results against expected results.
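A naive first attempt could look something like this (a sketch – as we’ll see, it doesn’t quite work):

    // count.spec.js -- first attempt (sketch)
    var should = require('should');
    var count = require('../count');

    describe('countStart', function () {
        it('returns the number of server starts', function () {
            var result = count.countStart();
            result.count.should.equal(2);
        });
    });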

There are a few problems here.

  • This test will read the real server.log  file. Therefore, the number of “server start” lines can be different each time you run this test. Additionally, the file may not exist at all.
  • The countStart function does not return a useful value; instead it calls res.send with the value to be tested. That makes it more difficult to test.

Let’s get it to work anyway, by creating a fake res  object with a send  method.
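A sketch of such a fake response object, which simply records whatever is passed to send:

    // count.spec.js -- sketch with a fake response object
    var should = require('should');
    var count = require('../count');

    describe('countStart', function () {
        it('sends the number of server starts', function () {
            var sent;
            var fakeRes = {
                send: function (data) { sent = data; }
            };

            count.countStart({}, fakeRes);

            // we cannot predict the exact number, because the real server.log is used
            sent.should.have.property('count');
        });
    });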

Now we have something that works, most of the time. In my opinion, this is not acceptable test code yet. So let’s fix the first problem, that the test code uses the same server.log  file as the server itself.

Step 3

The problem with the current approach is that the countStart function determines the log file by itself. It would be better if the log file were determined elsewhere, and then given to (or asked for by) the countStart function. Ever since I saw Miško Hevery’s talk on Dependency Injection, I’ve been a fan of DI; and even if this is not exactly DI, it’s certainly very similar.

We’ll move the log file related code to a separate file, log.js, and we’ll make it possible to set and get the filename of the log file.
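A possible log.js, with a setter and getter for the file name (the function names are assumptions):

    // log.js -- sketch
    var fs = require('fs');

    var logFile = __dirname + '/server.log';

    exports.setLogFile = function (fileName) {
        logFile = fileName;
    };

    exports.getLogFile = function () {
        return logFile;
    };

    // append a "server started" line to the log file
    exports.serverStarted = function () {
        fs.appendFileSync(logFile, 'server started\n');
    };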

Additionally, we’ll need a few changes to server.js and count.js.
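Roughly like this (sketch):

    // server.js -- the relevant lines
    var log = require('./log');
    log.serverStarted();

    // count.js -- ask log.js for the file name instead of hard-coding it
    var fs = require('fs');
    var log = require('./log');

    function countStart(req, res) {
        var contents = fs.readFileSync(log.getLogFile(), 'utf8');
        var count = contents.split('\n').filter(function (line) {
            return line.indexOf('server started') !== -1;
        }).length;
        res.send({ count: count });
    }

    exports.countStart = countStart;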

We’re getting somewhere.

In our test file, we can now use a different log file. We’ll have to create this file as part of the setup of our test, and remove it at the end. Here’s our modified count.spec.js :
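Along these lines (sketch; the test log file name is an assumption):

    // count.spec.js -- sketch
    var fs = require('fs');
    var should = require('should');
    var log = require('../log');
    var count = require('../count');

    var TEST_LOG = __dirname + '/test-server.log';

    describe('countStart', function () {
        before(function () {
            // create a log file with exactly two "server started" lines
            fs.writeFileSync(TEST_LOG, 'server started\nserver started\n');
            log.setLogFile(TEST_LOG);
        });

        after(function () {
            fs.unlinkSync(TEST_LOG);
        });

        it('sends the number of server starts', function () {
            var sent;
            var fakeRes = { send: function (data) { sent = data; } };

            count.countStart({}, fakeRes);

            sent.count.should.equal(2);
        });
    });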

Note that we can now actually test whether the number of lines is 2, because we control the log file in the test. In step 2, we could not predict how many lines there would be. That means that this test tests our function better. In step 2, the countStart  function could just as easily always have returned 42, and our test would not have noticed that.

Regarding our DI approach: we’re not injecting the log file name into the countStart  function yet, because this function is being called by the express framework. Let’s work on that now. This should also fix the second problem mentioned at the end of step 2.

Step 4

If we look at the countStart  function, we see that it is currently doing 2 things: it is handling the HTTP request and sending a response, and it’s counting the “server started” lines in the log file. In general, when a function does 2 things that are easily separated, it’s good to separate the two. This makes the code cleaner, easier to reuse, and easier to test.

Here’s the new count.js:
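A sketch of the split:

    // count.js -- sketch
    var fs = require('fs');
    var log = require('./log');

    // count the "server started" lines in the log file
    function _countStart() {
        var contents = fs.readFileSync(log.getLogFile(), 'utf8');
        return contents.split('\n').filter(function (line) {
            return line.indexOf('server started') !== -1;
        }).length;
    }

    // express request handler: build and send the response
    function countStart(req, res) {
        res.send({ count: _countStart() });
    }

    exports.countStart = countStart;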

The function countStart now only gets the number of “server start” lines from elsewhere, builds the response data structure, and sends it. The _countStart function (ok, the name could have been better) no longer has any knowledge about HTTP requests or responses, and just counts lines in a file.

This also means that we should test both functions. Both tests will be easier than the single test of step 3.

First, we’ll test _countStart. Currently, this function is not listed in the exports of count.js, because it’s a private function (maybe it should be exported, especially if we want to use this functionality in other parts of the code). There is a trick to get access to this function in our test suite, and that trick is the rewire module.

This is how we test _countStart .
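Something like this (sketch), using rewire’s __get__ to reach the unexported function:

    // count.spec.js -- testing _countStart (sketch)
    var fs = require('fs');
    var should = require('should');
    var rewire = require('rewire');
    var log = require('../log');

    var count = rewire('../count');
    var TEST_LOG = __dirname + '/test-server.log';

    describe('_countStart', function () {
        before(function () {
            fs.writeFileSync(TEST_LOG, 'server started\nserver started\n');
            log.setLogFile(TEST_LOG);
        });

        after(function () {
            fs.unlinkSync(TEST_LOG);
        });

        it('counts the server start lines in the log file', function () {
            var _countStart = count.__get__('_countStart');
            _countStart().should.equal(2);
        });
    });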

This looks very similar to the test in step 3. The difference is that we don’t fiddle around with a fake response object anymore: _countStart simply returns the count.

The other test is for countStart. In this test, we use a very important principle: we don’t need to test what has already been tested elsewhere. Therefore, we are going to replace the _countStart function with a function of our own. This is called monkey patching.
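A sketch of that test, using rewire’s __set__ to monkey patch _countStart:

    // count.spec.js -- testing countStart (sketch)
    var should = require('should');
    var rewire = require('rewire');

    var count = rewire('../count');

    describe('countStart', function () {
        var revert;

        before(function () {
            // replace the real _countStart with a stub that always returns 42
            revert = count.__set__('_countStart', function () { return 42; });
        });

        after(function () {
            revert();
        });

        it('sends whatever _countStart returns', function () {
            var sent;
            var fakeRes = { send: function (data) { sent = data; } };

            count.countStart({}, fakeRes);

            sent.count.should.equal(42);
        });
    });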

That’s a lot of preparation for a single test! It happens often that the preparation for a unit test is significantly more code than the call and verification themselves. It’s just part of testing.

In the above test, we still need the fake response object, but we don’t need to write files anymore.

This is a reasonable final state for our code. There is one more refactoring improvement that can be done.

Step 5

The last thing that bothers me is the _countStart function. This function does too much: it reads stuff from a file, and it counts stuff. The counting part in particular is a reasonably complex operation, and we’d like to test that separately.

Let’s split it into two parts.
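A sketch of the new count.js:

    // count.js -- sketch
    var fs = require('fs');
    var log = require('./log');

    // read a file and return its contents as a string
    function _readFile(fileName) {
        return fs.readFileSync(fileName, 'utf8');
    }

    // count the lines in a string that contain the given text
    function _countMatchingLinesInString(string, match) {
        return string.split('\n').filter(function (line) {
            return line.indexOf(match) !== -1;
        }).length;
    }

    function _countStart() {
        return _countMatchingLinesInString(_readFile(log.getLogFile()), 'server started');
    }

    // express request handler
    function countStart(req, res) {
        res.send({ count: _countStart() });
    }

    exports.countStart = countStart;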

There. Now it’s much easier to test the matching function.
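For example (sketch):

    // testing _countMatchingLinesInString (sketch)
    var should = require('should');
    var rewire = require('rewire');

    var count = rewire('../count');
    var _countMatchingLinesInString = count.__get__('_countMatchingLinesInString');

    describe('_countMatchingLinesInString', function () {
        it('counts the lines that contain the given text', function () {
            var input = 'server started\nsomething else\nserver started\n';
            _countMatchingLinesInString(input, 'server started').should.equal(2);
        });

        it('returns 0 when nothing matches', function () {
            _countMatchingLinesInString('foo\nbar\n', 'server started').should.equal(0);
        });
    });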

The test for _readFile  is also simple.
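Sketch:

    // testing _readFile (sketch)
    var fs = require('fs');
    var should = require('should');
    var rewire = require('rewire');

    var count = rewire('../count');
    var _readFile = count.__get__('_readFile');
    var TEST_FILE = __dirname + '/test-read.txt';

    describe('_readFile', function () {
        before(function () {
            fs.writeFileSync(TEST_FILE, 'hello\n');
        });

        after(function () {
            fs.unlinkSync(TEST_FILE);
        });

        it('returns the contents of the file as a string', function () {
            _readFile(TEST_FILE).should.equal('hello\n');
        });
    });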

Looking at these four functions, we can notice a difference between the two pairs. _readFile and _countMatchingLinesInString are functions that do actual work. Let’s call them worker functions. The functions countStart and _countStart don’t do any work, but they link worker functions together. Let’s call these linker functions.

To test a linker function, all you need to do is to verify that it calls the right worker functions in the right order, with the right arguments. This can be done by mocking the worker functions; there is no need to call the actual worker functions. For example, for _countStart :
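A sketch, replacing both worker functions with hand-made mocks that record how they were called (in production code, a module like sinon would do this for you):

    // testing the linker function _countStart (sketch)
    var should = require('should');
    var rewire = require('rewire');
    var log = require('../log');

    var count = rewire('../count');

    describe('_countStart', function () {
        it('reads the log file and counts the server start lines in it', function () {
            var calls = [];
            var revertRead = count.__set__('_readFile', function (fileName) {
                calls.push(['_readFile', fileName]);
                return 'fake file contents';
            });
            var revertCount = count.__set__('_countMatchingLinesInString', function (string, match) {
                calls.push(['_countMatchingLinesInString', string, match]);
                return 42;
            });

            var result = count.__get__('_countStart')();

            // the right workers were called, in the right order, with the right arguments
            calls[0].should.eql(['_readFile', log.getLogFile()]);
            calls[1].should.eql(['_countMatchingLinesInString', 'fake file contents', 'server started']);
            // and the result of the last worker is passed through
            result.should.equal(42);

            revertRead();
            revertCount();
        });
    });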

For this specific example, I’m not sure whether step 5 is overkill. It might be. I found it important to show you though, because for more complex situations, the separation into worker functions and linker functions can make your life much easier.

Conclusions

Even for a very simple server that only does one thing, writing unit tests will make you think about and improve the design of your code. If a function or module is hard to test, it is often an indication of a problem with the design.

When testing, keep in mind dependency injection and the difference between worker functions and linker functions. Also think about which parts of the code have already been tested elsewhere, so you know when it’s safe to mock or monkey patch that code. This will help you to restructure your code to make it easier to test – and hopefully also easier to reuse and maintain.

 
