Testers, Code and Automation, Part 3

Working in our hypothetical developer-tester context over the last two posts, we’ve done some good work. We have a working implementation and we have some tests. Here we’ll finish up the work and close with some thoughts on the journey.

As mentioned in the first post, there is a GitHub repo that I’m providing in case someone wants to use it. It’s tagged at various points and I’ll indicate what tags you can check out if you want to join in at a certain place. Case in point, if you want to start off where we last ended:

git checkout tags/secondpost

Getting a Number From the User

We closed the second post by indicating some areas we weren’t testing with our code-based tests. A good start to this post would be to write a test for the getNumber function. Here’s an initial attempt to be added to our growing list of tests in main_test.go.
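
In sketch form, assuming getNumber takes a *bufio.Scanner and returns a result message along with a “done” flag (and assuming “bufio” and “strings” are in the import list of main_test.go), it might look like this:

func Test_getNumber(t *testing.T) {
    // Simulate the user typing "7" at the prompt.
    input := strings.NewReader("7")
    scanner := bufio.NewScanner(input)

    msg, done := getNumber(scanner)

    if done {
        t.Error("got a done signal for valid input")
    }

    if msg != "7 is prime" {
        t.Errorf("wrong message returned: got %q", msg)
    }
}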

Here we’re simulating user input. In this case, the test sort of “mocks” the user typing in “7” at our prompt. That simulated response, stored in a reader, is what gets passed to the actual function.

That works. And we now have partial coverage of getNumber. In fact, your overall coverage should be about 50% right now. As a reminder, you can get coverage by:


go test -cover .

If you want an HTML coverage report, you can do this:


go test -coverprofile="coverage.out" .
go tool cover -html="coverage.out"

You’ll likely see, in that report, that the body of getNumber is only partially covered.

Thus we see what tests we’re missing: when the user quits and when they enter an invalid value. Clearly we’ll want to add conditions to our test and, just as we did with a prior test, we’ll want to parameterize it. This means our new test can be framed as a table-based test.
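
A sketch of that structure, assuming the same getNumber signature as before:

func Test_getNumber(t *testing.T) {
    tests := []struct {
        name     string
        input    string
        expected string
        done     bool
    }{
        {name: "prime", input: "7", expected: "7 is prime"},
    }

    for _, e := range tests {
        scanner := bufio.NewScanner(strings.NewReader(e.input))

        msg, done := getNumber(scanner)

        if msg != e.expected {
            t.Errorf("%s: expected %q but got %q", e.name, e.expected, msg)
        }

        if done != e.done {
            t.Errorf("%s: expected done to be %v", e.name, e.done)
        }
    }
}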

We now have a structure for our table test and this means we can add more conditions.
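
For example, the table in our test might grow to something like this, where “Enter a whole number” is the message our implementation returns for invalid input:

tests := []struct {
    name     string
    input    string
    expected string
    done     bool
}{
    {name: "prime", input: "7", expected: "7 is prime"},
    {name: "decimal", input: "1.1", expected: "Enter a whole number"},
    {name: "word", input: "three", expected: "Enter a whole number"},
    {name: "empty", input: "", expected: "Enter a whole number"},
    {name: "quit", input: "q", expected: "", done: true},
    {name: "uppercase quit", input: "Q", expected: "", done: true},
}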

Notice that we’re now writing those tests that we originally didn’t have to write because of static type checking. Specifically, we’re testing for decimal (floating point) numbers and textual input (such as “three”) rather than numeric input. Without a user interface, we literally could not use the program with floating point numbers or text as the input, because Go performs static type checking on our code at compile time. But now that we allow user input, those situations can happen at run time.

This is a valuable area of discussion between developers and testers and can obviate a lot of “that can’t happen” conversation killers. Here what can and can’t happen is clearly context dependent. And the most important context is what the user can actually experience. If the user can experience it, we need to assume it can happen. Thus, we should test for what happens when it does happen and our application should respond appropriately.

Also notice that, as per the discussion in the previous post, while we have three tests that check for the “Enter a whole number” outcome, only one of those improves our code coverage. All three, however, improve our test coverage!

Also notice that we’re testing for both a lowercase and uppercase letter “q” when the user quits. That might not have been obvious to check and I purposely didn’t bring it up until this point.

With this, the coverage should be around 56.2% and the entire getNumber function is tested.

Getting the User Input

We are still missing feature coverage here because getInput remains untested. You might argue that, in fact, getInput is being tested, at least implicitly. If you think that, consider the test code we have. Is anything actually using that function?

If we use this prime calculator as a human would, we certainly would be testing all of the logic together. But our automation is not, as of yet, doing so. That’s what our coverage gap is telling us.

In looking at the getInput function, one line should stick out to you based on what we’ve done so far, in terms of sending user input via our tests.
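
To see it, here’s roughly what getInput looks like at this point (the prompt signature is an assumption based on our earlier refactoring):

func getInput(doneChan chan bool) {
    // The input source is fixed: it can only ever be the standard input.
    scanner := bufio.NewScanner(os.Stdin)

    for {
        msg, done := getNumber(scanner)

        if done {
            doneChan <- true
            return
        }

        fmt.Println(msg)
        prompt(os.Stdout)
    }
}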

The challenge here is that the standard input is hardcoded. This means that right now I can’t really pass anything in to this function. However, this is an easy problem to solve. The developers can make this change:
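
In sketch form, the change looks like this:

func getInput(in io.Reader, doneChan chan bool) {
    // The input source is now whatever the caller chooses to provide.
    scanner := bufio.NewScanner(in)

    for {
        msg, done := getNumber(scanner)

        if done {
            doneChan <- true
            return
        }

        fmt.Println(msg)
        prompt(os.Stdout)
    }
}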

Notice the change here? The developers have added a new argument to the function and that argument is now what’s passed to the scanner. That input could be os.Stdin that we were originally using but it can also be something else. In fact, let’s change the call to getInput in our main function:
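
Assuming main currently looks something like the following, the only change is the extra argument:

func main() {
    startup(os.Stdout)
    prompt(os.Stdout)

    doneChan := make(chan bool)

    // Pass the standard input explicitly now.
    go getInput(os.Stdin, doneChan)

    <-doneChan
    close(doneChan)

    fmt.Println("Exiting")
}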

At this point, notice how it’s easy once again to run our existing suite of tests. And while we still aren’t testing the getInput function, those tests will give us confidence that we haven’t broken the world with our changes.

Yes, we could also run those tests by hand. But this is a case where having automation is really nice. Why some testers out there debate this is a little odd to me. And keep in mind that our automation was built up only based on human test design considerations.

Now let’s actually write a test for that function.
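
A minimal sketch of such a test (assuming “bytes” is imported in main_test.go):

func Test_getInput(t *testing.T) {
    doneChan := make(chan bool)

    // Simulate the user typing "7" and then "q" to quit.
    var stdin bytes.Buffer
    stdin.Write([]byte("7\nq\n"))

    go getInput(&stdin, doneChan)
    <-doneChan
    close(doneChan)
}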

What this is doing is sending the equivalent of the user typing “7” and then typing “q”. Essentially, the test is making sure that the goroutine actually executes, takes some values when it does so, and quits when execution has completed.

With this you will see that your coverage is now about 81.2% and the getInput function is entirely covered. The only thing not covered right now is the main function.

Okay, but should we test main? Certainly as a human executing the application, we do. The application literally can’t work without the function. But what about automation? Is it worth it? Well, let’s consider what it would mean.

Testing Main

This requires some creativity since main typically interacts with the environment: os.Stdout, os.Stdin, and the os package for program termination. To test the function effectively, we can restructure main slightly to make it more testable by isolating its core logic into a separate function. This is a common technique.

First, the developers could extract the core logic of main into a new function, say run, and parameterize it to accept dependencies like io.Reader and io.Writer.
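
A sketch of that, assuming startup and prompt already accept an io.Writer from our earlier refactoring:

func run(in io.Reader, out io.Writer) {
    // The startup message and initial prompt go to the writer we were given.
    startup(out)
    prompt(out)

    doneChan := make(chan bool)

    go getInput(in, doneChan)

    <-doneChan
    close(doneChan)

    fmt.Fprintln(out, "Exiting")
}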

Now the developers just have main call that new function:
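
That reduces main to a single line:

func main() {
    run(os.Stdin, os.Stdout)
}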

You could run the program to test it yourself and see if it still works. (Spoiler alert: it does.) Now, with this work done by the developers, you could write a test for run:
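
A sketch of that test, checking that the expected result shows up in the captured output:

func Test_run(t *testing.T) {
    // Simulate a user typing "7" and then quitting.
    var stdin bytes.Buffer
    stdin.Write([]byte("7\nq\n"))

    // Capture everything the program writes.
    var stdout bytes.Buffer

    run(&stdin, &stdout)

    output := stdout.String()

    if !strings.Contains(output, "7 is prime") {
        t.Errorf("missing expected output: %q; got '%s'", "7 is prime", output)
    }
}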

That will fail, however. You’ll likely see something like this:


7 is prime
> --- FAIL: Test_run (0.00s)
    main_test.go:143: missing expected output: "7 is prime"; got 'Is number prime?
        Enter a whole number; q to quit.
        > Exiting
        '
FAIL

It’s a little hard to reason about what’s happening there. But consider what execution of that looks like from a human perspective:


Is number prime?
Enter a whole number; q to quit.
> 7
7 is prime
> q
Exiting

The problem lies in how getInput produces its output. The human interaction above includes interleaved prompts (“> “) and responses (“7 is prime”). In the test setup, however, the captured output does not include all of these, particularly the response after processing 7.

Thus, what’s happening is that the stdout buffer in the test captures the output of run, but the interleaved prompts and results from getInput do not appear as expected, for two reasons:

  1. getInput writes directly to os.Stdout in some cases instead of the io.Writer passed to run.
  2. The prompt function uses os.Stdout, which bypasses the test’s output capturing mechanism.

This would mean we need to update the getInput function to use io.Writer for its output:
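
In sketch form:

func getInput(in io.Reader, out io.Writer, doneChan chan bool) {
    scanner := bufio.NewScanner(in)

    for {
        msg, done := getNumber(scanner)

        if done {
            doneChan <- true
            return
        }

        // Results and prompts now go to the writer we were given.
        fmt.Fprintln(out, msg)
        prompt(out)
    }
}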

Then we have to update the run function to pass stdout:
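
Again in sketch form:

func run(in io.Reader, out io.Writer) {
    startup(out)
    prompt(out)

    doneChan := make(chan bool)

    // Pass the writer along so results are captured as well.
    go getInput(in, out, doneChan)

    <-doneChan
    close(doneChan)

    fmt.Fprintln(out, "Exiting")
}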

With these changes, however, you’ll find that one of our existing tests, Test_getInput, no longer works. It won’t even compile, because we’re no longer passing the correct parameters to getInput. We have to update that.
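
The updated test just has to provide a writer as well; something like:

func Test_getInput(t *testing.T) {
    doneChan := make(chan bool)

    var stdin bytes.Buffer
    stdin.Write([]byte("7\nq\n"))

    // getInput now needs somewhere to write its output.
    var stdout bytes.Buffer

    go getInput(&stdin, &stdout, doneChan)
    <-doneChan
    close(doneChan)
}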

With that, your test should pass. And you will find yourself sitting at 97.0% coverage. Ironically, the only thing not covered at this point is still the main function! Yet notice that we’ve reduced main to a single line, which means it’s not worth spending time automating it further.

Yet, notice what else that means: even with all these tests we have not obviated the need for a human being to go in and actually try the program!

Doing Too Much?

Now, let’s ask this: was this last bit of change worth it? I would say, yes. The test-driven changes we made actually improved the code in meaningful ways. By introducing io.Reader and io.Writer as parameters in the functions, our code became more modular and flexible. It’s no longer tied to specific input (os.Stdin) or output (os.Stdout), making it easier to reuse in different contexts, which might be CLI tools, GUIs, or automated scripts.

Our logic is no longer hardwired to the console. If the developers ever wanted to redirect input from a file or pipe output elsewhere, such as to a log file, the code is ready without any further refactor.

Our combined refactors throughout these posts have allowed us to fully isolate and test each component. Initially, the functions were tied to side effects like printing to the console. Now, each function focuses on its core responsibility:

  • prompt writes a prompt.
  • getInput handles input and delegates tasks.
  • startup initializes and prints startup messages.
  • checkPrime focuses solely on the prime-checking logic.

Further, the abstraction of input/output — via io.Reader and io.Writer — is more aligned with Go idioms. Our functions now operate on streams, making the program composable and adaptable without changing the core logic.

This last point is important. In the previous post, I asked if this user input logic was too contrived. What I could have done is started with much simpler logic, like this:
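
Something along these lines, assuming checkPrime returns a bool and a message:

func main() {
    fmt.Println("Is number prime?")
    fmt.Println("Enter a whole number; q to quit.")

    var input string

    for {
        fmt.Print("> ")
        fmt.Scan(&input)

        // Allow "q" or "Q" to quit.
        if strings.EqualFold(input, "q") {
            break
        }

        number, err := strconv.Atoi(input)
        if err != nil {
            fmt.Println("Enter a whole number")
            continue
        }

        _, msg := checkPrime(number)
        fmt.Println(msg)
    }

    fmt.Println("Exiting")
}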

I then could have taken you through how that evolves into what we ended up with. A challenge is that this would have required many more posts, likely straining the readers’ patience.

Putting Pressure on Design

Here’s what I think is a really good takeaway from all this. Good tests often put pressure on design. They do this across a few different areas.

  • Revealing Coupling Issues: The desire to test main exposed that our functions were too dependent on os.Stdin and os.Stdout.
  • Driving Modularity: To make testing easier, we naturally split responsibilities and reduced dependencies.
  • Encouraging Extensibility: Our refactors support future changes, like adapting the app for new environments or use cases.

This pressure from testing led to a better, more maintainable codebase, not just for testing, but also for real-world usage. And this is the case even for an incredibly simple application like this! Imagine how much more impactful this can be on an actual, useful codebase.

The final state of all this work is tagged in the repo:

git checkout tags/thirdpost

Making changes for the sake of a test might seem like extra work, but in this case, the tests always highlighted areas where our code could be improved. These changes made our code:

  • Easier to test.
  • More reusable.
  • More aligned with solid design principles.

So, while the tests often prompted the changes, the result is code that’s more robust, flexible, and maintainable. That’s a win!

With that, I’ll consider this series of posts a “win” if it convinces the people I most need to convince: some very vocal testers in our industry who seem unwilling to engage at this level, particularly with people coming into the industry.

