In this new series of articles, focused more on the technical side of the development process, François shares his discovery of testing and his thoughts on it.
This reflection is divided into two parts: the discovery of the issues around testing, and the methodical approach to putting it into practice, which is the subject of this article.
A fresh start for testing
As I said in my previous article, I had to find a solution adapted to the testing of my business layer. I had clearly made a mistake in choosing functional testing.
To begin this initiation journey, I had to go back to basics.
The legendary unit tests!
Everyone talks about them, but what are they really?
They were popularized by Kent Beck and Erich Gamma with the creation of the JUnit framework in 1997. They have since been revived by eXtreme Programming and Test Driven Development.
Despite its apparent simplicity, unit testing is not easy to define, and many developers have their own idea of what it is.
However, everyone agrees that a test is a procedure that ensures a piece of processing works correctly.
Now, here’s the Larousse definition of a unit:
“Character of what is considered as forming a whole whose various parts contribute to constitute an indivisible whole.”
We can deduce from this that a unit test must ensure the correct operation of one and only one element.
This idea is directly opposed to functional testing, which tests the application in its entirety.
The hardest part is yet to come: we still have to identify this unit.
Decomposing an application into units
In object-oriented programming, what does an application consist of? It is made up of:
- Packages,
- Classes,
- Methods,
- Variables.
Obviously, we’re not going to test a variable, as it doesn’t embed any logic or processing. As for the package, it seems too vast. Its very existence is linked to the notion of grouping.
A unit can therefore be either a class or a method – it’s all a question of point of view.
And this debate is raging among developers.
According to Martin Fowler
“(…) Despite the variations, there are some common elements. Firstly there is a notion that unit tests are low-level, focusing on a small part of the software system. Secondly unit tests are usually written these days by the programmers themselves using their regular tools. Thirdly unit tests are expected to be significantly faster than other kinds of tests.
So there’s some common elements, but there are also differences.
One difference is what people consider to be a unit. Object-oriented design tends to treat a class as the unit, procedural or functional approaches might consider a single function as a unit. But really it’s a situational thing – the team decides what makes sense to be a unit for the purposes of their understanding of the system and its testing. However you define it doesn’t really matter.(…)” ref
Here are some of the highlights:
A unit test is a low-level procedure written by developers, running quickly and ensuring the correct operation of a small part of a system.
In the context I’m interested in, I’ve decided that a test class is associated with the testing of a business class (entity or service), and that a test checks the correct operation of only one of its methods.
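As an illustration of this convention, here is a minimal sketch, assuming a hypothetical Account business class with two methods: one test class for the business class, and one test for each of its methods (JUnit 5 is assumed here).

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Hypothetical business class, defined inline only to keep the sketch self-contained
class Account {
    private int balance;
    Account(int initialBalance) { this.balance = initialBalance; }
    void deposit(int amount) { balance += amount; }
    void withdraw(int amount) { balance -= amount; }
    int getBalance() { return balance; }
}

// One test class per business class, one test per business method
class AccountTest {

    @Test
    void testDeposit() {
        Account account = new Account(100);
        account.deposit(50);
        assertEquals(150, account.getBalance());
    }

    @Test
    void testWithdraw() {
        Account account = new Account(100);
        account.withdraw(30);
        assertEquals(70, account.getBalance());
    }
}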
The AAA model
Now that we’ve got a pretty good idea of what a unit test is, how do we write it properly?
Arrange/Act/Assert (AAA) is a model for organizing your code in a unit test method.
The idea is to develop a unit test following these 3 simple steps:
- Arrange: set up the prerequisites for executing the code under test, namely its input data and dependencies.
- Act: run the code under test.
- Assert: verify the test’s success criteria.
Here’s a deliberately simplistic example (shown here with JUnit 5), testing the pow (power) method of Java’s Math class:
// JUnit 5 (Jupiter) imports
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class MathTest {

    @Test
    public void testPow() {
        // Arrange
        double base = 2d;
        double exponent = 3d;
        // Act
        double result = Math.pow(base, exponent);
        // Assert
        assertEquals(8d, result);
    }
}
There is also the Four Phase Test model:
- Setup
- Exercise
- Verify
- Teardown
The first 3 phases repeat exactly the same ideas as the AAA model. The last one, “Teardown”, is a cleanup step that returns the system to the state it was in before the test was run.
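To make the four phases concrete, here is a minimal JUnit 5 sketch; the TemporaryStore class is hypothetical and defined inline only to keep the example self-contained.

import static org.junit.jupiter.api.Assertions.assertEquals;
import java.util.HashMap;
import java.util.Map;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

// Hypothetical class under test, kept trivial on purpose
class TemporaryStore {
    private final Map<String, String> data = new HashMap<>();
    void put(String key, String value) { data.put(key, value); }
    String get(String key) { return data.get(key); }
    void clear() { data.clear(); }
}

class TemporaryStoreTest {

    private TemporaryStore store;

    @BeforeEach
    void setup() {
        // Setup: bring the system into the state required by the test
        store = new TemporaryStore();
    }

    @Test
    void testPut() {
        // Exercise: run the code under test
        store.put("key", "value");
        // Verify: check the success criteria
        assertEquals("value", store.get("key"));
    }

    @AfterEach
    void teardown() {
        // Teardown: return the system to its initial state
        store.clear();
    }
}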
This last phase bothers me a lot. I find it dangerous and complicated for a simple unit test to affect the overall state of my system. And if my test fails or crashes, the teardown step may not be executed, leaving my system in an uncertain state.
If you are forced to reset the state of your system after executing a unit test, question your test, your input data and your code.
- Do I really have to do this?
- Am I not writing an integration test?
- Don’t I have an awkward dependency?
FIRST principles
Here are a few best practices to follow.
These 5 principles can form the basis of a checklist for writing a good unit test.
1- Fast
A unit test should be fast (a few milliseconds), so that a developer never hesitates to run the tests because they take too long. The test suite must be runnable at every integration commit, or even on every save during development.
2- Isolated
Tests must be able to run in any order, and each test must be able to run on its own. Avoid teardown steps, which can mask a centralized dependency problem (such as a database).
3- Repeatable
A test must be deterministic: the result must always be the same, whatever the environment and the moment. There should be no need for a specific test environment. Each test should prepare its own input data; create a utility method if you need to share their creation logic, but don’t centralize the data themselves.
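As a sketch of this idea (Customer and CustomerService are hypothetical), each test builds its own data through a small utility method instead of relying on a shared fixture:

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

// Hypothetical business classes, defined inline for a self-contained sketch
class Customer {
    String name;
    int age;
}

class CustomerService {
    boolean isAdult(Customer customer) { return customer.age >= 18; }
}

class CustomerServiceTest {

    // Utility method: shares the creation logic without centralizing the data
    private Customer newCustomer(String name, int age) {
        Customer customer = new Customer();
        customer.name = name;
        customer.age = age;
        return customer;
    }

    @Test
    void testIsAdult() {
        assertTrue(new CustomerService().isAdult(newCustomer("Alice", 30)));
    }

    @Test
    void testIsNotAdult() {
        assertFalse(new CustomerService().isAdult(newCustomer("Bob", 12)));
    }
}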
4- Self-verifying
No manual steps are required to determine whether the test has passed or failed.
5- Timely
In practice, you can write tests at any time, but the sooner the better.
What about my dependencies?
As I’ve already explained, a good test should have minimal dependencies. If a test has a lot of them, ask yourself whether they are legitimate and question your code.
When these dependencies are justified, abstract them away as much as possible. A test should verify the correct operation of an object, not of its dependencies. Keep dependencies under control by simulating them: either through instances you create yourself in the test, or through mocks when that is not possible.
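Here is a sketch using Mockito (InvoiceService and TaxProvider are hypothetical); the mock keeps the test focused on InvoiceService alone, without pulling in a real tax implementation.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import org.junit.jupiter.api.Test;

// Hypothetical dependency and class under test
interface TaxProvider {
    double rateFor(String countryCode);
}

class InvoiceService {
    private final TaxProvider taxProvider;
    InvoiceService(TaxProvider taxProvider) { this.taxProvider = taxProvider; }
    double totalWithTax(double amount, String countryCode) {
        return amount * (1 + taxProvider.rateFor(countryCode));
    }
}

class InvoiceServiceTest {

    @Test
    void testTotalWithTax() {
        // Arrange: simulate the dependency instead of using a real implementation
        TaxProvider taxProvider = mock(TaxProvider.class);
        when(taxProvider.rateFor("FR")).thenReturn(0.20d);
        InvoiceService service = new InvoiceService(taxProvider);

        // Act
        double total = service.totalWithTax(100d, "FR");

        // Assert (a delta is used because of floating-point arithmetic)
        assertEquals(120d, total, 1e-9);
    }
}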
Coverage
Test coverage is the percentage of code covered by tests. The higher the rate, the better. Of course, you’d like to chase 100%. But is it really necessary? And even at 100%, have you really tested your code in every possible case? Of course not.
So what’s the right rate? Personally, I think it should simply increase over time. Don’t try to cover your entire application from the outset, do it in small, successive increments.
Initially, you can focus on the parts of your application that you consider to be the most critical or to have the greatest added value. When a bug comes up in your tracker, you can cover it with an additional test.
And if you reach 75%, you can already be proud of yourself!
When you write a test, you can use arbitrary input data, but don’t forget remarkable values (division by 0 comes to mind) and the bounds of your system.
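As a sketch (the Calculator class is hypothetical), here is what testing a remarkable value and a bound can look like:

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;
import org.junit.jupiter.api.Test;

// Hypothetical class under test
class Calculator {
    int divide(int dividend, int divisor) { return dividend / divisor; }
}

class CalculatorTest {

    @Test
    void testDivideByZero() {
        // Remarkable value: integer division by 0 throws in Java
        assertThrows(ArithmeticException.class, () -> new Calculator().divide(1, 0));
    }

    @Test
    void testDivideAtUpperBound() {
        // Bound of the system: the largest int should still be handled correctly
        assertEquals(Integer.MAX_VALUE, new Calculator().divide(Integer.MAX_VALUE, 1));
    }
}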
Here’s what Kent Beck has to say on Stack Overflow:
I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence (…) ref
A final word
Don’t underestimate your tests. They deserve the same attention as your code and should be expressive. If adding a test is not a trivial operation, don’t hesitate to question your code.
Your code must be easy to test: think “Design for testability”.