A little while ago, “Uncle” Bob Martin rekindled the debate about code coverage (the measurement of how much code is executed at least once during testing).
According to Uncle Bob, 100% test coverage is a minimum requirement. In our industry, that is a pretty bold claim, as 80% coverage is usually seen as a practical goal.
His arguments, in short: code that is not worth testing is probably not worth writing; 100% coverage should take only about 25% more work than 80% coverage (and not exponentially more); and without this goal the metric is useless, because it does not tell you which 20% not to test.
I think Uncle Bob has a point. Code coverage does not tell you much when it says “80% of the class is covered”. That being said, total coverage can be a difficult thing to achieve. Some parts of the system may seem too trivial to test, others may require a complicated environment setup to trigger edge cases like timeouts or I/O problems.
Also, code coverage by itself is a limited test quality metric (hence it is just a “minimum requirement”). It only tells you which parts of the system are not tested at all. Code that is covered can still be poorly tested, because of missing test cases or missing assertions. Other ways of measuring test quality are necessary as well, such as code/test reviews, or tracking how many problems are found once the product is shipped to test/production.
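To illustrate that point, here is a small invented example (the `DiscountCalculator` and its bug are hypothetical, made up for this sketch). A single call without any assertion marks every line of the method as covered, so a coverage tool happily reports 100%, yet the bug goes completely unnoticed:

```java
public class CoverageIsNotQuality {

    // Hypothetical example: intended to give a 10% discount for
    // quantities of 10 or more, but it contains an off-by-one bug
    // (> instead of >=).
    static int discount(int quantity) {
        return quantity > 10 ? 10 : 0;
    }

    public static void main(String[] args) {
        // This call alone marks the method's lines as "covered"...
        discount(10);
        // ...but only an assertion on the result would reveal the bug:
        System.out.println(discount(10)); // prints 0, although 10 was intended
    }
}
```

Branch coverage would flag the untested `>` condition here, which is exactly why coverage numbers need interpretation, not blind trust.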
In this blog post, I want to show you a little exercise I did to try to improve code coverage. First, let me explain my goals:
1. I don’t want to explicitly test trivial code (getters/setters, simple constructors, simple checks for null).
2. I don’t want to test certain code that I know would be extremely expensive to test from Java code, because of environment setup or input/output validation. For instance: file-system access, UI (Swing, SWT, HTML rendering) and SOAP endpoints.
3. I want to make sure I test everything else!
Consider the following example:
class TechnicalProblemException extends RuntimeException {

    private static final long serialVersionUID = 5882551335843321088L;

    public TechnicalProblemException(String message, Throwable cause) {
        super(message, cause);
    }

    public TechnicalProblemException(String message) {
        super(message);
    }
}
class Person {

    private String name;

    public void setName(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }
}
class HelloMessageHandler {

    private static final Logger logger = LoggerFactory
            .getLogger(HelloMessageHandler.class);

    void writeHello(Person person) {
        if (person == null) {
            throw new NullPointerException("person may not be null");
        }
        String name = person.getName();
        if (name == null) {
            throw new NullPointerException("name may not be null");
        }
        name = name.trim();
        if (name.length() > 20) {
            name = name.substring(0, 17) + "...";
        }
        try {
            Writer writer = new OutputStreamWriter(new BufferedOutputStream(
                    new FileOutputStream("message.txt")), "UTF-8");
            try {
                writer.write("Hello " + name);
            } finally {
                try {
                    writer.close();
                } catch (IOException e) {
                    logger.warn("An error occurred while closing: "
                            + e.getMessage());
                }
            }
        } catch (IOException e) {
            throw new TechnicalProblemException("Failed to write to file.", e);
        }
    }
}
It would be hard to test this 100%. I would need to do all kinds of I/O tricks and useless null input tests. However, not hitting 100% on this class means I need manual inspection to make sure I don’t miss anything critical.
First, let’s look at the input validation. There are great helper libraries out there (like Google Guava’s Preconditions or the new Java 7 java.util.Objects) that can help you with this, or you can write your own. It could look like this:
public final class Preconditions {

    private Preconditions() {
        // No need to create this class
    }

    public static <T> T checkNotNull(T reference) {
        if (reference == null) {
            throw new NullPointerException();
        }
        return reference;
    }
}
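Note that checkNotNull returns its argument, so the validation can be inlined into assignments and constructor calls. A quick sketch of how it behaves (the helper is repeated as a nested class here so the snippet compiles on its own):

```java
public class PreconditionsDemo {

    // Same helper as above, repeated so this sketch is self-contained.
    static final class Preconditions {
        private Preconditions() {
            // No need to create this class
        }
        static <T> T checkNotNull(T reference) {
            if (reference == null) {
                throw new NullPointerException();
            }
            return reference;
        }
    }

    public static void main(String[] args) {
        // The reference is passed through, so it can be used inline:
        String name = Preconditions.checkNotNull("Alice");
        System.out.println("got: " + name);

        // A null reference fails fast:
        try {
            Preconditions.checkNotNull(null);
        } catch (NullPointerException e) {
            System.out.println("null rejected");
        }
    }
}
```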
Next, let’s put the nasty I/O stuff in a separate class:
class TextToFileWriter {

    private static final Logger logger = LoggerFactory
            .getLogger(TextToFileWriter.class);

    void writeMessage(String filename, String text) {
        checkNotNull(filename);
        checkNotNull(text);
        try {
            Writer writer = new OutputStreamWriter(new BufferedOutputStream(
                    new FileOutputStream(filename)), "UTF-8");
            try {
                writer.write(text);
            } finally {
                try {
                    writer.close();
                } catch (IOException e) {
                    logger.warn("An error occurred while closing: "
                            + e.getMessage());
                }
            }
        } catch (IOException e) {
            throw new TechnicalProblemException("Failed to write to file.", e);
        }
    }
}
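As an aside: on Java 7, the same writer can be expressed with try-with-resources, which removes the hand-written close branches from the coverage report entirely. A sketch (to keep it self-contained it uses RuntimeException instead of the custom TechnicalProblemException and drops the close-time warning log):

```java
import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.file.Files;
import java.nio.file.Paths;

public class TextToFileWriterSketch {

    // Same behavior as writeMessage above, but try-with-resources
    // closes the writer automatically, so the nested finally/close
    // blocks are no longer needed.
    static void writeMessage(String filename, String text) {
        try (Writer writer = new OutputStreamWriter(
                new BufferedOutputStream(new FileOutputStream(filename)), "UTF-8")) {
            writer.write(text);
        } catch (IOException e) {
            throw new RuntimeException("Failed to write to file.", e);
        }
    }

    public static void main(String[] args) throws IOException {
        writeMessage("message.txt", "Hello test");
        System.out.println(new String(
                Files.readAllBytes(Paths.get("message.txt")), "UTF-8"));
    }
}
```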
Let’s clean up the code and inject the TextToFileWriter, so that I can mock it during tests:
class HelloMessageHandler {

    private final TextToFileWriter writer;

    @Inject
    public HelloMessageHandler(TextToFileWriter writer) {
        this.writer = checkNotNull(writer);
    }

    void writeHello(Person person) {
        checkNotNull(person);
        String name = checkNotNull(person.getName());
        name = name.trim();
        if (name.length() > 20) {
            name = name.substring(0, 17) + "...";
        }
        writer.writeMessage("message.txt", "Hello " + name);
    }
}
Finally, the test to score 100% on the HelloMessageHandler:
public class HelloMessageHandlerTest {

    private HelloMessageHandler handler;
    private TextToFileWriter writer;

    @Before
    public void setUp() {
        writer = createMock(TextToFileWriter.class);
        handler = new HelloMessageHandler(writer);
    }

    @Test
    public void shortNameGetsWrittenAsFull() {
        Person person = new Person();
        person.setName("test");
        writer.writeMessage("message.txt", "Hello test");
        replay(writer);
        handler.writeHello(person);
        verify(writer);
    }

    @Test
    public void longNameGetsWrittenAsAbbreviated() {
        Person person = new Person();
        person.setName("123456789012345678901234567890");
        writer.writeMessage("message.txt", "Hello 12345678901234567...");
        replay(writer);
        handler.writeHello(person);
        verify(writer);
    }
}
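The null checks are covered just as easily with two more expected-exception tests. Here is a sketch of the idea as a self-contained snippet: it uses minimal stand-ins for Person, TextToFileWriter and HelloMessageHandler, with Java 7’s java.util.Objects.requireNonNull standing in for the custom checkNotNull:

```java
import java.util.Objects;

public class NullInputCheck {

    // Minimal stand-ins so this sketch compiles on its own; in the post
    // these are the real Person, TextToFileWriter and HelloMessageHandler.
    static class Person {
        private String name;
        void setName(String name) { this.name = name; }
        String getName() { return name; }
    }

    static class TextToFileWriter {
        void writeMessage(String filename, String text) { /* no-op stub */ }
    }

    static class HelloMessageHandler {
        private final TextToFileWriter writer;

        HelloMessageHandler(TextToFileWriter writer) {
            this.writer = Objects.requireNonNull(writer);
        }

        void writeHello(Person person) {
            Objects.requireNonNull(person);
            String name = Objects.requireNonNull(person.getName()).trim();
            if (name.length() > 20) {
                name = name.substring(0, 17) + "...";
            }
            writer.writeMessage("message.txt", "Hello " + name);
        }
    }

    public static void main(String[] args) {
        HelloMessageHandler handler = new HelloMessageHandler(new TextToFileWriter());
        try {
            handler.writeHello(null);
        } catch (NullPointerException expected) {
            System.out.println("null person rejected");
        }
        try {
            handler.writeHello(new Person()); // name was never set
        } catch (NullPointerException expected) {
            System.out.println("null name rejected");
        }
    }
}
```

With JUnit 4 the same checks would simply be `@Test(expected = NullPointerException.class)` methods on the test class above.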
I am very happy! Better coverage and more readable code as well.
Scoring 100% on the TextToFileWriter is still really hard. I’d put this in a separate “infrastructure” package. When inspecting code coverage quality, I can then focus on the packages that I know contain my important business logic, while still being able to quickly find the areas where I could improve, should the need arise.
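If your coverage tool supports exclusions, the report itself can reflect this split. For example, with the jacoco-maven-plugin something like the following would keep the infrastructure package out of the numbers (a sketch; the package name is my assumption here, and you should check your plugin version’s documentation for the exact option):

```xml
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <configuration>
    <excludes>
      <!-- infrastructure classes are verified by inspection instead -->
      <exclude>**/infrastructure/**</exclude>
    </excludes>
  </configuration>
</plugin>
```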
This way, 100% coverage should be very achievable, with minimal additional effort and a better design. What do you think?
2 comments
One of the steps toward achieving the 100% coverage rule is to start with a better design. E.g. getters and setters are often considered trivial to test, causing people to skip testing them and thus lowering the coverage. The first step, however, should be to eliminate as many public access paths into the class as possible; many times the (automatically?!) added getters and setters should not even be there.
In that regard I would love a language/compiler that allows me to annotate classes as Data Structures, DTOs or Active Records and would then constrain me from adding too many or the wrong functions. That would (at least that is what I hope for) significantly reduce the effort it takes to achieve 100% coverage, because the classes are better structured, lean and maintainable.
Erno de Weerd
Agreed, the best way to improve coverage is to write less code. For Java, there are frameworks like Lombok or JSR 305 that allow adding certain constraints and automated code generation for DTO, immutable and singleton patterns. I personally love 305 (but the spec is dormant right now) and am a bit wary of Lombok because it requires a bit too much voodoo magic under the hood.
Getters/setters and constructors are usually implicitly covered by the tests. In my example, the Person getter and setter are still covered. Although not explicitly asserted, the tests will still break if they don’t work correctly.
peterhe