Monday, July 20, 2009

org.mockito.exceptions.misusing.UnfinishedStubbingException

Mockito is a great tool for mocking. It also showed me how readable Java code can look if done right (readable for Java code, that is, of course). But every once in a while Mockito produces one of those "very informative" error messages, and I spend some time figuring out how to fix the misbehaving code. (Run-time code generation, ftw.)

Let's have a look at an example. It "works" with Mockito 1.7. We start with the code to be tested. Have a look at the following class definition, with Opera and Hero being dummy classes. (Full source code available below.)

static class CoverCreator {
    public Cover print(Opera opera) {
        // magic
        opera.getHero().prepareToPrint();
        // magic
        return new Cover();
    }
}
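
For a self-contained picture, minimal versions of the dummy classes might look like the following sketch (the actual definitions are in the linked source; note that neither the classes nor their methods may be final, since Mockito cannot stub final methods):

static class Cover {
}

static class Hero {
    public void prepareToPrint() {
        // irrelevant here; the tests will mock this
    }
}

static class Opera {
    public Hero getHero() {
        return new Hero();
    }
}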

Intuitively, CoverCreator produces a cover for a given opera. To be able to do that, it needs, among other things, to call a prepare method that returns void.

To test this behavior we define a helper method that creates a mocked opera and stuffs it with another mock of type Hero. Since it is used by many tests, we don't stub any "real" methods here.

private Opera createMockedOpera() {
    Opera opera = mock(Opera.class);

    Hero hero = mock(Hero.class);
    when(opera.getHero()).thenReturn(hero);

    return opera;
}

The test in question creates such a mocked Opera object and stubs the prepareToPrint() method (to do nothing, in this case).

public void testFails() {
    // setup
    Opera opera = createMockedOpera();
    doNothing().when(opera.getHero()).prepareToPrint(); // <--

    // run
    Cover cover = new CoverCreator().print(opera);

    // assert
    assertNotNull("Correct input must result in a cover object!", cover);
}

When run, this test produces the following UnfinishedStubbingException. The line the exception points to is the one marked with the arrow above.

org.mockito.exceptions.misusing.UnfinishedStubbingException:
Unifinished stubbing detected!
E.g. toReturn() may be missing.
Examples of correct stubbing:
when(mock.isOk()).thenReturn(true);
when(mock.isOk()).thenThrow(exception);
doThrow(exception).when(mock).someVoidMethod();
Also make sure the method is not final - you cannot stub final methods.
at UnfinishedStubbingTest.testFails(UnfinishedStubbingTest.java:47)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at junit.framework.TestCase.runTest(TestCase.java:164)
at junit.framework.TestCase.runBare(TestCase.java:130)
at junit.framework.TestResult$1.protect(TestResult.java:106)
at junit.framework.TestResult.runProtected(TestResult.java:124)
at junit.framework.TestResult.run(TestResult.java:109)
at junit.framework.TestCase.run(TestCase.java:120)
at junit.framework.TestSuite.runTest(TestSuite.java:230)
at junit.framework.TestSuite.run(TestSuite.java:225)
at org.eclipse.jdt.internal.junit.runner.junit3.JUnit3TestReference.run(JUnit3TestReference.java:130)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:460)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:673)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:386)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196)


This is not so great, especially since it is not immediately clear why. The argument provided to when() is definitely a mock, which can be verified in the debugger or by extracting the result of opera.getHero() manually.

The problem is that opera is a mock as well. doNothing() puts Mockito into stubbing mode, and the very next thing it expects is when() being handed a mock directly. Writing opera.getHero() inside when(...) first invokes a method on the mock opera, and Mockito flags this in-between mocked call as an unfinished stubbing. It doesn't help that the expression looks so similar to the usual stubbing pattern when(some_mock.call()).thenReturn(some_return_value).
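
Side by side, the two stubbing styles make the trap visible (a sketch reusing the mocks from above):

// regular stubbing: the mocked call goes inside when(...)
when(opera.getHero()).thenReturn(hero);

// void-method stubbing: when(...) takes the mock itself,
// and the stubbed call follows outside of it
doNothing().when(hero).prepareToPrint();

// broken: opera.getHero() is itself a call on the mock opera,
// which Mockito detects as a second, unfinished stubbing
doNothing().when(opera.getHero()).prepareToPrint();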

So, what needs to be done is to extract the mock into a local variable first. Then everything is fine.

public void testSucceeds() {
    // setup
    Opera opera = createMockedOpera();
    Hero hero = opera.getHero();
    doNothing().when(hero).prepareToPrint();

    // run
    Cover cover = new CoverCreator().print(opera);

    // assert
    assertNotNull("Correct input must result in a cover object!", cover);
}
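
As an aside: void methods on Mockito mocks already do nothing by default, so in this particular test the explicit stubbing could be dropped altogether, along the lines of this sketch:

public void testAlsoSucceeds() {
    // setup: no stubbing needed, the mocked void method
    // already does nothing by default
    Opera opera = createMockedOpera();

    // run
    Cover cover = new CoverCreator().print(opera);

    // assert
    assertNotNull("Correct input must result in a cover object!", cover);
}

The extracted-mock variant remains the one to remember, though, for the cases where the void method actually needs non-default behavior, e.g. with doThrow().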

Full source code here: java, pretty printed html

Profiler4j fork

I've begun a fork of Profiler4j on GitHub to add a few exporting and visualization features I need.

Long live open source - as in free - software ...

Documentation Beyond Tests

Recently, I encountered an interesting, if twisted, interpretation of the "Tests as Documentation" principle. Reviewing some more-complicated-than-usual code, I asked the responsible colleague if there was any documentation and suggested that, if not, it would be a good idea to add some. He quickly responded that, of course, he'd provided good test coverage for the code at hand, and since we worked under the "tests as documentation" principle, it was all there. Indeed, he did write neat, compact tests that will do their job. Still, in my opinion, he missed the point.

Coming from agile development methodologies, the idea behind this phrase is to minimize unnecessary and inflexible documentation in favor of tests. The latter are maintained anyway and ideally describe the code's behavior precisely. From my point of view, the more specific meaning depends on the kind of test we are talking about and on what is documented by it.

Acceptance / customer tests belong to a class of checks that verify, on an abstract level, that the system meets its business requirements. Here, the detailed implementation is irrelevant: the question is /if/ and not /how/ the business value is achieved. At this level, the documentation is basically a list of features that are hopefully of importance to someone. In the case of larger systems, where there are tons of otherwise forgotten features, this is valuable knowledge.
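
To stay with the example from the first post, such a feature-level test might read like this (a hypothetical sketch; 'catalog' is an invented fixture holding the operas under test):

public void testEveryOperaInTheCatalogGetsAPrintableCover() {
    // reads as a feature statement: /if/ a cover is produced,
    // not /how/ CoverCreator produces it
    for (Opera opera : catalog.allOperas()) {
        assertNotNull(new CoverCreator().print(opera));
    }
}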

In the area of unit tests, the focus lies on documenting (and fixing) the behavior of a piece of code. By just looking at the tests, it should be immediately apparent what the code requires as input and what it will produce as output. This is even more so in the case of white-box tests that make the interface of the component precise. As a consequence, the test should be /readable/, in the sense that it ought to come as close to natural language as is feasible. This kind of low-level description is what my colleague meant, and he was right in the sense that his tests did deliver it.
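
A readable test in this sense names its intent and spells out input and expected output, roughly like this sketch (names invented for illustration; the behavior matches the CoverCreator shown above):

public void testPrintRejectsAnOperaWithoutAHero() {
    // input: an opera whose getHero() yields nothing
    Opera operaWithoutHero = mock(Opera.class);
    when(operaWithoutHero.getHero()).thenReturn(null);

    // expected output: no cover, but a complaint
    try {
        new CoverCreator().print(operaWithoutHero);
        fail("An opera without a hero must not get a cover!");
    } catch (NullPointerException expected) {
        // documented behavior: print() currently dereferences the hero
    }
}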

So, these two kinds of tests fix, define, and therefore document the behavior (and structures) of a system. Unfortunately, neither of them considers /why/ something has been done in a particular way (or at all). Of course, this is by design - the tests do their jobs nicely. My point is that there is knowledge that, while absolutely essential, is not and cannot be provided by these tests alone.

Interestingly, on the business side this idea is well understood. If a company does something, there needs to be a good reason to do it, and to do it this way. Each of the features that the acceptance tests verify will be part of either a customer contract or a strategic plan from management. Requirements engineering has found nice solutions for this, e.g. user stories. Because, in the end, cost-effectiveness and accountability are of the essence.

At the level of code, this notion has not yet surfaced to the same degree. If I see some complicated piece of code, I don't only want to know what it does and whether it does it. I want to know why we need it, why the developer chose this particular implementation, and why it must be so complicated.
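
Concretely, the kind of documentation I'm after carries the rationale that no test can - for instance (a hypothetical example; the stated reasons are invented for illustration):

/**
 * Prints a cover for the given opera.
 *
 * Why it exists: covers are required for the printed season catalog
 * (hypothetical rationale, for illustration only).
 *
 * Why this way: the hero must be prepared before rendering, because
 * the print pipeline reads its state up front - this ordering is the
 * "magic" a test can verify but never explain.
 */
public Cover print(Opera opera) {
    opera.getHero().prepareToPrint();
    return new Cover();
}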

I believe that the tests above are neither capable of nor intended to answer these important questions, and I plead in favor of good ol' classical documentation in natural language.