A recent podcast with Joel "On Software" Spolsky and Jeff "Stackoverflow" Atwood stirred up a bit of a debate. Joel goes into a big rant against unit tests and the SOLID design principles. Robert "Uncle Bob" C. Martin, author of the SOLID principles and professional TDD zealot, has already responded, attacking the points Joel made.
First of all, I do understand the point Joel and Jeff are trying to make. Both unit testing and architecture can be over-emphasized, and that will do more harm than good. Like everything else, solutions sold in black and white are usually some shade of gray in the real world. However, to get a point across, you sometimes state things a bit more simply and extremely than they really are.
Joel should be sensitive to this, as most of his articles, including the Joel Test, are structured in exactly the same way. Unfortunately, Joel seemed to lose patience when hearing about things like "test everything" and "decouple everything". In the talk they use weird examples that have very little to do with the original ideas behind the concepts. I hear these so-called straw man arguments (refuting a point by attacking a wildly exaggerated version of it) quite often regarding testing and design, but I was just as shocked as Uncle Bob to hear them from someone like Joel Spolsky, who can be considered a very public software development figure.
Basically, the argument made against unit testing and SOLID is: "it's too much extra work". Joel goes on and on about how many millions of interfaces he would need and how all of his thousands of tests break when he makes a single change. Of course these are wild examples that have very little to do with the real world, but what I am really missing is: what do you have to do instead, when you are not writing tests and abstractions? Joel only evaluates the costs of unit testing and proper architecture, not the benefits.
Not doing unit tests will save you the time of writing and maintaining them. However, unit tests serve a specific goal: to validate that a single piece of code works. If you no longer write unit tests, how will you achieve that goal? A manual test will likely take at least 10 minutes (fire up the modified system, navigate to the proper user interface, enter some data, evaluate the response). Writing a single unit test takes, on average, about the same amount of time. In the debug/retry cycle, however, unit tests really start winning out. Afterward, unit tests can be re-run over and over, immediately detecting when a code change causes something to break.
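To make that comparison concrete, here is a minimal sketch of what such a test might look like in NUnit (the DiscountCalculator class and its pricing rule are hypothetical, purely for illustration): the test runs in milliseconds and can be re-run after every change to the class.

```csharp
// Hypothetical example of a small, isolated unit and an NUnit test for it.
using NUnit.Framework;

public class DiscountCalculator
{
    // Made-up rule: 10% discount on orders of 100.00 or more.
    public decimal Apply(decimal orderTotal)
    {
        return orderTotal >= 100m ? orderTotal * 0.90m : orderTotal;
    }
}

[TestFixture]
public class DiscountCalculatorTests
{
    [Test]
    public void OrdersOfAtLeastHundredGetTenPercentOff()
    {
        var calculator = new DiscountCalculator();
        Assert.AreEqual(90m, calculator.Apply(100m));
    }

    [Test]
    public void SmallOrdersAreNotDiscounted()
    {
        var calculator = new DiscountCalculator();
        Assert.AreEqual(50m, calculator.Apply(50m));
    }
}
```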
With these productivity increases and this risk mitigation, I can understand why someone would want as much unit testing as possible. Unit tests are great for testing things a developer can reason about and that only involve the isolated unit. Unfortunately, a system contains more classes of problems, such as "is this user interface pretty?" or "if I put X and Y together, will they work together?". Code that deals with these kinds of problems may be impossible to unit test, or doing so would be extremely verbose and non-intuitive. I would agree that it is more efficient to skip unit testing for this kind of code and to test these attributes using other forms of testing instead.
At the end Jeff closes with a Frank Zappa quote, "Nobody gives a crap if we're great musicians.", concluding that users really don't care about code quality. Somehow, this part seems to suggest that if you care about unit testing and proper design, you can't care about anything else. That strikes me as a bit narrow-minded: it's not as if testing and caring about user needs are mutually exclusive. Also, even Frank Zappa probably checked the sound before a performance and looked in the mirror before he went on stage. Because Frank Zappa knew the quality of his mustache matters, and not looking in the mirror is not an option.
10 comments
I don’t think he was arguing about the benefits. I think he was talking about those “zealots” out there that are preaching that TDD is the holy grail.
Let me ask you this: who tests the TEST?
“Too much” is bad, and that goes for software development too, be it in designing architecture or, in this case, TDD.
Chris 2
Well I for one agree with Joel on some grounds.
Too often have I seen some twisted application of unit testing. Ever seen a project where a unit test required a complete database and some other heap of infrastructure to be able to run? That’s NOT UNIT TESTING.
But then again, I dislike SCRUM in practice too, for various reasons. (If someone ever hands me some token to indicate I may talk during a “stand-up”, I’m gonna toss it right into a garbage bin.)
Often the principles behind lots of practices are sound. But once people start implementing stuff on the grounds that it’s what the book tells us to do without considering the benefits and drawbacks for their circumstances… I start to get itchy all over.
Jeroen Leenarts
Hi Chris! You are right, the benefits were not really involved in the arguments on TDD and design. And that’s exactly the problem: I don’t think you can omit the benefits and just focus on the costs. Of course: too much of anything is bad. Too much of Joel Spolsky is bad too. But if you think like that, you can disqualify everything.
As for who’s testing the tests: the real code and the real world do. When a test fails, you will need to investigate where the error is. Especially during initial construction, there is a fair chance that the error is in fact in the test itself. However, those errors are usually different from errors in the production code, so the test will still fail and the mistake will be found. In a sense, by combining tests with production code, you also verify that the test code matches the real code.
Of course, not every error will be caught by tests (just the majority, which are simply stupid bugs). Eventually, those remaining bugs will manifest themselves in the real world and will need to be fixed. This means your code has a bug and, apparently, so do your tests, because they missed it. Learning from real-world bugs and improving the tests accordingly is an important part of the whole process.
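To illustrate that last step, here is a hedged sketch (the Invoice class and the bug are made up): when a bug slips through, you first capture it in a test that reproduces it, watch the test fail, and only then fix the code.

```csharp
// Hypothetical regression test: a bug report says an invoice without any
// lines crashes instead of totalling to zero. The test is written first
// so the bug can never silently come back.
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

public class Invoice
{
    public List<decimal> Lines = new List<decimal>();

    public decimal Total()
    {
        // The fix: Sum() over an empty list returns 0, where the old
        // (made-up) code assumed at least one line and threw.
        return Lines.Sum();
    }
}

[TestFixture]
public class InvoiceRegressionTests
{
    [Test]
    public void EmptyInvoiceTotalsToZero() // added after the real-world bug
    {
        Assert.AreEqual(0m, new Invoice().Total());
    }
}
```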
Mind you: the above is true for any test method, not just unit testing. My question to you is, if you don’t use tests to test the code, how do you validate its quality?
peterhe
Jeroen: I understand your problem with unit tests that have too many dependencies. However, Joel attacks the SOLID principles, which are precisely about removing those dependencies! Just doing what the book tells you is sometimes bad; just doing what one chapter tells you is almost always wrong.
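For what it’s worth, here is a rough sketch of the kind of decoupling I mean (all names are made up): once the database access sits behind a small interface, the unit test can hand the class a trivial fake and no longer needs a real database or any other infrastructure.

```csharp
// Hypothetical sketch of dependency inversion: the report depends on an
// abstraction instead of a concrete database class, so the test can
// substitute an in-memory fake.
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

public interface IOrderRepository
{
    IEnumerable<decimal> GetOrderTotals(int customerId);
}

public class RevenueReport
{
    private readonly IOrderRepository _orders;

    public RevenueReport(IOrderRepository orders) { _orders = orders; }

    public decimal TotalFor(int customerId)
    {
        return _orders.GetOrderTotals(customerId).Sum();
    }
}

// In production this interface would be backed by the real database;
// in the test we return canned data instead.
class FakeOrderRepository : IOrderRepository
{
    public IEnumerable<decimal> GetOrderTotals(int customerId)
    {
        return new[] { 10m, 15m };
    }
}

[TestFixture]
public class RevenueReportTests
{
    [Test]
    public void SumsTheCustomersOrderTotals()
    {
        var report = new RevenueReport(new FakeOrderRepository());
        Assert.AreEqual(25m, report.TotalFor(42));
    }
}
```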
peterhe
I think the SOLID argument from Joel is understandable when you consider points of view. I do make a few assumptions in the next few lines. 😉
Joel is a product guy. His company created only a few server side products which they themselves host (mostly) and maintain with a very limited group of people. Uncle Bob wrote lots and lots of different products which seem to be more “corporate” kind of stuff.
I can understand where Joel is coming from. He needs software that works and can be swiftly adapted to change. The developers doing the work tend to work on the product for significantly longer periods of time and the number of codebases is limited.
Compare that to a typical corporate codebase where some random developer is tasked to fix some bug in some random system. After its initial release, a corporate system tends to be used for extended periods of time, and the developers working on it are very often lots of different people. And when lots of different people are involved who do not fully grasp a codebase, you had better have a whole pile of triggers in place to warn a developer when a change he introduces might cause unwanted side effects. Unit tests are an example of such a trigger mechanism. Keeping a very clean OO design with very clear interfaces and responsibilities is another mechanism that helps in this context.
So I think they are both right in some ways. Yes, the principles in SOLID are sound. I tend to adhere to them as well. But sometimes a well-implemented hack job is good enough too. Do consider that a hack job does not imply that the code is extra buggy or anything. It’s just another approach to implementing some piece of functionality. Other principles besides pure SOLID design still apply (code cleanliness, proper commenting, good testing, etc.).
Jeroen Leenarts
Jeroen: I get your point, but I don’t agree with the rationale behind it. To paraphrase: something like SOLID design and testing means a significantly slower response to change and a significantly lower velocity. I don’t think that’s really the case. A simple 10-minute change typically would not need a (re)design and would simply fit in the existing structure. A big change would really benefit from a proper design and test suite.
A “hack job” might save you the time of properly embedding it in the design, but in the grand scheme of things, that really is not where most of the time is spent. You need to figure out what to build, write the code, test it, ship it, fix bugs in it, document it, etc. Spending an extra hour on the design really is not that big of a deal. If you need more time than that, you probably don’t have enough information to start coding yet anyway.
From a business perspective, Joel should absolutely be interested. He can maintain a higher pace of changes at a higher quality level. If a developer leaves, his or her replacement will be up to speed faster. He has a better opportunity to reuse his investments. This is not about pretty code. This is about real world business benefits.
I understand that with fundamental topics like TDD and SOLID, there is a serious risk of doing them wrong. If you do them wrong, they will bring a lot more harm than good. In my opinion, this is the scenario where you can really lose out. If you start doing this, you need proper coaching, tooling and a healthy dose of common sense to keep it in check. If you go “Uncle Bob speaks, I stop thinking now” and just blindly flail about, sure, you will get in trouble. No one ever said it was that easy.
peterhe
@Chris 2 said:
“Let me ask you this: who tests the TEST?”
The answer is: You do.
If you’ve read up on TDD, you’ll know that just after you’ve written your test, you run it to check that it fails, that is, it fails for the correct reason.
Now you can write code that will make the test pass and run the test to check that it passes this time.
The upshot is that you’ve seen the test both fail and pass for the correct reasons, so you know that an accidental breakage in the future will be caught by this test.
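As a hedged sketch of that cycle (the Slug example is invented): write the test first, run it and see it fail for the right reason, then add just enough code to make it pass and run it again.

```csharp
// Hypothetical red/green example.
// Red: with an empty or stubbed-out Slugify, this test fails.
// Green: the minimal implementation below makes it pass.
using NUnit.Framework;

public static class Slug
{
    public static string Slugify(string title)
    {
        return title.Trim().ToLowerInvariant().Replace(' ', '-');
    }
}

[TestFixture]
public class SlugTests
{
    [Test]
    public void TitleBecomesLowercaseDashSeparatedSlug()
    {
        Assert.AreEqual("hello-world", Slug.Slugify("  Hello World "));
    }
}
```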
Andrew Wall
@Andrew Wall: Yep, I know TDD. I write a test, then run NUnit to check whether it fails or not. Point is, these so-called “tests” are also written in C#/VB code and will, most likely, contain bugs too, the same as the actual code. And if a piece of this “test code” has bugs, does that mean that I also have to debug it?
Chris 2
@peterhe: Before TDD became “commercialized”, the quality of every release of every piece of software was validated by the “QA” people.
Chris 2
@chris 2: I think QA should still do testing. They can do the more difficult testing, like specification-based testing, usability tests, security tests, etc. To me, developer testing is about how you decide when something is good enough to ship to QA. Before TDD became accepted, this was typically done using the “it compiles, ship it” approach or by clicking on a few buttons each time before sending it off to QA. TDD raises the bar here, and I think that’s a good thing, but in my opinion, it does not replace QA for most projects (unlike what some TDD zealots say).
peterhe