10 comments

  1. I don’t think he was arguing about the benefits. I think he was talking about those “zealots” out there that are preaching that TDD is the holy grail.

Let me ask you this: who tests the TEST?

“Too much” is bad, and that goes for software development too, whether it’s architecture design or, in this case, TDD.

    Chris 2

  2. Well I for one agree with Joel on some grounds.

Too often have I seen some twisted application of unit testing. Ever seen a project where a unit test required a complete database and a heap of other infrastructure just to be able to run? That’s NOT UNIT TESTING.

But then again, I dislike SCRUM in practice too, for various reasons. (If someone ever hands me some token to indicate I may talk during a “stand-up”, I’m gonna toss it right into a garbage bin.)

Often the principles behind a lot of these practices are sound. But once people start implementing stuff simply because it’s what the book tells them to do, without considering the benefits and drawbacks for their own circumstances… I start to get itchy all over.

    Jeroen Leenarts
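Jeroen’s complaint above can be made concrete. The thread talks about C#/NUnit, but as a minimal Python sketch (all names here are hypothetical, not from any commenter’s code): a test that stands in a fake for the database is a unit test; one that needs a live database is not.

```python
from unittest.mock import Mock

# Hypothetical production code: it depends on *any* object with a
# get_price method, not on a concrete database connection.
def order_total(price_source, item_ids):
    """Sum the prices of the given items."""
    return sum(price_source.get_price(item_id) for item_id in item_ids)

def test_total_sums_item_prices():
    # A fake stands in for the database, so the test needs no infrastructure.
    prices = Mock()
    prices.get_price.side_effect = lambda item_id: {1: 10, 2: 5}[item_id]
    assert order_total(prices, [1, 2]) == 15

test_total_sums_item_prices()  # runs in milliseconds, no database required
```

If `order_total` instead opened its own connection internally, the test could not avoid the database, which is exactly the “heap of infrastructure” problem described above.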

  3. Hi Chris! You are right, the benefits were not really involved in the arguments on TDD and design. And that’s exactly the problem: I don’t think you can omit the benefits and just focus on the costs. Of course: too much of anything is bad. Too much of Joel Spolsky is bad too. But if you think like that, you can disqualify everything.

    As for who’s testing the tests: the real code and the real world. When a test fails, you will need to investigate where the error is. Especially during initial construction, there is a fair chance that the error is in fact in the test itself. However, those errors are usually different from errors in the code, so the test will still fail. In a sense, by combining tests with production code, you also test that the test code matches with the real code.

Of course, not every error will be caught by tests (just the majority that are simple, stupid bugs). Eventually, those bugs will manifest themselves in the real world and will need to be fixed. This means your code has a bug, and, apparently, so do your tests, because they missed it. Learning from real-world bugs and improving the tests accordingly is an important part of the whole process.

    Mind you: the above is true for any test method, not just unit testing. My question to you is, if you don’t use tests to test the code, how do you validate its quality?

    peterhe

  4. Jeroen: I understand your problem about unit tests that have too many dependencies. However, Joel attacks the SOLID principles, which are precisely about removing those dependencies! Just doing what the book tells you is sometimes bad, just doing what one chapter tells you is almost always wrong.

    peterhe
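peterhe’s point, that SOLID is precisely about removing such dependencies, is the “D” (dependency inversion). A minimal Python sketch of the idea (the names are illustrative, not from the discussion): the high-level code depends on an abstraction, and a test double satisfies that abstraction without any infrastructure.

```python
from abc import ABC, abstractmethod

class UserStore(ABC):
    """Abstraction the report code depends on (dependency inversion)."""
    @abstractmethod
    def active_user_count(self) -> int: ...

class SqlUserStore(UserStore):
    """The real implementation would talk to a database; stubbed here."""
    def active_user_count(self) -> int:
        raise NotImplementedError("would query the database")

class InMemoryUserStore(UserStore):
    """Test double: same interface, no infrastructure required."""
    def __init__(self, count: int):
        self._count = count
    def active_user_count(self) -> int:
        return self._count

def activity_report(store: UserStore) -> str:
    # High-level policy: knows only the abstraction, not the database.
    return f"{store.active_user_count()} active users"

print(activity_report(InMemoryUserStore(3)))
```

Production wires in `SqlUserStore`; the unit test wires in `InMemoryUserStore`, which is exactly how the database dependency from the earlier comment disappears.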

  5. I think the SOLID argument from Joel is understandable when you consider points of view. I do make a few assumptions in the next few lines. 😉

    Joel is a product guy. His company created only a few server side products which they themselves host (mostly) and maintain with a very limited group of people. Uncle Bob wrote lots and lots of different products which seem to be more “corporate” kind of stuff.

I can understand where Joel is coming from. He needs software that works and can be swiftly adapted to change. The developers doing the work tend to stay on the product for significantly longer periods of time, and the number of codebases is limited.

Compare that to a typical corporate codebase, where some random developer is tasked to fix some bug in some random system. After its initial release, a corporate system tends to be used for extended periods of time, and the developers working on it are very often lots of different people. And when lots of different people are involved who do not fully grasp a codebase, you’d better have a whole pile of triggers in place to warn a developer when a change he introduces might cause unwanted side effects. Unit tests are an example of such a trigger mechanism. Keeping a very clean OO design with very clear interfaces and responsibilities is another mechanism that helps in this context.

So I think they are both right in some ways. Yes, the principles in SOLID are sound. I tend to adhere to them as well. But sometimes a well-implemented hack job is good enough too. Do consider that a hack job does not imply that the code is extra buggy or anything; it’s just another approach to implementing some piece of functionality. Other principles besides pure SOLID design still apply (code cleanliness, proper commenting, good testing, etc.).

    Jeroen Leenarts

Jeroen: I get your point, but I don’t agree with the rationale behind it. To paraphrase: something like SOLID design and testing means a significantly slower response to change and a significantly lower velocity. I don’t think that’s really the case. A simple 10-minute change typically would not need a (re)design and would simply fit in the existing structure. A big change would really benefit from a proper design and test suite.

A “hack job” might save you the time of properly embedding it in the design, but in the grand scheme of things, this really is not where most of the time is spent. You need to figure out what to build, what code to write, how to test it, when to ship it, fix bugs in it, document it, etc. Spending an extra hour on the design really is not that big of a deal. If you need more time than that, you probably haven’t got enough information to start coding yet anyway.

    From a business perspective, Joel should absolutely be interested. He can maintain a higher pace of changes at a higher quality level. If a developer leaves, his or her replacement will be up to speed faster. He has a better opportunity to reuse his investments. This is not about pretty code. This is about real world business benefits.

I understand that with fundamental topics like TDD and SOLID, there is a serious risk of doing it wrong. If you do it wrong, it will bring a lot more harm than good. In my opinion, this is the scenario where you can really lose out. If you start down this road, you need proper coaching, tooling and a healthy dose of common sense to keep things in check. If you go “Uncle Bob speaks, I stop thinking now” and just blindly flail about, sure, you will get in trouble. No one ever said it was easy.

    peterhe

  7. @Chris 2 said:
“Let me ask you this: who tests the TEST?”

    The answer is: You do.

If you’ve read up on TDD, you’ll know that just after you’ve written your test, you run it to check that it fails, that is, that it fails for the correct reason.
    Now you can write the code that will make the test pass, and run the test to check that it passes this time.

The upshot is that you’ve seen the test both pass and fail for the correct reasons, so you know that an accidental breakage in the future will be caught by this test.

    Andrew Wall
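Andrew’s red-then-green sequence can be sketched in a few lines. The thread’s tools are C#/NUnit; this is only an illustrative Python sketch with plain asserts, and `make_slug` is a hypothetical example function.

```python
# Step 1 (red): write the test first. With no implementation yet, running
# it fails for the correct reason: the function does not exist.
def test_slug():
    assert make_slug("Hello World") == "hello-world"

try:
    test_slug()
except NameError:
    pass  # red: make_slug is not written yet, as expected

# Step 2 (green): write just enough code to make the test pass.
def make_slug(title):
    return title.strip().lower().replace(" ", "-")

test_slug()  # green: the same test now passes
```

Having watched the test fail and then pass, you have some evidence it actually exercises the code: if someone later removes the `.lower()` call, `test_slug()` fails again.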

@Andrew Wall: Yep, I know TDD. I write a test, then run NUnit to check whether it fails. The point is, these so-called “tests” are also written in C#/VB code and will, most likely, contain bugs too, just like the actual code. And if a piece of this “test code” has bugs, does that mean I also have to debug it?

    Chris 2

@peterhe: Before TDD became “commercialized”, the quality of every release of every piece of software was validated by the “QA” people.

    Chris 2

@chris 2: I think QA still should do testing. They can do the more difficult testing, like specification-based testing, usability tests, security tests, etc. To me, developer testing is about how you decide when something is good enough to ship to QA. Before TDD became accepted, this typically was done using the “it compiles, ship it” approach, or by clicking a few buttons before sending it off to QA. TDD raises the bar here, and I think that’s a good thing, but in my opinion, it does not replace QA for most projects (unlike what some TDD zealots say).

    peterhe

Comments are closed.