Investing in Automated Tests

I remember reading a while back about system administration professionals being fired because, with nothing ever going wrong, it looked like they weren’t doing anything. It was a bit tongue in cheek, but the “no news is good news” philosophy applies to System Administration (and network security).
After watching and reading about the behaviour of non-test-infected developers regarding testing, I’ve come to a similar conclusion. They view regression testing, automated testing and unit testing as “a waste of my time”. They don’t remember all the times it has saved their asses, or the risk mitigation that comes from knowing what is broken and where. All they see is the “constant” investment required to run the tests and keep them up to date. Just as organisations see only the constant investment required to keep the System Administrator sitting on their asses, drinking coffee and playing World of Warcraft.
The worst thing is the negative feedback cycle that occurs. When non-test-infected developers change code, they don’t care about the tests, so the tests break, and then it’s a hassle for somebody else to clean them up. These developers see only the effort of keeping the tests up to date, which reinforces their view of how much time the tests cost. Of course, if the tests were kept synchronised with the code, this would all be a trivial exercise.
I’ve worked on projects with and without good test suites. I’ve walked into projects that are too scared to change any code because they don’t know if things are going to break. I’ve seen corporations put newly compiled, untested code into production because “this change couldn’t affect anything else”, only to have it fail, corrupt production data and do other things best left unmentioned.
At the end of the day, I like to work with good test suites. I get to go home early, and not work weekends. That’s good enough reason for me.


5 thoughts on “Investing in Automated Tests”

  1. One thing – among many – I like about TDD is that you can capture fine-grained requirements just by implementing a test case. This test case is in most cases easier to implement than the logic required to make the test (and other tests in the same group) pass. That means you can work from top to bottom without hassling around with semi-finished or stubbish code, or even notes taken on paper (I cannot read my own damn handwriting, and where the heck is that damn paper I put on my table yesterday, on the big stack?).
    Write a test that currently fails. Go home. Come back the next day. Run your tests and know *exactly* where you stopped the day before.

  2. *cough* Straw Man *uncough*
    Ahem. Sorry about that, musta had something caught in my craw.
    On the one hand, the TDD crowd or ‘test infected’ can be very very myopic. It is a very strange little religion, tends to go hand in hand with XP, and every single person I’ve run into that goes on about TDD and JUnit writes bad code.
    On the other hand, if you try to point out their shortcomings then you get labelled as ‘anti-testing’, which in IT is like being labelled as ‘baby-eating’. I’m actually 110% pro testing, but much like the question ‘have you stopped beating your wife’, there is no right way to argue for the deficiencies of (specifically) JUnit tests.
    Now I’ll be the first to admit, it’s a fairly small sample of TDD advocates I’m drawing on. Most recently though I’ve worked with some guys from *Thoughtworks*, so I’d have to assume that however small the sample is, it is at least somewhat representative.
    I love unit testing. However, reminiscent of the phrase from the Jim Beam ad, “this (JUnit) ain’t unit testing”.
    At least, not unit testing like our managers and the ‘old timers’ know it. The kind of unit testing they know takes a scenario (in UML jargon, a ‘use case’) for a particular coarse-grained code module and exercises it in a bunch of different ways, usually in a specific order (ideally reminiscent of an actual business process). From the old-time unit tests, it’s a fairly small jump up to integration tests.
    The new style of unit tests is, on the other hand, extremely fine-grained. They are about as fine-grained as it is possible to get. They do not tend to exercise the module as a whole; rather you tend to end up with one test per method, and in between tests the object is often ‘reset’. Consequently it is a huge leap up to integration testing.
    And then there is a blatant logical fallacy. You often hear (or read) the TDD advocates saying things like: “I used to write crap code, but then I found TDD and got saved and I write really good code now (so long as I do TDD)”.
    The logical fallacy is that these new unit tests are themselves code. If you write crap code (and most TDD fanatics themselves testify that about themselves), then how is it that writing *more* code is going to solve the problem?
    Unfortunately, all the TDD people I’ve met have become so overconfident in their new-found TDD prowess that they have all stopped doing any of the other kinds of testing that you mention. Integration testing? Can I automate it? If not, then forget it. Actually run the application and look at the freaking UI with my honest-to-goodness flesh-and-blood eyeballs? Can’t be automated, not going to do it.
    As good as TDD may be, by creating this enormous blindspot in its adherents it has actually become counterproductive.
    And don’t get me wrong, there’s some good stuff in there:
    + A way to get around ‘coder’s block’
    + In order to write tests you need to understand requirements (so theoretically the developers will not go off half cocked)
    + It helps clarify which parts of the system have vague requirements
    + (In theory) you can get the customer involved in coming up with new tests
    On that last point though, because JUnit tests seem to be gravitationally attracted to becoming as fine-grained as possible, they often seem to end up just being an exact mirror of the code. So unfortunately the customer isn’t going to be able to participate, refactoring becomes a pain in the butt, any change to your test data throws a massive spanner in the works, etc. Now, I don’t know if this is a necessary consequence of following TDD, or if it’s just because the people writing the tests are crap coders (their words šŸ™‚)
    What I do know is that it is becoming apparent from the various articles about testing frameworks that when you wholeheartedly adopt TDD and it becomes the most important thing in your code, it affects the design.
    Let’s assume for a moment that there is an ‘ideal’ design. Maybe it’s a zen thing, maybe it’s unobtainable (like in Plato’s cave, our design can only be a shadow of the ideal design). Let’s assume though that there is a ‘natural’ design. If you can find this groove, get into the zen moment, then everything will flow effortlessly and naturally. There will be a natural number of classes. There will be a particular way for those classes to interact in a natural fashion. Each of those classes will handle those responsibilities (maybe more than one) which come naturally to it. There will be a certain natural number of methods per class, and those methods will have a natural size (which will of course vary from one method to another).
    What it seems that TDD does is ‘encourage’ us to program in a certain way. If, for instance, we are going to do our tests with a lot of mocking, then that will greatly influence the way in which our classes get coded to interact with each other. If we are doing TDD, then because we write very fine-grained tests that very closely mirror each method, we will want our methods to be as small as possible. And consequently we will end up with a lot of methods, or a lot of classes. I’ve heard most of these people saying that each class should only have one responsibility, which will again affect the design (and often causes the mocking (or equivalent) to become quite a large part of the testing task).
    In each case, by applying the blanket rules which TDD leads us towards, we end up with a design that is potentially less than ideal. We start piling up layer upon layer upon layer of indirection, framework upon framework (and of course each framework making the testing that much harder).
    Where does the madness end? What would it take to actually get the TDD people to actually run the application once in a while?
    I find their overwhelming faith that all they need to do is run the JUnit tests… disturbing.
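The fine-grained, one-test-per-method style with a ‘reset’ between tests that the comment above describes can be sketched like this (my own toy `Counter` class, written with Python’s stdlib unittest rather than JUnit, though the shape is identical):

```python
import unittest

class Counter:
    """Toy class under test (illustrative only)."""
    def __init__(self):
        self.value = 0
    def increment(self):
        self.value += 1
    def reset(self):
        self.value = 0

class CounterTest(unittest.TestCase):
    def setUp(self):
        # A fresh object before *every* test method: this is the
        # "object is often 'reset'" behaviour described above.
        self.counter = Counter()

    # Roughly one test per method of the class under test.
    def test_increment(self):
        self.counter.increment()
        self.assertEqual(self.counter.value, 1)

    def test_reset(self):
        self.counter.increment()
        self.counter.reset()
        self.assertEqual(self.counter.value, 0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(CounterTest)
result = unittest.TestResult()
suite.run(result)
print(result.testsRun, result.wasSuccessful())  # two isolated tests
```

Because each test gets a brand-new object, no test exercises the class as a whole or depends on another test’s state, which is exactly the property being debated.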

  3. Hi Rick,
    Thanks for the well thought out response. It appears that most of your topic has been covered by Hani previously, but with more Bile, and less eloquence.
    It’s *not* a straw man. I didn’t raise this just to prove my own point. I observe behaviour and want to try and help people avoid things that I think are bad.
    I was very keen not to enter the TDD debate, because as I’ve banged on about before, TDD isn’t about testing. So, I’ll sidestep your comments, which I tend to agree with, but that’s not the point. I also don’t give a flying duck whether you want to write code using TDD. I find that it’s a great way for me to write well structured OO code, and much better for me than other alternatives. My code is much better for having learnt this technique, and that’s something that I’m grateful for. Anyways, back to the point of my blog entry.
    I mentioned “regression testing, automated testing and unit testing” because I care about what the customer thinks they’re getting.
    In fact, my blog was all about functional testing, and regression suites of functional tests. Maybe I should have been clearer, but what the heck, it opens a dialogue for people to discuss.
    Here’s my take on unit tests and functional tests. Unit tests prove the code works the way the developer intended. Functional tests prove the code works the way the customer intended.
    You regression test both because a change in customer functionality will (or may) require changes to how the code works; in many cases it changes the way the code is assembled. Conversely, changing the code and the unit tests requires ensuring that the customer-focussed functionality isn’t compromised.
    I’ve been delivering code to satisfied customers for years. I’d been doing it before I found TDD, and before I joined ThoughtWorks. The one overriding aspect of my delivery has been a fairly comprehensive set of functional regression tests that approximates my understanding of what the customer has been asking me to build.
    As for your concerns about TDD, and how it has helped me, then send me an email, and I’ll be happy to expand further on your comments.
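The unit-test versus functional-test split described above can be illustrated with a toy example (all names and rules hypothetical, mine rather than the author’s): the unit test pins down the developer’s rule, while the functional test checks the behaviour the customer actually sees.

```python
def discount(total):
    """Developer-level rule: 10% off orders of 100 or more."""
    return total * 0.9 if total >= 100 else total

def checkout(prices):
    """Customer-level behaviour: sum the basket, then apply discounts."""
    return round(discount(sum(prices)), 2)

# Unit tests: prove the code works the way the developer intended.
assert discount(100) == 90.0
assert discount(99) == 99

# Functional tests: prove the system works the way the customer
# intended -- a basket of goods costs what the customer expects.
assert checkout([40, 35, 25]) == 90.0
assert checkout([10, 20]) == 30
```

A change to the discount rule would break the unit test immediately, while the functional test is what tells you whether the customer-visible price is still right: regression-running both catches both kinds of breakage.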

  4. Thanks Jon. Less bile than Hani eh? Well, at least I’ve got something to aim for then. šŸ˜€
    I am aware that I am deeply biased against ‘the whole XP thing’. Partly it’s because of a (non-Thoughtworks) contractor I ran into in England during the dotcom bubble, and partly it’s because practically every experience I’ve had with XP has been negative (again, note that this is a fairly small sample).
    On my current project we’d tried doing the XP/Agile/TDD thing and it hadn’t worked. It had, by all objective measures, failed dismally. (Especially the practice of ‘oral documentation’ – that one really turned around and bit us on the bottom when it came time to deliver).
    So, on the recommendation of an outgoing fan of XP/Agile/TDD we got in some Thoughtworks consultants/contractors. On their first day, I told them that I’d had this bad experience with XP, that it had been very traumatic, and couldn’t we all just get on with the job at hand. Yes, the unit tests were in a shambles; no, no one could give them a coherent picture of the entire requirements, etc. It would be great if they could help us fix up the tests and get them into better shape.
    Unfortunately, that seems to have been either completely ignored, or was actually a red flag to a bull. Because since then just about every interaction I’ve had with them seems to have been viewed as a challenge to ‘win me over’ to ‘unit testing’, or an opportunity for them to snidely put down our practices, e.g.:
    “I found problem X with unit test Y”
    Me: “that’s great, well spotted, how are you going to fix it?”
    “Oh, I’m not going to fix it”
    “of course, if you’d been doing *proper* [unit testing|other XP practice] this never would have happened”.
    I’m thinking – Yes, I know it’s broken, now for goodness sake stop *whining* about it and help us fix it.
    I guess that strategically speaking it’s their ‘job’ to preach ‘the XP thing’ to the poor benighted natives. After all, that’s Thoughtworks’ revenue stream. But it’s so incredibly annoying that I’d rather chew my own arm off than work with anyone from Thoughtworks ever again.

    Now for the relevance of the chap in England. He and I were contractors on a small-to-medium project. It was his first time as a contractor, and he tried to get the client (a large IT services company) to change their methodology (which normally took six months of training just to get up to speed on, so you can imagine how eager they all were to throw it away (not)).
    He ran around saying things like “the project is doomed” and “if we don’t use my methodology I guarantee it will fail”. I tried to give him some coaching, i.e. that I’d worked for dozens of different places, everybody has their own methodology, I’d been successful each time despite the various bizarrenesses of the methodologies in question; just take responsibility for your piece of the puzzle and everything will work out in the end. Eventually he tried taking over the team leader’s role, which of course failed, and he broke his contract and left.
    Now every time someone makes an extravagant claim about how a methodology is ‘the only way’, I flash back to that guy.
    I’ve been on my share of ‘death march’ projects, where everyone else gives up almost before you’re out of the gate. I’ve overcome huge odds, turned them around, and made them work (at the very least the bits within my sphere of influence). My observation is that the methodology is the least important part of that process.

    While any kind of mindless fanaticism gets a negative reaction, I think there are two things in particular that many people react strongly against:
    #1 The frequent suggestion by XPers (or TDDers (or insert fanatic of your choice)) that anyone that hasn’t leapt onto their particular bandwagon must be a ‘below average’ programmer.*
    It’s not hard to see why that sort of suggestion is received as though it were trolling and flamebait.
    #2 The frequent suggestion by XPers (or TDDers, or anyone else for that matter) that it is practically impossible to deliver good code without whatever methodology it is that they’re into this month. The ‘oldsters’ (i.e. those who have been in the industry at least 5 years) amongst us, or at least those not suffering from self-induced amnesia, will immediately take umbrage at any and all such claims, because we all have massive amounts of experience which directly contradicts them.
    I suppose a possible resolution to #2 would be to observe that perhaps the methodology fanatics have just as much anecdotal evidence that supports their claims.
    I.e. that before they ‘found XP’ they consistently failed to deliver good code … in which case I’d have to take a good hard look at #1, because I’d be thinking that perhaps it was the fanatics who were below average.
    *I have strong evidence that I am a ‘super’ programmer, quite a lot more productive than the average programmer. Most recently we measured progress against estimates:
    The first time I’d completed 30 out of 48 estimated days of work, the second time (18 working days later) I’d completed 64 out of 102 estimated days of work.
    So in 18 working days, I’d completed 34 more days of work, whereas the other three full time programmers had added 38 estimated days of work.
    Well, maybe not ‘super’ maybe just ‘great’. šŸ˜€

  5. I’m a ‘super’ programmer and I believe in regression testing.
    When a project becomes large (ideal, right?) you must have a method for testing. The tests should be closely related to the important “critical” spots, or to code which is “changing frequently”.
    Without it, on some projects it’s simply not possible to produce a result about which you can say “Yes, I believe it’s doing the right thing”. Regression testing gives you that ability.

Comments are closed.