Curing fake testing with education

There’s a plague upon the testing craft, and it’s called “fake testing”. You could also call it “ineffective testing”. Here’s James Bach’s guide to faking a test project.

I believe fake testing is generally practised by testers who just don’t know any better. We can’t really blame them. Most testers “fall into” the profession. There’s a widely held belief that testing is an unskilled profession that can be done by anyone. So when it comes to adding testers to a project, often “anyone” is hired.

We could try to educate those “anyone” testers, but there’s not much point. They are the symptom, and not the problem. The problem is a lack of understanding of testing by the people who are hiring these testers in the first place. The problem is that testing is easy to fake, and its effectiveness is difficult to measure.

This is why we end up with testing certification. Hiring managers don’t know enough about testing to recognise good testers, so they’re just turning to some kind of authority to tell them what a good tester looks like.

If we want to attract more skilled professionals to software testing, then we need to work on educating the developers, the project managers, the product managers, the recruiters and the other people involved in hiring testers in the first place. If they recognise good testing, then they’ll know how to recognise good testers. How to test software effectively is something the whole software development team should understand to some extent anyway.

So how do we do it? Well, for starters, maybe we should stop siloing ourselves as a community. We should be networking with the rest of the IT community, giving talks at developer and project management conferences, and submitting articles to a broader audience. I’m sure some of us already are. It would be great to introduce more testing-focused coursework in universities. I know it was an area that was definitely lacking in my computer science coursework.

It’s been so good to see non-testers coming along to the Sydney Tester Meetups. I think we need more online resources, so that it’s easy to direct people to good sources of information about software testing. Something like “good testing starts here”. Dot com.

What do you think? How can we help educate the greater IT community about software testing?

7 thoughts on “Curing fake testing with education”

  1. Ben Kelly

    Hiya Trish.
    I heart this post.
    I consider it a call to arms for all testers of merit.

    You’ve nailed a bunch of points that I think are really important.

    Fake testing is a symptom of a larger problem.
    People making decisions about testing often have no idea about testing beyond the sad, outdated Taylorist practices that we all know and loathe.
    Certification has a vested interest in propagating this self-same bullshit because the net result is a fantastic revenue stream for them.

    To change this, we need people making hiring decisions to have enough nous to pick decent testers from the fakes. We need the people that testers interact with to understand what good testing looks like and demand nothing less.

    I agree with you that we need to look beyond testing and educate the people we work with to make sure the demand for fake testing withers and dies. Therein lies the challenge. Getting people who aren’t testers to give enough of a shit about testing to make a difference. Many seem to want it to be simple; to believe in the fantasy that a couple of people writing test cases for monkeys to churn through is good enough. It’s simpler. It’s more ‘measurable’. It’s a complete and utter lie, but it’s a comforting one.

    I have spoken with project managers, with recruiters, with HR people, with programmers, with tech-savvy C-level execs. Anyone that will listen. Some get it. Some don’t. I don’t know how much difference I’ve made – some I think. I’m just one person though. There are a lot of us out there. Like-minded, intelligent testers who hate fake testing with a fiery passion. Being able to mobilise such a group and direct them at those that most need to hear their message – this makes a great deal of sense to me.

    No one likes bad news, so it’s a matter of being able to get our message across in such a way that it looks like a more attractive alternative to the misapprehension under which they are currently labouring. The trick will be to tailor the message to the audience in such a way that it matters to them. Why does fake testing hurt them and why is what we bring so much better? How do we get them to grok testing? I think the first step will be to help a few passionate souls in these groups to understand good testing. The next will be to get them to tell their peers in their own words why this stuff matters. If you have feet in both camps, then you’re probably well placed to do this.

    It’s a gargantuan task, but a most worthy one.

  2. Vesna

    *Thank You*!!!!! Yes yes yes yes yes and yes! I do not disagree with testing conferences and learning/collaborating with the testing community at large. But I really think we are bandaging the problem by focusing *only* on testers. I have narrowed the scope of who my clients are so that I am working with like-minded business owners and helping them orchestrate the testing that is needed for their specific project, industry, and culture. I work mainly with start-ups and small businesses right now, but you are 100% correct about working with BAs, HR, management conferences, etc.

    This is what I do and what I am passionate about: educating people on how testing will shoot their business forward. In the end, it’s about the business and how the entire team (devs, testers, etc.) pull their stuff together to move the business farther ahead.

    Ves

  3. Simon Morley

    Good post, Trish!

    I have the fortune/misfortune of talking to some different levels of management – where I usually state that testing in general (and the understanding of it as a whole) needs some re-work/education/awareness. It can appear to be a gloomy picture – but the picture I paint is to describe how difficult ‘good testing’ is.

    I tend to contrast ‘good testing’ with something they call ‘testing’. I usually highlight ways that ‘traditional management views of testing’ are problematic – showing examples where test case or fault/bug report counting falls over, how entry/exit criteria get gamed, how explicit requirements are fallible, the hidden implicit requirements that abound, how the framing of questions and decisions by different stakeholders affects the effort in teams, etc.

    Then there’s the problem of talking about quality – that it’s not an absolute. ‘We produce high quality’ is the sort of half-statement that sounds good to certain people but doesn’t say very much, and invariably does more harm than good. This is part Taylorism, part ignorance, part something else (that I haven’t worked out).

    Taylorism is rife – I’ve looked at some parts of this before – but I think the idea of ‘scientific management’ has drifted in and been interpreted as ‘good management’.

    General test ignorance is also widespread. Whenever you meet anyone who thinks “some testing is better than no testing” (where they are thinking of a re-execution of a pre-defined set of tests), there is a problem – if they’re not thinking about what they want to test, and for what purpose, to arrive at some decision about what to test, then they’re not testing. (In this case, they’re ‘fake testing’.) I usually counter these statements by saying: replace “testing” with “random testing” (the implication being that if you haven’t thought about what to test, what not to test, and how, then it’s random ‘testing’, and so you’ll get ‘random’ information out). So, I ask, does “some random testing is better than no random testing” sound okay, or is there a possibility that we’re wasting effort and ultimately not doing ‘good’ testing? Then they start asking what ‘good testing’ is…

    Another thing I talk about is that good communication is key to good testing – on many different levels. Without good reporting and communication, your test design and execution are almost worthless – unless you have a pair/partner who can interpret everything for you (including all the assumptions you’ve made or avoided…). Communication problems can be two-way! Poor communication from stakeholders (or anyone else interacting with testers) can impact the ‘quality’ of the testing (although excellent testers can help with this too – or at least understand the limitations it imposes on the testing of the product).

    So, yes, lots of work to do!

    I asked the question a while back on twitter, “is there a taxonomy of ‘good testing’ patterns?” – not sure it’s needed, but could be useful in educating stakeholders – although we should avoid the certification-type trap! Making it open-source and freely available maybe avoids part of that.

    But, yes, education is needed! The problem is that understanding (or learning) that ‘good testing’ is difficult is itself difficult – it’s a form of cognitive fluency -> you carry on believing what you already believe (about testing) because it’s easier…

    So, it’s a lot of work. But I have stopped looking for ways to make it easier.

  4. Simon Morley

    Ha!

    Just noticed a typo in the last line – it should read:

    But I have not stopped looking for ways to make it easier.

  5. Andrei

    In Sweden, the industry is putting a lot of pressure on academia to introduce advanced software testing courses.
    Unfortunately, everyone thinks they know how to test, because anyone can show that a system works, but few can show that it doesn’t.
    It’s unfortunate that we end up learning testing from blog posts and articles, and not from academia, where a certain standard is kept and the material is less open to doubt.

  6. Jim Holmes

    Great post! This sort of education has been a passion of mine for a long time, although I’ve been mostly on the development side of things.

    I think you’re writing about a topic that’s much broader: the lack of professional skills in many aspects of software development. We hear a lot of noise around *DD, software craftsmanship, blah blah blah, but the harsh fact is that every community, be it .NET, PHP, or Ruby, is showing strains around basics such as good software construction and testing. Folks come out of university ill-prepared for the real world, and the IT industry is horrible about mentoring.

    This situation is exacerbated in the testing domain for a number of reasons, but your “anyone” bit is spot on.

    I love your idea of breaking down the silos. I help run a conference (CodeMash) that does that across development platforms, but we’ve not done a good job of getting testers involved. That’s something I’ve been trying to work on, and I’d love to see more of it at other conferences, too!

  7. Fake Software Tester

    Well, being a fake tester, I did a lot of research on this. What was funny was that, just as every person has their own definition of testing, every individual has their own perception of “fake testing” :).

    And to be frank, I am seeing the number of fake testers increase every day. The numbers are going up, not coming down; that’s when I decided: if you can’t beat them, join them :)!!!
