Checklists > scripts

Earlier this year, I started testing a system that had been developed over about five years with no documented test cases. There was a huge suite of automated tests, but no test cases whatsoever. I didn’t have time to write five years’ worth of test scripts for a system I knew nothing about, so I decided to try something radical – checklists instead of scripts. For an excellent description of the difference between checklists and scripts, see Cem Kaner’s presentation on the value of checklists (an article I only found just now, thanks to Quick Testing Tips).
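
To make the distinction concrete, here’s a rough example of the same test expressed both ways (the steps and data are invented for illustration, not taken from the actual system):

  Checklist item:
  • A registered user can log in and ends up on their dashboard

  Script for the same test:
  1. Open the login page
  2. Enter a valid username and password
  3. Click the Log in button
  4. Verify that the dashboard page is displayed

The checklist says what to verify and leaves the how to the tester; the script pins down every step, so two runs of the script exercise exactly the same path.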

I had a few doubts about the approach, and at first thought it might only be a temporary solution, but it has proven to be quite effective and efficient.

Some benefits:

  • Lots of time saved upfront by not having to write test scripts
  • Quick and easy maintenance – checklists take much less time to update than detailed scripts
  • Checklists force me and other testers to think about how the system works, and find out how to do things if we don’t know how (the users might not know how either – how easy is this information to find?)
  • Checklists allow a different path to be taken each time (i.e. different things are done to achieve the same goal), which results in greater coverage and more bugs found

Some disadvantages:

  • If non-testers have to use the checklists, they may not cover as much ground as experienced testers, and test coverage may decrease (compared to using test scripts)
  • Some information that might have been included in scripts is lost. I supplement this by keeping some reference information about the application in a wiki (Fitnesse actually, so automated tests can one day live on the same page as the relevant information) – a rough sketch of such a page follows this list
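
Since I mentioned Fitnesse, here’s a rough sketch of what one of those wiki pages might look like. The page heading, the notes, the fixture name (eg.DiscountRule) and the numbers are all invented for illustration; the table just follows the usual Fitnesse shape of a fixture row, a header row with a “?” marking the expected output, and one row per case:

  !2 Invoice discounts
  Discounts are applied per order line, not per order.
  Bulk discounts were added in release 3.2 and aren’t documented anywhere else.

  |eg.DiscountRule|
  |order total|customer type|discount?|
  |100|retail|0|
  |1000|wholesale|10|

The notes at the top are the reference information; the table is a test that could later be wired up to a fixture, so the documentation and the automated checks live on the same page.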

If anyone else has had experience using checklists in place of scripts, I’d be very interested in hearing about the results – good or bad.

5 thoughts on “Checklists > scripts”

  1. When I was in the interactive agency world, I used both: checklists for look-and-feel issues and basic forms (data collection), and test cases for more complex logic involving user registration and security.

    Now, as a consultant, I amalgamate the two. My test plans start out as checklists and get fleshed out into test cases as time permits.

  2. The Director, thanks for sharing your experience. I’ve found myself drifting towards that approach as well.

    I have found that checklists are more difficult to hand over to new testers because they may have less knowledge of the system under test. However, because I work in product development with full-time testers only, this may not be a problem in this context: it is in the best interest of new testers to acquire a very thorough knowledge of the system under test, even though that will take some extra time. If checklists force new testers to ask questions about the system, maybe this is a good thing? Something for me to think about.

  3. I’m not sure that’s a valid argument against scripting. For one thing, writing test scripts without any sort of traceability to requirements is horrible. That’s not a tick against scripting; that’s a tick against scripting done incorrectly.

    I don’t see checklists and scripts as being mutually exclusive. Traditionally, you have test cases which serve as a checklist, and the scripts are generated from these. If someone is writing scripts without mapping the test cases out, well, that’s just poor execution on their part.

    I’m unaware of any process that demands scripts be used 100% of the time. You have to consider the time and cost to determine if it’s worth it. The same holds true for automation.

    One of your pros might also be considered a con. If you run a well-documented script and fail it, I would typically have someone come behind you and retrace your steps. We don’t typically open defects unless we can repeat the scenario. If you’re working from a checklist, I have no way of knowing if I can repeat the scenario short of you fully documenting what you did – but guess what? That’s a script :)

    As far as the time saved, that’s really situational. Regardless of how or when you do it, you’re going to have to spend time understanding the system. If you spend that time during the design/construction phases, why not document ways to test what you know? Even if you use a checklist to create a transaction, at some point you have to figure out how to create that transaction. At least with a script, anyone that comes behind me will know how without having to spend the same amount of time that I spent figuring out something that’s already known. You did touch on this in your disadvantages.

    Also, I wouldn’t say that a checklist guarantees that you’ll have more coverage. Scripts are only good for what you know. If you’re in a script-based shop and you uncover something that falls outside your original scope, it’s typically because:

    1. You missed that requirement
    2. The requirement didn’t exist
    3. The dev team implemented undocumented functionality

    In any case, I would take the time to write a script for that scenario. I don’t see why you would uncover this working from a checklist and not find it working from a script, unless you’re not paying attention. But that’s a flaw of the tester, not the process.

    I like the way you set this up, listing pros and cons of each. Too often, people make up scenarios that suit their own cause. In reality, no project will be exactly like another. It’s good to understand a particular situation, compare the tools and processes, and list why each would or wouldn’t be beneficial.

  4. Hi Chad,

    Thanks for commenting, you make some really excellent points. Since writing this post I have had time to try out combinations of checklists and scripts and I agree with you – checklists and scripts are definitely not mutually exclusive.

    Having no traceability to requirements was pretty horrifying to me as well, but when there are no documented requirements, the alternatives are either to put on your BA hat and retrospectively document the requirements of the system before even starting on test cases, or to let the test cases double as documented requirements. Like you say, the best option really depends on the situation.

    As you say, when a check fails, repeatable steps need to be documented in the bug report, so in this way a failed check triggers the creation of a script in the bug report. In past projects I’ve seen this turn the bug tracker into a handy script repository: as part of regression testing, the testers go through all of the fixed bugs and follow the scripts.

    The approach I outlined was an approach for dealing with the following situation:
    1. Requirements are either undocumented, or most requirement documents are out of date.
    2. The dev team has a history of implementing functionality without documenting it.
    3. The system is already a large, complex system.
    4. The test team is under-resourced.

    As you say, you probably would discover new things by working from scripts, but you would have to be paying attention. One advantage I have found with checklists is that they can force you to pay attention, because they require more thinking than just following steps. As you say, that is a flaw of the tester, but because the test team is under-resourced in this situation, non-testers are often called in to lend a hand with regression testing, and each of these non-testers has different knowledge of the system. In this way, checklists may encourage them to explore areas outside the regular script paths, which may reveal more bugs and deliver more system information back to the testers, who can then create scripts. Perhaps this is a way for checklists to evolve into scripts?

    Thanks again for commenting, it was really good to read your perspective and it’s given me a lot to think about.

  5. Given your situation:

    The approach I outlined was an approach for dealing with the following situation:
    1. Requirements are either undocumented, or most requirement documents are out of date.
    2. The dev team has a history of implementing functionality without documenting it.
    3. The system is already a large, complex system.
    4. The test team is under-resourced.

    I agree completely, and I feel that by detailing that scenario, you further validate your point. Scripts work with known variables. For things that are unknown, you have to explore. That’s the definition. I enjoyed the read. Keep up the good work.
