Extreme Testing (XT)

I find that most test methodologies today don’t fit well with agile development. When requirements change, the developers adapt fairly easily, but the testers are left gritting their teeth because they have to update a huge backlog of regression test cases.

Most test orgs do try to do similar tasks at the same time, like writing test cases while developers are doing design. I think the problem is that many people treat test cases as equivalent to design documents, but they’re not – they’re closer to pseudocode or a Z specification, which says exactly what you’re going to do before you do it. Imagine if you wrote pseudocode for all your apps before you wrote the actual code, then had to go back and update all the pseudocode whenever anything changed. You’d think “what a waste of time – why aren’t I updating the actual code?”

The test APPROACH is the real equivalent of the design documents – it outlines how you will test, not the items you will click in order to do it.

Why remove test cases?
From Lessons Learned in Software Testing
Lesson 46: One important outcome of a test process is a better, smarter tester.
We often hear arguments against any form of testing that results in minimal or no documentation, as though the only value of testing is what comes from writing down our tests. This ignores a profoundly important product of testing: the tester herself.

Good testers are always learning. As the project progresses, they gain insight into the product and gradually improve their reflexes and sensibilities in every way that matters in that project. An experienced tester who knows the product and has been through a release cycle or two is able to test with vastly improved effectiveness – even without any instructions – compared to an inexperienced tester who is handed a set of written instructions about how to test the product.

Some consultants and writers in the field seem to believe that an ineffective tester can be transformed into an effective tester merely by handing her a test procedure. In our opinion, this is a bad practice. It reflects a fundamental misunderstanding of testing and the people who do it well.

Goals of XT:
– Turn the main defect-finding activities up to maximum and cut out all the non-essential bits.
– Keep up with agile processes without requiring a ton of maintenance.
– Encourage tester thinking and creativity during the actual testing process.
– Highlight important information, when it matters.
– Increase test coverage.
– Eliminate risk related to out-of-date test cases.
– Encourage greater understanding of product requirements.
– Reduce focus on CRUD functionality (basic functionality that is expected to work) and put more emphasis on testing integrated system components and complex scenarios.

Concepts of XT:
– Less time spent documenting means more time for testing. More time for testing means more defects found.
– Without defined test script steps, testers can use their own skills and judgement to decide the best way to test the requirement when the product is presented to them.

Process of XT:
1. Read the requirements and familiarise yourself with them. Break them down into smaller chunks and add assumptions and implied requirements where necessary.
2. Conduct a risk analysis to identify which requirements are highest priority.
3. Receive application.
4. Test against the requirements. Log pass/fail results against them and relate defects to them (a rough sketch of what this could look like follows this list).
5. Create a regression suite from high priority requirements and high priority defects.
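
As a rough illustration of the bookkeeping in steps 4 and 5, here is a minimal Python sketch of logging results directly against requirements. Everything in it – the requirement IDs, field names and helpers – is a hypothetical format of my own, not a prescribed part of XT:

from dataclasses import dataclass, field

@dataclass
class Requirement:
    """Hypothetical record for testing directly against a requirement."""
    req_id: str        # e.g. "REQ-042" (invented ID scheme)
    description: str
    priority: str      # from the risk analysis in step 2
    results: list = field(default_factory=list)  # pass/fail observations
    defects: list = field(default_factory=list)  # related defect IDs

    def log_result(self, passed: bool, note: str = "") -> None:
        # The annotation lives on the requirement, not in a test case.
        self.results.append({"passed": passed, "note": note})

    def relate_defect(self, defect_id: str) -> None:
        self.defects.append(defect_id)

# Usage: test, then record what you actually observed.
req = Requirement("REQ-042", "Error message shown on invalid login", "high")
req.log_result(True, "Firefox, Chrome and IE – message appears in all three")
req.log_result(False, "Message truncated at 800x600")
req.relate_defect("DEF-117")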

Metrics from XT:
– Defect clusters mapped to requirement areas
– Percentage of requirements passed/failed
– Number of defects found
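
All three could be computed straight from requirement records like the hypothetical ones sketched above (again, the structures are illustrative assumptions, not part of XT itself):

from collections import Counter

def xt_metrics(requirements):
    """Summarise XT metrics from a list of Requirement records."""
    passed = [r for r in requirements
              if r.results and all(obs["passed"] for obs in r.results)]
    failed = [r for r in requirements
              if any(not obs["passed"] for obs in r.results)]
    total = len(requirements) or 1  # avoid division by zero
    # Map defect clusters to requirement areas (here, one area per requirement).
    clusters = Counter({r.req_id: len(r.defects) for r in requirements})
    return {
        "pct_passed": 100.0 * len(passed) / total,
        "pct_failed": 100.0 * len(failed) / total,
        "defects_found": sum(clusters.values()),
        "defect_clusters": clusters.most_common(),
    }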

Advantages:
– Testing directly against the latest requirements eliminates time spent updating test cases, and removes the risk of testing against out-of-date ones.
– Saves time writing test cases, the majority of which will be discarded later anyway.
– Makes requirement coverage easier to track.
– More exploratory testing is encouraged.
– More tester creativity is utilised at a time when the most information is available to the tester.
– Reduces the false sense of security created by a sea of low priority test cases that pass.
– More agile – can keep up more easily with changed requirements.

Disadvantages:
– Cannot get non-testers to run scripts, as the methodology requires a medium to high level of tester experience.
– Hard to track specifics of passed tests, as details are only recorded for failed tests (in defects).
– Hard to produce proof of testing when most of the tests pass.
– Managers and customers may find the results alarming and overly negative, which may lead to undesirable consequences.

Thoughts? Flames? Rapturous cries of agreement?

2 thoughts on “Extreme Testing (XT)”

  1. If you need to document the tests that are passing, there are a number of options. You can manually document (in summary form) as you test, which I find tedious – it breaks the rhythm. You can use screen capture tools to record the tests and use that as documentation (or review it later to produce actual written documentation). Probably the best option, though, would be to ensure the application under test writes trace logs that can be used as documentation. If the logs are well formed, the potential exists to get an overview of what sort of code coverage the manual testing is achieving, and to help identify what areas may need further testing.
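
    For example, a minimal Python sketch of the kind of well-formed trace logging I mean – the logger name, format and hook are just assumptions:

    import logging

    # One consistent, machine-readable line per user action makes the log
    # usable as test documentation and as a rough coverage map afterwards.
    logging.basicConfig(
        filename="trace.log",
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(name)s %(message)s",
    )
    trace = logging.getLogger("app.trace")

    def on_login(user: str, success: bool) -> None:
        # Hypothetical hook in the application under test.
        trace.info("action=login user=%s success=%s", user, success)

    on_login("tester1", True)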

    If the test cases are considered pseudo-code for the actual code, perhaps extracting some sort of pseudo-code or visual diagram from the code itself might help identify issues in the implementation (obviously any extracted pseudo-code could not be used as the test case). Perhaps by looking at the implementation, issues may become evident that are not as obvious in straight code or even in the application itself.

    I think the trick is to still keep reporting the stuff that you’ve tested and how you know things work (i.e. Requirement A works for B). Essentially you need to annotate the requirements and include those annotations as part of the reporting to managers/clients – something as simple as the sketch below would do.
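
    To illustrate (requirement IDs and wording invented – any format that travels with the requirements would work):

    # Hypothetical annotations, kept alongside the requirements themselves
    # and included verbatim in reports.
    annotations = {
        "REQ-A": "Works for input B; verified in Firefox, Chrome and IE.",
        "REQ-C": "Fails for empty input – see defect DEF-201.",
    }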

  2. Agreed – yes, I find documenting as you go breaks the rhythm, as you put it. I thought about screen capture or even using automation record tools, but it seems too messy. You could review it later and make documentation out of it, which might be the neater option, but sometimes half the test is in your head, so you’d have to talk your way through it aloud, and that’s not a good idea in open plan offices.

    Trace logs could work in terms of coverage, but the bigger concern is proof of testing in case of a production failure, i.e. “The product failed in some catastrophic way, costing the company millions of dollars – did you even bother to test this??”

    You raise a good point about extracting pseudo-code from the code, but it sounds a lot like reviewing the design. I guess it brings it down to a lower level, though. It is difficult to find complex errors simply by reviewing code; maybe bringing it up to a level partway between design and implementation would reveal a new method of finding bugs.

    As you say, it would be easiest just to justify your pass results with annotations, e.g. “I tested requirement A in these 3 browsers and the message appears in all of them”. Perhaps this would be enough documentation to deal with an audit of some kind. It’s no worse (and maybe even better) than many of the test cases I have seen.

    Thanks Rhys, you’ve given me food for thought.
