Tests are supposed to fail

How do you tell the difference between an automated test written by a tester and one written by a developer? The difference is usually something like this:

Developer-written test:
Step 1: Set up test
Step 2: Do test
Step 3: Verify result
Step 4: Clean up test, assuming everything went right
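
In code, the developer version tends to be a straight line from setup to teardown. Here's a minimal sketch in Python; the in-memory FakeApp, its methods and the user name are made-up stand-ins for illustration, not anyone's real API:

    # Hypothetical in-memory "application" used purely for illustration.
    class FakeApp:
        def __init__(self):
            self.users = {}

        def create_user(self, name):
            self.users[name] = {"name": name}

        def delete_user(self, name):
            del self.users[name]

    def test_create_user_developer_style():
        app = FakeApp()                  # Step 1: set up test
        app.create_user("alice")         # Step 2: do test
        assert "alice" in app.users      # Step 3: verify result
        app.delete_user("alice")         # Step 4: clean up, assuming everything went right

If anything in the middle blows up, the cleanup on the last line never runs.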

Tester-written test:
Step 1: Clean up whatever mess might be lingering from the previous tests
Step 2: Set up test
Step 3: Make sure setup worked
Step 4: Do test
Step 5: Cater for a few different failure possibilities
Step 6: Verify result
Step 7: Clean up test, assuming everything went right
Step 8: Clean up test, assuming everything went completely wrong
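
The tester version of the same check is longer and more defensive. A sketch under the same assumptions (FakeApp and its methods are hypothetical; a real suite would be talking to a real application):

    import pytest

    # The same hypothetical in-memory application as above.
    class FakeApp:
        def __init__(self):
            self.users = {}

        def create_user(self, name):
            self.users[name] = {"name": name}

        def delete_user(self, name):
            self.users.pop(name, None)   # tolerant delete: no error if the user is already gone

    def test_create_user_tester_style():
        app = FakeApp()
        app.delete_user("alice")                          # Step 1: clean up leftovers from earlier tests
        try:
            app.create_user("alice")                      # Step 2: set up test
            assert "alice" in app.users, "setup failed"   # Step 3: make sure setup worked
            try:
                profile = app.users["alice"]              # Step 4: do test
            except KeyError:                              # Step 5: cater for a failure possibility
                pytest.fail("user vanished between setup and the test")
            assert profile["name"] == "alice"             # Step 6: verify result
            app.delete_user("alice")                      # Step 7: clean up, assuming everything went right
        finally:
            app.delete_user("alice")                      # Step 8: clean up, assuming everything went completely wrong

The finally block is the important bit: it runs whether the test passes, fails an assertion, or dies with an unexpected exception.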

Developers are trained to write stuff that works correctly. A good developer doesn't accept failure as a part of life; they strive to make stuff that works. Testers, on the other hand, expect failure on a regular basis, so they expect the test to fail at some point, despite everyone's best intentions.

The difference in how the test is written becomes apparent when an application failure does happen. If the test was written by an appropriately paranoid tester, there's a good chance it will produce a meaningful failure message and clean up the ensuing mess. If the test wasn't quite paranoid enough, then one or more of the following things may happen:

1. The test will fail with some godawful error message like “Could not find object on page”, resulting in much time wasted debugging the test to figure out what went wrong.
2. The test will have changed some vital setting in the application that it didn’t clean up, resulting in a chain reaction of failure for the rest of the test suite.
3. The QA department will spend all day trying to figure out the one little thing that caused so much failure, and will have wasted time they could have spent finding more important bugs (automation != efficiency!).

So here are the golden rules of writing robust test scripts:

  1. Always expect failure.
    Use if-else, try-catch…whatever you can to anticipate failure and cater for it accordingly. 
  2. If you change something vital, clean it up.
    Cleanup scripts must execute even if an unexpected failure occurs. There is always the chance that the test will fail before it reaches the cleanup step, and the remainder of the test, cleanup included, will never run.
  3. Tests should be as atomic as possible.
    This means that tests should not be dependent upon other tests. Preferably, they should not even use the same data. Make tests create their own data wherever possible instead of relying on existing data within the system.
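
Here's how those three rules might look in a pytest-style script. Again, FakeApp and its methods are hypothetical stand-ins; in a real suite the fixture would connect to, and tidy up, the actual system under test:

    import uuid
    import pytest

    # Hypothetical in-memory application, standing in for the real system under test.
    class FakeApp:
        def __init__(self):
            self.users = {}

        def create_user(self, name):
            self.users[name] = {"name": name}

        def delete_user(self, name):
            self.users.pop(name, None)

    @pytest.fixture
    def app():
        app = FakeApp()
        yield app            # hand the app to the test
        app.users.clear()    # Rule 2: cleanup after the yield runs even if the test body fails

    def test_user_can_be_renamed(app):
        name = f"user-{uuid.uuid4()}"           # Rule 3: create your own data; don't lean on other tests
        app.create_user(name)
        try:                                    # Rule 1: expect failure and cater for it
            app.users[name]["name"] = "renamed"
        except KeyError:
            pytest.fail(f"setup did not create {name}")
        assert app.users[name]["name"] == "renamed"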

2 thoughts on “Tests are supposed to fail”

  1. Hi,

    Agree with your golden rules, but not sure about the road-trip getting there.

    I think there are several shades of test developer – whether the work is done by “testers”, developers or “test automation developers”.

    It usually works better to have specialist test developers working on the scripting/test case implementation – so that the scripting/automation is run more as a software development activity in its own right – rather than being embedded in the test design activity.

  2. Hi Simon. True, I am kind of over-generalising. Agreed that it’s better to have specialist test developers working on automation.

    However, it depends on the team’s resources. If there aren’t many testers but automation is still a part of the team’s test process, then the developers may pitch in and help out with the automated test effort. It’s not ideal, but it can work.

    The other scenario I’ve encountered is when a developer (or wannabe developer) is shunted into test automation as either a stepping stone towards development, or just due to lack of test resources. Again, not ideal, but it can work with some education about test automation.
