How many times has this happened to you? You write an automated (functional GUI) test, you run it, it works, so you check it in and add it to a suite. The next day, you check the suite results and find that your brand new test has failed. What went wrong?
The best way to catch this before check-in is to run the test twice in a row. If you try this, you may find that the test modified the state of the application in some way (by changing data or settings), so when the test runs a second time, things don’t work the way the test expects.
Simple example – you have a test that adds an item to a shopping cart and checks the cart to make sure the item was added. You run the test once, and it works fine. You run it a second time, and it fails because now there are TWO items in the shopping cart, and the test is programmed to verify the first item in the cart, so it is looking at the wrong one.
So, put cleanup in your test, you say? An excellent suggestion. You add a method to remove all items from your cart when you’re done. But this can easily be thwarted in the following ways:
– The test could break after the cart item was added but before the cleanup is supposed to happen, so the cleanup will never be executed.
– The cleanup itself could fail, which would make the entire test fail, even if the “add item to shopping cart” functionality is working correctly.
So my advice is:
– Do cleanup, but don’t rely on it. Try to code your test so that if cleanup doesn’t happen, your test can still cope to a reasonable extent.
– Don’t let your whole test fail if cleanup fails. I usually surround cleanup in a try-catch statement where the catch part is empty (I just heard developers cringing, but trust me this is okay for this kind of thing). Basically if cleanup fails, I don’t want to know about it.
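In Python, the equivalent of a try-catch with an empty catch is `try` / `except Exception: pass`. Here is a sketch of the pattern, using a hypothetical `CartClient` whose remove feature is deliberately broken to show that the test still passes:

```python
# Sketch of the "cleanup must never fail the test" pattern.
# CartClient and its methods are hypothetical stand-ins.
class CartClient:
    def __init__(self):
        self.items = ["widget"]

    def remove_all_items(self):
        raise RuntimeError("remove-item feature is broken today")

def test_add_item_to_cart():
    cart = CartClient()
    # ... the real verification of "add item" happens here ...
    verified = len(cart.items) == 1
    # Cleanup: attempt it, but swallow any failure so a broken
    # "remove item" feature cannot fail this unrelated test.
    try:
        cart.remove_all_items()
    except Exception:
        pass  # deliberately empty – cleanup failures are someone else's test
    return verified

result = test_add_item_to_cart()
print(result)  # True – the test passes even though cleanup blew up
```

The broken remove feature should still be caught, just not here: by the one dedicated test whose job is to verify it.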
Why? Because if 20 tests are using the “remove item from cart” feature during cleanup, and that feature breaks, I don’t want 20 tests to fail. I want 1 test to fail – the test verifying whether the “remove item from cart” feature works.