Automated tests – everybody wants them, right? I’ve seen the hunger in people’s eyes when they hear about a suite of tests that run themselves. It sounds so high-tech and glamorous – machines running all your tests for you so you just have to check the results at the end of the day.
But it’s never as simple as “One Automation Please!” “Sure, here you go sir.” Test automation is an incredibly tricky thing to get right. After some lengthy research on the topic, I’m not even sure if anybody has managed to get it completely right. It’s a new concept, so it’s still evolving, but it seems like a few organisations have managed to get to a place where they are reasonably satisfied with their automation approach.
Before I discuss a few techniques, I’d just like to point out a few things about automation.
1. Automation is not a product. Do not deliver it to your customers. There are so many reasons not to do this. First of all, what are they going to do with the tests? Run them against their production system? Are you expecting the production system to change so much over time that it will need testing again? And if the tests do run and some fail, will the customer know how to interpret the results? Will your automated tests create test data in their production system? Automation is a tool, a technique, a means to an end.
2. Automation is an investment. Do not use it unless you know you will profit from it. It takes a long time to automate a test. It takes even more time to maintain a suite of automated tests while the application under test is still under construction. Work out a way to decide whether automation is really saving you time, and check on it at regular points in the project lifecycle. If you are not getting any return on your automation investment, cut back.
3. Automation should not be a project deliverable. Automating tests is a technique designed to benefit the testers. If you list it as a project deliverable, you risk committing your test team to arbitrary goals, such as “Automate 100 test cases”. Deciding when automation is appropriate, and how much of it to use, should be the test team’s prerogative, just like any other test technique or tool.
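The investment check in point 2 above can be sketched as a simple break-even calculation. This is a minimal illustration with made-up numbers, not a formula from any standard; substitute your own estimates of build cost, per-run maintenance, and manual execution time.

```python
import math

def automation_break_even(build_hrs, upkeep_hrs_per_run, manual_hrs_per_run):
    """Return the number of runs needed before automating a test pays off,
    or None if each automated run costs more in upkeep than running it
    manually would (i.e. automation never pays off for this test)."""
    saving_per_run = manual_hrs_per_run - upkeep_hrs_per_run
    if saving_per_run <= 0:
        return None
    # Total build cost divided by what each run saves, rounded up.
    return math.ceil(build_hrs / saving_per_run)

# Hypothetical example: 8 hours to automate, 0.5 hours of upkeep per run,
# 1 hour to execute manually -> pays off after 16 runs.
runs = automation_break_even(8, 0.5, 1)
```

If the break-even point is beyond the number of times you realistically expect to run the test before the project ends, that is a strong signal to cut back, as suggested above.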
The most important thing to decide about test automation is its purpose. What is your goal? How will you know when you are done? Here are a few goals I’ve seen in action:
Automate x% of all test cases.
Advantages: An easy target to measure.
Limitations: No prioritisation of test cases, and the goal may be unrealistic. When the team is under pressure to meet it, testers usually automate the “easy” test cases, which tend to be trivial, just to reach the target. In addition, x% of all test cases may not be automatable, which results in testers writing “near enough is good enough” tests that may validate the wrong thing. The danger is that you will see a large number of passing results that may not be accurate.
Automate all primary or high priority test cases.
Advantages: Ensures the most important areas are tested. An easy target to measure.
Limitations: Goal may be unrealistic for the available resources. If the whole test team is spending their time trying to reach the automation goal, who is looking for defects? It is difficult to scale back this goal because all of the targeted tests are high priority.
Automate smoke tests only.
Advantages: An easy goal to achieve and easy to maintain. Will be used often enough to guarantee good return on investment.
Limitations: May not be enough to justify spending $x,000 on automated testing tool licenses. While suitable for very small teams, larger teams could benefit from automating more tests.
There is no definitive “right” approach, unfortunately. Here are a few approaches I am trialling at the moment.
1. Automate all smoke tests. Categorise regression test cases by product risk. Assuming a team of x testers automating for y hours per day, a maximum of z test cases can be automated. Therefore, the goal is to automate the z highest-risk test cases.
2. Automate all smoke tests. Identify the test cases that will take the longest time to execute manually. Rank them in order of longest to shortest time. Timebox time spent on automation to x hours per week per tester. The goal is to automate as many of the identified test cases as possible within the allotted time, in order of ranking.
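The two selection strategies above boil down to ranking the candidate tests and filling the available capacity in order. The sketch below shows both rankings side by side; the test names, risk scores, timings, and the flat 4-hours-to-automate assumption are all invented for illustration.

```python
# Hypothetical regression suite: risk on a 1-10 scale, manual
# execution time in minutes. None of this data is real.
tests = [
    {"name": "login",    "risk": 9, "manual_mins": 30},
    {"name": "checkout", "risk": 8, "manual_mins": 90},
    {"name": "search",   "risk": 5, "manual_mins": 60},
    {"name": "profile",  "risk": 3, "manual_mins": 15},
]

def pick_by_risk(tests, capacity):
    """Approach 1: automate the `capacity` highest-risk tests,
    where capacity (z) comes from testers x hours available."""
    ranked = sorted(tests, key=lambda t: t["risk"], reverse=True)
    return [t["name"] for t in ranked[:capacity]]

def pick_by_manual_time(tests, timebox_hrs, hrs_to_automate_each=4):
    """Approach 2: rank by longest manual execution time and automate
    as many as fit in the weekly timebox, in order of ranking."""
    ranked = sorted(tests, key=lambda t: t["manual_mins"], reverse=True)
    chosen, spent = [], 0
    for t in ranked:
        if spent + hrs_to_automate_each > timebox_hrs:
            break
        chosen.append(t["name"])
        spent += hrs_to_automate_each
    return chosen

by_risk = pick_by_risk(tests, capacity=2)        # ['login', 'checkout']
by_time = pick_by_manual_time(tests, timebox_hrs=10)  # ['checkout', 'search']
```

Note that the two approaches can select different tests from the same suite, which is exactly why it matters to decide the purpose of your automation up front.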
Further reading: Just Enough Software Test Automation by Mosley and Posey.