This is a story about my experience in using a wiki to manage test cases.
Over the past few years, I have been evaluating different test case management approaches and tools. At first I was looking for a one-size-fits-all solution, but it quickly became apparent that no such tool exists. At Campaign Monitor, we are constantly adapting and improving our test approach to fit each release cycle. So I started focusing on finding a tool that supports our current test approach, but is flexible enough to adapt when that approach changes to suit a new context.
I have found that the way many test tools are designed can force testers into a particular test approach. A wiki is somewhat like a series of blank canvases all linked together, so it seemed like a very flexible solution. In practice, this proved to be the case, but was flexibility alone enough?
We had more freedom in our test cases
We began with a repository of regression test cases in TestLink and a suite of 1000+ automated GUI tests that ran every night. When we decided to try out FitNesse, we stopped adding new test cases to TestLink and added them to a wiki page in FitNesse instead.
The format of our test cases changed as well. TestLink’s interface encourages the user to enter test cases as “Title”, “Steps” and “Expected Results”. With a blank wiki page, we could write tests in whatever format we desired. So we used Given-When-Then, which is a very concise and easy-to-read format. For example:
As a paying customer
Given that I have added an item to my shopping cart
When I choose to checkout
Then I should be shown a summary of my order
And I should be prompted to pay for my purchase
One thing I like about this format is that it gives the tester a lot of freedom in the way they can run the test. In the example above, it doesn’t specify *how* the paying customer adds items to the cart, or even in what way they are prompted to pay for their purchase. Given the flexible nature of feature requirements on our projects, this suits us very well. In addition, it increases the likelihood that different testers will run the test in entirely different ways.
Everyone was on the same page
This was the first time I had seen developers not only read the test cases without needing to be coaxed into it, but also edit and add to the scenarios. This was probably due to the easy-to-edit nature of the wiki, the easy-to-understand format of the test cases and our good relationship with our developers.
The first time we tried this approach, we wrote a page of test cases that were relevant to a particular developer’s assigned feature. We added a series of questions that we had about the feature to the same wiki page. Then we sent the URL for that page to the developer. The developer read the test cases, answered the questions and added a few more test cases of his own. We had some follow-up questions to some of his responses, so we went to his office to discuss them in person. While discussing, we were able to quickly update the test cases in the wiki page from his computer, and add additional notes that we could expand into new test cases later on.
In a later release, a different developer had not seen the Given-When-Then format before and was initially confused. I went to his office to explain it, and after about 30 seconds of explanation he easily figured it out. So we went through the test cases together, and he modified them and added new ones as we discussed them. Many of the test cases were based on assumptions and he was able to quickly validate them and correct them as necessary.
Using a wiki for test plans had unexpected benefits
I used to write detailed, 20-page test plan documents that nobody ever wanted to read. After realising this was a pointless exercise, I eventually whittled these test plans down to two-paragraph summaries on the company intranet noticeboard. When we started using FitNesse to store test cases, we needed a central document to tell us which tests we would be using for each release, and where they were located. Creating a test plan document in the wiki to hold this information seemed like a sensible thing to do, and because it was easily editable, it encouraged me to update it with important information as the release progressed.
The test plans became a guide for the testers: what was testable, what work we had done and what work remained. We added a list of high-risk feature areas to the plan and it became a central battle plan for regression testing. The latest version is easily shared with the team just by sending the URL.
Scalability could be an issue
We made a manual test fixture to mark manually run test cases as passed or failed. However, each of these results has to be reset individually, so it isn’t really a scalable solution. It has worked okay so far because we have a small number of tests and only use a subset of them for regression, but I expect it could become a problem as more tests accumulate.
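To make the reset problem concrete, here is a minimal sketch of the idea (in Python, with entirely hypothetical names; our actual fixture was part of a C# codebase and worked through FitNesse, not like this):

```python
# Concept sketch of a manual-result store: testers mark each test
# passed or failed, but results can only be cleared one at a time.
# All names here are hypothetical illustrations, not our real fixture.

class ManualResults:
    def __init__(self):
        self.results = {}  # test name -> "pass" / "fail"

    def mark(self, test_name, passed):
        self.results[test_name] = "pass" if passed else "fail"

    def reset(self, test_name):
        # The pain point: there is no bulk reset, so starting a fresh
        # regression run means one reset call per previously run test.
        self.results.pop(test_name, None)


store = ManualResults()
store.mark("checkout shows order summary", True)
store.mark("prompt to pay appears", False)

# Tedious once the tests number in the hundreds:
for name in list(store.results):
    store.reset(name)
print(store.results)  # → {}
```

With only a small regression subset this loop is bearable; with an accumulating suite, the lack of a one-step “clear all results” is what stops it from scaling.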
Automation integration was surprisingly disadvantageous
FitNesse is designed to be hooked into an automation tool, such as Selenium. The idea is that tests written as decision tables, or even in sentence form, can be run as automated tests. At first it’s a bit tough to get your head around the concept that plain text written in a wiki can magically turn into an automated test. What’s actually happening is that the test runner uses keywords from the wiki text as method names and parameters in the backing fixture code. So the wiki text is like a series of commands and data inputs fed into the automated test code. For more information, check out FitNesse’s two-minute example.
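To illustrate the mapping, here’s a rough concept sketch in Python of how a decision table’s column names can drive a fixture class through reflection. This is not FitNesse’s actual implementation (FitNesse fixtures are written in Java, or C# via FitSharp); the class, runner and table below are all hypothetical:

```python
# Concept sketch: a wiki decision table driving code via reflection.
# Plain columns map to setter calls; columns ending in "?" map to a
# query method whose result is compared against the expected value.

class Division:
    """Backing fixture: table column names map to these methods."""

    def set_numerator(self, value):
        self.numerator = float(value)

    def set_denominator(self, value):
        self.denominator = float(value)

    def quotient(self):
        return self.numerator / self.denominator


def run_decision_table(fixture_class, header, rows):
    """Run each table row against a fresh fixture instance and
    return True/False for each expected-value comparison."""
    results = []
    for row in rows:
        fixture = fixture_class()
        for name, value in zip(header, row):
            if name.endswith("?"):
                actual = getattr(fixture, name.rstrip("?"))()
                results.append(actual == float(value))
            else:
                getattr(fixture, "set_" + name)(value)
    return results


# Equivalent to a wiki table |numerator|denominator|quotient?| with two rows:
header = ["numerator", "denominator", "quotient?"]
rows = [["10", "2", "5"], ["12", "3", "4"]]
print(run_decision_table(Division, header, rows))  # → [True, True]
```

The point is that the wiki text stays readable as a plain table, while the runner treats its words as method lookups and its cells as data, which is the “magic” described above.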
We tried using it with Selenium and ran into a few issues. First, it was a real pain to set up, mostly due to a lack of clear information on how to set up FitNesse and Selenium with a C# codebase. Second, writing tests to suit the FitNesse model turned out to be quite time consuming. We did get tests working in the end, but I don’t think it suited our style of testing very well. At least we now have the capability of running automated tests this way, and we can use it if we ever find a situation where it would be an advantage.
More plus than minus
Overall I’ve been pretty happy with this wiki experience and I’m going to stick with it and keep evaluating. Our current plan is to take test cases out of the wiki and add them to our automated test suite, which runs nightly (independently of FitNesse). For now, I think this may suit us better than running tests from FitNesse itself, and it may help address the scalability issues too.