“That’s one of the reasons we have testers. A great tester gives programmers immediate feedback on what they did right and what they did wrong. Believe it or not, one of the most valuable features of a tester is providing positive reinforcement. There is no better way to improve a programmer’s morale, happiness, and subjective sense of well-being than to have dedicated testers who get frequent releases from the developers, try them out, and give negative and positive feedback. Otherwise it’s depressing to be a programmer. Here I am, typing away, writing all this awesome code, and nobody cares. Boo hoo.” – Joel Spolsky (full article)
There’s something that’s been bugging me about this quote for the past couple of weeks, and I think I finally figured out what it is. As a tester, it’s important to establish a certain dynamic between oneself and each developer. This is because our job is essentially to find flaws in the stuff they create. Nobody really likes having their mistakes pointed out to them. If care is not taken, it’s easy for developers to resent testers for doing their jobs. Imagine if a tester said:
“This feature you’ve made is full of bugs. You’ve done a terrible job.”
Obviously this is going to make the developer upset. The main issue here is that the tester has passed judgement on the developer – “You’ve done a terrible job”. If a tester is to work well with developers, the tester cannot pass judgement on the code. Testers report what they see, they don’t say if it’s good or bad. They report expected and actual results, not correct and incorrect results.
If we flip this around and the tester says:
“This feature you’ve made is brilliant. You’ve done a fantastic job!”
This may make the developer happy, but the tester is still passing judgement on the code. What happens when the developer creates another feature and asks the tester “what do you think?” Will the tester say “well it’s okay, but there are a lot of bugs here, do better next time”? This is not a good dynamic between the tester and the developer – it’s not the tester’s place to say whether the feature is good or bad. The feature is not made for the tester. The feature is made for the customer.
Furthermore, GUI-level testers will rarely see whether a developer has done a great job or not. Developers rarely write the specifications for the product they build, and many do not even design the interfaces. If a developer comes up with some brilliant solution to a really challenging technical problem, it’s unlikely that the tester will ever know about it – testers don’t see this process, they only see the end result. The only measure a tester has is the number of bugs found. And this isn’t necessarily an indication of a job well done either – it doesn’t take into account the complexity of the task or the time allocated to it. Nor is it a tester’s job to research these factors in order to tell the developer whether they have done a good job.
This kind of positive reinforcement is best left to the development manager, not the test team.
So what do developers think of this? Seasoned developer and self-professed IT tinkerer Rob Sanders says:
An interesting topic. My first thought is that a developer has the wrong mindset if they are looking for credit for a specific formula or process that they write or implement. You’re right, what the rest of the world cares about is: quality code which produces the expected result within the scope of “acceptable performance”.
Therefore, what a developer should look for in their output is defect-free code (or as close to it as humanly possible) which meets specifications and runs as fast as they can tune it. Credit for actual implementation specifics should come from other developers (let’s face it, in most cases only other developers normally would understand and appreciate it anyway).
So, I would expect kudos from QA Engineers if I produced a feature with lots of functionality (and good performance) and a low bug count. I wouldn’t expect credit because I used design pattern X, Y or Z. That should come from my supporting team of developers and architects. On the flip side, I’d expect to have defects and performance issues pointed out. The area which seems to generate the most friction (from my experience) is when QA, Business Analysts and Developers have different interpretations of the stated requirements.
What do you think?