“That’s one of the reasons we have testers. A great tester gives programmers immediate feedback on what they did right and what they did wrong. Believe it or not, one of the most valuable features of a tester is providing positive reinforcement. There is no better way to improve a programmer’s morale, happiness, and subjective sense of well-being than ~~a La Marzocco Linea espresso machine~~ to have dedicated testers who get frequent releases from the developers, try them out, and give negative and positive feedback. Otherwise it’s depressing to be a programmer. Here I am, typing away, writing all this awesome code, and nobody cares. Boo hoo.” – Joel Spolsky (full article)
There’s something that’s been bugging me about this quote for the past couple of weeks, and I think I finally figured out what it is. As testers, we need to establish a certain dynamic between ourselves and each developer, because our job is essentially to find flaws in the stuff they create. Nobody really likes having their mistakes pointed out to them, and if care is not taken, it’s easy for developers to resent testers for doing their jobs. Imagine if a tester said:
“This feature you’ve made is full of bugs. You’ve done a terrible job.”
Obviously this is going to make the developer upset. The main issue here is that the tester has passed judgement on the developer – “You’ve done a terrible job”. If a tester is to work well with developers, the tester cannot pass judgement on the developer or their code. Testers report what they see; they don’t say whether it’s good or bad. They report expected and actual results, not correct and incorrect results.
If we flip this around and the tester says:
“This feature you’ve made is brilliant. You’ve done a fantastic job!”
This may make the developer happy, but the tester is still passing judgement on the code. What happens when the developer creates another feature and asks the tester “what do you think?” Will the tester say “well it’s okay, but there are a lot of bugs here, do better next time”? This is not a good dynamic between the tester and the developer – it’s not the tester’s place to say whether the feature is good or bad. The feature is not made for the tester. The feature is made for the customer.
Furthermore, GUI-level testers will rarely see whether a developer has done a great job or not. Developers rarely write the specifications for the product they build, and many do not even design the interfaces. If a developer comes up with some brilliant solution to a really challenging technical problem, it’s unlikely that the tester will ever know about it – they don’t see this process, they only see the end result. The only measure a tester has is how many bugs are found. And this isn’t necessarily an indication of a job well done either – it doesn’t take into account the complexity of the task or the time allocated to the task. And it’s not a tester’s job to research these factors in order to tell the developer whether they have done a good job.
This kind of positive reinforcement is best left to the development manager, not the test team.
So what do developers think of this? Seasoned developer and self-professed IT tinkerer Rob Sanders says:
An interesting topic. My first thought is that a developer has the wrong mindset if they are looking for credit for a specific formula or process that they write or implement. You’re right, what the rest of the world cares about is: quality code which produces the expected result within the scope of “acceptable performance”.
Therefore, what a developer should look for in their output is defect-free code (or as close to it as humanly possible) which meets specifications and runs as fast as they can tune it. Credit for actual implementation specifics should come from other developers (let’s face it, in most cases only other developers normally would understand and appreciate it anyway).
So, I would expect kudos from QA Engineers if I produced a feature with lots of functionality (and good performance) and a low bug count. I wouldn’t expect credit because I used design pattern X, Y or Z. That should come from my supporting team of developers and architects. On the flip side, I’d expect to have defects and performance issues pointed out. The area that generates the most friction, in my experience, is when QA, Business Analysts and Developers have different interpretations of the stated requirements.
What do you think?
2 thoughts on “Positive reinforcement”
One thing Joel’s articles seem to do is provoke thought and comment, which I’m thankful for.
Apart from the typically Joel-esque inflammatory language in his post, I have a couple of thoughts:
Firstly, I have to feel sorry for Joel’s poor programmers who are embroiled in “long-cycle shrinkwrap software” delivery. I would argue that waiting a year to get any customer feedback is just a little on the long side in 2010.
That said, there are more effective ways to shorten programmer feedback loops than waiting for a pat on the back from a tester: Automated Acceptance tests, created in coordination with the product owner, customer and the testers, for example? Talking to the customers?! Radical!
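To make that concrete, here’s a minimal sketch of the kind of automated acceptance test I have in mind, written in Python with the standard unittest module. The Invoice feature, its discount rule and the numbers are all made-up examples so the sketch runs on its own – in a real project the test would exercise the actual product, not a stand-in class.

```python
# A hypothetical acceptance test: it records behaviour the product owner,
# customer and testers agreed on, phrased against the feature's public
# interface rather than its internals. The Invoice class is only a stand-in
# so this sketch is self-contained and runnable.
import unittest


class Invoice:
    """Hypothetical feature under test."""

    def __init__(self, subtotal):
        self.subtotal = subtotal

    def apply_discount(self, percent):
        """Return the total after a percentage discount."""
        if not 0 <= percent <= 100:
            raise ValueError("discount must be between 0 and 100 percent")
        return round(self.subtotal * (1 - percent / 100), 2)


class DiscountAcceptance(unittest.TestCase):
    """Executable record of the agreed behaviour, not of the implementation."""

    def test_ten_percent_discount_reduces_the_total(self):
        self.assertEqual(Invoice(200.00).apply_discount(10), 180.00)

    def test_discount_over_one_hundred_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            Invoice(200.00).apply_discount(150)


if __name__ == "__main__":
    unittest.main()
```

The point is that a test like this gives the programmer a pass/fail answer within minutes of making a change, instead of waiting a release cycle for someone’s opinion.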
Secondly, I take issue with the notion of human beings having “features”: Did Joel really mean to compare testers to company perks (like coffee machines)?
Anyway, in my experience (and to paraphrase Michael Bolton), testers are most effective when they clearly communicate quality-related information about the software, from the perspective of “someone who matters”: That someone might be a customer, but doesn’t have to be…
If the team decided to implement something to help me test the software, then at that point I’ve become a stakeholder and can certainly judge whether value has been delivered to me.
Positive or negative reinforcement given directly to programmers should perhaps be seen as a side-effect of testers doing their job and not something to stake the personal development of your programmers on.
On the flip side, with my programmer hat on, I’m seeking ways to shorten the feedback loop /myself/, which might include, but isn’t limited to:
* Talking with project stakeholders (customers, product owners, testers, anyone I can get my hands on!)
* Writing end-to-end acceptance tests and implementing features to make the tests pass (pairing with a good tester at this point can be really useful)
* Writing unit tests before writing production code to verify my implementation (I think Joel disagrees with me here) – there’s a small sketch of this loop after the list
* Pairing with the testers when they’re exploring the software (I might just learn something!)
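To show what I mean by writing the tests first, here’s a minimal sketch in Python’s built-in unittest; the slugify function and its behaviour are hypothetical examples, not code from any real project. The test is written first as the specification, run once to watch it fail, and then just enough production code is written to make it pass.

```python
# Test-first in miniature: the test class below is written before slugify
# exists, run once to watch it fail, and then just enough code is written to
# make it pass. Both the function and its behaviour are made-up examples.
import re
import unittest


def slugify(title):
    """Turn a page title into a lowercase, hyphen-separated URL slug."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)


class SlugifyTest(unittest.TestCase):
    # Written first: this is the specification the implementation must satisfy.

    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Positive Reinforcement"), "positive-reinforcement")

    def test_punctuation_is_dropped(self):
        self.assertEqual(slugify("Boo hoo!"), "boo-hoo")


if __name__ == "__main__":
    unittest.main()
```

A loop like that gives me a red/green answer in seconds, which is feedback no amount of tester-supplied morale boosting can replace.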
I’m actually kind of okay with the comparison of testers to company perks, mainly as a means to an end. If developers see testers as a perk, then they’ll at least appreciate what it is that they do, and take negative feedback as a thing they’re lucky to have. Of course that can all backfire if the developers start seeing testers as an excuse for sloppy programming because they have that excellent tester safety net to catch all their bugs.
I think you hit the nail on the head – positive reinforcement should be seen as a side-effect, not part of the testers’ actual jobs. I’d hate to see the day when “programmer morale” became a tester KPI.
In my experience, the best programmers look for feedback themselves, instead of sitting at their desks moping about feeling unappreciated. They’ll monitor the number of bugs produced as a direct result of their work, they’ll give private builds to testers for early feedback, and they’ll get code reviews even if they’re not mandatory. Like you say, they shorten the feedback loop, and they find their positive reinforcement that way.