Thoughts from #telsum

A few weeks ago I was one of the lucky few to have the chance to go to the Telerik Testing Summit (#telsum). It’s an invite-only annual peer conference held in Austin, Texas, and I was in good company with fellow attendees Paul Carvalho, Adam Goucher, Alan Page, Matt Brandt, Selena Delesie, Matt Barcomb, Jeff Morgan, Marlena Compton, Steven Vore, Chris McMahon and of course our excellent host Jim Holmes.

I’d wanted to meet many of the other attendees in person for a long time so it was awesome to finally do so! What followed were several days of intense discussion about a range of topics. Here are my highlights:

Testing is an activity, not a role:

No matter what we discussed, we always seemed to come back to this. Whether it’s testers writing more code, or developers doing more testing, teams continue to benefit from multi-skilled individuals. I hear a lot about this approach from other Google Test Engineers as well. Some describe the Test Engineer role as more of a Test Management role, even though there may only be one Test Engineer on a team. They are often leading the development team towards better quality testing activities rather than doing all of the testing activities themselves all of the time.

All test metrics suck:

There’s a long history of measuring testing-related metrics in isolation from whole-team metrics, which can lead to undesirable results. Ultimately it comes down to measuring the outcomes of our testing (delivering better features more quickly) rather than the means of our testing (how many test cases have we automated) in order to track more valuable and meaningful results.

Good test design and mind maps:

I was reminded of the usefulness of mind maps for test planning. Alan said that he uses mind maps to plan his testing initially, shares the map with the development team, then they work on testing the feature together, automate whatever makes sense and then never have to look at the map again.

I think the best thing about this is that it forces you out of the all-too-common process of writing a series of manual step-by-step test scripts and translating them directly into automated tests. Instead, it really highlights what you should actually be doing with “automated testing” – building a program that is designed to give you important information about the system under test. The information that you really want to know will drive the features that you add to this program.
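To make that concrete, here’s a minimal Python sketch of what “a program designed to give you important information” might look like, as opposed to a transcribed manual script. Everything here is hypothetical and of my own invention – the `Inventory` class stands in for the system under test, and the question the check asks (“does a transfer ever create or destroy stock?”) is just an illustrative example of the kind of information you might actually want.

```python
# Hypothetical system under test: a tiny inventory service.
class Inventory:
    def __init__(self, stock):
        self.stock = dict(stock)

    def transfer(self, src, dst, qty):
        if self.stock.get(src, 0) < qty:
            raise ValueError("insufficient stock")
        self.stock[src] -= qty
        self.stock[dst] = self.stock.get(dst, 0) + qty


def check_transfers_conserve_stock(inv, moves):
    """Ask a question of the system: is total stock unchanged by every
    transfer? This is the information we want, not a list of UI steps."""
    total = sum(inv.stock.values())
    for src, dst, qty in moves:
        inv.transfer(src, dst, qty)
        if sum(inv.stock.values()) != total:
            return False
    return True


inv = Inventory({"warehouse": 10, "store": 0})
assert check_transfers_conserve_stock(inv, [("warehouse", "store", 3)])
```

The point of the sketch is that the check is driven by a property you care about rather than by a recorded sequence of clicks; the same check keeps giving you useful information as the feature changes.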

What I took away:

I felt that the most important point that we kept coming back to was that we’re not in the software testing business – we’re in the software development business. We’re skilled at testing as an activity, but our goal is to deliver software. The testing activity is an important part of that process, but it shouldn’t be isolated to select individuals within a team. It’s something that all team members should be able to do. Alan Page wrote a great post about it which you should go and read now, and a second one which you should read right after that. It would be somewhat revolutionary, if it weren’t already happening. I’ve talked about blurring the line between testers and developers before, in terms of how having skills that are typically associated with a “software engineer” role (for example, programming) can benefit testers. It definitely goes both ways – having skills that are typically associated with a “software tester” role can also benefit developers.

When I shared this view with my team at work, I had an interesting discussion with Marco DeLaurenti, who works in DevOps on my team. He said that there are similar discussions happening in the world of DevOps – an increasing demand to blur the line between DevOps and developer. If you’re interested, there’s a fascinating article here by Mantas Klasavicius about Metrics-Driven Development. Like testing, systems monitoring is an activity that benefits the whole team, and it’s easier to achieve useful results if the whole team knows something about it and is willing to make it part of their software development process. If you think about it, software testing is like another kind of systems monitoring – both ask questions of the system and provide more insight to the team. Sometimes they’re even the same questions!

Many thanks to Telerik, Jim Holmes and the team for putting on a fantastic event, and to my fellow attendees for giving me so much inspiration.
