Test to Prevent Bugs – Neotys Testing Roundup

1. Don’t Find Bugs, Prevent Bugs by Eric Jacobson

It’s a cliché, I know. But it really gave me pause when I heard Jeff “Cheezy” Morgan say it during his excellent STAReast track session, “Android Mobile Testing: Right Before Your Eyes”. He said something like, “instead of looking for bugs, why not focus on preventing them?”

Cheezy demonstrated Acceptance Test Driven Development (ATDD) by giving a live demo, writing Ruby tests via Cucumber, for product code that didn’t exist. The tests failed until David Shah, Cheezy’s programmer, wrote the product code to make them pass.

(Actually, the tests never passed, which they later blamed on incompatible Ruby versions… ouch. But I’ll give these two guys the benefit of the doubt.)
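The ATDD cycle Cheezy demonstrated can be sketched in plain Ruby (his demo used Cucumber; the class and method names below are hypothetical, invented for illustration). The acceptance test is written first, fails while the product code does not exist, and passes once the programmer implements it:

```ruby
# Step 1: the acceptance test is written first, against product code
# that does not yet exist. Running it now raises NameError/NoMethodError:
# that is the "red" half of the cycle.
def login_accepts_valid_credentials?(auth)
  auth.login("alice", "s3cret") == :success
end

# Step 2: the programmer writes just enough product code to make the
# test pass (the "green" half). This class is a stand-in for whatever
# the real story requires.
class Authenticator
  VALID = { "alice" => "s3cret" }.freeze

  def login(user, password)
    VALID[user] == password ? :success : :failure
  end
end

auth = Authenticator.new
puts login_accepts_valid_credentials?(auth)  # prints "true" once implemented
```

The point of the exercise is the ordering: because the test exists before the code, the bug the test would have caught is never written in the first place.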

Now back to my blog post title. I find this shift in mindset appealing for several reasons, some of which Cheezy pointed out and some of which he did not:

  • Per Cheezy’s rough estimate 8/10 bugs involve the UI. There is tremendous benefit to the programmer knowing about these UI bugs while the programmer is writing the UI initially. Thus, why not have our testers begin performing exploratory testing before the Story is code complete?
  • Programmers are often incentivized to get something code completed so the testers can have it (and so the programmers can work on the next thing). What if we could convince programmers it’s not code complete until it’s tested?

2. Testers Are Like Fact Checkers

Per one of my favorite podcasts, WNYC’s On the Media, journalists are finding it increasingly difficult to check facts at a pace that keeps up with modern news coverage. To be successful, they need dedicated fact checkers. Seem familiar yet?

Journalists depend on these fact checkers to keep them out of trouble. And fact checkers need to have their own skill sets, allowing them to focus on fact checking. Fact checkers have to be creative and use various tricks, like only following trustworthy people on Twitter and speaking different languages to understand the broader picture. How about now, seem familiar?

Okay, try this: Craig Silverman, founder of Regret the Error, a media error reporting blog, said “typically people only notice fact checkers if some terrible mistake has been made”. Now it seems familiar, right?

The audience of fact checkers, like the audience of software testers, has no idea how many errors were caught before release. They only notice the ones that weren’t.

3. Risk Based Testing

Life is full of risks, and so is a software project. Anything can go wrong at any time. We are always on our toes to make things right – but what about making sure that nothing goes wrong, and knowing exactly what to do when it does? Enter risk management: the part of a software testing project that prepares us to prevent, understand, find, and recover from risks.

A risk is simply a problem that is likely to occur and that, when it does, will cause a loss.

The loss could be anything: money, time, effort, or a compromise in quality. Loss is never good. No matter how much of a spin we give it, it’s not positive – and never will be. Risk management is therefore an integral part of software projects, ensuring we handle risks and prevent or reduce losses.
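A common way to operationalize this (my sketch, not something the original post prescribes) is a risk register where each risk is scored as likelihood × impact, so the team tests the highest-exposure areas first. The risk names and 1–5 scales below are hypothetical:

```ruby
# A hypothetical risk register: each entry scored on 1-5 scales for
# how likely the problem is and how costly the loss would be.
risks = [
  { name: "payment gateway timeout",    likelihood: 4, impact: 5 },
  { name: "UI misalignment on mobile",  likelihood: 3, impact: 2 },
  { name: "report export corrupts data", likelihood: 2, impact: 5 },
]

# Exposure = likelihood * impact; sort descending so the riskiest
# item is tested first.
prioritized = risks
  .map    { |r| r.merge(exposure: r[:likelihood] * r[:impact]) }
  .sort_by { |r| -r[:exposure] }

prioritized.each { |r| puts "#{r[:exposure]}: #{r[:name]}" }
```

The exact scales matter less than the discipline: making likelihood and loss explicit forces the conversation about where testing effort prevents the most damage.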

4. Get out of the Testing Game

In this interview with Bill Matthews, you’ll learn how to get out of the testing game.

Is my testing good enough? How do I improve my testing processes?

Earlier in my career, these were questions I’d frequently ask of myself and others; the answers often came in the form of deliverables such as standards, plans, test cases, and schedules, and in endless attempts to optimise processes – I was in the Testing Game. It’s a game where we become overly focused on testing as an end in itself rather than as a means to an end; we lose sight of the wider context. At some point I realised that projects were not really interested in testing itself, only in what testing gives them, so I knew I had to get out of the Testing Game and into the Information Game.

This shift is sometimes difficult for people to grasp, and a model I’ve found helpful is a variant of the Business Model Canvas; while typically used to describe and communicate a business model, it can also describe how testing fits within a wider context, focusing attention on aspects such as interactions, relationships, and the flow of value.
