1. Documenting Tests – How much detail? by Eric Jacobson
Here is what I ask of my testers:
Document enough detail such that you can explain the testing you did, in-person, up to a year later.
Before I came up with the above, I started with this: The tester should be present to explain their testing. Otherwise, we risk incorrect information.
If the tester will be present, why would we sacrifice test time writing details that the tester can easily explain in person? In my case, the documentation serves to remind the tester. When we review it with programmers, BAs, users, other testers, or auditors, or when I review it myself, the tester should always be present to interpret it.
When a new tester joins your team – or if you are the new tester joining – the question is how to get this new tester up to speed as effectively as possible. He or she needs to learn about the application, the way of working within the team, the project, etc. In a way it’s quite similar to learning how to navigate a new city.
By exploring this analogy we shall see that the most common ways to get a new tester up to speed fall short. Luckily, Joep Schuurkes from Software Testing Club has also encountered some better ways, which he shares in this video. And as it turns out, those alternatives have some shared properties that are also relevant to good testing in general.
One team meeting turned into a discussion about bug rejection. After some debate, the whole team decided to do a simple exercise to save ourselves from the humiliation of rejected bugs in the future.
Each of us took notes on the reasons for rejection of the last 10 bugs we had reported that were rejected. That list of rejection notes proved useful for understanding how to report bugs in the future and which wrong assumptions we had made.
Rather than revealing the full list, I would like to share its outcome as bullet points. Here is the first reason:
#1. Misunderstanding the requirements:
If, for any reason, you did not understand a requirement properly, you will look for your misinterpreted version of it in the actual implementation. When you don't find it, you will report it as a bug, and that bug will ultimately get rejected.
In short, redistributed testing is a shift in the emphasis and responsibility for testing. Testers are reassigned to work closer to the business with users or business analysts or are embedded in the development team. By being involved in story and scenario writing, the testers help to refine requirements and improve their quality. How could your systems benefit from redistributed testing? Paul Gerrard explains redistributed testing.
Paul Gerrard: In 2011, I started using the term redistributed testing to describe a change in the emphasis and responsibility for testing that some companies were embarking on. I talked about it in a blog post in which I suggested that forces outside testing are causing companies to rethink how they test. Preceding this, the alarmist death-of-testing meme had drawn people's attention to the fact that things were changing. It looks like, in the UK and Europe at least, the shift to redistributed testing is gaining momentum.