When it comes down to it, a tester’s primary responsibility is to test an application or project and report back on the issues. But the responsibility doesn’t end there; that’s where the real work begins. It’s absolutely essential for testers to understand why their bugs are rejected or marked as “not reproducible,” and how to react in these situations.
You always want to know that your website is operating at its best, but how do you know that’s actually the case? It’s not so easy to see behind the curtain when it comes to your web infrastructure. We’ve long used proxy metrics like CPU load or server availability to ensure that a server is “up,” but these measurements don’t provide enough data. In fact, as websites become more complex and change more frequently, these measurements become less useful.
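To make that distinction concrete, here is a minimal sketch (the handler and the 100 ms budget are hypothetical, purely for illustration) contrasting a bare “is it up?” proxy check with an actual response-time measurement:

```python
import time

def slow_but_up():
    """Simulated request handler: the server answers, but slowly."""
    time.sleep(0.2)
    return "200 OK"

def is_available(request):
    """Proxy metric: the server is 'up' if any response comes back."""
    try:
        request()
        return True
    except Exception:
        return False

def response_time(request):
    """Richer metric: how long the response actually took, in seconds."""
    start = time.perf_counter()
    request()
    return time.perf_counter() - start

# The availability check reports that everything is fine...
print(is_available(slow_but_up))          # True
# ...while measuring latency reveals the problem.
print(response_time(slow_but_up) > 0.1)   # True: blows a 100 ms budget
```

The point isn’t the code itself but the gap it exposes: a binary “up” signal can look healthy while users are quietly suffering a slow site.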
There are two kinds of people in this world: tea drinkers and coffee drinkers, PC users and Mac users, those who believe in the use of formal test design techniques and those who believe those same techniques cause rigid thinking and limit creativity. This post addresses the latter.
If you haven’t been exposed to the concept of “Bumping the Lamp,” you’ll want to read the linked article first to understand the context of this post from Jeff Nyman. In his write-up, Nyman describes an experience in which he was able to keep this kind of thinking front and center.
As a performance tester, does the word “cloud” scare you?
I hope not. I hope it doesn’t wake you up in a cold sweat in the middle of the night. I hope the cloud isn’t responsible for that chill you feel down your spine when you think you’re all alone…yet somehow you know you aren’t.
It’s not like we’re talking about clowns here.