There’s a lot going on in the world of load and performance testing, and it seems there’s more to keep up with every day. With that in mind, in this issue we’ve collected some of the most interesting links from around the web. This week’s Roundup focuses on how to actually do testing: How important are best practices? What can playing chess teach you about testing better? How can, and should, testing and programming work together? And finally, as a wildcard, we harken back to last week’s Testing Roundup to follow up on Scott Barber’s talk about why executives often see testing as an #EpicFail, and one way you can help dispel that notion. But first, a question inspired by our third article, “Exploring Testing & Programming”:
In light of a recent provocative tweet from consultant Rex Black, the testing community has been raucously debating the merits of best practices. Author and software testing consultant James Christie jumps into the fray with this blog post, arguing that best practices hinder experts and foster mediocrity. He follows this up by stating that “context comes first, practices come second,” and so, without context, practices can’t be best. Check out the article to see his full argument, then let us know what you think!
What do chess and software testing have in common? If you ask Albert Gareev, author of the blog Automation Beyond, the answer is a lot. His post follows up nicely on the best practices discussion above. As Gareev puts it, software testing and chess are parallel in the need to “have a strategy and be ready to change it…it is about goals, not ‘steps to perform…’know the context…requirements cannot cover everything.” He develops the analogy in far more detail than we can do justice to here, so take a few minutes to read the whole article and learn more about taking a smart, adaptive approach to testing.
If you want someone who knows software testing, you could hardly do better than 20-year Microsoft veteran Alan Page. Check out this post about how many teams at Microsoft, including his, have moved to a “whole team approach,” meaning testers make changes to product code and programmers create and edit test code. Read his account of how this works and why they do it. As this wall between development and test continues to fall, where do you think it will end up?
Should you measure the value of software testing? How would you go about it? More importantly, why? Well, if Scott Barber’s talk from last week’s Testing Roundup is to be believed, measuring value seems like a great idea for a function that executives often see as a cost center! Tom Gilb explains why and how to measure value in software testing after speaking at UNICOM’s 8th Next Generation Testing Conference, and wraps up by explaining who should do the measuring. We don’t want to spoil what he says, so you’ll just have to watch this three-minute video yourself.