There are lots of great ideas about testing. But there are also some silly ones out there – arguments for doing as little of it as possible, or not doing it at all. In this blog post, the author discusses three main categories:
- Don’t do product testing at all – just get it done right in the first place
- Don’t do product testing at all if you tested the components
- Don’t do product testing at all – your customers will tell you about the problems
Do you think there is merit to these philosophies? Dive a little deeper here.
Eric Jacobson writes an interesting set of posts about seeding bugs. He asks his lead programmers to secretly insert a bug into the most complex area of the system under test.
The post was written just before the Thanksgiving vacation – the perfect time to “seed a bug”. In theory, a new automated check would put their largest source of regression bugs to rest, and Eric was going to put that theory to the test.
The programmer hid the needle in the haystack… read what happened next.
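Seeding a bug by hand is essentially what mutation testing automates: inject a deliberate defect and see whether your checks catch it. Here is a minimal sketch of the idea in Python. All of the names and the discount logic are hypothetical illustrations, not taken from Eric's actual system.

```python
# Bug seeding in miniature: a known-good function, a secretly broken twin,
# and the automated check we want to evaluate.

def apply_discount(price, rate):
    """Production logic: discount a price by a fractional rate."""
    return price * (1 - rate)

def apply_discount_seeded(price, rate):
    """Deliberately seeded bug: the discount is applied in the wrong direction."""
    return price * (1 + rate)  # the "needle in the haystack"

def regression_check(discount_fn):
    """The automated check under evaluation: does it flag the defect?"""
    return abs(discount_fn(100.0, 0.2) - 80.0) < 1e-9

print(regression_check(apply_discount))         # True  – real code passes
print(regression_check(apply_discount_seeded))  # False – seeded bug is caught
```

If the check had passed on the seeded version too, that would mean the check is not actually guarding the behavior it claims to guard – which is exactly the question a seeded bug is designed to answer.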
Did you ever experience a problem with a website, inquire about it, and get the response, “try it in another browser”? You may feel like an idiot – as if it were your fault – but don’t. The truth is that the website you are accessing has not been tested extensively for cross-browser compatibility, and as an end user you have just found a bug.
This post dives into the subject of cross-browser testing, answering the who, what, where, when, and how. Read more here.
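The core pattern of cross-browser testing is simple: write each check once and run it against a whole matrix of browsers. A real suite would launch actual browser sessions (for example with Selenium WebDriver); in this sketch the browser is a stub so the structure stays runnable without any drivers installed, and every name here is a hypothetical illustration.

```python
# Cross-browser testing pattern: one check, many browsers.
BROWSERS = ["chrome", "firefox", "edge", "safari"]  # the compatibility matrix

class StubBrowser:
    """Stand-in for a real WebDriver session."""
    def __init__(self, name):
        self.name = name

    def get_title(self, url):
        # A real driver would navigate to the URL and return the page title.
        return "Example Domain"

def check_homepage(browser):
    """One check, written once, reused for every browser in the matrix."""
    return browser.get_title("https://example.com") == "Example Domain"

results = {name: check_homepage(StubBrowser(name)) for name in BROWSERS}
print(results)  # every browser in the matrix should pass
```

The payoff of this structure is that adding a browser to the matrix is one line, while every existing check automatically starts running against it.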
Here’s an interesting discussion about testing within specific industries. The user asks folks to comment on:
- How testing these websites differs from testing any other complex (non-banking/insurance) website like Gmail, Facebook, or other well-built, reputable sites.
- Whether these teams follow testing methodologies: agile, Scrum, writing and executing test cases, talking to the dev team, morning and evening calls with stakeholders, and many other practices.
A few folks have already jumped in to comment. What do you think? Join the discussion.