Let’s say you are heading out on a little adventure – a trek. You’ve got a map to guide your way, showing you where to go and, more importantly, where to avoid. Just before you leave, you find out that the mapmaker had only covered 25% of the terrain.
Do you trust the map?
Of course not. There’s no way that document could give you confidence in what you are about to do. If the ground hasn’t been properly evaluated, it’s nothing more than guesswork. The same is true with software testing. Your test coverage tells you how much of an application is actually being exercised by your test plan and gives everyone confidence in what’s being delivered.
For performance testers, test coverage can be a very difficult metric to determine. Functional test coverage is much easier: you can straightforwardly count how many of your features – or even lines of code – are exercised over the course of your testing. Performance, however, depends largely on how users actually interact with the system. For example, if you only have one user on your system, you won’t really have to worry about performance test coverage.
But that’s not reality.
You’ve got lots of users exercising lots of different functions in different ways and at different times. Code changes, infrastructure upgrades, and configuration settings can all cause your application to behave in wildly different ways under the same usage conditions. The law of unintended side effects runs rampant in the world of a performance engineer.
Naturally you’ve built perhaps thousands of test cases to help give you confidence in the performance of your system. Maybe you are even stringing them together in a modular/unit test fashion, leveraging the power of an automation-capable performance test library.
But you can’t run everything, every day.
So how do you ensure that you are getting good performance test coverage?
The Approach: Assessing Value And Risk
We actually touched on this in a previous post about planning and executing last-minute performance tests. In that post, we explained that when you are limited on time, the most valuable tests you can run are those that focus on pathways that:
- Users are likely to traverse
- Are directly tied to KPIs like revenue or ad impressions
- Involve new or recently changed code
- Exercise known bottlenecks
We’re going to use these four criteria to build a chart that helps us assess the risk of the code we want to test. We can take a test scenario, evaluate it against this chart, and determine whether the portion of the application we are testing is high-risk, medium-risk, or low-risk. Then we test the high-risk scenarios first and work our way down.
If you do this methodically, you can plot all your test scenarios on a single chart, organized by degree of risk. Working off this chart, you’ll have much more confidence that you are covering the most important performance hotspots first – a pretty good measure of performance test coverage.
So here’s how to start. First, we’ll split the four criteria above into the axes of our chart:
Impact to Business: The first two criteria measure how a test scenario affects the overall business. They tell you the volume and/or frequency with which users access the function, along with its role in generating revenue. This is a measure of the value of the code being tested, and it will serve as the vertical axis of our graph.
Likelihood of Performance Issues: The second two criteria are measures of the actual code that’s involved in the test scenario. If the code is changing, or if it is known to have problems, it is more likely to cause a performance issue. We’ll use this as our horizontal axis.
We end up with a chart like this:
Now, take a look at your test scenarios and score each one on these two axes. If they are popular pages that impact revenue, they’ll get high vertical scores. If the code being exercised is under active development, they’ll get high horizontal scores. Place that scenario on the graph, and you can see how important it is to test in a given cycle.
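To make the scoring concrete, here’s a minimal sketch of mapping a scenario’s two axis scores to a risk bucket. The 1–5 scale and the thresholds are illustrative assumptions, not part of any particular tool:

```python
# Minimal sketch: bucket a test scenario by its two axis scores.
# The 1-5 scale and the 4/2 thresholds are illustrative assumptions.

def risk_bucket(business_impact, issue_likelihood, high=4, low=2):
    """Map axis scores (1-5) to a high/medium/low risk bucket."""
    if business_impact >= high and issue_likelihood >= high:
        return "high"    # top-right of the chart: test first
    if business_impact <= low and issue_likelihood <= low:
        return "low"     # bottom-left: safe to defer
    return "medium"
```

A popular revenue-generating page built on freshly changed code might score (5, 5) and land squarely in the “high” bucket.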
A Simple Example
Let’s say that we are finishing a development cycle and are ready to look at our performance test coverage for this round of testing. We are launching a new marketing promotion to drive business before the holidays. Our application has five key functions:
- Marketing Promotion
- Checkout
- Log In
- Edit Profile
- Leave Feedback
My website analytics tell me the role that each of these functions plays in generating revenue, and I know from my dev system that the code changes that have taken place have largely been to support this new marketing promotion. From that information I put this table together ranking each test scenario:
|Test Scenario|Impact To Business|Likelihood of Performance Issue|
|---|---|---|
|Marketing Promotion|High|High|
|Checkout|High|Medium|
|Leave Feedback|Medium|Medium|
|Log In|High|Low|
|Edit Profile|Low|Low|
Now, I can plot these scenarios on my chart:
Looking at the result, it’s pretty clear that I should start my testing with the marketing promotion and checkout functions, followed by feedback and login. I can save profile editing for last.
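That ranking can be reproduced with a simple combined score. The numeric ratings below are hypothetical, chosen only to reflect the prioritization just described:

```python
# Hypothetical 1-5 scores (business impact, issue likelihood) for the
# five example functions; the values are illustrative assumptions.
scenarios = {
    "Marketing Promotion": (5, 5),
    "Checkout":            (5, 4),
    "Leave Feedback":      (3, 3),
    "Log In":              (4, 1),
    "Edit Profile":        (1, 1),
}

# Rank by a combined risk score (impact x likelihood), highest first.
order = sorted(scenarios, key=lambda s: scenarios[s][0] * scenarios[s][1],
               reverse=True)
print(order)  # Marketing Promotion and Checkout lead; Edit Profile is last
```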
Automating This Process
If you’ve only got five test scenarios in your app, you may not need to go through an assessment like this – but who’s got only five test scenarios?!
This process can be automated, supporting thousands and thousands of test cases. The most important part of automating performance test coverage is the scoring of the test scenarios in your library. You may want to look at creating a simple taxonomy that lets you easily tie a test case to the code it’s exercising, so you can automatically evaluate how much it is changing through your source code control system.
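One way to feed that scoring is to measure code churn per module from your version control system. This sketch assumes git and a release tag to diff against; the tag name and paths are placeholders, not a prescribed layout:

```python
import subprocess

def churn_from_numstat(numstat_output):
    """Sum lines added + deleted from `git diff --numstat` output."""
    total = 0
    for line in numstat_output.splitlines():
        added, deleted, _path = line.split("\t", 2)
        if added != "-":                 # binary files report "-"
            total += int(added) + int(deleted)
    return total

def lines_changed(path, since="v1.0"):
    """Churn for files under `path` since a release tag (requires git)."""
    out = subprocess.run(
        ["git", "diff", "--numstat", f"{since}..HEAD", "--", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return churn_from_numstat(out)
```

A scenario that your taxonomy maps to a high-churn module would then get a higher horizontal-axis score automatically.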
Once your scoring is in place, your automated system should evaluate scores on a regular basis, and then select an ordered group of tests to perform that maximizes the coverage of the high-risk scenarios, working its way down.
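One simple way to realize that selection is a greedy scheduler: sort scenarios by risk score and fill a nightly time budget from the top. The scores, durations, and budget here are illustrative assumptions:

```python
# Greedy sketch: pick highest-risk tests first until the time budget runs out.
def select_tests(scored, durations, budget_minutes):
    """scored: {name: risk_score}; durations: {name: minutes}."""
    plan, remaining = [], budget_minutes
    for name in sorted(scored, key=scored.get, reverse=True):
        if durations[name] <= remaining:
            plan.append(name)
            remaining -= durations[name]
    return plan
```

With a 60-minute budget, a 30-minute top-scoring test is scheduled first, and lower-risk tests fill whatever time remains.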
Performance problems can come from almost anywhere – a poorly optimized query, a sub-system upgrade in the production environment, or a marketing promotion that did way better than anyone expected. But the sheer number of variables at play in the performance of an online application doesn’t mean you have to fly blind. Ensuring good performance test coverage will help you build confidence that you are putting the right level of effort in the right places.
Photo Credit: Rick McCharles