It’s All In The Delivery: Tailoring Performance Testing Results For Each Department

There’s no question that performance testing data is valuable to lots of different functions at your company. It helps make the application better and more robust. It sheds light on user behavior and preferences. It can even help tell you when hardware is on the fritz.

However, just because so many groups can use performance testing data doesn’t mean they all need the same data. In fact, if it isn’t immediately obvious how the data presented in a performance testing report helps the individual reading it, there’s a good chance the report will be ignored altogether – and the potential benefits of that valuable information you’ve collected will be lost forever.

If you want to make sure your test results mean something, it’s important to deliver them in a way that makes them accessible to each department. You may think that sending around a single report is a time-saving shortcut, and that people will just find what they need within it. However, this is rarely the case. Spend some time thinking about what your data means for other departments, and organize it accordingly. I’ll bet you’ll find a lot more people using your findings, making you indispensable.

In this post we will explore how the performance data and results that you obtain when conducting load tests should be framed and distributed to various departments.

Developers: Finding Root Cause

As a performance engineer you may be authorized to make improvements to algorithms or SQL queries on your own. However, for more complex performance issues, it makes sense to bump them up to the broader development team.

Your development team uses the results from a performance test to make changes that improve the product. They rely on you, because it’s not uncommon for developers to code without thinking too much about performance optimizations. Often they make code functional first, and then they rely on performance tests to identify the places where optimizations can really improve the experience.

So what do developers look for in load test results? They are trying to identify problems with code – typically scale-induced bottlenecks or performance cliffs. Charts can really help here, as they make it easy to see inflection points where performance degrades rapidly and suddenly. When you encounter a situation like this, the more you can isolate the functional area where it’s happening and demonstrate repeatability, the more helpful it will be for developers.

So when it comes to your test results, structure and repeatability are key for the development team. If you see something happening on a live system, try to recreate it in the QA environment. You may also want to work with a deep-dive diagnostics tool such as AppDynamics to provide rich code-level detail on performance problems for developers. The more your results are organized to help your developers pinpoint the source of an issue, the better your relationship with them will be.
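If you want to make that repeatability concrete, here’s a minimal sketch of a stepped ramp that climbs to the breaking point the same way on every run. I’m assuming Locust purely for illustration (use whatever tool you like), and the /search endpoint is a made-up stand-in for whatever functional area you’ve isolated.

```python
# locustfile.py -- a stepped ramp to reproduce a performance cliff.
# Locust is an assumption; the /search endpoint is a hypothetical
# stand-in for the functional area you have isolated.
from locust import HttpUser, LoadTestShape, task, between


class SearchUser(HttpUser):
    wait_time = between(1, 3)

    @task
    def search(self):
        # Hypothetical endpoint under investigation.
        self.client.get("/search?q=widgets")


class SteppedRamp(LoadTestShape):
    """Hold each load level for two minutes so the inflection point
    shows up at the same step on every run."""
    step_users = 50   # users added per step
    step_time = 120   # seconds spent at each step
    max_users = 500

    def tick(self):
        run_time = self.get_run_time()
        if run_time > (self.max_users / self.step_users) * self.step_time:
            return None  # stop the test after the final step
        users = min(self.max_users,
                    (int(run_time // self.step_time) + 1) * self.step_users)
        return (users, self.step_users)
```

Run it headless against your QA environment (for example, `locust -f locustfile.py --headless --host https://qa.example.com` – the host is obviously a placeholder) and the cliff should land on the same step every time, which is exactly the kind of repeatability developers can work with.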

User Experience: Task-Based Performance

Ultimately, the whole purpose of performance engineering is to make sure that your users have a great experience. Much of UX work focuses on the layout and wording of a website or app. Performance tends to come later, but it is still critical. In fact, 40% of visitors will leave a site if it takes more than three seconds to load.

UX people tend to think in terms of the overall tasks users are trying to accomplish when they use your software, and this fits perfectly with simulated users. Find out what tasks are key for UX and build load tests and real-time monitors around those tasks. If your UX team creates user journey maps, you can align your load tests directly with them.

Don’t forget to organize information from the user’s point of view. Include the following key information for a complete picture of what performance testing really means for UX:

  • Time to complete each task and obstacles along the way
  • Information about geographic distribution of users – for simulated tests, indicate how realistic the test setup is
  • Information about platform distribution – primarily desktop, laptop, and mobile devices
  • Comparison of load test results (from the QA environment) and virtual user results (from the production environment)
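To make the task-based framing concrete, here’s a minimal sketch of what such a script might look like. I’m again assuming Locust, and the journey steps, endpoints, and IDs are hypothetical placeholders for whatever your UX team’s journey map actually says.

```python
# locustfile.py -- model a whole UX journey ("find and buy a widget"),
# not individual pages. Endpoints and IDs are hypothetical placeholders.
import logging
import time

from locust import HttpUser, task, between


class CheckoutJourney(HttpUser):
    wait_time = between(2, 5)

    @task
    def find_and_buy(self):
        start = time.monotonic()
        # Group each step under the journey name so the report reads in
        # the UX team's terms, not as a list of raw URLs.
        self.client.get("/", name="find_and_buy: home")
        self.client.get("/search?q=widget", name="find_and_buy: search")
        self.client.get("/product/123", name="find_and_buy: product page")
        self.client.post("/cart", json={"id": 123}, name="find_and_buy: add to cart")
        self.client.post("/checkout", json={"id": 123}, name="find_and_buy: checkout")
        elapsed = time.monotonic() - start
        # Time to complete the whole task -- the number UX cares about most.
        logging.info("find_and_buy completed in %.1f s", elapsed)
```

The per-step names surface the obstacles along the way, while the logged total gives you the time to complete the task as a whole.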

Functional QA: What’s New? What’s Changed?

As a performance engineer, you’re likely part of the QA team. However, because QA organizations tend to revolve around functional testing – often automated – performance testers frequently feel a bit distanced from the rest of the team.

That’s not ideal – you want to be close to your team. Remember that QA has a pretty straightforward job to do, and framing your test results around that same job will keep you and the rest of the team joined at the hip:

  • Testing new functionality to see if it works as expected
  • Running regression testing to see if anything is broken
  • Improving the overall process

Regression load testing can concentrate on specific user paths exercised under load. Plot the data in a graph to show how performance behaves over time with each new build. If your team produces regular regression reports, have your performance data included right within them.
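If you want to put that graph together yourself rather than rely on your tool’s UI, a minimal sketch looks something like this – the build numbers and p95 figures are made up, and matplotlib is assumed.

```python
# plot_regression_trend.py -- p95 response time per build for the
# regression report. The numbers below are placeholders; in practice
# you would read them from your load-test result files.
import matplotlib.pyplot as plt

builds = ["1.4.0", "1.4.1", "1.4.2", "1.5.0", "1.5.1"]
p95_ms = [410, 395, 430, 780, 415]  # hypothetical checkout p95 latencies

plt.plot(builds, p95_ms, marker="o")
plt.axhline(500, color="red", linestyle="--", label="500 ms budget")
plt.xlabel("Build")
plt.ylabel("p95 response time (ms)")
plt.title("Checkout journey under 200 concurrent users")
plt.legend()
plt.tight_layout()
plt.savefig("regression_trend.png")
```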

Remember that new functionality is going to require new tests. Work with developers to define these tests and call special attention to them in reports to QA. Establish a baseline as early as possible.

Finally, look for ways to plug load testing into your continuous integration process and find ways to automate. This may not always be easy, but a little can go a long way.
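A small quality gate is often the easiest first step. Here’s a minimal sketch that fails a build when a journey blows its response-time budget; it assumes your load tool writes a summary CSV with hypothetical `name` and `p95_ms` columns, so adapt it to whatever your tool actually emits.

```python
# ci_perf_gate.py -- fail the build when a load-test result exceeds its
# budget. Assumes a summary CSV with hypothetical "name" and "p95_ms"
# columns; the budgets below are placeholders.
import csv
import sys

BUDGETS_MS = {       # hypothetical per-journey budgets
    "login": 300,
    "search": 500,
    "checkout": 800,
}


def main(path: str) -> int:
    failures = []
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            budget = BUDGETS_MS.get(row["name"])
            if budget is not None and float(row["p95_ms"]) > budget:
                failures.append(f'{row["name"]}: {row["p95_ms"]} ms > {budget} ms')
    for line in failures:
        print("PERF GATE FAILED:", line)
    return 1 if failures else 0


if __name__ == "__main__":
    sys.exit(main(sys.argv[1]))
```

Hooked in as a build step (`python ci_perf_gate.py results.csv`), a nonzero exit code is all your CI server needs to flag the regression.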

Integration Team: Piece By Piece

If you have an integration team, their job is to make sure that large systems work well together. Integrations can be very complex, and of course you can’t measure the performance of everything. So when you work with this team, you’ll want to focus on edge cases.

Start by understanding the key handoff points between systems. You can then build specific load tests for those integration points and deliver targeted results on them – that’s what your integration team will most want to see. You can structure your work around component-level load testing to make sure that individual parts are functioning well, and then run a load test on everything together.

When you are delivering results to this team, organize them by individual component first, then by the system as a whole. This way, the integration team can troubleshoot problems at either the component or the system level.
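If it helps to picture the shape of such a report, here’s a minimal sketch – the component names and numbers are purely hypothetical.

```python
# integration_report.py -- group results component-first, then roll up
# to the end-to-end system run. All names and figures are placeholders.
results = {
    "components": {
        "auth-service":    {"p95_ms": 120, "error_rate": 0.001},
        "order-service":   {"p95_ms": 340, "error_rate": 0.004},
        "billing-gateway": {"p95_ms": 610, "error_rate": 0.012},
    },
    "system": {"p95_ms": 1150, "error_rate": 0.015},  # everything together
}

print(f"{'Component':<18}{'p95 (ms)':>10}{'errors':>10}")
for name, stats in results["components"].items():
    print(f"{name:<18}{stats['p95_ms']:>10}{stats['error_rate']:>10.1%}")
print(f"{'SYSTEM':<18}{results['system']['p95_ms']:>10}"
      f"{results['system']['error_rate']:>10.1%}")
```

A layout like this lets the team scan down the component rows first and only dig into the system-level run when the parts themselves look healthy.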

The Operations Team: Real-Time

The operations team owns all system monitoring functions for live production environments. For modern performance testers, that’s where you want to put your synthetic monitoring and virtual users. These can supply important information to your operations team.

The operations team identifies specific metrics that convey the health and efficiency of the live, running production system. They define thresholds and make sure that those thresholds are not exceeded. In a nutshell, the operations team is making sure that:

  • CPUs aren’t maxed out
  • Hard drives still have adequate storage space
  • The overall network isn’t clogged
  • The database isn’t deadlocked
  • There aren’t any memory leaks or anything else that could bring the site to its knees

Thus, performance reports for the operations team should be largely focused on live data. Operations primarily wants to know which part of the infrastructure needs attention. Real-time reporting is crucial so operations can jump into action, and synthetic users are a critical part of this. Your performance testing results will be incredibly valuable if they can point operations toward the equipment they can fix, the boxes they need to reboot, or the configuration issues they need to address.

An important implication of this is that your operations team won’t find much value in yesterday’s performance data, except for post-mortem purposes. Work together to integrate test results as a feed into their real-time dashboards, so when you spot problems, they know about them.
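As one lightweight example of such a feed, the sketch below pushes a synthetic check’s result to a StatsD-compatible collector as soon as the check finishes. The host, port, endpoint, and metric names are all assumptions – use whatever your operations team’s monitoring stack actually listens for.

```python
# push_synthetic_metric.py -- send a synthetic check's result to a
# StatsD-compatible collector so it shows up on the ops dashboard in
# real time. Host, port, URL, and metric names are assumptions.
import socket
import time

import requests  # third-party HTTP client used for the synthetic check

STATSD_ADDR = ("statsd.internal.example.com", 8125)  # hypothetical collector


def report(metric: str, value: float, kind: str = "ms") -> None:
    # StatsD line protocol: "<name>:<value>|ms" (timer) or "|c" (counter).
    payload = f"{metric}:{value:.0f}|{kind}".encode()
    socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(payload, STATSD_ADDR)


def synthetic_checkout_check() -> None:
    start = time.monotonic()
    resp = requests.get("https://www.example.com/checkout", timeout=10)
    elapsed_ms = (time.monotonic() - start) * 1000

    report("synthetic.checkout.response_time", elapsed_ms)
    if not resp.ok:
        report("synthetic.checkout.failures", 1, kind="c")


if __name__ == "__main__":
    synthetic_checkout_check()
```

Scheduled every minute or so, a feed like this puts your synthetic results on the same dashboards operations already watches, instead of in a report they read tomorrow.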

Putting Teams To Work

Now that you’ve got a bird’s eye view of where the value of performance testing lies for each team, use this as a guide for organizing test results. Knowing what each team finds interesting will allow you to structure results in a way that benefits each department.

For more ideas on how to make an impact with performance testing results, check out our tips for talking it all over with high-level execs and business managers.

Image Credit: Steven Depolo
