[By Stijn Schepers]
Performance Testing is not an average game!
“Data is gold!” This certainly holds for performance engineering. When we execute a load test, it is the data (the measurements) that gives the engineer the insights needed to understand the behavior of the system under test. Business Intelligence (BI) tools are therefore a must-have for a performance engineer. This blog post explains what raw data is and why averages and aggregated results are limiting. In a second blog post I will explain in depth, with examples, how to analyze the raw data with BI tools.
Focus on analytics
Performance engineers often spend a lot of time creating and improving load test scripts, and not enough time executing tests and analyzing the results. Scripts do not need to be 100% perfect to create value: the sooner we can execute a script, the sooner we can add value through smart analytics. A performance engineer should always analyze the raw data and not limit the analysis to averages and aggregations. Raw data can be defined as every measurement of the response time of every single request made by any user (or thread). As an example: when you simulate 1000 users who each submit one login request, you want to see the response times of all 1000 of those login requests.
Raw Data versus Averages
Below is an example of the results of a load test based on raw data. Every blue dot is the response time of a specific HTTP request.
The same results, but this time based on averages.
From these graphs you can see that the raw-data graph provides a much clearer view of what is happening: performance spikes (the vertical lines) occur at regular intervals. Patterns like these are an absolute must for tuning and optimizing ICT systems.
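To see why aggregation hides these spikes, consider a minimal sketch (all numbers are simulated and illustrative, not taken from the test above): raw per-request response times with a periodic freeze, summarized into the kind of one-minute averages a standard report would show.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate raw response times: 10 minutes of traffic at ~10 requests/second.
n = 6000
raw = rng.normal(0.5, 0.05, n)   # baseline of ~0.5 s per request
raw[::600] = 28.0                # one 28 s freeze per minute, at regular intervals

# Aggregate into one-minute averages, as a typical summary report would.
per_minute = raw.reshape(10, 600).mean(axis=1)

print(f"worst raw response time:  {raw.max():.1f} s")
print(f"worst one-minute average: {per_minute.max():.2f} s")
```

The raw data shows a 28-second outlier every minute; the per-minute averages stay close to the 0.5-second baseline, so the spikes all but disappear from the aggregated view.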
During my career I have often seen banding issues, where a single transaction type exhibits several distinct response-time levels. The graphs below visualize the performance of one single transaction type: the top graph is based on raw data, the bottom graph on averages. The raw-data graph clearly shows two performance bands, at 0.68 sec and 0.065 sec. The averages-based graph hides these details and makes this type of issue impossible to find.
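A quick sketch of why averages cannot reveal banding (the band values follow the example above; the sample sizes and the cache-hit/cache-miss explanation are illustrative assumptions): the mean lands between the two bands and describes neither of them, while a simple histogram of the raw data exposes both immediately.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate one transaction type whose raw measurements fall into two bands,
# e.g. a fast path and a slow path (hypothetical mix: 70% fast, 30% slow).
fast = rng.normal(0.065, 0.005, 700)
slow = rng.normal(0.68, 0.03, 300)
raw = np.concatenate([fast, slow])

# The average lands between the bands and matches neither of them.
print(f"average: {raw.mean():.3f} s")

# A histogram of the raw data exposes both bands at once.
counts, edges = np.histogram(raw, bins=10, range=(0.0, 0.8))
band_centres = edges[:-1][counts > 50] + (edges[1] - edges[0]) / 2
print("bands near:", np.round(band_centres, 2))
```

The average comes out around 0.25 seconds, a value that no real request ever took; the histogram shows two well-populated bins, one per band.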
Customize, customize, customize
The biggest value BI tools provide to a test engineer is the ability to do whatever you want with the data. You can compare test execution runs, zoom in, include or exclude transactions, play with the dimensions of a graph, and visualize the data in a way that tells a story. Looking at the first graphs, would it not be great if we could easily zoom in on these performance spikes (the build-up of response time)? The graph below (zoomed in) shows that every type of transaction (every color is a different transaction type) was delayed and that the complete system appears to freeze. Note that the gap on the x-axis is similar in size to the slowest response time of these transactions (28 seconds).
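The same "zoom in and break down per transaction" move can be sketched in code. Below is a hypothetical raw-data export with one row per request (the column names and values are illustrative, not an actual NeoLoad schema): filtering to a suspected freeze window shows every transaction type stuck at around 28 seconds.

```python
import pandas as pd

# Hypothetical raw-data export: one row per request.
raw = pd.DataFrame({
    "elapsed_s":   [10, 11, 12, 40, 41, 42, 70, 71, 72],
    "transaction": ["login", "search", "login",
                    "login", "search", "checkout",
                    "search", "login", "checkout"],
    "response_s":  [0.5, 0.7, 0.6, 27.9, 28.0, 27.8, 0.6, 0.5, 0.9],
})

# "Zoom in" on the suspected freeze window and break it down per transaction.
window = raw[(raw["elapsed_s"] >= 35) & (raw["elapsed_s"] <= 45)]
print(window.groupby("transaction")["response_s"].max())
```

Every transaction type inside the window hit ~28 seconds, which is exactly the whole-system freeze pattern described above; outside the window, the same transactions respond in well under a second.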
A picture is worth a thousand words
When you integrate a load test tool like NeoLoad with a BI tool like Tableau, you uplift your test framework and accelerate testing. In times when DevOps, Agile, Rapid Delivery, and Lean are embedded in almost every project, this is crucial for performance engineering: the quicker you can pinpoint an issue, the faster you can deliver quality to production. Personally, I prefer to keep my reports as short as possible. The value BI tools bring me is that, with a smart visualization of the test results, a single picture can tell the complete story. A picture is worth a thousand words!
The key to performance engineering is to detect patterns quickly, based on raw data, and provide solutions to performance bottlenecks. Extending a load test tool with a BI tool gives you a more powerful test framework that enables a test engineer to do smart analytics. After all, performance testing is not an average game!
In a follow-up blog post, I will explain how to extend a load test tool with a BI tool.