Performance testing is a QA practice with multiple goals and benefits. One of them is to mitigate the financial impact that poor performance can have on the organization. That impact may come directly or indirectly from the system under test.
It’s about Moolah
One way to mitigate this monetary impact is to spend money on the performance testing effort. In other words, invest now so you don’t have to pay later. The money loss may happen either by losing customers upset with slowness or annoying system failures, or by plainly not allowing customers to conduct business with the organization because the system is unavailable. Because of this, many organizations are willing to invest budget in avoiding these outcomes.
After investing in the performance testing effort, it’s often difficult for a company to recognize the return on the investment. Unfortunately, this is common in organizations that cannot foresee the impact or don’t incorporate best practices. Developing this vision of the monetary impact can be tricky and complex when teams are used to purely technical processes, or when management ignores the value of sound practice. On top of this, these difficulties are compounded by several misconceptions and hidden biases about the practice.
A typical example is when an organization requires a load test to measure the performance of their solution under different scenarios. As mentioned, this performance testing effort commonly falls prey to misconceptions about the best approach to get the most out of the investment. One of the most common mistakes is when companies try to automate too many processes, either to generate hyper-realistic load tests or to include every method the organization deems business-critical. This approach all but guarantees that the monetary resources are not well spent, and it usually signals a lack of understanding of the impact on present and future exercises.
Enter the Pinto
Money mismanagement and narrow future vision are frequent misfortunes in performance testing, testing in general, IT, and even broader organizational tasks. A good example is what happened to the Ford Motor Company in the ’70s with the release of the Ford Pinto, a model that, due to rushed design and production, contained many flaws. One flaw stood out and received considerable attention: the car had a severe defect in the placement of its fuel tank, leaving the vehicle vulnerable to fire from fuel leakage.
In this case, Ford had the choice of absorbing the cost of the flawed design’s repercussions or investing in fixing the existing cars and those still on the assembly line. To aid the decision, the company performed a now-infamous cost analysis of which option would, according to them, have the smallest impact.
History proves that their choice was the worst one in terms of vision, ethics, and even monetary analysis. The car went down as one of history’s worst performers, the subject of a massive recall, and a stereotypical case study for business schools on what not to do and how many things can go wrong when not enough vision is applied.
It Happens in the Best of Families
In the same way, performance testing can be mishandled. Similar to the above, there’s a misconception that everything involved in the load test needs to be automated. This approach may prove somewhat efficient in functional testing practices (although it isn’t entirely true there either; for more info, google the test automation pyramid), but for load test automation it is hugely inefficient, both money- and time-wise.
Automation for load testing can be costly, tough to maintain, and labor-intensive. For these reasons, it is not recommended that you burden your organization by “over”-automating your processes. Consider this: let’s assume it takes an average of 10 hours of work to automate a standard business process, at a cost of $60 per hour. That is a total of $600 per automated process. Note this does not account for the pre-work required for each process, which would add considerably more. These figures are based on low consultant rates as of November 2019.
If we choose to automate a process that does not occur often, or one not executed by several people or instances at the same time (assume we have selected one that happens only once), that single automated trigger/click/event will cost $600. Regardless of the business importance of the process, that is an expensive effort.
On the other hand, if we request a manual execution by an SME while the load test is running, this could cost at most around $30 (based on an employee earning $5K/month and a process taking anywhere from a few minutes to an hour to execute). Even a well-paid employee proves cheaper than automation here, and cheaper still if the instructions are handed to a junior resource.
Now consider the case where we choose a highly demanded process, one executed by multiple people multiple times. Say 100 people concurrently run it about ten times per hour; in total, the process is executed 1,000 times per hour. This is an excellent candidate for automation.
Why? Well, as we established earlier, automation costs $600, and if we are going to exercise it 1,000 times in an execution, each click will cost $0.60 (600/1000). That is a considerably cheaper spend for the measurements we will gather from the process, and it proves that low-frequency executions, even business-critical ones, should be performed manually rather than automated.
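The break-even arithmetic above can be sketched in a few lines of Python. The rates and counts are the illustrative figures from the text (November 2019 consultant rates), not real data:

```python
# Hypothetical cost model comparing automated vs. manual execution
# of a load-test business process. All figures are illustrative.

AUTOMATION_COST = 10 * 60  # 10 hours of scripting at $60/hour = $600

def cost_per_execution(total_cost: float, executions: int) -> float:
    """Spread a fixed automation cost across the executions it drives."""
    return total_cost / executions

# A process run once during the test: the whole $600 buys one click.
print(cost_per_execution(AUTOMATION_COST, 1))         # 600.0

# A process run by 100 concurrent users, ~10 times/hour each.
print(cost_per_execution(AUTOMATION_COST, 100 * 10))  # 0.6
```

The takeaway: the fixed scripting cost only pays off when it is amortized over many executions, which is exactly why frequency, not business criticality, should drive the automation decision.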
By now, you may be convinced that the best candidates for automation in a load test are the most frequently executed processes. But how do we select them in the most cost-efficient way?
In this instance, we will use the work of the famed Italian economist Vilfredo Pareto. Pareto is most associated with the 80-20 rule, also known as the Pareto principle. As with most such principles, it applies to load testing as well.
Translated into load-testing speak: 80% of the load on a system is commonly generated by 20% of the total available processes. This implies that the best effectiveness for the money invested is attained by automating only 20% of the available business processes. With that 20%, you will be able to simulate 80% or more of the total load (and your spend will be roughly 20% of the initial cost expectation).
How can I do it?
An easy way to do this is to list all the business processes together with a utilization metric for each: the number of hits each one gets, or is expected to get, in the period to be simulated (e.g., 1,000 hits per hour on process A). Then sort the processes in descending order by number of hits, so the most frequent ones sit at the top.
Next, add up all the utilization metrics to get a global number of system hits; divide each process’s utilization by that global figure, and you will know what percentage of the total load each process represents.
Armed with the percentage for each, sum the processes from the top, one at a time, until you reach at least 80% of the load. The business processes selected at that point are the best candidates for the load test automation effort. Most probably, the items you have chosen will represent only about 20% of the total business processes the system can execute. With that list, you ensure the best bang for the buck invested in the testing process.
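The selection procedure above can be sketched as follows. The process names and hit counts here are made up for illustration, and the 80% target is the Pareto threshold from the text:

```python
# Sketch of the Pareto-based selection: sort processes by hits,
# accumulate their share of total load, and stop at the target.
# Process names and hit counts below are hypothetical.

def select_for_automation(hits_per_process: dict, target: float = 0.80) -> list:
    """Return the smallest set of processes (most frequent first)
    that together cover at least `target` of the total load."""
    total = sum(hits_per_process.values())
    selected, covered = [], 0.0
    for name, hits in sorted(hits_per_process.items(),
                             key=lambda kv: kv[1], reverse=True):
        selected.append(name)
        covered += hits / total
        if covered >= target:
            break
    return selected

usage = {  # hits per hour, hypothetical
    "search": 1000, "login": 800, "browse catalog": 700,
    "checkout": 300, "update profile": 120, "export report": 50,
    "close account": 20, "admin audit": 10,
}
print(select_for_automation(usage))
# ['search', 'login', 'browse catalog']
```

With these made-up numbers, 3 of the 8 processes cover roughly 83% of the 3,000 hourly hits, so only those three would be scripted; the rest can be executed manually during the test.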
Similarly, many other misguided practices could benefit from being viewed with the future state in mind; knowing how and what will be impacted by a given course of action is not easy. The example above is just one of many blunders that can happen in the world of performance testing. There are many more, such as rushed release dates that drive away potential customers, careless performance testing on elastic cloud environments, and revenue lost to services made unavailable by slowness or plain denial of service.
Performance testing is a topic that can have multiple severe monetary impacts on the organization, and many organizations are not aware of the effects that can drag down their revenue numbers. I implore you to set a course for expanding awareness, improving your vision and projection capabilities so you know when you’re about to see the next Pinto.
Learn More about the Performance Advisory Council
Want to see the full conversation? Check out the presentation here.