
Establishing a performance testing strategy


Author:

Tricentis Staff

Various contributors

Date: Apr. 05, 2021

Establishing a performance testing strategy is the first and most important step of performance testing. It defines what to test, which types of tests to run, and the environment in which to run them.

It’s never possible to test everything, so conscious decisions must be made about where to focus the depth and intensity of testing. Typically, the most fruitful 10-15% of test scenarios uncover 75-90% of the significant problems.

Risk-based testing

Risk assessment provides a mechanism for prioritizing the test effort. It helps determine where to direct the most intense and deepest testing, and where to deliberately go lighter to conserve resources for the more demanding scenarios. By focusing on the riskiest aspects of a system, risk-based testing uncovers significant problems earlier in the process; a simple prioritization sketch follows the list below.

For most systems, problems related to performance and robustness occur in these areas:

  • Resource-intensive features
  • Timing-critical/sensitive uses
  • Probable bottlenecks (based on internal architecture/implementation)
  • Customer/user impact, including visibility
  • Previous defect history (observations noted across other similar systems during live operation)
  • New/modified features and functionality
  • Heavy demand (heavily used features)
  • The complexity of feature set
  • Exceptions
  • Troublesome (e.g., poorly built/maintained) system components
  • Platform maintenance
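
One lightweight way to act on these factors is to score each candidate scenario for likelihood and business impact, then let the product of the two rank the test effort. A minimal sketch in Python; the scenario names and scores are hypothetical:

```python
# Rank candidate scenarios by likelihood x impact so the riskiest
# 10-15% receive the deepest testing. All values are hypothetical.
scenarios = {
    # name: (likelihood of a performance problem, business impact), each 1-5
    "checkout payment":      (4, 5),
    "product search":        (5, 4),
    "nightly batch import":  (3, 4),
    "profile settings page": (2, 2),
}

ranked = sorted(scenarios.items(),
                key=lambda item: item[1][0] * item[1][1],
                reverse=True)

for name, (likelihood, impact) in ranked:
    print(f"{name:22} risk score = {likelihood * impact}")
```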

Here is a list of questions prepared by industry expert Ross Collard to help you identify different performance risks:

Situational view

  • Which areas of the system’s operation, if they have inadequate performance, most impact the bottom line (revenues/profits)?
  • Which uses of the system are likely to consume a high level of system resources per event, regardless of how frequently the event occurs? Resource consumption should be significant for each event, not high in aggregate simply because the event happens frequently and thus the total number of events is high.
  • What areas of the system can be minimally tested for performance without imprudently increasing risk, to conserve the test resources for the areas which need heavy testing?

Systems view

  • Which system uses are timing-critical/sensitive?
  • Which uses are most popular (i.e., happen most frequently)?
  • Which uses are most conspicuous (i.e., have the highest visibility)?
  • What circumstances are likely to cause a heavy demand on the system from external users (e.g., remote visitors to a public website who are not internal employees)?
  • Are there any notably complex functions in the system (e.g., exception handling)?
  • Are there any areas in which new and immature technologies have been used, or unknown/untried methodologies?
  • Are there any other background applications sharing the same infrastructure, and are they expected to interfere or compete significantly for system resources (e.g., shared servers)?

Intuition/experience

  • What can we learn from the behavior of the existing systems that are being replaced, such as workloads/performance characteristics? How can we apply this information to testing the new system?
  • What has been your prior experience with other similar situations? Which features, design styles, sub-systems, components, or systems aspects typically have encountered performance problems? If you have no experience with other similar systems, skip this question.
  • What combinations of the factors you identified by answering the previous questions deserve a high testing priority? What activities are likely to happen concurrently, causing heavy load/stress on the system?
  • Based on your understanding of the system architecture and support infrastructure, where are the likely bottlenecks?

Requirements view

  • Under what circumstances is heavy internal demand likely (e.g., from a company’s own employees using its website)?
  • What is the database archiving policy? How much data is added per year?
  • What availability does the system require (e.g., business hours only, or 24/7)?
  • Are there maintenance tasks running during business hours?

The answers to these questions will help identify:

  • Areas that need to be tested
  • Test types required to validate app performance

Component testing

Once the functional areas required for performance testing have been identified, decompose business steps into technical workflows that expose the underlying technical components.

Why should business actions be split into components? Because the goal is to test performance at an early stage, listing all the important components helps define a performance testing automation strategy. Once a component has been coded, it makes sense to test it separately to measure:

  • Response time of the component
  • Maximum calls/second that the component can handle

Moreover, component tests can target technical interfaces directly (JMS, APIs, services, messages, etc.), so scenarios are easy to create and maintain. Another major advantage of this strategy is that component interfaces are less likely to be affected by technical updates. Once a component scenario is created, it can be included in the build process to provide feedback on the performance of each build, as in the sketch below.
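
As an illustration, an early component-level check can be as small as a timed loop against the component’s interface. A minimal sketch, assuming a hypothetical HTTP endpoint and using the third-party requests library:

```python
# Measure the response time and sustained calls/second of a single
# component. The endpoint URL and duration are hypothetical.
import time
import requests

ENDPOINT = "http://localhost:8080/api/inventory"  # hypothetical component API
DURATION_S = 10

latencies = []
deadline = time.monotonic() + DURATION_S
while time.monotonic() < deadline:
    start = time.monotonic()
    response = requests.get(ENDPOINT, timeout=5)
    response.raise_for_status()
    latencies.append(time.monotonic() - start)

latencies.sort()
p95 = latencies[max(0, int(len(latencies) * 0.95) - 1)]
print(f"calls/second:      {len(latencies) / DURATION_S:.1f}")
print(f"p95 response time: {p95 * 1000:.0f} ms")
```

A single sequential caller like this only establishes a baseline; a load testing tool would drive many concurrent callers to find the true saturation point. The value of keeping the script this small is that it can run in CI on every build.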

After each sprint, it is necessary to test the assembled application by running realistic user tests that exercise several components together. Even if the components have already been tested individually, it is essential to measure (see the sketch after this list):

  • System behavior with several business processes running in parallel
  • Real user response time
  • Sizing/availability of the architecture
  • Caching policy
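
A minimal sketch of such an assembled test, with hypothetical journeys standing in for real business processes (the sleeps stand in for actual HTTP calls and page loads):

```python
# Run several business processes in parallel virtual users, each with
# think time, and report per-journey response times. All journeys,
# timings, and user counts are hypothetical stand-ins.
import random
import threading
import time

def browse_catalog():
    time.sleep(random.uniform(0.1, 0.3))   # stand-in for real requests

def place_order():
    time.sleep(random.uniform(0.2, 0.5))

def virtual_user(journey, iterations, results):
    for _ in range(iterations):
        start = time.monotonic()
        journey()
        results.append((journey.__name__, time.monotonic() - start))
        time.sleep(random.uniform(1, 3))   # think time between actions

results = []
threads = [threading.Thread(target=virtual_user, args=(j, 5, results))
           for j in (browse_catalog, browse_catalog, place_order)]
for t in threads:
    t.start()
for t in threads:
    t.join()

for name in ("browse_catalog", "place_order"):
    times = [t for n, t in results if n == name]
    print(f"{name}: avg {sum(times) / len(times):.2f}s over {len(times)} runs")
```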

The testing effort becomes more complex as the project progresses. Early on, the focus is on application quality; later it shifts to the target environment, architecture, and network. This means performance testing objectives will vary with the project timeline.

Performance testing environment

It is imperative that the system under test is properly configured and that the results obtained can be extrapolated to the production system. Environment and setup considerations should remain top-of-mind during test strategy development. Here are a few (a small data-setup sketch follows the list):

  • What data is being used? Is it real production data, artificially generated data, or just select random records? Does the volume of data match production volume forecasts? If not, what is the difference?
  • How are users defined? Are accounts set with the proper security rights for each virtual user, or will a single administrator ID be re-used?
  • What are the differences between the production and the test environments? If the test system is just a subset of production, can the entire load or just a portion of that load be simulated?
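
To make the data and user questions concrete, here is a minimal sketch that generates a production-scale dataset plus one account per virtual user, rather than re-using a single administrator ID. Volumes, file names, and fields are hypothetical:

```python
# Generate a production-scale dataset and distinct virtual-user
# accounts. Row counts, fields, and roles below are hypothetical.
import csv
import random

FORECAST_ROWS = 100_000   # should match the production volume forecast
VIRTUAL_USERS = 500

with open("orders.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["order_id", "user_id", "amount"])
    for i in range(FORECAST_ROWS):
        writer.writerow([i, f"vuser_{i % VIRTUAL_USERS:04d}",
                         round(random.uniform(5, 500), 2)])

# One account per virtual user, each with its own credentials/rights.
accounts = [{"login": f"vuser_{n:04d}", "role": "customer"}
            for n in range(VIRTUAL_USERS)]
print(f"{FORECAST_ROWS} rows generated for {len(accounts)} virtual users")
```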

It is important that the test environment mirrors the production environment as closely as possible (some differences may remain, which is acceptable). Even tests executed in the production environment with actual production data represent only a single moment in time; other conditions and factors need to be considered as well.

Devise a test plan

The test plan is a document describing the performance strategy. It should include:

  • Performance risk assessments highlighting performance requirements
  • Performance modeling: the logic used to calculate the different load policies (a worked example follows this list)
  • Translation of the primary user journey into components
  • Description of all other user journeys with specific think time/business transaction metrics
  • Dataset(s)
  • SLA(s)
  • Description of each test being executed to validate the performance
  • Testing environments
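
As an example of the performance-modeling item above, a load policy can be derived from forecast business volumes using Little’s Law (concurrent users = arrival rate × average session duration). A small sketch with hypothetical figures:

```python
# Derive a load policy from business forecasts via Little's Law.
# All volumes below are hypothetical planning inputs.
peak_sessions_per_hour = 18_000   # forecast peak business volume
avg_session_duration_s = 240      # full journey incl. think time
pages_per_session = 12

arrival_rate = peak_sessions_per_hour / 3600           # sessions/second
concurrent_users = arrival_rate * avg_session_duration_s
page_rate = arrival_rate * pages_per_session           # pages/second

print(f"arrival rate:      {arrival_rate:.1f} sessions/s")
print(f"concurrent vusers: {concurrent_users:.0f}")
print(f"page throughput:   {page_rate:.0f} pages/s")
```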

The test plan is a key artifact of a well-designed and executed performance testing strategy, acting as evidence that a team has satisfactorily accounted for the critical role performance plays in the final end-user experience.

In many cases, project teams ensure the delivery of performance test plans as part of feature work by requiring them in their Definition of Ready during planning and development cycles. Though not every feature story or use case requires the same level of performance testing, making the thought process a hard requirement for completion leads to better systems and a stronger focus on the end-to-end quality of what the team delivers.

Next steps

This is the second article in a four-part series focused on practical guidance for modern performance testing:

Part 1 – A practical introduction to performance testing

Part 2 – Establishing a performance testing strategy

Part 3 – Modeling performance tests

Part 4 – Executing performance tests

This blog was originally published in January 2018 and was refreshed in July 2021.
