Whitepaper
Practical Guide to Performance Testing

Introduction to Performance Testing

Applications are becoming more and more complex, with shorter development cycles. This requires new, agile development and testing methodologies. Application performance, as part of the overall user experience, is now a key aspect of application quality. “Old school” sequential projects with static qualification/implementation/test phases, which put off performance testing until the end of the project, carry a performance risk that is no longer acceptable by today’s application quality standards.

This whitepaper will provide practical information on how to execute efficient performance testing in this new and more demanding environment.

The State of Complexity in Modern Apps

One of the primary drivers behind this shift to modern load testing is the growing complexity of the IT landscape:

  • Most users access information through mobile devices, thin clients, tablets, and other devices
  • Complex architectures shared by several applications at the same time are being built
  • New technologies offer a range of solutions (AJAX frameworks, RIA, WebSocket, and more) that improve the application user experience

Historically, applications have been tested to validate quality in several areas: functional, performance, security, etc. These testing phases address user requirements and business risks. However, the dialogue has changed; the conversation is no longer about quality, but about user experience. The user experience is a combination of look-and-feel, stability, security, and performance.

Performance: Imperative to a Successful User Experience

Performance is a key factor in the success of the user experience. This is due to advances in technology, the complexity of the architecture, and the locations and networks of the users. Load testing was once a nice addition to the development process, but it has now become an essential testing step.
Load and performance testing answers the following questions (a minimal load-test sketch in Python follows this list):

  • Is the application capable of handling a certain number of simultaneous users?
  • Is the average response time for pages acceptable under this set load?
  • Does the application revert to normal behavior after a load peak?
  • How many users can the application handle while maintaining an adequate response time?
  • What is the load threshold above which the server(s) begin to generate errors and refuse connections?
  • Do the server(s) remain functional under high load, or do they crash?
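
As one concrete way to probe these questions, the sketch below uses the open-source Locust tool for Python; the host, endpoints, and payload are hypothetical placeholders, not part of any specific application.

    # loadtest.py -- a minimal Locust sketch; endpoints and payloads are invented.
    from locust import HttpUser, task, between

    class Visitor(HttpUser):
        # Simulated think time between user actions, in seconds.
        wait_time = between(1, 5)

        @task(3)
        def view_catalog(self):
            # Weighted 3:1 against checkout to mimic a browsing-heavy mix.
            self.client.get("/catalog")

        @task(1)
        def checkout(self):
            self.client.post("/checkout", json={"cart_id": "demo"})

Running it headless, e.g., locust -f loadtest.py --host https://test.example.com --users 500 --spawn-rate 25 --run-time 15m --headless, ramps to 500 simultaneous users and reports response times and errors; re-running with higher user counts probes the error and crash thresholds.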

Like any testing activity, performance testing requires proper methods and logic.
When an application passes performance testing but then fails in production, it is often due to unrealistic testing. In these cases, it is easy but misguided to blame the testing itself or the tools used to execute it. The real problem is usually test design that lacks the correct basis. It is necessary to ask: “What did we need to know that, had we known it, would have allowed us to predict this failure before production?” In other words: “How can we deliver efficient performance testing?”

Phases of a Load Testing Project

Current development methodologies such as Agile and DevOps allow for the creation of applications that quickly respond to customers’ needs. These methodologies involve updating the project organization and require close collaboration between teams.
In these methodologies, the project life cycle is organized into several sprints, with each sprint delivering a part of the application.

In this environment, the performance testing process should follow the workflow below.

A performance testing strategy should be implemented at an early stage of the project life cycle. The first step is Performance Qualification, which defines the testing effort for the whole project.

An “Old School” approach to performance testing would force the project to wait for an assembled application before performance validation could begin. In a modern project life cycle, the only way to include performance validation at an early stage is to test individual components after each build and implement end-to-end performance testing once the application is assembled.

Try NeoLoad, the most automated performance testing platform for enterprise organizations continuously testing from APIs to applications.

Establishing a Performance Testing Strategy

This is the first and most crucial step of performance testing. It defines:

  • The Performance Testing Scope
  • The Load Policy
  • Service Level Agreements (SLAs) and Service Level Objectives (SLOs)

It is never possible to test everything, so conscious decisions about where to focus the depth and intensity of testing must be made. Typically, the most fruitful 10% to 15% of test scenarios uncover 75% to 90% of the significant problems.

Risk-based Testing

Risk assessment provides a mechanism with which to prioritize the test effort. It helps to determine where to direct the most intense and deep test efforts and where to deliberately test lightly, conserving resources for the intense testing areas. Risk-based testing can identify significant problems more quickly and earlier in the process by testing only the riskiest aspects of a system. Most system performance and robustness problems occur in the following areas (a simple risk-scoring sketch follows this list):

  • Resource-intensive features
  • Timing-critical or timing-sensitive uses
  • Likely bottlenecks (based on the internal architecture and implementation)
  • Customer or user impact, including visibility
  • Prior defect history (observations of other similar systems in live operation)
  • New and modified features and functionality
  • Heavy demand: heavily used features
  • Complex features
  • Exceptions
  • Troublesome (poorly built or maintained) portions of the system
  • Platform maintenance
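
As an illustration of this prioritization, the Python sketch below ranks features by a simple likelihood × impact score so that the riskiest areas receive the deepest testing; the feature names and scores are invented for the example.

    # Illustrative risk scoring: rank features so the riskiest get the deepest tests.
    # Each entry is (feature, likelihood 1-5, impact 1-5) -- all values are assumptions.
    features = [
        ("checkout", 3, 5),
        ("search", 4, 4),
        ("report export", 5, 3),   # resource-intensive batch feature
        ("profile edit", 2, 2),
    ]

    for name, likelihood, impact in sorted(
        features, key=lambda f: f[1] * f[2], reverse=True
    ):
        print(f"{name:15s} risk score = {likelihood * impact}")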

Automate API and end-to-end application performance testing with NeoLoad, the continuous performance testing platform.

Here is a list of questions presented by industry expert Ross Collard to identify the different performance risks:

Situational View

  • Which areas of the system operation, if they have inadequate performance, most impact the bottom line (revenues and profits)?
  • Which uses of the system are likely to consume a high level of system resources per event, regardless of how frequently the event occurs? The resource consumption should be significant for each event, not high in aggregate simply because the event happens frequently.
  • What areas of the system can be minimally tested for performance without imprudently increasing risk, to conserve the test resources for the areas which need heavy testing?

Systems View

  • Which system uses are timing-critical or timing-sensitive?
  • Which uses are most popular (i.e., they happen frequently)?
  • Which uses are most conspicuous (i.e., they have high visibility)?
  • What circumstances are likely to cause a heavy demand on the system from external users (e.g., remote visitors to a public website who are not internal employees)?
  • Are there any notably complex functions in the system, for example, in the area of exception handling?
  • Are there any areas in which new and immature technologies have been used, or unknown and untried methodologies?
  • Are there any other background applications that share the same infrastructure and are they expected to interfere or compete significantly for system resources (e.g., shared servers)?

Intuition/Experience

  • What can we learn from the behavior of the existing systems that are being replaced, such as their workloads and performance characteristics? How can we apply this information to testing the new system?
  • What has been your prior experience with other similar situations? Which features, design styles, subsystems, components or systems aspects typically have encountered performance problems? If you have no experience with other similar systems, skip this question.
  • What combinations of the factors, which you identified by answering the previous questions, deserve a high testing priority? What activities are likely to happen concurrently, and cause heavy load and stress on the system?
  • Based on your understanding of the system architecture and support infrastructure, where are the likely bottlenecks?

Requirements View

  • Under what circumstances is heavy internal demand likely (e.g., by the internal employees of a website)?
  • What is the database archive policy? How much data is added per year?
  • Does the system need to be available 7 hours a day, 24 hours a day, etc.?
  • Are there maintenance tasks running during business hours?

The answers to these questions will help identify:

  • Areas that need to be tested
  • The type of tests required to validate the performance of the application

Start testing with NeoLoad, the fastest, the most realistic, and the most automated continuous performance testing platform.

Component Testing

Once the functional areas required for performance testing have been identified, decompose business steps into technical workflows that showcase the technical components.
Why should business actions be split into components? Since the goal is to test performance at an early stage, listing all important components helps to define a performance testing automation strategy. Once a component has been coded, it makes sense to test it separately and measure (see the sketch after this list):

    • The response time
    • The maximum number of calls per second the component can handle
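
A minimal sketch of such a component measurement in Python follows; the endpoint, call count, and concurrency level are assumptions, and the third-party requests package stands in for whichever client the component’s interface requires.

    # component_benchmark.py -- measures response time and sustained calls/s
    # for a single component; endpoint and load figures are illustrative.
    import statistics
    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests  # third-party: pip install requests

    ENDPOINT = "http://localhost:8080/api/price"  # hypothetical component under test
    CALLS, WORKERS = 500, 20

    def timed_call(_):
        start = time.perf_counter()
        requests.get(ENDPOINT, timeout=5)
        return time.perf_counter() - start

    wall_start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        latencies = sorted(pool.map(timed_call, range(CALLS)))
    wall_time = time.perf_counter() - wall_start

    print(f"median: {statistics.median(latencies) * 1000:.1f} ms")
    print(f"p95:    {latencies[int(len(latencies) * 0.95)] * 1000:.1f} ms")
    print(f"throughput: {CALLS / wall_time:.1f} calls/s with {WORKERS} concurrent callers")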

Moreover, component testing works at the level of technical interfaces (JMS, APIs, services, messages, etc.), allowing scenarios to be easily created and maintained. Another major advantage of this strategy is that component interfaces are less likely to be impacted by technical updates. Once a component scenario is created, it can be included in the build process to provide feedback on the performance of the current build.
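
Wiring that feedback into the build can be as simple as a test that fails when a threshold is breached. Below is a hypothetical pytest-style gate, where run_component_benchmark() is assumed to wrap a measurement like the sketch above and the 200 ms / 100 calls-per-second targets are illustrative.

    # test_perf_gate.py -- run by pytest in the build pipeline; a failed
    # assertion fails the build with a clear message.
    from component_benchmark import run_component_benchmark  # hypothetical helper

    def test_price_component_meets_targets():
        p95_s, calls_per_s = run_component_benchmark()
        assert p95_s <= 0.200, f"p95 {p95_s:.3f}s exceeds the 200 ms target"
        assert calls_per_s >= 100, f"{calls_per_s:.0f} calls/s is below the 100 calls/s target"
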
After each sprint, it is necessary to test the assembled application by running realistic user tests involving several components.
Even if the components have been tested individually, it is mandatory to measure (a sketch mixing parallel business processes follows this list):

      • The behavior of the system with several business processes running in parallel
      • The real user response time
      • The availability of the architecture
      • The sizing of the architecture
      • The caching policy
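
One way to exercise several business processes in parallel is to define multiple weighted virtual-user populations. Here is a Locust sketch with assumed weights, endpoints, and think times:

    # Two populations run concurrently; Locust picks users in a 9:1 ratio.
    from locust import HttpUser, task, between

    class Customer(HttpUser):
        weight = 9                 # nine customers for every back-office user
        wait_time = between(1, 5)

        @task(4)
        def browse(self):
            self.client.get("/catalog")

        @task(1)
        def order(self):
            self.client.post("/orders", json={"sku": "A-100", "qty": 1})

    class BackOfficeUser(HttpUser):
        weight = 1
        wait_time = between(5, 15)

        @task
        def generate_report(self):
            # A heavier, competing business process running in parallel.
            self.client.get("/reports/daily")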

The testing effort becomes more complex as the project timeline progresses. In the beginning, the focus is on the quality of individual application components; later it shifts to the target environment, architecture, and network. This means that performance testing objectives will vary depending on the phase of the project.

Test Environment

It is imperative that the system under test is properly configured and that the results obtained can be applied to the production system. Environment- and setup-related considerations should remain top-of-mind during test strategy development.
Here are a few (a data-parameterization sketch follows this list):

  • What data is being used? Is it real production data, artificially generated data, or just a few random records? Does the volume of data match the volume forecasted for production? If not, what is the difference?
  • How are users defined? Are accounts set with the proper security rights for each virtual user, or will a single administrator ID be reused?
  • What are the differences between the production and the test environment? If the test system is just a subset of production, can the entire load or just a portion of that load be simulated?
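
On the second point, a common approach is to hand each virtual user its own pre-provisioned account instead of reusing one administrator ID. A minimal Python sketch, assuming a hypothetical accounts.csv with username and password columns:

    import csv
    import itertools

    # accounts.csv is assumed to hold one pre-provisioned test account per row,
    # each with the same security rights a real user would have.
    with open("accounts.csv", newline="") as f:
        accounts = list(csv.DictReader(f))

    # Cycle through the pool so every virtual user logs in with distinct credentials.
    account_pool = itertools.cycle(accounts)

    def next_credentials():
        account = next(account_pool)
        return account["username"], account["password"]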

It is important that the test environment mirror the production environment as closely as possible, but some differences may remain. Even if tests are executed against the production environment with the actual production data, they would represent only one point in time; other conditions and factors would also need to be considered.

Test API and application with NeoLoad, the most automated continuous performance testing platform.

Devise a Test Plan

The test plan is a document describing the performance testing strategy. It should include (a machine-readable skeleton follows this list):

    • Performance risk assessments highlighting the performance requirements
    • Performance modeling: the logic used to calculate the different load policies
    • The translation of the main user journey into components
    • The description of the different user journeys with the specific think time per business transaction
    • The dataset(s)
    • The SLAs and SLOs
    • The description of each test that needs to be executed to validate the performance
    • The testing environments
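
To make these items concrete, the sketch below captures such a test plan as a machine-readable Python structure; every value is an illustrative assumption rather than a recommendation.

    # A skeleton of the test plan items above, expressed as data so it can be
    # versioned alongside the code. All figures are invented for illustration.
    test_plan = {
        "user_journeys": [
            {"name": "browse_and_order", "think_time_s": {"browse": 5, "order": 10}},
        ],
        "datasets": ["accounts.csv", "catalog_sample.csv"],
        "slas": {"p95_response_s": 0.5, "max_error_rate": 0.01},
        "tests": [
            {"type": "load",   "users": 200, "duration_min": 30},
            {"type": "stress", "users": "ramp until errors appear"},
            {"type": "soak",   "users": 100, "duration_min": 480},
        ],
        "environment": "staging (scaled-down replica of production)",
    }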

The test plan is a key artifact of a well-designed and executed performance testing strategy, acting as evidence that a team has satisfactorily considered the critical role performance plays in the final end-user experience.
In many cases, project teams ensure the delivery of performance test plans as part of feature work during planning and development cycles by requiring them as part of their “Definition of Ready.” Though each feature story or use case may not require the same level of performance testing, making the thought process a hard requirement for completion leads to better systems and an improved mindset over the end-to-end quality of what the team delivers.

Modeling Performance Tests

The objective of load testing is to simulate realistic user activity on the application. If a nonrepresentative user journey is selected, or if the right load policy is not defined, the behavior of the application under load cannot be properly validated. Performance test modeling doesn’t require any technical skills, only the time to fully understand the application (a load-calculation sketch follows this list):

  • How do the users work on the system?
  • What are their habits?
  • When and how often do they use the application? And from where?
  • Is there any relation between external events and activity peaks?
  • Is the company’s business plan related to the activity of the application?
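
These answers translate directly into a load policy. A worked Python sketch using Little’s Law (concurrent users = arrival rate × average session duration), with assumed analytics figures:

    # Little's Law: concurrent users = arrival rate x average session duration.
    sessions_per_hour = 12_000       # assumption: analytics figure for the peak hour
    avg_session_s = 300              # assumption: a five-minute average visit

    arrival_rate = sessions_per_hour / 3600          # ~3.33 sessions per second
    concurrent_users = arrival_rate * avg_session_s  # 3.33/s x 300 s = 1000
    print(f"Model the peak with ~{concurrent_users:.0f} concurrent virtual users")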
