Performance testing, best practices, metrics, & more
What is performance testing?
Performance testing is the practice of evaluating how a system performs in terms of responsiveness and stability under a particular workload. Performance tests are typically executed to examine speed, robustness, reliability, and application size. The process incorporates “performance” indicators such as:
- Browser, page, and network response times
- Server request processing times
- Acceptable concurrent user volumes
- Processor and memory consumption
- Number and type of errors encountered when using the application
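As a minimal sketch of the first indicator above, response times can be captured by timing each request with a high-resolution clock. The `handle_request` function here is a stand-in that only sleeps; in a real test it would be an HTTP call:

```python
import time

def handle_request():
    """Stand-in for a real request; replace with an HTTP call in practice."""
    time.sleep(0.01)  # simulate 10 ms of server work
    return 200

def measure(n=20):
    """Collect per-request response times in milliseconds."""
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        handle_request()
        samples.append((time.perf_counter() - start) * 1000)
    return samples

samples = measure()
print(f"avg: {sum(samples)/len(samples):.1f} ms, max: {max(samples):.1f} ms")
```

Dedicated tools record the same figures automatically, per page, per request, and per virtual user.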
Why should you test the performance of your system?
In short, to ensure that it will meet the service levels expected in production, as well as deliver a positive user experience. Application performance is a key determinant of adoption, success, and productivity.
Because solving a performance problem in production can be cost prohibitive, a strategy of continuous performance testing and optimization is key to an effective digital strategy.
Before starting the performance test process, it’s important to think about the following such that you shape a forward-thinking plan:
- Why is system performance testing important?
- When is the right time to conduct performance testing?
- What are the different types of performance tests?
- What does performance testing measure?
- What is the process for performance testing?
- What are the characteristics of effective performance testing?
- What are the Performance Testing Success Metrics?
- Why automate performance testing?
- How to automate performance testing?
- Why might it be useful to use specific performance testing tools – e.g., those adapted to a DevOps structure?
Why is system performance testing important?
The performance tests you run will help ensure your software meets the expected levels of service and provides a positive user experience. They will highlight improvements you should make to your applications relative to speed, stability, and scalability before they go into production. Applications released without such testing may suffer from problems that damage a brand's reputation, in some cases irrevocably.
The adoption, success, and productivity of applications depend directly on the proper implementation of performance testing.
While resolving production performance problems can be extremely expensive, the use of a continuous optimization performance testing strategy is key to the success of an effective overarching digital strategy.
When is the right time to conduct performance testing?
Whether it’s for web or mobile applications, the lifecycle of an application includes two phases: development and deployment. Performance should be tested in both, so that the application is proven before operational teams expose it to end users.
Development performance tests focus on components (web services, microservices, APIs). The earlier the components of an application are tested, the sooner an anomaly can be detected and, usually, the lower the cost of rectification.
As the application starts to take shape, performance tests should become more and more extensive. In some cases, they may be carried out during deployment (for example, when it’s difficult or expensive to replicate a production environment in the development lab).
What are the different types of performance tests?
There are many different types of performance tests. The most important ones include load, unit, stress, soak and spike tests.
Load testing simulates the number of virtual users that might use an application. In reproducing realistic usage and load conditions, based on response times, this test can help identify potential bottlenecks. It also enables you to understand whether it’s necessary to adjust the size of an application’s architecture.
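As a rough illustration of the idea (not how a dedicated load testing tool works internally), concurrent virtual users can be sketched with a thread pool. The `virtual_user` function here only sleeps to stand in for a real request/response round trip:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def virtual_user(user_id):
    """One simulated user transaction; swap in real HTTP calls for a real test."""
    start = time.perf_counter()
    time.sleep(0.02)  # placeholder for the request/response round trip
    return (time.perf_counter() - start) * 1000  # response time in ms

def load_test(concurrent_users=50, iterations=4):
    """Run `concurrent_users` virtual users in parallel, several waves in a row."""
    times = []
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        for _ in range(iterations):
            times.extend(pool.map(virtual_user, range(concurrent_users)))
    return times

results = load_test()
print(f"{len(results)} transactions, worst: {max(results):.1f} ms")
```

Watching how the worst-case time grows as `concurrent_users` increases is exactly the signal that reveals a bottleneck or an undersized architecture.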
Unit testing simulates the transactional activity of a functional test campaign; the goal is to isolate transactions that could disrupt the system.
Stress testing pushes a system beyond its expected peak load to determine its breaking point and to observe how it fails and recovers.
Soak testing increases the number of concurrent users and monitors the behavior of the system over an extended period. The objective is to observe whether intense, sustained activity over time causes a drop in performance levels by making excessive demands on the resources of the system.
Spike testing examines how systems behave when activity levels suddenly rise well above average. Unlike stress testing, spike testing takes into account both the number of users and the complexity of the actions performed (and therefore the surge in business processes they generate).
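The shape of a spike test can be expressed as a simple user-count schedule: hold a baseline, ramp sharply to a peak, hold it, then drop back. A minimal sketch, with illustrative numbers:

```python
def spike_profile(baseline=20, peak=200, ramp_steps=5, hold_steps=3):
    """Build a user-count schedule: baseline -> sharp spike -> back to baseline."""
    up = [baseline + (peak - baseline) * i // ramp_steps for i in range(1, ramp_steps + 1)]
    hold = [peak] * hold_steps
    down = list(reversed(up[:-1])) + [baseline]
    return [baseline] + up + hold + down

profile = spike_profile()
print(profile)
```

Each entry is the number of virtual users to run for one interval; a load generator would step through the schedule, measuring response times at every level.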
What does performance testing measure?
Performance testing can be used to analyze various success factors such as response times and potential errors. With these performance results in hand, you can confidently identify bottlenecks, bugs, and mistakes – and decide how to optimize your application to eliminate the problem(s). The most common issues highlighted by performance tests are related to speed, response times, load times and scalability.
Excessive Load Times
Load time is the time required to start an application. Any delay should be as short as possible – a few seconds, at most – to offer the best possible user experience.
Poor Response Times
Response time is the time that elapses between a user entering information into an application and the application's response to that action. Long response times significantly reduce users' interest in the application.
Limited Scalability
Limited scalability is a problem with an application's ability to adapt to different numbers of users. For instance, the application performs well with just a few concurrent users but deteriorates as user numbers increase.
Bottlenecks
Bottlenecks are obstructions in the system that decrease an application's overall performance. They are usually caused by hardware problems or poorly written code.
What is the process for performance testing?
While testing methodology can vary, there is still a generic framework you can use to address the specific purpose of your performance tests – which is ensuring that everything will work properly in a variety of circumstances as well as identifying weaknesses.
1 – Identify the Testing Environment
Before you begin the testing process, it’s essential to understand the details of the hardware, software, and network configurations you’ll be using. Comprehensive knowledge of this environment makes it easier to identify problems that testers may encounter.
2 – Identify Performance Acceptance Criteria
Before carrying out the tests, you must clearly define the success criteria for the application, as they will not be the same for every project. If you are unable to determine your own success criteria, it’s recommended that you use a similar application as the benchmark.
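Acceptance criteria work best when written down as explicit, machine-checkable thresholds. A minimal sketch, with hypothetical threshold values chosen for illustration only:

```python
# Hypothetical acceptance criteria for one project -- the thresholds below are
# illustrative, not standard values; every project defines its own.
CRITERIA = {
    "avg_response_ms": 500,   # average response time must stay under this
    "p95_response_ms": 1200,  # 95th percentile must stay under this
    "error_rate_pct": 1.0,    # at most 1% of requests may fail
}

def meets_criteria(measured, criteria=CRITERIA):
    """Return (passed, violations); a missing metric counts as a violation."""
    violations = [k for k, limit in criteria.items()
                  if measured.get(k, float("inf")) > limit]
    return (not violations, violations)

ok, bad = meets_criteria({"avg_response_ms": 320,
                          "p95_response_ms": 900,
                          "error_rate_pct": 0.2})
print("PASS" if ok else f"FAIL: {bad}")
```

Encoding the criteria this way makes step 7 (analyze and retest) repeatable: every rerun is judged against the same thresholds.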
3 – Define Planning and Performance Testing Scenarios
To carry out reliable tests, it’s necessary to determine how different types of users might use your application. Identifying key scenarios and data points is essential for conducting tests as close to real conditions as possible.
4 – Set Up the Testing Environment
5 – Implement the Test Design
6 – Run and Monitor the Tests
7 – Analyze, Tune, and Retest
After running your tests, you must analyze and consolidate the results. Once the necessary changes have been made to resolve the issues, the tests should be repeated to confirm that the problems are eliminated and no new ones have appeared.
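Consolidating results typically means reducing thousands of raw response times to a few comparable figures (average, 95th percentile, maximum) and comparing runs before and after a fix. A minimal sketch, using tiny made-up samples:

```python
import math

def summarize(samples_ms):
    """Consolidate raw response times into the figures worth comparing between runs."""
    ordered = sorted(samples_ms)
    # nearest-rank 95th percentile
    p95_index = min(len(ordered) - 1, math.ceil(0.95 * len(ordered)) - 1)
    return {
        "avg": sum(ordered) / len(ordered),
        "p95": ordered[p95_index],
        "max": ordered[-1],
    }

before = summarize([120, 130, 140, 150, 900])  # run before a fix (one outlier)
after = summarize([110, 115, 120, 125, 200])   # run after the fix
print({k: round(after[k] - before[k], 1) for k in before})  # negative = improvement
```

Note how the outlier dominates `p95` and `max` but barely moves the average, which is why percentile metrics matter in the analysis step.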
What are the characteristics of effective performance testing?
Realistic tests that provide sufficient analysis depth are vital ingredients of “good” performance tests. It’s not only about simulating large numbers of transactions but anticipating real user scenarios that provide insight into how your product will perform live.
Performance tests generate vast amounts of data. The best performance tests are those that allow for quick and accurate analysis to identify all performance problems and their causes.
With the emergence of Agile development methodologies and DevOps process practices, performance tests must remain reliable while respecting the accelerated pace of these cycles: development, testing, and production. To keep pace, companies are looking to automation, with many choosing NeoLoad – the fastest and most highly automated performance testing tool for the design, filtering, and analysis of testing data.
Performance Testing Success Metrics
The critical metrics you should be looking for in your tests must be clearly defined before you start testing. These parameters generally include:
- Amount of time the processor spends running non-idle threads
- Use of a computer’s physical memory for the processing
- Number of bits per second used by the network interface (bandwidth)
- The time the disk is busy with read/write requests
- Number of bytes used by a process that cannot be shared with others (used to measure memory leaks)
- Amount of virtual memory used
- Number of pages written to or read from disk to resolve hard page faults
- The overall rate at which the processor handles page faults
- The average number of hardware interrupts the processor receives/processes each second
- Average read/write requests queued for the selected disk during a sampling interval
- Length of the output packet queue
- Total number of bytes sent/received by the interface per second
- Response times
- The rate at which a computer/network receives requests per second
- Number of user requests satisfied by pooled connections
- Maximum number of sessions that can be simultaneously active
- Number of SQL statements handled by cached data instead of expensive I/O operations
- Number of access requests to a file on a Web server every second
- Amount of data that can be restored at any time
- The locking quality of tables and databases
- Maximum wait times
- Number of threads currently running/active
- The return rate of unused memory in the system (garbage collector)
Why automate performance testing? For more agility!
Digital transformation is driving businesses to accelerate the pace of designing new services, applications, and features in the hope of gaining/maintaining a competitive advantage. Agile development methodologies can provide a solution.
Despite the adoption of Continuous Integration by Agile and DevOps environments, performance testing is typically still a manual process. The goal of each performance tester is to keep performance testing from becoming a bottleneck in the Agile development process. Incorporating as much automation as possible into the performance testing process helps: tests should run automatically as part of Continuous Integration, and design and maintenance tasks should be automated wherever possible.
The complete automation of performance testing is possible during component testing. However, human intervention from performance engineers is still required to perform sophisticated tests on assembled applications. The future of performance testing lies in automating testing at all stages of the application lifecycle.
How to automate performance testing with NeoLoad?
NeoLoad is the performance testing platform developed by Neotys to automate the execution, design, update, and analysis of test tasks.
When designing performance tests, NeoLoad automates correlation and randomization tasks, enabling you to create tests ten times faster than other tools. It also allows you to import existing functional Selenium test scripts for use in performance testing.
For Continuous Integration, NeoLoad integrates with all leading CI servers, such as Jenkins, Bamboo, and TeamCity, via its API, which also makes custom integrations with other tools in the continuous deployment chain possible.
One of the main challenges for performance engineers is updating test cases when the application changes. This is especially true when testing assembled applications. NeoLoad provides an almost fully automatic update feature for scenarios created in the load testing tool; for scripts imported from Selenium, the update is fully automatic.
Performance testing analysis
Analysis is the most challenging phase to automate, especially for complete tests on an assembled application, since performance issues can be due to several factors and identifying their cause often requires human intervention. NeoLoad offers a wide range of performance testing capabilities that help you quickly and accurately identify the root causes of performance issues by isolating critical data.
It’s now possible to automate much of the analysis in an Agile environment. NeoLoad lets you define expected service levels (SLAs) used to assign a pass/fail status to a test. This allocation can be fully automated, allowing full automation across the entire performance test cycle with a CI server. For example, an Agile team can schedule automated non-regression performance tests that run overnight. The comparison of the results against the expected service levels and against the results from the previous version is completed automatically – enabling the continuous integration process to proceed if the test is successful.
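The pass/fail logic such a pipeline applies can be sketched in a few lines. This is a generic illustration (not NeoLoad's actual implementation); the SLA values, metric names, and 10% regression tolerance are all assumptions chosen for the example:

```python
# Illustrative SLA thresholds and results; in a real CI pipeline these would
# come from the test tool's output rather than from literals.
SLA = {"p95_ms": 800, "error_rate": 0.01}

def gate(current, previous, sla=SLA, regression_tolerance=1.10):
    """Fail the build if the SLA is broken or the run regressed >10% vs last version."""
    if current["p95_ms"] > sla["p95_ms"] or current["error_rate"] > sla["error_rate"]:
        return False  # absolute service level violated
    if current["p95_ms"] > previous["p95_ms"] * regression_tolerance:
        return False  # regression against the previous version
    return True

passed = gate({"p95_ms": 640, "error_rate": 0.002},
              {"p95_ms": 610, "error_rate": 0.004})
print("pass" if passed else "fail")
# In CI, `import sys` and `sys.exit(0 if passed else 1)` lets the server block a failed build.
```

A boolean gate like this is what turns an overnight test run into an automatic go/no-go decision for the next pipeline stage.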
Collaborative Performance Engineering to Fit the DevOps Mindset
To meet the needs for agility and faster release cycles, IT departments implement DevOps structures. This way of working is particularly suited to performance engineering because it enables performance validation from the early stages of the application development cycle through to production.
As a supporter of DevOps organizations, where performance is the responsibility of the entire team (and not just a few specialists), NeoLoad offers a collaborative platform called NeoLoad Web. It is accessible to development, quality assurance, and operations teams, transforming DevOps into DevTestOps.
Want to learn more? Read our article about how to choose a load testing tool.