Blog

5 best practices for performance testing at the speed of Agile

  • 24 August 2023

By Larry Loeb, Veteran Technology Editor and Author

Here are five best practices that can help teams get the most out of load testing in an Agile environment.

1. Make performance SLAs a focus area

Performance needs to be somewhere on the task board if the team is going to give it attention; otherwise, it will be ignored. One effective way to ensure performance is included is to use your performance service level agreements (SLAs) as acceptance criteria for each story. That means a story cannot be “done” if its changes cause the application to fall short of the SLAs.

This discipline works well when a story’s changes affect a relatively small section of the overall code, so any performance issues are confined to a portion of the application.

For SLAs that apply across the entire application, add the corresponding tests to a broader list of constraints (which may include functional tests) that is checked for every story; a story meets the minimal “definition of done” only if it breaks none of those constraints.
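As a minimal sketch of the idea, a story-level acceptance check might compare measured load-test results against the agreed thresholds. The SLA values and metric names here are hypothetical, not from any particular tool:

```python
# Hypothetical SLA thresholds agreed for the application (illustrative values).
SLAS = {
    "p95_response_ms": 800,   # 95th-percentile response time
    "error_rate": 0.01,       # fraction of failed requests
    "throughput_rps": 50,     # minimum sustained requests per second
}

def check_slas(measured: dict) -> list[str]:
    """Return a list of SLA violations; an empty list means the story can be 'done'."""
    violations = []
    if measured["p95_response_ms"] > SLAS["p95_response_ms"]:
        violations.append(f"p95 {measured['p95_response_ms']}ms exceeds {SLAS['p95_response_ms']}ms")
    if measured["error_rate"] > SLAS["error_rate"]:
        violations.append(f"error rate {measured['error_rate']:.2%} exceeds {SLAS['error_rate']:.2%}")
    if measured["throughput_rps"] < SLAS["throughput_rps"]:
        violations.append(f"throughput {measured['throughput_rps']} rps below {SLAS['throughput_rps']} rps")
    return violations

# Example: results from a load-test run against the story's build.
results = {"p95_response_ms": 650, "error_rate": 0.004, "throughput_rps": 72}
print(check_slas(results))  # prints [] — the story passes its performance acceptance check
```

Wiring a check like this into the story’s acceptance tests makes the “not done if it breaks an SLA” rule enforceable rather than aspirational.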

2. Work closely with developers to anticipate change

Testers should always be thinking about how the stories currently being coded will ultimately be tested. They can stay ahead of the curve by staying engaged with the team, especially the developers. A tester will typically learn about updates to development tasks during the daily stand-ups or whatever scrum-like meetings the organization uses to signal progress.

Specific questions like “Will this change require new load tests?” or “Will it break the current test scripts?” keep the tester focused on the changes coming down the pike. Dealing with those changes proactively makes a good outcome far more likely.

3. Integrate with a build server

Just as performance goals need to be attached to the task board, performance tests should be among the recurring tests run with every build. One way to do this is to have the build server initiate the test and publish the generated results within the build tool. The person who kicked off the build can then see the results and, at the same time, know which changes went into that build, so any performance issue that shows up can be traced and fixed.
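One simple way to attach results to the build is a gate step whose exit code the build server honors. This is only a sketch: the JSON summary format and the SLA value are assumptions, not a real tool’s output:

```python
import json

def gate_build(summary_json: str, max_p95_ms: float = 800) -> int:
    """Parse a load-test summary and return a process exit code for the build server.

    A non-zero exit fails the build, so the person who kicked it off sees the
    performance regression alongside the changes that went into that build.
    """
    summary = json.loads(summary_json)
    p95 = summary["p95_response_ms"]
    if p95 > max_p95_ms:
        print(f"FAIL: p95 {p95}ms exceeds SLA {max_p95_ms}ms")
        return 1
    print(f"PASS: p95 {p95}ms within SLA {max_p95_ms}ms")
    return 0

# Example: the build server captures this exit code after the test step runs.
code = gate_build('{"p95_response_ms": 950}')
# In a real pipeline step this would be: sys.exit(code)
```

Because the gate runs inside the build itself, the pass/fail verdict and the changelist for that build always appear together.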

4. CI + nightly build + end-of-sprint load testing

The difference between continuous integration, nightly, and post-sprint builds can be significant: a single change made in a day versus all the changes committed during a sprint.

A performance test for these kinds of builds should therefore start small and use the load generators already available internally. A small test that covers the most common scenarios with a typical load, produced by your internal load generators, will run the fastest.

CI builds, and their tests, should run quickly so that results about how the build’s changes affected the system come back fast. Those results must reach the developer who kicked off the CI run to be of any practical use.
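A “start small” CI smoke load might look like the following sketch: a handful of virtual users from an internal generator running the most common scenario. The scenario function is a placeholder; in practice it would issue a real request to the application under test:

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def common_scenario() -> float:
    """Hypothetical stand-in for a common user scenario; returns latency in ms."""
    start = time.perf_counter()
    time.sleep(0.01)  # placeholder for the real request to the application
    return (time.perf_counter() - start) * 1000

def smoke_load_test(users: int = 10, iterations: int = 5) -> dict:
    """Run a small, fast load: a few virtual users from internal load generators."""
    latencies = []
    with ThreadPoolExecutor(max_workers=users) as pool:
        for _ in range(iterations):
            latencies += list(pool.map(lambda _: common_scenario(), range(users)))
    latencies.sort()
    return {
        "samples": len(latencies),
        "mean_ms": statistics.mean(latencies),
        "p95_ms": latencies[int(0.95 * len(latencies)) - 1],
    }

print(smoke_load_test())  # small and fast enough to run on every CI build
```

Keeping the run this small is the point: the numbers reach the developer while the change is still fresh, and the heavier sprint-level test can run on the post-sprint build.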

5. Realistic tests

Emulating real-world network conditions is one critical part of a test. Look for a test setup that provides WAN emulation, which limits bandwidth and simulates latency and packet loss. This enables a test in which virtual users download the content of the web application at a realistic rate. The capability is particularly important when testing a mobile application, because mobile devices typically operate with less bandwidth than laptops and desktops and can be significantly affected by changes in latency and packet loss (especially when signal strength is weak).

The methodology should be to record traffic from any browser or mobile device and then replay it during the load tests. Simulating devices is important because each device type opens a different number of parallel connections, and that number shapes realistic response times and server load: more parallel requests mean more connections to the server, which can lengthen response times.

Test globally, outside your firewall. To truly understand how location affects performance for your users, look for a solution that can also generate load from cloud servers around the world.
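To see why WAN emulation changes the numbers, here is a back-of-the-envelope model of a download under constrained conditions. The profile values are illustrative assumptions, not any vendor’s settings, and the retransmission model is deliberately naive:

```python
# Hypothetical WAN profile for a mobile user (illustrative values only).
PROFILE = {"bandwidth_kbps": 1000, "latency_ms": 150, "packet_loss": 0.02}

def emulated_download_time(size_kb: float, profile: dict) -> float:
    """Estimate download time in seconds under emulated WAN conditions.

    Bandwidth caps the transfer rate, latency adds a fixed round-trip cost,
    and packet loss forces retransmissions that inflate the effective size.
    """
    effective_kb = size_kb / (1 - profile["packet_loss"])  # naive retransmission model
    transfer_s = (effective_kb * 8) / profile["bandwidth_kbps"]  # KB -> kilobits
    return profile["latency_ms"] / 1000 + transfer_s

# A 500 KB page over this mobile profile takes roughly 4 seconds,
# versus a fraction of a second on an unconstrained LAN.
print(round(emulated_download_time(500, PROFILE), 2))
```

Even this crude model shows the effect the article describes: a page that feels instant on the office LAN can take several seconds for a mobile user, which is exactly what virtual users should experience during a realistic test.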


1 reply


Thanks for this; performance testing is sometimes almost forgotten during test planning.

Ignoring performance testing can have serious implications:

1. User Frustration: Slow or crashing software leads to unhappy users.

2. Revenue Loss: Poor performance can mean abandoned transactions and lost sales.

3. Reputation Damage: Bad user experiences tarnish a brand's image.

4. Unpredictable Failures: Unexpected issues can disrupt operations.

5. Costly Fixes: Resolving post-release problems is expensive.

6. Missed Scaling Opportunities: Without testing, you might overlook optimization chances.

Investing in performance testing is essential for a smooth user experience, cost savings, and safeguarding your reputation.
