This white paper outlines some of the challenges of load and performance testing in an Agile environment and provides key best practices, such as prioritizing performance goals and automating tests on the Continuous Integration server. Testing managers, practitioners, and anyone involved with Agile or “Agile-like” development testing can benefit from these tips.
Let’s face it: Agile is a fact of life. Perhaps you’re not “fully Agile” and maybe you’re not executing Continuous Integration or even talking about DevOps yet, but the reality is that the pressure is increasing to realize the benefits, like quality and speed, inherent in Agile/“Agile-like” development methodologies.
When you start becoming more Agile, developers churn out code at a rapid rate in an attempt to move as many user stories or tasks to “done” as possible before the end of the sprint, while many testers struggle to keep up with this pace. Furthermore, testers on Agile teams often have responsibility for automated, unit, and regression testing in addition to load and performance testing. In this environment, you need to be able to keep up with the speed of development while also meeting increased expectations of quality.
While the technical benefits of Agile development are well documented (faster time to market, adaptation to changing user demands and new technologies, a constant feedback loop, etc.), the business benefits of the superior application performance that comes from rigorous load and performance testing are also significant.
Benefits of Continuous Load and Performance Testing
Avoid Late Performance Problem Discovery
When load and performance testing is pushed off until the end of a development cycle, there is often limited or no time for developers to make changes. This can force teams to delay the release, preventing timely delivery of customer-needed features. Alternatively, if the issues are minor, teams may decide to proceed and launch the application into production, accepting the associated risks. If the performance problems are more fundamental, they could even require painful architectural changes that take weeks or months to implement.
Make Changes Earlier When They Are Cheaper
By including load and performance testing in Continuous Integration testing processes, organizations can detect performance issues early, when fix costs are much more manageable. If developers can instantly know that the new feature in the latest build has caused the application to no longer meet Service Level Agreements (SLAs), they can fix the problem before it becomes exponentially more expensive. This is especially important for Agile teams: discovering a performance problem weeks later, caused by a change made several builds prior, makes pinpointing the root cause a nightmare.
Guarantee Users Get New Features, Not New Performance Issues
In some Agile organizations, change happens quickly. It’s possible for a new feature or functionality to be checked into source control, run through a Continuous Integration build, pass all of the automated tests, and get deployed to the production server in minutes. However, if the code wasn’t optimized to handle the number of simultaneous users seen at the highest peak times, it could cause system failure. Integrating load testing into the process before these changes are deployed to production can ensure that your users experience all the goodness your brand/technology offers without any sacrifice. This can save your company thousands or even millions in lost revenue from users switching to competitors’ applications or bashing your brand because of the problems they experienced with your app.
Challenges of Performance Testing in an Agile Environment
In the same way that combining Agile with load testing can provide unique benefits, it can also present unique challenges your teams may not have experienced in the past.
Shorter Development Cycles Require More Tests in Less Time
Load and performance testing is usually pushed off until the end of a development cycle. With Agile development, cycles are much shorter, and load and performance testing can get delayed until the last day of a sprint, or is sometimes conducted only every other sprint. This often results in the premature release of insufficiently tested code and/or user stories being pushed to the next release once they are finally tested. Conceptually, the solution is to incorporate testing earlier in the development cycle, but that is easier said than done when many teams lack the resources and tools to make it happen.
“Working” Code Does Not Always Perform Well
So much focus for developers on Agile teams is put on “working” code delivery, but is code really “working” if it fails under load? Should user stories/tasks be marked as “done” if the associated code causes the application to crash at 100 users? What about 1,000? 100,000? The pressure to get the code out the door is high, but so is the cost of having an application crash in production.
Developers Need Timely Feedback… Fast!
Agile developers need to know more than just the fact that their code is causing performance issues: they need to know when the problems started and what story they were working on at that time. It’s a huge pain for developers to be forced to go back and fix code for a story worked on weeks or months prior. It also means they can’t spend time working on getting new features to market. Detecting performance issues early in the cycle so you can deliver important feedback to developers quickly is crucial to saving costs.
Automating the Handoff from Dev to Ops Can Feel Risky
While DevOps and Continuous Deployment are still fairly young practices, the fear felt by operations teams that new changes in code will slow down or even crash the application when it is deployed in production has been around forever. Automation of some of the testing in the Continuous Integration process can help to ease some of this fear, but without adequate performance testing included, the risk is still real. Ops teams know well the huge impact application downtime can have on the business.
The following best practices can help you maximize the advantages of load testing in an Agile environment while overcoming its challenges.
Make Performance SLAs a Key Focus Area
Every application has minimum performance service level agreements to meet, but Agile teams are often more focused on adding features and functionality than on optimizing the application’s performance. User stories are typically written from a functional perspective (e.g., “As a user, I can click the ‘View Cart’ button and view the ‘My Cart’ page”) without specifying application performance requirements (e.g., “As a user, I can click the ‘View Cart’ button and view the ‘My Cart’ page in less than 1 second when there are up to 1,000 other users on the site”).
This may not be the best way to write a user story, but it illustrates the point: Performance needs to be somewhere on the task board if the team is going to give it attention.
One way to get performance on the board is to use your SLAs as acceptance criteria for each story, so that a story cannot be marked “done” if its changes cause the application to miss those SLAs (this may require defining new SLAs for new features or applications, e.g., all searches for a new search feature must return results in less than 2 seconds). This approach works well when the changes made for the story affect a relatively small section of code, so any performance issues are confined to a portion of the application.
For SLAs that apply across the entire application (e.g., all pages must load in less than 1 second), tests should be added to a larger list of constraints (which may include functional tests) that is run for every story; a story only meets the “definition of done” if it breaks none of these constraints.
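To make this concrete, here is a minimal sketch of an SLA check that could serve as an acceptance gate for a story. The percentile-based check, the thresholds, and the sample response times are all hypothetical; a real gate would read the output of your load-testing tool.

```python
# Minimal sketch: gate a story's "done" status on a performance SLA.
# The SLA threshold and sample data below are hypothetical examples.

def check_sla(response_times_ms, sla_ms, percentile=95):
    """Return True if the given percentile of response times meets the SLA."""
    if not response_times_ms:
        return False
    ordered = sorted(response_times_ms)
    # Index of the percentile-th value (nearest-rank method)
    rank = max(0, int(round(percentile / 100 * len(ordered))) - 1)
    return ordered[rank] <= sla_ms

# Example SLA: "all searches must return results in less than 2 seconds"
search_samples_ms = [420, 510, 630, 700, 1800, 950, 480]  # hypothetical load-test output
print(check_sla(search_samples_ms, sla_ms=2000))  # True: the story may be marked "done"
```

A percentile check is usually preferable to a simple average here, since averages can hide the slow tail of requests that users actually notice.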
Work Closely with Developers to Anticipate Changes
One of the benefits for testers working in an Agile environment is that they typically learn about updates on development tasks during daily standups or similar scrum-like meetings. In order to get the maximum benefit from this level of collaboration, testers should constantly be thinking about how the stories currently being coded will ultimately be tested. Will these require new load tests? Will they cause errors in current test scripts? Can you get away with slight modifications to current test scripts if you plan ahead? Most of the time, these are small changes, so testers can stay ahead of the curve if they keep engaged with the team.
Integrate with Build Server
Even if you aren’t completely on the Agile bandwagon yet, you probably have a build server that kicks off some automated, unit, smoke, and/or regression tests. In the same way that performance goals need to be added to the task board, performance tests should be among the recurring tests run with every build. This can be as simple as setting up a trigger that has the build server kick off the test; with a more sophisticated integration, test results can be displayed within the build tool itself. Ideally, the person who kicked off the build should instantly see the results and know which changes went into that build, so any performance issue can be fixed right away.
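The simplest form of this integration is a gate script the build server runs after each build: it triggers the load test and returns a nonzero exit status on an SLA miss, which most CI tools treat as a failed build. The sketch below is hypothetical; `run_load_test` and the thresholds stand in for whatever load-testing tool and limits your team actually uses.

```python
# Sketch of a CI gate script a build server could invoke after each build.
# run_load_test() and the SLA thresholds are hypothetical placeholders.

def run_load_test():
    # In a real setup this would invoke your load-testing tool and parse
    # its report; here we return a canned result for illustration.
    return {"avg_response_ms": 850, "error_rate": 0.002}

def main():
    result = run_load_test()
    sla_violated = result["avg_response_ms"] > 1000 or result["error_rate"] > 0.01
    if sla_violated:
        print("Performance SLA violated; failing the build.")
        return 1  # a nonzero exit status marks the build as failed in most CI tools
    print("Performance SLAs met.")
    return 0

exit_code = main()  # the CI job would use this as its exit status
```

Because the script runs on every build, a developer learns within minutes that a specific commit broke an SLA, rather than weeks later.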
CI + Nightly Build + End of Sprint Load Testing
The difference between Continuous Integration builds, nightly builds, and post-sprint builds can be huge. We’re talking the difference between a single change committed to a version control server versus all the changes committed in a day versus all the changes committed during a sprint. With this in mind, you should adjust your load tests to the type of build you’re running.
The best practice is to start small and internal. For CI builds that get kicked off every time someone commits a change, you want these tests to run quickly so that each developer gets fast feedback on how their changes affected the system. Consider running a small performance test covering the most common scenarios, with the typical load on your application generated from your own internal load generators.
For nightly builds, ramp it up to include more of the corner case scenarios and increase the load to what you see at peak times to see if any performance issues were missed during the CI tests. At the end of the sprint, you’ll want to go all out: consider generating load from the cloud to see what happens when users access your app through the firewall. Make sure every SLA on the constraints list is passed so that every story completed during the sprint can be marked as “done.”
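The tiered approach above can be captured as a simple configuration that the gate script selects from based on the build type. Every number and field name below is a hypothetical placeholder to be tuned to your application and tooling.

```python
# Sketch of scaling the load profile to the build type, per the tiers above.
# All values and field names are hypothetical; tune them to your application.

LOAD_PROFILES = {
    # Fast, common-scenario test with typical load from internal generators
    "ci":      {"virtual_users": 50,   "duration_min": 5,  "scenarios": "common", "from_cloud": False},
    # Corner cases included, peak-time load
    "nightly": {"virtual_users": 500,  "duration_min": 30, "scenarios": "all",    "from_cloud": False},
    # Full run, load generated from the cloud through the firewall
    "sprint":  {"virtual_users": 5000, "duration_min": 60, "scenarios": "all",    "from_cloud": True},
}

def profile_for(build_type):
    """Select the load-test profile for a given build type."""
    return LOAD_PROFILES[build_type]

print(profile_for("nightly")["virtual_users"])  # → 500
```

Keeping the tiers in one configuration like this makes the trade-off explicit: per-commit tests stay fast enough not to slow developers down, while the heavier profiles catch what the quick runs miss.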
How to Choose an Agile Load Testing Solution