
Whose job is it? Performance testing in an Agile environment

11 August 2023

There once was a time when testers operated on their own, in isolation. They’d huddle as a group around the harsh glow of dozens of CRT monitors, clicking through GUIs and recording results. Anxiously, they’d wait for the developers in the other room to fix the bugs they found, yet they’d frequently leave the office disappointed as issues were filed away as non-critical. These teams would rarely interact, save for those scarce moments when a coder would wander in needing to reproduce a particularly finicky error.

That was a long time ago (for most of us).

One of the biggest reasons that Agile development was created in the first place was to improve the quality of software by improving the way software is written. The old communication barriers described above were quickly torn down as the tester’s job shifted from finding bugs to validating software. That philosophy is captured directly in one of the Agile Manifesto’s core principles: “Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.” In practice, the shorter timescale can mean days or even hours.

Before Agile, it was common to spend months writing code without seeing anything functional. Agile sought to break that pattern.

Quality is everyone’s job

To do that, development teams had to change the way they thought about quality. One of the most important concepts Agile introduced was the idea of done-ness. Code was written until it was complete, functional, and working well, and if you couldn’t get a piece of code done in a suitably short timeline, you had to break it up into smaller components that could each reach a final state of being done. By following this process, quality would improve.

Now, quality is part of everyone’s job — that’s the essence of Agile testing. You can no longer write code in a bubble and throw it over the fence to a tester to make sure it works. You have to take responsibility for the quality of what you are writing. Otherwise, it isn’t done.

As a result, the QA department has become the keeper of that process. QA professionals make sure that the right automated test cases are being written and that the entire test function is being executed properly across the codebase. They monitor test metrics to evaluate how done everyone’s code is, so that when a sprint is completed, the quality bar is met.

But what about performance testing? Performance often remains an afterthought in the test process, even at Agile shops. If performance is not the job of one lone performance engineer or an isolated team, can it be part of everyone’s job?

Can the performance testing function become a keeper of a process, as has happened with functional QA?

Of course! Here’s how to make that happen.

Share metrics from performance tests

“You get what you measure.” It’s a common saying that describes how you can motivate behavior simply by tracking and publishing the metrics you seek to influence. It really works, and it can be a great way to get everyone on your team sharing responsibility for the performance of your application. Prepare a report that identifies a few key metrics, and share it with everyone across your Development, QA, and Operations teams. If you really want to go the extra mile, customize that report to fit the needs of each department (see the sketch after this list) by including data that:

  • Helps developers find the root causes of performance problems
  • Assesses task-based performance for UX
  • Embeds performance test results with functional test results for the QA team
  • Highlights inter-component performance for the integration team
  • Presents live data from performance monitoring systems for the operations team
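
Here’s a minimal, hypothetical sketch in Java of that idea: one shared pool of performance results, sliced into a report section per audience. The metric names, audiences, and values are all invented for illustration; substitute whatever your tests actually measure.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical sketch: one shared set of performance results,
// grouped into a report section per team. All names are illustrative.
public class PerTeamReport {

    // A single measurement from a performance test run, tagged by audience.
    record Metric(String name, double value, String unit, String audience) {}

    public static void main(String[] args) {
        List<Metric> results = List.of(
            new Metric("checkout.p95LatencyMs", 480.0, "ms", "developers"),
            new Metric("search.taskCompletionTime", 3.2, "s", "ux"),
            new Metric("login.failedAssertions", 2.0, "count", "qa"),
            new Metric("cart-to-payment.callLatency", 95.0, "ms", "integration"),
            new Metric("prod.cpuUtilization", 71.0, "%", "operations"));

        // Group the same underlying data into one section per department.
        Map<String, List<Metric>> sections = results.stream()
            .collect(Collectors.groupingBy(Metric::audience));

        sections.forEach((team, metrics) -> {
            System.out.println("== Section for: " + team + " ==");
            metrics.forEach(m -> System.out.printf(
                "  %s = %.1f %s%n", m.name(), m.value(), m.unit()));
        });
    }
}
```

The point isn’t the code; it’s that every department reads from the same data set, just through its own lens.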

Ask developers to write performance unit tests

By now, Agile developers are used to writing their own test cases. In fact, if you are a shop that practices test-driven development (TDD), you build your tests even before you write a line of code. But are your developers writing performance test cases? Or are they just writing functional tests?

Load and stress tests don’t need to be large, all-encompassing tests that put the entire production environment to work at maximum scale. There are a number of ways to write unit tests for performance that can be part of your TDD process, or even added as performance requirements on your Agile task board. If your development team is thinking about acceptable performance early in the process, the same way they think about acceptable functionality, you’re a lot closer to a truly distributed sense of ownership for application performance.
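
As one example, here’s a minimal sketch of a performance unit test using JUnit 5’s built-in timeout assertion. The SearchService stub and the 200 ms budget are assumptions invented for the example; in a real suite, the budget would come from your performance requirements.

```java
import static org.junit.jupiter.api.Assertions.assertTimeoutPreemptively;

import java.time.Duration;
import org.junit.jupiter.api.Test;

class SearchServicePerformanceTest {

    // Stub standing in for the real code under test (illustrative only).
    static class SearchService {
        java.util.List<String> search(String query) {
            return java.util.List.of(query);
        }
    }

    @Test
    void searchCompletesWithinBudget() {
        SearchService service = new SearchService();

        // Fail the build if a single search exceeds the agreed 200 ms budget.
        assertTimeoutPreemptively(Duration.ofMillis(200),
            () -> service.search("agile performance testing"));
    }
}
```

A test like this won’t replace a full load test, but it catches gross performance regressions at the same moment functional regressions are caught, which is exactly the point.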

Run performance code reviews

Code reviews are nothing new, but in Agile development they have gained new prominence. Whether your team casually asks for a peer’s help on an ad hoc basis or rigorously builds code review into the fabric of its development process, getting a second pair of eyes on code can boost quality significantly. Plus, knowing that someone is going to look at what you write pushes you to write cleaner, more maintainable code in the first place, which is an all-around good outcome.

Here’s the thing: performance should be part of whatever code review process you have in place. Your colleagues should be trained to look not only for functional problems and edge cases, but also for poorly written algorithms and other issues related to scale and optimization. One good technique is to hold dedicated performance code reviews, where people sit down and review code exclusively to find performance problems. Once again, the more your development team thinks about performance up front, the fewer problems you’ll run into on the back end.
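
To make that concrete, here’s the kind of finding a performance-focused review is meant to surface: code that is functionally correct but scales badly. The example is illustrative, not taken from any real review.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DuplicateCounter {

    // Before review: List.contains is O(n), so this loop is O(n^2) overall.
    // It passes every functional test, then crawls on large inputs.
    static long countDuplicatesSlow(List<String> items) {
        List<String> seen = new ArrayList<>();
        long duplicates = 0;
        for (String item : items) {
            if (seen.contains(item)) duplicates++;
            else seen.add(item);
        }
        return duplicates;
    }

    // After review: HashSet.add is O(1) on average, making the loop O(n).
    static long countDuplicatesFast(List<String> items) {
        Set<String> seen = new HashSet<>();
        long duplicates = 0;
        for (String item : items) {
            if (!seen.add(item)) duplicates++;
        }
        return duplicates;
    }
}
```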

Establish common performance test scenarios across departments

Finally, it’s important to have everyone speaking the same language. The performance and scalability tests you run should be based on real-world usage and behavioral patterns of your users. That means that performance problems you see in your production environment should be explored and tested thoroughly in development and QA. Similarly, the key pathways that are identified during development as those most likely to bottleneck should be tagged and monitored closely once that software makes its way into the field.

To do this, share a common library of performance test scenarios that can be easily transported between environments. A test that the development team creates should run easily as a simulated user in a live production environment. And a user path that the Ops team is monitoring should be easily testable in a structured continuous integration process. That’s why a shared test scenario library is so important, and why we’ve built it into Tricentis NeoLoad.
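
As a rough sketch of the idea (not NeoLoad’s actual scenario format, which the product defines itself), a shared scenario can be as simple as a data structure that both a CI load runner and a production synthetic monitor know how to execute. Everything below, including the step names and URLs, is invented for illustration.

```java
import java.util.List;

// Hypothetical sketch: one scenario definition reused across environments.
public class SharedScenarios {

    record Step(String name, String method, String path) {}
    record Scenario(String name, List<Step> steps) {}

    // The checkout path, defined once and shared by every team.
    static final Scenario CHECKOUT = new Scenario("checkout", List.of(
        new Step("open product page", "GET", "/products/42"),
        new Step("add to cart", "POST", "/cart"),
        new Step("pay", "POST", "/checkout")));

    public static void main(String[] args) {
        // In CI, the same steps run under load against a test environment...
        replay(CHECKOUT, "https://staging.example.com");
        // ...and in production, Ops replays them as a single synthetic user.
        replay(CHECKOUT, "https://www.example.com");
    }

    static void replay(Scenario scenario, String baseUrl) {
        for (Step step : scenario.steps()) {
            System.out.printf("%s %s%s  (%s)%n",
                step.method(), baseUrl, step.path(), step.name());
        }
    }
}
```

Because the scenario is plain data, no team has to re-script what another team already knows about how users actually move through the application.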

It takes a village

No man is an island, and neither is any performance engineer. Your entire application team shares the responsibility for keeping applications fast and responsive, and by following the guidelines above, you can help them accept that responsibility. Encourage everyone to pay attention to your application’s performance, and you can play an active role in advancing your organization’s culture of quality.

