At some point you’ve probably heard the term “test early and often.” If you are in an Agile organization, that term perfectly captures the philosophy of iterative development and the commitment to rooting out defects sooner rather than later.
But do you know the origin of the phrase?
The original form of the saying was actually “Vote early – and vote often.” It appeared in print as far back as 1858 as a tongue-in-cheek commentary on the democratic ideal that every person gets an equal vote. For you history buffs out there, the phrase is more closely associated with corruption in the voting process in the early 1900s. Local thugs would make men grow beards before voting day. After they voted once, they’d shave down to a moustache, rendering them unrecognizable so they could vote again. Finally, they would vote a third time, clean-shaven. The end result: a high-level politician in bed with organized crime. Some people believe that Al Capone himself coined the phrase, though it was more likely popularized by Chicago mayor William Hale Thompson.
It’s nice – maybe even ironic – that a phrase which had such unscrupulous origins is now a hallmark characteristic of a process that exemplifies teamwork and quality. It’s in that modern spirit that we wanted to share these 10 strategies to help you test early – and test often.
1. Performance Test Automation
Test automation is often set up without performance testing in mind, but that doesn’t make the practice any less relevant for performance testers. Automation is probably the single most important enabler of testing often. An automation system lets you configure entire series and libraries of tests to run by themselves, on a set schedule or on demand. It plugs seamlessly into your continuous integration system if you have one, but even if you don’t, a strong suite of automated tests will help you prevent regressions and increase test coverage.
One way to get started with automation for performance testing is to leverage whatever has been done for functional testing. Dig into that test suite and look for opportunities to modify test scenarios for high scale and concurrency. You don’t need to reinvent the wheel, and you can walk away with a great foundation for automated performance testing.
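One way to picture this reuse is to wrap an existing functional scenario and run it under concurrency. Here is a minimal sketch in Python; `login_flow` is a hypothetical stand-in for a real functional test, not an actual API:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def login_flow():
    """Hypothetical functional test scenario reused for performance.
    In a real suite this would drive the app via HTTP or a UI driver."""
    time.sleep(0.01)  # stand-in for the real transaction
    return True

def run_concurrently(scenario, users=50):
    """Run an existing functional scenario at concurrency and time the run."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(lambda _: scenario(), range(users)))
    elapsed = time.perf_counter() - start
    return all(results), elapsed

ok, elapsed = run_concurrently(login_flow)
print(f"all passed: {ok}, wall time: {elapsed:.2f}s")
```

The point is not the threading model – a real load tool would do this for you – but that the scenario itself came straight from the functional suite.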
2. Make Performance Unit Tests
Performance tests don’t have to be limited to large-scale peak-load tests. In the spirit of testing early, you can start building unit tests for performance almost as early as you have code. In fact, performance test-driven development (TDD) is a practice that involves creating performance expectations and executable tests for a code module even before the code is written. You define a service level agreement, or SLA, for a component that dictates how it is to perform – seconds to load, response time, scalability requirements, etc.
If you want to test early, start identifying specific code paths that could be bottlenecks and build small unit tests that exercise their scalability. Good places to look include complicated database queries or key transactions like user login or cart checkout. These scenarios can also form the basis for your test automation library, which will get populated as the platform gets built.
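As a sketch of what a performance unit test might look like, here is a pytest-style test that fails the build when a component misses its SLA. The SLA value and the `complex_query` function are illustrative assumptions, not part of any real codebase:

```python
import time

SLA_SECONDS = 0.5  # hypothetical SLA, agreed before the code was written

def complex_query():
    """Stand-in for a code path suspected to be a bottleneck,
    e.g. a complicated database query or a login transaction."""
    return sum(i * i for i in range(100_000))

def test_query_meets_sla():
    """A performance unit test: red when the SLA is missed."""
    start = time.perf_counter()
    complex_query()
    elapsed = time.perf_counter() - start
    assert elapsed < SLA_SECONDS, f"took {elapsed:.3f}s, SLA is {SLA_SECONDS}s"
```

Because it runs like any other unit test, it slots directly into the automation library described above.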
3. Be Modular
As you combine the concepts of performance unit testing with automation, you end up with a modular library of building blocks that you can mix and match into a wide variety of different testing scenarios. Each of these building blocks may be simple and straightforward on its own, but you can piece them together into quite complex interactions that fully exercise a system in a complete and realistic way.
Here’s the power of this approach. Let’s say after a new build you hear from Operations that a particular transaction path is “acting up” in production. Maybe you notice customers complaining or the metrics just look off. You can quickly assemble a scenario from your modular unit tests to recreate this transaction, letting you quickly troubleshoot what’s going on and get to a resolution as fast as possible. Modularity helps you adapt, react, and explore the system more thoroughly.
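The idea above can be sketched as a handful of building-block functions chained into one timed scenario. The step names here are hypothetical placeholders for real unit tests:

```python
import time

# Simple building blocks; each is a placeholder for a real unit test.
def browse():
    time.sleep(0.005)

def add_to_cart():
    time.sleep(0.005)

def checkout():
    time.sleep(0.005)

def run_scenario(steps):
    """Chain building blocks into one transaction and time each step."""
    timings = {}
    for step in steps:
        start = time.perf_counter()
        step()
        timings[step.__name__] = time.perf_counter() - start
    return timings

# Recreate the transaction path Operations flagged, from existing blocks.
timings = run_scenario([browse, add_to_cart, checkout])
print(timings)
```

Swapping, reordering, or repeating steps is just a change to the list, which is what makes the modular approach so quick to adapt.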
4. Synthetic Users
One way to test often is to integrate your testing procedure with your production environment and have test scenarios running all the time. You can do this with synthetic users. A synthetic user executes a specific transaction path within a live environment on a specified frequency. To the system, synthetic users look like any other users. In reality, they are fully instrumented and marked up so they can report the metrics of the experience they are executing.
A synthetic user acts as a canary in a coal mine. It runs through its defined path and tells you where it gets into trouble – before a real user does. That way you can see problems before they become critical.
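Conceptually, a synthetic user is just a scheduled replay of a transaction that records its own metrics. The sketch below uses a hypothetical `login_search_logout` path and prints metrics locally; a real synthetic user would report them to a monitoring backend:

```python
import time

def login_search_logout():
    """Hypothetical transaction path the synthetic user replays."""
    time.sleep(0.01)
    return True

def synthetic_user(transaction, iterations=3, interval_s=0.0):
    """Replay the transaction on a schedule and collect its metrics.
    In production, interval_s might be 300 (one run every five minutes)."""
    metrics = []
    for _ in range(iterations):
        start = time.perf_counter()
        ok = transaction()
        metrics.append({"ok": ok, "elapsed_s": time.perf_counter() - start})
        time.sleep(interval_s)
    return metrics

metrics = synthetic_user(login_search_logout)
print(metrics)
```

A failed run, or a run whose `elapsed_s` creeps upward, is the canary telling you something is wrong before a real user notices.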
5. Exploratory Testing
When an app – or even a feature – is still in its early stages, it can be hard to build rigorous tests around it. Code is changing rapidly, entire sections are incomplete, and there’s no guarantee that what you are testing will ever make it into production. But you can still test.
The method here is called exploratory testing, and it is a process that combines learning, test design, and test execution into one connected set of activities. As a tester, you start to use the app, or feature, or module, without any real direction or purpose. You are an explorer. As you use the app, you can study the code to understand how it works, and you look for places to test – complicated algorithms, messy code, critical functions. As you explore, you build tests and exercise them. The process helps you get familiar with the landscape and create a suite of tests early on. It also helps you get to know the product, sometimes even better than the development team does.
6. Use the Cloud
Performance tests are often limited by the hardware you have available. Load testers know this too well – you can’t test very often if you are waiting for production resources to free up in the middle of the night on a Sunday when usage is low. Fortunately, the cloud can push things wide open.
First and foremost, you can use the cloud to drive load on the system. In fact, this may be a better way of running a load test than whatever you are doing right now: load generated in the cloud travels through the same network layers, load balancers, and security firewalls that your users pass through when accessing your system. You can spread load sources across multiple geographies for the most realistic tests. It’s easy to scale up and down, or to target specific functionality, and it’s flexible because you don’t need hardware sitting idle, waiting for Sunday night to come along. Incorporating cloud testing into your collection of testing tools can give you powers you never had before.
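The multi-geography idea can be sketched as fanning out per-region load generators. Everything here is a placeholder assumption – the region names, user counts, and `generate_load` function are illustrative, and in practice each generator would be a cloud instance running your load tool rather than a local thread:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical regions and virtual-user counts for a distributed test.
REGIONS = {"us-east": 200, "eu-west": 150, "ap-south": 100}

def generate_load(region, virtual_users):
    """Placeholder: a real generator replays scenarios against the public
    endpoint, traversing the same firewalls and load balancers as users."""
    return {"region": region, "users": virtual_users, "status": "done"}

with ThreadPoolExecutor() as pool:
    futures = [pool.submit(generate_load, r, u) for r, u in REGIONS.items()]
    results = [f.result() for f in futures]
print(results)
```

Scaling up is then a configuration change – add a region or raise a user count – rather than a hardware purchase.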
7. Make Everyone Responsible
You can only do so much as an individual person or an isolated team. If you really want to test early and often, you have to get everyone else involved. Start right now by raising awareness about performance testing, and educate people on what performance testing means. A great way to do this is to share the results of your performance tests on a regular basis so people know which needle you are trying to move. Also, you can integrate your performance testing with other systems like automation and continuous integration.
Many of the other tips in this list are better done as a cohesive part of an Agile team and to get there, you have to make everyone feel a sense of shared ownership over the performance of the system. Quality is on everyone’s mind these days, but if they are only thinking about functional testing then your organization just isn’t where it needs to be. Performance should be part of everyone’s job description.
8. Partner With Developers
Look through many of these tips and one thing may be clear: testing is a lot easier when you know the code. Whether or not you know how to code, you should have a good understanding of what’s happening under the hood of the application your team is creating.
Teamwork is so critical to Agile, but often performance engineers are left in their own separate silo. If this is the case in your organization, you have to proactively build a bridge to your development team. One effective way to do that is to include performance requirements on the Agile task board. Define performance behaviors up front so developers are thinking about them, and offer to help them think through requirements, challenges, and solutions. Get to know the code they are writing. A strong partnership with your dev team will greatly improve quality.
9. Shared Test Scenarios
As is true with almost anything, the more you can leverage from work you’ve already done, the better. That means finding ways to take advantage of all the tests you’ve created and all the infrastructure you’ve invested in to allow you to test more often.
The Neotys suite of products supports shared test scenarios, which means you can take a specific scenario you’ve developed in Test and push that out as a simulated user test executed by a synthetic user. The same scenario used for load testing can be migrated into production for performance monitoring. Component tests created by developers or embedded testers can be used as the basis for creating larger load test scenarios. The ability to share test scenarios across teams and environments greatly increases your coverage and flexibility.
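The Neotys products handle this sharing natively; as a conceptual sketch only, the pattern is a single scenario definition that accepts its execution context, so the same code runs at scale in Test and once on a schedule in Production. The `checkout_scenario` path and `RecordingClient` are hypothetical:

```python
def checkout_scenario(client):
    """One scenario definition shared across environments (hypothetical).
    `client` abstracts where it runs: a load generator or a synthetic user."""
    client.get("/cart")
    client.post("/checkout")

class RecordingClient:
    """Minimal stand-in client that records the requests made."""
    def __init__(self):
        self.calls = []
    def get(self, path):
        self.calls.append(("GET", path))
    def post(self, path):
        self.calls.append(("POST", path))

# Same scenario, two contexts: a load test replays it at scale...
load_clients = [RecordingClient() for _ in range(10)]
for c in load_clients:
    checkout_scenario(c)

# ...and a synthetic user replays it once, on a schedule, in production.
monitor_client = RecordingClient()
checkout_scenario(monitor_client)
```

Because the scenario is written once, a fix or an added step propagates to the load test and the production monitor at the same time.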
10. Stay Laser-Focused on KPIs
Finally, as a performance tester you should know what your key performance indicators (KPIs) are and stay laser-focused on them. All of these strategies can be overwhelming if you don’t really know what you want out of them. Your KPIs tell you.
Whether you are load testing in a hurry or have all the time you need, your KPIs keep you on track with respect to page load times, scalability requirements, transaction rates, and the user experience. To best employ these strategies for testing early and often, you’ll want to define your KPIs, vet them with the team, measure your success against them, and share those results on a regular basis.
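Measuring against KPIs can be as simple as comparing observed metrics to agreed targets after each run. The targets and thresholds below are hypothetical examples, not recommended values:

```python
import statistics

# Hypothetical KPI targets, vetted with the team.
KPI_TARGETS = {"p95_page_load_s": 2.0, "error_rate": 0.01}

def evaluate_kpis(page_load_samples_s, errors, total_requests):
    """Return each KPI's measured value and whether it met its target."""
    p95 = statistics.quantiles(page_load_samples_s, n=20)[-1]  # 95th pct
    error_rate = errors / total_requests
    return {
        "p95_page_load_s": (p95, p95 <= KPI_TARGETS["p95_page_load_s"]),
        "error_rate": (error_rate, error_rate <= KPI_TARGETS["error_rate"]),
    }

# Example: 100 page loads at 1.2s each, 0 errors out of 100 requests.
report = evaluate_kpis([1.2] * 100, errors=0, total_requests=100)
print(report)
```

A report like this, shared after every test run, is exactly the kind of regular, concrete result that keeps the whole team focused on the needle you are trying to move.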