10 Tips to Improve Automated Performance Testing within CI Pipelines

Getting testing right in a Continuous Integration pipeline is a critical part of software development at web scale. For many companies, it’s a challenge, particularly with automated performance testing. It’s not for lack of effort; many companies simply can’t realize the full value of the work they put in. The reasons are many. Some testing efforts merely reinvent the wheel. Others are conducted at random, with no apparent intention beyond execution for its own sake. Ensuring that the testing is appropriate, and aimed at meeting the business requirements, becomes a distant afterthought.

It doesn’t have to be this way.

Any company can conduct useful, efficient performance testing in an automated CI pipeline. All that’s required is some helpful knowledge from those in the know. To widen this circle of “knowers,” we’ve created a list of ten tips to improve performance testing within CI pipelines.

1. Test continuously according to the product’s long-term goals

2. Distinguish between SLA, SLO, and SLI

3. Keep tests small and targeted

4. Test the segments before the whole

5. Automate things that are not flaky

6. Save time, use smoke tests

7. Leverage the use of your source control management system

8. An expectation without a feedback loop ain’t

9. A known infrastructure is a testable infrastructure

10. Work with your CI/CD pipeline, not against it

Today’s post will focus on the first two tips, while the remaining recommendations will be covered in separate installments.

1. Test continuously according to the product’s long-term goals

Want to quickly identify whether something terrible is going on with your company’s testing processes? Consider the following scenario. It’s time for the production release. Everybody on the development team is sitting around the conference room table in a state of white-knuckle anxiety, waiting for the end-to-end testing to finish so the code can be pushed for release. Either way, you’re hosed. Why? After all, if the testing goes well, the code will go forward. No big deal, right? Wrong.

The fact that the viability of a release depends on the state of a single, anxiety-provoking end-to-end test reveals a significant shortcoming in the overall testing process. Such testing should not come with stress any more than that of a routine blood test. If a patient has a history of regular physical checkups, a healthy diet, and regular exercise, the blood test should validate the historically positive behavior. However, if the patient hasn’t seen a doctor in ten years, lives on a diet of potato chips and root beer, and walks no further than to the mailbox, a simple blood test can be life-altering. Who knows what it might reveal?

The analogy holds for IT processes. If a company’s overall development practices are healthy, with continuous testing throughout all levels of the software development lifecycle, end-to-end testing should be just another validation checkbox. If a problem does surface, fixing it should require no great effort.

When development practices are haphazard and testing is left to the end, how could the last end-to-end test be anything but an anxious experience? Who knows how much technical debt lurks in the code, or how many mysteries were left behind by developers no longer with the company? How much of the test is nothing more than cycling through 20% of the code base while the rest remains untouched? Questions like these are surprisingly common in a development environment where long-term product goals are unclear and testing events are episodic.

When conditions are created in which long-term product goals are well understood by all, the development process and the quality of the code it produces improve too, provided that testing is conducted continuously throughout all lifecycle stages. Continuous testing based on the product’s long-term goals is wise, practical, and one of the most direct ways to improve automated performance testing.
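What does “continuous” look like in practice? As a minimal sketch (an illustration, not from the original post), the check below times a handful of requests against a latency budget on every commit. The endpoint URL, sample count, and 500 ms budget are assumptions invented for the example; in reality, they would come from the product’s long-term performance goals.

```python
# continuous_perf_check.py -- a minimal sketch of a performance check that
# can run on every commit. The URL, sample count, and latency budget are
# illustrative assumptions, not recommendations.
import statistics
import time
import urllib.request

ENDPOINT = "https://staging.example.com/health"  # hypothetical endpoint
LATENCY_BUDGET_MS = 500                          # assumed long-term budget
SAMPLES = 10

def measure_once(url: str) -> float:
    """Return the wall-clock latency of a single GET request, in milliseconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=5) as response:
        response.read()
    return (time.perf_counter() - start) * 1000

def main() -> None:
    latencies = [measure_once(ENDPOINT) for _ in range(SAMPLES)]
    median_ms = statistics.median(latencies)
    print(f"median latency: {median_ms:.1f} ms (budget: {LATENCY_BUDGET_MS} ms)")
    # A non-zero exit code fails the CI stage, so a regression is caught on
    # the commit that introduced it rather than at the final end-to-end test.
    if median_ms > LATENCY_BUDGET_MS:
        raise SystemExit(1)

if __name__ == "__main__":
    main()
```

Run on every commit, a check like this turns the scary end-of-cycle test into exactly the routine blood test described above.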

2. Distinguish between SLA, SLO, and SLI

When most companies think about defining how software or software services need to work, they think in terms of the Service Level Agreement (SLA). A well-defined SLA is essential; otherwise, there’s no reliable way for a company to operate as a service provider or consumer. From a provider’s standpoint, a missing SLA means that customers can demand anything, anytime (and expect to get it). Seen through the consumer’s lens, a missing SLA puts technical staff at risk for hours as they sit on hold trying to reach mission-critical support while their company’s digital infrastructure comes tumbling down.

SLAs are necessary, yet they only capture part of the picture. More is needed. Adding in Service Level Objectives (SLO) and Service Level Indicators (SLI) provides the additional information necessary to ensure that the partnership between providers and consumers meets the needs of all.

An SLA describes the commitment between a provider and a consumer; its complexity will vary according to the needs of each party. The SLA defines the responsibilities of each side and the availability of the service. The agreement is only as good as the definitions of the objectives behind it and the metrics by which service level will be determined. This is where SLOs and SLIs become important.

Establishing an SLA is a lot easier when an SLO is defined ahead of time. Understanding what a consumer wants from the code or service provides the insight necessary to affirm that the service being offered meets the expectation. A well-defined SLO will help craft an SLA that makes sense for everyone.

For an SLA to effectively meet the objectives stated in the SLO, a standard, well-understood set of metrics must be defined. Otherwise, companies run the risk of comparing apples to oranges. Hence the value of the SLI. Service Level Indicators provide the details of how operational performance will be measured in terms of the SLA. The more exact the SLI, the better.
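To make that concrete, here is a minimal sketch (an illustration, not from the original post) of computing two common SLIs, 95th-percentile latency and error rate, from raw measurements. The observations are invented sample data for the example.

```python
# sli.py -- a minimal sketch of computing two common SLIs from raw
# measurements: 95th-percentile latency and error rate. The observations
# below are invented sample data for illustration.
import math

def percentile(values: list[float], pct: float) -> float:
    """Nearest-rank percentile of a list of values."""
    ordered = sorted(values)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# Raw observations from a test run: (latency in ms, HTTP status code).
observations = [(120, 200), (95, 200), (260, 200), (88, 200), (140, 200),
                (97, 200), (110, 200), (180, 200), (133, 200), (105, 200)]

latencies = [latency for latency, _ in observations]
failures = [status for _, status in observations if status >= 500]

p95_latency_ms = percentile(latencies, 95)      # SLI #1: tail latency
error_rate = len(failures) / len(observations)  # SLI #2: error rate

print(f"p95 latency: {p95_latency_ms} ms, error rate: {error_rate:.1%}")
```

The precision matters: “p95 latency over a defined measurement window” is a usable SLI; “the service should be fast” is not.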

SLOs and SLIs take the assumptions out of the SLA construct: the needs and conditions of operation, and the way they will be measured, are defined explicitly. It’s the difference between “I need something to drink” and “I am thirsty and need 8 oz of water to satisfy my thirst.”

An SLA based on a well-defined SLO, then measured according to the set of metrics that result from a detailed SLI, benefits general operations as well as the testing process. Tests that verify the SLA to the letter, against the clearly defined parameters described in the SLI, provide greater accuracy and more reliable analysis than improvisation ever could.
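In a CI pipeline, those SLIs can be asserted directly against the SLO targets, so a breach fails the build instead of being argued about after release. The sketch below uses pytest-style tests and imports the SLIs from the previous sketch; the target numbers are assumptions for illustration, not real objectives.

```python
# test_slo.py -- a minimal sketch of gating a CI pipeline on SLO targets.
# The targets below are illustrative assumptions, not a real agreement.
from sli import p95_latency_ms, error_rate  # SLIs from the previous sketch

SLO_P95_LATENCY_MS = 300   # "95% of requests complete within 300 ms"
SLO_MAX_ERROR_RATE = 0.01  # "no more than 1% of requests may fail"

def test_p95_latency_meets_slo():
    assert p95_latency_ms <= SLO_P95_LATENCY_MS, (
        f"p95 latency {p95_latency_ms} ms exceeds the {SLO_P95_LATENCY_MS} ms SLO"
    )

def test_error_rate_meets_slo():
    assert error_rate <= SLO_MAX_ERROR_RATE, (
        f"error rate {error_rate:.1%} exceeds the {SLO_MAX_ERROR_RATE:.0%} SLO"
    )
```

Wired into the pipeline (for example, `pytest test_slo.py`), a failing assertion stops the build at the exact commit that violated the objective.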

Distinguishing between SLA, SLO, and SLI will go a long way toward creating meaningful, reliable relationships among the parties that the software touches – developers, testers, and users alike.

 

Next week, in the second installment of this three-part blog series, we will dive into tips 3-6.

Learn More about Automated Performance Testing

Discover more load testing, performance testing, and automation testing content on the Neotys Resources pages, or download the latest version of NeoLoad and start testing today.

 

Paul Bruce
Sr. Performance Engineer

Paul’s expertise includes cloud management, API design and experience, continuous testing (at scale), and organizational learning frameworks. He writes, listens, and teaches about software delivery patterns in enterprises and critical industries around the world.
