10 Tips to Improve Automated Performance Testing within CI Pipelines, Part 3

The following is the final piece of our three-part series, “10 Tips to Improve Automated Performance Testing within CI Pipelines.” This week’s installment focuses on tips 7-10.

In case you missed them, or if you’d like to go back and re-read, here are parts one and two.

7. Leverage the use of your source control management system

Source code management (SCM) systems are no longer just a place to store code under version control. They are a key enabler of repeatable, verifiable deployment, provided that all automation artifacts contributing to the process are included.

Today, the CI/CD integration features that modern SCMs such as GitHub, Bitbucket, Microsoft Team Foundation Server, and AWS CodeCommit offer can use a single check-in event to kick off a full build/test/deploy process with no human intervention. Today’s SCM systems can also be integrated into workflow management systems so that all parts of the business have visibility into development status and testing results at every level.
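As a rough illustration of this kind of check-in-triggered automation, here is a minimal sketch of a webhook receiver that kicks off a pipeline run when the SCM reports a push. The endpoint, payload field names, and `run_pipeline.sh` script are assumptions for illustration; real setups would typically rely on the SCM’s built-in pipeline features or a CI server, plus payload verification.

```python
# Minimal sketch: a check-in (push) webhook that triggers a build/test run.
# The payload field names and run_pipeline.sh are hypothetical placeholders.
import json
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

class PushHookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        branch = payload.get("ref", "unknown")           # e.g. "refs/heads/main"
        # Kick off the full build/test/deploy process with no human intervention.
        subprocess.Popen(["./run_pipeline.sh", branch])  # hypothetical pipeline entry point
        self.send_response(202)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), PushHookHandler).serve_forever()
```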

Traditionally, performance testing suites and results were siloed in tools and platforms that made collaboration with development teams difficult. With an Agile and DevOps mindset, making it easy to re-run performance tests to understand required fixes calls for tools that fit into your existing engineering practices and technology landscape. If everyone else in your organization uses Git (for instance) as the source of truth, your performance practices should follow suit.

When leveraged correctly, modern source control management systems save time and improve hand-offs between human and automated processes. The trick is to invest the effort to learn the capabilities your SCM system offers and to put concrete goals in place for implementing the SCM workflows that will benefit your company. As your code and tests change, branches and tags help teams test only the changes within a particular sprint or cycle, saving time and accelerating critical feedback before releases, as the sketch below illustrates.
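Here is what that can look like in practice: a minimal sketch that uses Git to find what changed since the last release tag and selects only the performance scenarios those changes touch. The directory-to-scenario mapping and scenario names are invented for illustration.

```python
# Sketch: run only the performance scenarios touched since the last release tag.
# The directory-to-scenario mapping and scenario names are hypothetical.
import subprocess

SCENARIO_MAP = {
    "checkout/": "checkout_load_test",
    "search/": "search_load_test",
    "api/": "api_baseline_test",
}

def changed_files(since_tag: str) -> list[str]:
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{since_tag}..HEAD"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()

def scenarios_to_run(since_tag: str) -> set[str]:
    needed = set()
    for path in changed_files(since_tag):
        for prefix, scenario in SCENARIO_MAP.items():
            if path.startswith(prefix):
                needed.add(scenario)
    return needed

if __name__ == "__main__":
    print(scenarios_to_run("v1.4.0"))  # e.g. {'checkout_load_test'}
```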

8. An expectation without a feedback loop ain’t

Testing is about managing expectations. You exercise pieces of code in a controlled manner and get results. You then compare the results of the test to a set of predefined criteria. Hopefully, the outcomes meet the expectations. If not, it’s back to the drawing board.

Seems simple enough, right? Just keep in mind that a test is only as good as the observable data it produces. The operative word being “observable.”

Sadly, it’s not uncommon for test engineers to go through the effort of creating a set of test expectations only to find out there is no way to capture the required information. A test engineering process can map out exactly how the code is expected to perform, but without a feedback loop documenting the results, those expectations can never be confirmed.

So, what’s to be done?

Firstly, you must follow a fundamental assertion of Agile: working together is essential. It’s critical that all engineers (developers, operations, and test practitioners) collaborate from the time the first line of code is written to ensure that it will be observable from a testing perspective. Visibility into “public” entry points such as an API URL is rarely a problem. Where observation becomes vexing is at lower levels, internally, between components and microservices. In these cases, internally reported telemetry and tools that allow distributed tracing are indispensable.

A key outcome of effective performance planning is the creation of, and agreement on, clear Service Level Objectives (SLOs). Even if one engineer can say, “I’m going to put monitors on all servers to observe results,” they rarely have the authority, let alone the expertise, to follow through. It has to be a team effort. SLOs are concrete expectations of system performance that map organizational risk to observable metrics (service level indicators) that can be seen and verified. Performance plans can then be put in place using SLOs, simplifying the requirements for in-product telemetry, load testing, and systems monitoring in production.
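To show how an SLO becomes something a pipeline can verify automatically, here is a minimal sketch. The metric names, thresholds, and measured values are invented for illustration; in a real pipeline the measured SLIs would come from your load testing or monitoring tooling.

```python
# Sketch: compare measured SLIs against SLO targets and fail the pipeline on breach.
# Thresholds and metric values here are invented for illustration.
import sys

SLOS = {
    "p95_latency_ms": 800,     # 95th percentile response time must stay under 800 ms
    "error_rate_pct": 1.0,     # error rate must stay under 1%
}

# In a real pipeline these would come from your load testing or monitoring tool.
measured_slis = {
    "p95_latency_ms": 720,
    "error_rate_pct": 0.4,
}

breaches = {
    name: (value, SLOS[name])
    for name, value in measured_slis.items()
    if value > SLOS[name]
}

if breaches:
    for name, (value, target) in breaches.items():
        print(f"SLO breach: {name} = {value} (target <= {target})")
    sys.exit(1)  # non-zero exit fails the CI stage
print("All SLOs met.")
```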

Creating useful feedback loops by which test expectations can be confirmed is an essential requirement for any software deployment process. The faster the delivery cycles become, the more imperative it is to automate these feedback loops and make real-time results available to all stakeholders. The first and most critical step toward a healthy continuous performance practice is to state expectations in terms of SLOs, then implement them with processes and technologies that make it easy to adapt as expectations and objectives improve and your practices mature.

9. A known infrastructure is a testable infrastructure

You’d be surprised how many operational details of mission-critical software systems are known only by way of common knowledge. It’s woeful how much of a company’s digital assets (its software) is under-documented. Few intend to make the workings of their systems unknown. Mostly it’s due to the enormous demands that IT departments face to fix current issues or to get new features out the door yesterday.

While a lack of formal documentation is a hindrance for those trying to perform fixes or conduct upgrades, missing information can stop the show for test engineering activities and release-readiness.

A specific example from performance testing is that engineers should always start with an understanding of the architecture (diagrams if possible) and access to any existing operational data available. Without a formal and verifiable way to determine how a system works, engineers are merely guessing. Testing via guesswork produces inconsistent, unreliable, and non-reproducible outcomes, and for those reasons is a waste of time.

There is a valid argument to be made that no system ever has 100% observability. Even so, a company can do several things to make its infrastructure well known. For example, all code and configuration should be documented, and documentation coverage should be part of formal policy, verifiable during check-in by the SCM system. Companies should maintain a wiki that allows all personnel to add qualified system operation information. Also, integrating source code management with project management systems, such as integrating GitHub with Jira, creates an audit trail for determining how a system evolved.
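As one way such a documentation policy could be enforced at check-in time, here is a minimal sketch of a pre-commit or CI gate. The rule it applies (every changed Python module must carry a docstring) is purely illustrative and not a built-in feature of any SCM.

```python
# Sketch: a documentation-coverage gate run as a pre-commit hook or CI step.
# The rule (every changed .py file must start with a module docstring) is illustrative only.
import ast
import subprocess
import sys

def staged_python_files() -> list[str]:
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=AM"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f.endswith(".py")]

undocumented = []
for path in staged_python_files():
    with open(path, encoding="utf-8") as fh:
        tree = ast.parse(fh.read())
    if not ast.get_docstring(tree):
        undocumented.append(path)

if undocumented:
    print("Missing module docstrings:", ", ".join(undocumented))
    sys.exit(1)  # block the check-in until documentation is added
```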

Another example of making a system well known at the infrastructure level is implementing service discovery mechanisms that reveal the operational details of private and public APIs. Additionally, Application Performance Monitoring (APM) solutions aren’t just for production; they’re highly valuable in staging and other pre-release environments as well. Most large organizations are already doing this, and responsible engineers are requesting access to this data as part of testing and analysis processes where it isn’t already the cultural norm. There should also be a process for providing contractors and third parties with access to these observable elements, but make sure that access is aligned to work plans and schedules and is de-provisioned automatically based on a known, agreed-upon scope.
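As an illustration of how service discovery supports testability, here is a minimal sketch that asks a discovery endpoint for its catalog of services and turns the answer into a test target list. The URL and response shape are assumptions (loosely modeled on a Consul-style catalog); substitute whatever discovery mechanism your infrastructure actually exposes.

```python
# Sketch: build a performance test target list from a service discovery catalog.
# The endpoint URL and response shape are assumptions (loosely Consul-style).
import json
from urllib.request import urlopen

DISCOVERY_URL = "http://localhost:8500/v1/catalog/services"  # assumed endpoint

with urlopen(DISCOVERY_URL) as resp:
    services = json.load(resp)   # e.g. {"checkout": ["v2"], "search": ["internal"]}

targets = sorted(services.keys())
print("Services visible for testing:", targets)
```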

Ensuring that a system is “knowable” is a shared responsibility among product owners, architects, developers, and test and operations engineers. No one person can make this happen alone. It’s the result of a collective effort: the more well known the environment, the more comprehensive the testing.

10. Work with your CI/CD pipeline, not against it

Forward-thinking companies rely on automated Continuous Integration and Continuous Deployment systems (CI/CD) to get work done. They have no choice. The demands of the modern marketplace require that companies release software at breakneck speeds beyond the manual human capabilities of the past. The competitive company is always refactoring its release processes toward more effective automation. But, there can be roadblocks when these organizations work against the CI/CD pipeline.

The trick to effectively implementing automation in the CI/CD pipeline is to ensure the processes it’s going to support are consistent. A haphazard release process, with no reliable way to verify code quality, functional integrity, or systemic coherence, will not improve with automation. Consider this: when deployment phases are all over the map, system versioning between environments changes unexpectedly, no monitoring safeguards or version control are in place, and hotfixes are more the norm than the exception, your problems are not going to be solved through automation. Automation might make things worse.

Your CI/CD pipeline will only be as good as the underlying systems and processes that support it. Test engineers: get to know your CI technologies if you don’t already. If other teams provide infrastructure and release engineering assistance, get familiar with how they work. If your SRE or “DevOps” teams work in sprints, don’t expect them to help you firefight issues on the fly. Treat your automation work like any other project: plan, communicate, and obtain resource approvals before rushing into it. Once you work out the initial approach, expect revision. Document your process and make it something that someone else could efficiently use. When the process is consistent, an effective CI/CD pipeline will follow.

A final note on implementing performance testing as part of a continuous pipeline: start small, test frequently. 80% of your performance problems show up at low volumes, but the remaining 20% become increasingly imperative to resolve under pressure once the obvious issues are addressed. For pipelines, a small performance test run on a nightly schedule (or, even better, after any major merge or new build variant is produced) provides longitudinal data that teams can use as a baseline in stand-up meetings to see how technical debt accrues over time. Last week’s spike-development sprint may have missed that all-too-critical stabilization phase needed to verify that scalability and reliability expectations will be met. Let automated performance testing in CI be your early beacon so you can fix the obvious performance issues that naturally flow from feature and maintenance work.
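A minimal sketch of that longitudinal baseline idea follows; the history file, the single p95 latency metric, and the 15% regression threshold are assumptions chosen for illustration.

```python
# Sketch: track nightly performance results against a rolling baseline.
# The metrics file, metric name, and 15% regression threshold are illustrative assumptions.
import json
import statistics
import sys
from pathlib import Path

HISTORY_FILE = Path("perf_history.json")
REGRESSION_THRESHOLD = 1.15  # flag runs more than 15% slower than the rolling baseline

def record_and_check(p95_latency_ms: float) -> None:
    history = json.loads(HISTORY_FILE.read_text()) if HISTORY_FILE.exists() else []
    if len(history) >= 5:
        baseline = statistics.median(history[-10:])  # rolling baseline from recent runs
        if p95_latency_ms > baseline * REGRESSION_THRESHOLD:
            print(f"Regression: p95 {p95_latency_ms} ms vs baseline {baseline:.0f} ms")
            sys.exit(1)  # fail the nightly stage so the trend is visible at stand-up
    history.append(p95_latency_ms)
    HISTORY_FILE.write_text(json.dumps(history))
    print("Nightly result recorded.")

if __name__ == "__main__":
    record_and_check(float(sys.argv[1]))  # e.g. python nightly_check.py 742
```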

So, there you have it: the 10 tips to improve automated performance testing within CI pipelines. Performance testing is a critical part of any company’s software development lifecycle. When done effectively, it results in significant time and dollar savings while producing better software.

Learn More about Automated Performance Testing

Discover more load testing, performance testing, and automation testing content on the Neotys Resources pages, or download the latest version of NeoLoad and start testing today.

 

Paul Bruce
Sr. Performance Engineer

Paul’s expertise includes cloud management, API design and experience, continuous testing (at scale), and organizational learning frameworks. He writes, listens, and teaches about software delivery patterns in enterprises and critical industries around the world.
