Accelerating Performance Analysis with Trends and Open API Data

This article is the third in a multi-part series on how to establish performance engineering practices that produce actionable performance analysis from automated pipelines.

TL;DR

  • Once performance test automation is reliable, it produces a critical longitudinal view
  • Open API based performance data improves human and pipeline-based workflows
  • Performance data must be easily consumable and domain-specific

Distributed Systems are Painful

The immediacy of feedback plays a crucial role in behavioral psychology. In her 1984 book, “Don’t Shoot the Dog,” Karen Pryor elaborates on how valuable positive feedback can be when applied sooner rather than later. Conversely, delayed feedback is often ineffective at improving situations and can easily have unanticipated, out-of-context adverse effects.

Similarly, software teams building distributed systems suffer from “signal delay” in their feedback loops about the complex behaviors of the software and infrastructure they deploy. Often, non-functional (or, as I now call them, “holistic”) requirements are ignored due to:

  • Pressing demands for higher velocity
  • Cost and time to create and maintain environments suitable for verifying these requirements
  • A general lack of expertise

One psychological specter remains, feeding teams the doubt and justifications for ignoring what they will ultimately face: delayed pain. Consider a cloud-based microservices example: late-stage performance engineering. Leaving performance criteria out of planning processes and running big-bang load tests only at the end of a project leaves teams in a world of hurt. Non-trivial issues like cross-zone latencies, misconfigured timeouts and circuit breakers, payload bloat due to non-optimal queries, and poorly understood storage subsystems regularly leave development and operations teams scrambling to fix architectural issues.

Bring the Pain Forward

In many of our field experiences, customers who suffer less from these types of painful situations internalize the importance of making performance engineering a part of their team culture. Be it through dedicated expertise available to product groups, shrinking batch sizes so that fault isolation is easier in continuous delivery, or breaking traditional monolithic notions of testing into activities that match the clock speeds of development and release cycles, one common theme exists: automate what can be automated and verify frequently.

For modern performance engineering, the goal is to produce a regular stream of information about how systems perform so that when performance degrades unexpectedly, it is identified and resolved quickly. This often means load testing at various volumes in multiple environments using a continuous integration orchestrator like Jenkins, GitLab, or CircleCI. The choice of which tests to run depends on risk factors in the System Under Test (SUT), the time requirements of test design and execution, and the frequency of code and environment configuration changes.

Sometimes automating performance testing isn’t straightforward, especially if it’s your first time. Neotys works with organizations of all sizes and savvy. Despite every organization having different objectives and appetites for where fast feedback loops are applied, common outcomes include observability improvement through real-time performance testing results and historical trending.

Once performance automation is reliable and easy to pay attention to, teams feel more comfortable expanding performance results into automated go/no-go decision processes to further reduce unnecessary manual effort. Having a bigger-picture view of how systems perform over time and how performance criteria are met across a portfolio of projects helps dissolve late-cycle pain, reinforcing the value of early and often performance validation.

Example: Longitudinal View of API Performance

As an API team evolves project code to add endpoints, adjust existing back-end queries and change deployment configurations, low-level load tests in non-production-like environments provide early warning indicators when performance doesn’t meet SLAs. Key data points for API performance testing are:

  • Overall test pass/fail
  • Requests per second (RPS)
  • Throughput (Mbps)
  • SLA failures

See below for an example of this in NeoLoad Web, which displays results of nightly load testing where critical API endpoints and workflows provide an easily accessible view of performance over time:

Additionally, each of these test results has specific SLAs, further clarifying what meets vs. falls short of expectations.

Importantly, all this data is available via a RESTful API with NeoLoad Web. Secure and open access to system data are critical capabilities of any product that works well within the modern IT landscape. Beyond information transparency, Open APIs allow teams to optimize how their processes work within a safety net of standard interfaces.

Whether deploying this yourself or using our SaaS-based hosted version, your performance testing data can be aggregated and consolidated into whatever form you need to begin building an automated go/no-go decision process suitable for your continuous pipelines. NeoLoad results boil down to a few common structures:
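As a rough illustration of consuming those structures, the sketch below pulls a test-statistics payload over HTTP and boils it down to the key data points listed earlier. The base URL, header name, and field names (`qualityStatus`, `totalRequestCountPerSecond`, `totalGlobalDownloadedBytesPerSecond`) are assumptions for illustration; check the NeoLoad Web API documentation for the exact resource shapes.

```python
import json
import urllib.request

# Assumed NeoLoad Web API base URL -- substitute your own deployment's.
BASE_URL = "https://neoload-api.saas.neotys.com/v1"

def fetch_json(path, token):
    """GET a NeoLoad Web API resource and decode the JSON body.
    (Header name 'accountToken' is an assumption for illustration.)"""
    req = urllib.request.Request(
        BASE_URL + path,
        headers={"accountToken": token, "accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def summarize(stats):
    """Boil a test-statistics payload down to the key data points:
    pass/fail, requests per second, and throughput in Mbps."""
    return {
        "passed": stats.get("qualityStatus") == "PASSED",
        "rps": stats.get("totalRequestCountPerSecond"),
        "throughput_mbps": stats.get("totalGlobalDownloadedBytesPerSecond", 0) * 8 / 1e6,
    }

# Example payload shaped like (an assumption about) /tests/{testId}/statistics:
sample = {
    "qualityStatus": "PASSED",
    "totalRequestCountPerSecond": 182.4,
    "totalGlobalDownloadedBytesPerSecond": 250000.0,
}
print(summarize(sample))
```

From here, the same few lines feed a dashboard, a trend database, or a pipeline gate equally well.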

Improving Pipeline Throughput with Automated Performance “Go/No-Go”

Automated pipelines lacking stages for minimum-viable verification and validation checks don’t tell the whole story. They look green but leave critical details of reliable software delivery to be dealt with far later in production when it’s near impossible to trace back to contributing factors. 

If you can’t fit something in a pipeline, it’s an excellent opportunity to ask, “why doesn’t this fit?” Not everything imaginable can be automated, but that’s no excuse not to try hard. What is automated will minimize the toil. If it proves incredibly difficult, try a few approaches, document the process, automate as much as possible and come back with fresh ideas later. When you begin to fit performance testing into an automated pipeline, you learn:

  1. Which areas of your apps/services have obvious and repeatable errors
  2. Which tests are worth running (providing value) and which are not
  3. Which systems are unnecessarily complicated and cause anti-patterns and flakiness
  4. Which supportive environments (including data) are unstable/unreliable
  5. Which contributors demonstrate critical thinking vs. writing more code

Case Study: Pipeline-based Automated “Go/No-Go” for DevOps Customers

Back in September 2018, we worked with Panera Bread’s performance and automation teams to build a custom go/no-go process in Jenkins pipelines that worked with their particular requirements, some of which included:

  • The production of a human-readable, archivable performance comparison report that highlighted percentile variations beyond predetermined ranges (+/- 5 or 10 percent) for: 
    • Specific test workflow transactions (i.e., Login, Add to Cart, Checkout, etc.)
    • System indicators (JVM garbage collection counts, thread pool sizes, etc.)
    • Specific API requests
  • The output of a list of violations based on the above constraints in a format easily consumable by Jenkins
    • JSON comparison data that indicated go/no-go if any violations exist
    • Custom range and violation rules written in Node.js (for the sake of skills ubiquity)
  • A process pattern for persisting particular test results for a project as the baseline for future job runs (until a new benchmark on that project is established) 
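A minimal sketch of the comparison logic behind such a report (written here in Python rather than the Node.js used in the actual project, with hypothetical transaction names and values) might look like:

```python
import json

TOLERANCE_PCT = 10.0  # predetermined range; the project used +/- 5 or 10 percent

def compare_to_baseline(baseline, current, tolerance_pct=TOLERANCE_PCT):
    """Compare per-transaction percentile values against a baseline and
    collect violations where variation exceeds the tolerance."""
    violations = []
    for name, base_value in baseline.items():
        cur_value = current.get(name)
        if cur_value is None or base_value == 0:
            continue  # skip transactions missing from this run
        variation_pct = (cur_value - base_value) / base_value * 100.0
        if abs(variation_pct) > tolerance_pct:
            violations.append({
                "transaction": name,
                "baseline": base_value,
                "current": cur_value,
                "variation_pct": round(variation_pct, 1),
            })
    # JSON comparison data indicating go/no-go if any violations exist
    return {"go": not violations, "violations": violations}

# Hypothetical 95th-percentile response times (ms) per workflow transaction:
baseline = {"Login": 220.0, "Add to Cart": 340.0, "Checkout": 510.0}
current  = {"Login": 231.0, "Add to Cart": 405.0, "Checkout": 498.0}
print(json.dumps(compare_to_baseline(baseline, current), indent=2))
```

The JSON output is trivially consumable by a Jenkins stage, which fails the build whenever `"go"` is false.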

The goal of this work was to enable product teams to easily add performance testing as a high-level requirement using an automation flag, then based on Groovy libraries/naming conventions, run existing performance tests regularly and automatically manage baselines used in continuous comparisons to highlight unexpected performance changes in frequent CI builds.

The generic code for the comparison report is on GitHub. However, it only highlights an example of what you can do with a proper enterprise performance testing platform that is transparent and built with extensibility in mind. Since they also had custom rules for maintaining and automatically rolling over successful tests as the new baseline under specific conditions, we came up with a primary sequence for the process:
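One way to sketch such a sequence: seed the baseline on the first run, block on any SLA failure or out-of-range variation, and roll the baseline over on a clean pass. The exact rollover conditions below are illustrative assumptions, not the customer's actual rules:

```python
def next_baseline_action(has_baseline, result):
    """Decide what to do with a finished test result.

    result: dict with 'passed' (overall SLA status) and
            'go' (baseline-comparison verdict).
    Returns one of: 'store-as-baseline', 'no-go', 'roll-over-baseline'.
    """
    if not has_baseline:
        return "store-as-baseline"   # first run on a project seeds the baseline
    if not result["passed"]:
        return "no-go"               # SLA failures always block the deployment
    if not result["go"]:
        return "no-go"               # out-of-range variation blocks, baseline kept
    return "roll-over-baseline"      # a clean pass becomes the new benchmark
```

Persisting the chosen baseline per project (until a new benchmark is established) is then a matter of pipeline storage, such as archived artifacts.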

With this simple mechanism, they were able to build standard Jenkins shared libraries to simplify logistics in individual project pipeline scripts, ultimately rolling up all performance semantics into a single project opt-in flag. Leveraging NeoLoad test suites in a project subdirectory, these pipelines ran various load tests against the baseline, using any out-of-bounds variances and violations to stop the deployment process and signal engineers to investigate.

Other examples of customers using the NeoLoad APIs include heatmaps of pass/fail over time across project portfolios, custom trending based on a set of key transactions in and across systems, and even simple self-service portals to help development teams see how critical modules are performing, like so:

Reduce Cognitive Thrash in Performance Data with SLAs

Performance data must be easily consumable and domain-specific. The more data, graphs, and charts you have to look at, and the more often you have to look, the more likely you are to become desensitized to essential indicators. Many teams making the switch from other monolithic performance products to the NeoLoad platform have heard from their developers that a big, impersonal PDF report of load testing results from a load testing tool they don’t use doesn’t help them. Digging through statistics to get to meaningful insights shouldn’t be what product teams spend their time on.

In some organizations, raw test results continue to be piped through an “analysis process,” usually a manual set of export/import steps via semi-intelligent spreadsheets requiring the copying and pasting of charts and graphs into a final slide deck (not easily machine-readable). Although elements of this process are sometimes necessary, particularly when there’s a recurring anomaly (in mostly automated processes), doing this every time for each new build is just insanity.

A better approach is to break the process apart (like any good engineer does), identify which parts are easiest to automate, which parts take the most time, and those blocking the automation of the full end-to-end process. As you do this, you learn where critical context differs by project, how to extract that context early, and how to let people know the moment performance within that context deviates from expected norms.

An important part of performance testing is to establish SLAs and imbue them into your load tests. Each environment may have different variances and therefore different SLAs, but still, NeoLoad makes it easy to express SLAs as code:

Unlike functional tests, which can be thrown off by occasional performance outliers, NeoLoad SLAs can be configured to be calculated per interval or per test, providing flexibility in when they trigger.
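To illustrate why per-interval calculation matters, the sketch below (illustrative thresholds and windowing, not NeoLoad's internal algorithm) shows a sustained late spike that a whole-test average would hide:

```python
def sla_per_test(values, threshold):
    """Whole-test SLA: compare the overall average to the threshold."""
    return sum(values) / len(values) <= threshold

def sla_per_interval(values, threshold, interval=5):
    """Per-interval SLA: average each window separately, so a sustained
    breach triggers even when the overall average still looks healthy."""
    for i in range(0, len(values), interval):
        window = values[i:i + interval]
        if sum(window) / len(window) > threshold:
            return False
    return True

# Response times (ms) with a sustained spike at the end of the test:
samples = [200] * 20 + [900] * 5
print(sla_per_test(samples, threshold=400))      # overall average is 340 ms
print(sla_per_interval(samples, threshold=400))  # last window averages 900 ms
```

The whole-test check passes while the per-interval check fails, which is exactly the triggering-time flexibility you want for catching degradation windows.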

Once these SLAs are layered together with other elements, whether in a graphical designer or written in YAML, their pass/fail and threshold data will be available (after a test is executed) from the NeoLoad APIs via the [/tests/{testId}/slas/…] endpoints.

Another cool feature of the NeoLoad APIs is that they include graphing endpoints, so you can generate meaningful images for an automated notification (e.g., in Slack) and customize the graph’s data:

… produces the PNG output…

Quickly, you can turn what used to be a complex analysis process into more discrete automatic notifications to product teams when SLAs are violated during performance tests.
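A rough sketch of that notification flow, using a standard Slack incoming webhook. The graph endpoint path and its query parameter names below are made up for illustration; consult the NeoLoad API documentation for the real ones:

```python
import json
import urllib.parse
import urllib.request

def graph_png_url(base_url, test_id, params):
    """Build a graph-endpoint URL. The '/graph' path and the query
    parameter names are assumptions, not documented NeoLoad API names."""
    query = urllib.parse.urlencode(params)
    return f"{base_url}/tests/{test_id}/graph?{query}"

def notify_slack(webhook_url, text, image_url):
    """POST a simple Slack incoming-webhook payload linking the graph PNG."""
    payload = {"text": text, "attachments": [{"image_url": image_url}]}
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

url = graph_png_url(
    "https://neoload-api.saas.neotys.com/v1", "abc123",
    {"series": "avg-response-time", "format": "png"},
)
print(url)
# notify_slack(os.environ["SLACK_WEBHOOK"], "SLA violated in nightly run", url)
```

Wired into the go/no-go stage, this means a product team hears about an SLA breach minutes after the test finishes rather than days later in a slide deck.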

Next Step: Schedule a Quick Call with our Team

No matter where you are on your Agile, DevOps, or otherwise overgrown software forest journey, the Neotys team is interested in helping you shape the performance practices of your organization. Find an engineer and an hour or so, download NeoLoad, and contact us to figure out how to get your performance ball rolling in a better direction.

Learn More about Performance Analysis

Discover more load testing and performance testing content on the Neotys Resources pages, or start testing with NeoLoad today.

 

Paul Bruce
Sr. Performance Engineer

Paul’s expertise includes cloud management, API design and experience, continuous testing (at scale), and organizational learning frameworks. He writes, listens, and teaches about software delivery patterns in enterprises and critical industries around the world.
