2019 Performance Testing Trends Part One

Performance testing continues to evolve to meet the growing demands of the modern enterprise. It’s no secret that the industry has moved on from its predominantly manual roots (rooms full of QA personnel administering tests) to a world shaped by automation. The application types and systems under test have changed as well. Further, the proliferation of mobile and IoT devices, with their mix of human-to-machine and machine-to-machine interaction, has altered the business application testing process. As testing environments change, so too do the ways test practitioners approach their to-do lists. We see several trends on the 2019 horizon that will influence the theory and practice of performance testing. These trends are:

  • As Shift Left continues, more Devs will be working on performance tests
  • Increased reliance on defining test processes as code to create efficiency in CI/CD pipeline orchestration
  • Increased use of APM to support Shift Right performance monitoring
  • Greater repurposing of functional testing assets
  • Increased use of AI and APM to fully automate test processes
  • Continued support for the bimodal enterprise
  • A broader approach to testing Enterprise Resource Planning applications such as SAP

In this article, we’re going to take a detailed look at the first four trends above. Next week, we will dive into the remaining items.

As Shift Left Continues, More Devs are Working on Performance Tests

Forward-thinking IT departments have already recognized the value of the adage “test early, test often.” CI/CD continues to increase the speed at which software is released. To keep costs down as code moves toward the right side of the deployment process (the release side), testing must happen earlier, on the left side, while developers are still writing code. The result of this shift is that developers now carry a testing responsibility.

Historically, most Shift Left developer testing has focused on unit testing and some functional/integration testing. As self-service environment provisioning becomes more commonplace in IT departments, developers must become more active in performance testing, ensuring that the environments they create perform according to expectation. As the saying goes, with great power comes great responsibility.

Increased Reliance on Defining Test Processes as Code to Create Efficiency in CI/CD Pipeline Orchestration

These days, more computing is becoming ephemeral. In ephemeral computing, virtual resources are created on demand via automation, so it’s possible to spin up enormously powerful computing instances in a surprisingly cost-effective manner. While it might cost thousands of dollars to run a 64-core, 256 GB VM on AWS for a month, running one for a few minutes at a time costs a trivial amount; the value is hard to ignore.
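The month-versus-minutes economics above can be made concrete with a little arithmetic. The hourly rate below is a made-up placeholder, not a real AWS price:

```python
# Illustrative only: the hourly rate is an assumed placeholder, not a quoted AWS price.
hourly_rate = 4.00                        # dollars per hour for a large VM (assumed)
monthly_cost = hourly_rate * 24 * 30      # running continuously for 30 days
five_minute_cost = hourly_rate * (5 / 60) # running for a single 5-minute test

print(round(monthly_cost, 2))      # 2880.0
print(round(five_minute_cost, 2))  # 0.33
```

Even at a few dollars an hour, an always-on instance costs thousands per month, while an ephemeral five-minute run costs pennies.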

As with any computing resource, these temporary instances cannot go untested. In the past, one test script might have suited any number of computing scenarios. In today’s ever-changing environments, things are different: testing needs to be as dynamic as the environments under test. The solution is to take an infrastructure-as-code approach to testing.

In the infrastructure-as-code paradigm, all computing assets, virtual hardware, and applications are represented as software and can be programmed to meet the need of the moment. IT staff create scripts that generate and configure the required virtual environments. These scripts also install and configure the applications, along with the network settings needed to run in those environments. This is fast becoming standard practice in modern IT.

What has been lagging is administering tests with automation beyond early unit testing. IT departments understand that to keep up, automated testing needs to be applied throughout the deployment process, especially during the later stages. The best way to accomplish this is to design test scripts encapsulated as discrete software components/services that can be used as needed in a variety of situations, in a relatively agnostic manner. Just as systems designers treat infrastructure as code to create the virtual environments they need, we’re going to see test designers treat test processes as code to create functional, integration, and performance tests.
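To make “test processes as code” concrete, here is a minimal sketch of a reusable load-test component that a CI/CD pipeline could invoke against any environment it has just provisioned. Everything here is illustrative: the function names are hypothetical, and the network call is stubbed out so the sketch runs standalone.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def timed_request(fetch, url):
    """Call fetch(url) and return the elapsed time in seconds."""
    start = time.perf_counter()
    fetch(url)
    return time.perf_counter() - start

def run_load_test(fetch, url, users=10, requests_per_user=5):
    """A discrete, environment-agnostic test component: drive concurrent
    requests against `url` and return latency statistics."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        futures = [pool.submit(timed_request, fetch, url)
                   for _ in range(users * requests_per_user)]
        latencies = [f.result() for f in futures]
    return {
        "count": len(latencies),
        "p50": statistics.median(latencies),
        "max": max(latencies),
    }

# Stub fetch (simulated 10 ms response) so the sketch runs without a live environment.
result = run_load_test(lambda url: time.sleep(0.01), "https://example.test/api")
print(result["count"])  # 50
```

Because the component takes its target and load profile as parameters, the same code can run against an ephemeral staging instance today and a different one tomorrow, which is the point of treating the test process itself as code.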

Increased use of APM to Support Shift Right Performance Monitoring

Application Performance Monitoring (APM), the practice of using monitoring tools to let IT staff and business personnel observe backend application infrastructure, is how companies ensure that their digital infrastructure is operating to expectation on a 24/7 basis.

In addition to supporting standard system operation, APM tools are becoming components of a variety of testing processes. For example, a growing practice in load testing is to use APM tools to monitor the test environment while testing is being conducted. APM solutions complement the monitoring done by the testing tool, providing more granular metrics at the code/component level, something the standard monitors built into many testing tools cannot achieve.
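The “monitor while the test runs” pattern can be sketched in a few lines. The sampler below is a toy stand-in for an APM agent: `sample_fn` is assumed to return one numeric reading (a real agent would collect CPU, memory, or per-method timings), and here it is faked with a constant so the sketch is self-contained.

```python
import threading
import time

class SideMonitor:
    """Minimal sketch of APM-style background sampling that runs
    alongside a load test. `sample_fn` is an assumed metric source."""
    def __init__(self, sample_fn, interval=0.05):
        self.sample_fn = sample_fn
        self.interval = interval
        self.samples = []
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        while not self._stop.is_set():
            self.samples.append(self.sample_fn())
            self._stop.wait(self.interval)

    def __enter__(self):
        self._thread.start()
        return self

    def __exit__(self, *exc):
        self._stop.set()
        self._thread.join()

with SideMonitor(lambda: 42.0) as mon:  # fake CPU reading, percent
    time.sleep(0.2)                     # the load test would run here

print(max(mon.samples))  # 42.0
```

The load-testing tool reports latencies and throughput; the side monitor correlates those results with what the environment was doing at the same moment, which is the value APM adds during a test.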

Administering performance tests during pre-production is useful as a precautionary measure; companies don’t want to release poorly performing applications into production. Increasingly, though, companies consider the production environment itself to be a viable test platform. As a result, they’re shifting a healthy amount of performance monitoring as far right as possible, into production itself. We see production-centric test practices such as A/B testing and Canary Releases, with finely tuned rollback mechanisms, integrated into the testing process.
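The rollback decision at the heart of a canary release can be expressed very simply. This is a sketch under assumed numbers, not a production-grade rule: real canary analysis typically weighs multiple metrics and statistical significance.

```python
def should_roll_back(baseline_errors, baseline_total,
                     canary_errors, canary_total, tolerance=0.01):
    """Roll the canary back if its error rate exceeds the baseline's
    error rate by more than `tolerance` (an assumed threshold)."""
    baseline_rate = baseline_errors / baseline_total
    canary_rate = canary_errors / canary_total
    return canary_rate > baseline_rate + tolerance

# Canary erring at 5% vs. a 0.1% baseline: roll back.
print(should_roll_back(10, 10000, 50, 1000))  # True
# Canary at 0.1%, same as baseline: keep rolling out.
print(should_roll_back(10, 10000, 1, 1000))   # False
```

The monitoring tools discussed above supply the error counts; the automation layer wires a check like this into the release pipeline so the rollback fires without a human in the loop.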

But these processes are only as good as the monitoring tools watching them. To measure application performance comprehensively in production, companies need a robust set of monitoring tools in place. Without such tools, Shift Right halts at pre-production. Testing only as far as the pre-production stages might have sufficed in the past, but these days more is a necessity. To compete in the modern marketplace, companies are making performance monitoring and alerting a key feature of any production environment.

Greater Repurposing of Functional Testing Assets

Functional testing has always been an essential part of the release process. Some might say it’s the glue that binds a developer’s activities to the broader development process. For instance, a developer’s code can meet expectations when exercised at the unit test level yet produce failing side effects during functional testing.

Functional testing provides the assurance necessary to escalate code along the deployment pipeline.

Automation has played a key role in functional testing at the enterprise level; sheer testing volume has created the demand for it. It’s not unusual for companies to make the code deployment pipeline so fast that the time between executing unit tests and moving the code forward to functional testing is a matter of minutes. Companies have fine-tuned their functional testing assets, tools, and processes to enable such speed.

Furthermore, more companies are repurposing these functional assets against test targets later in the deployment pipeline, leveraging the knowledge, techniques, and expertise acquired through automation to reduce testing overhead and create greater efficiency. For example, companies are using AI to inspect operational system logs and identify the URLs associated with the most frequently called user paths. Automation then makes those paths a priority for testing.
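The prioritization step of that workflow can be sketched without the AI piece: a plain frequency count over (simplified, made-up) access-log lines already surfaces the hottest user paths. A real pipeline would feed such counts into smarter models, but the idea is the same.

```python
import re
from collections import Counter

# Matches the request path in a simplified access-log line (assumed format).
ACCESS_LINE = re.compile(r'"(?:GET|POST) (\S+)')

def top_paths(log_lines, n=3):
    """Count requested URLs in access-log lines and return the n most
    frequent, i.e., the paths to prioritize for testing."""
    counts = Counter()
    for line in log_lines:
        match = ACCESS_LINE.search(line)
        if match:
            counts[match.group(1)] += 1
    return [path for path, _ in counts.most_common(n)]

# Hypothetical sample log lines for illustration.
logs = [
    '10.0.0.1 - - "GET /checkout HTTP/1.1" 200',
    '10.0.0.2 - - "GET /checkout HTTP/1.1" 200',
    '10.0.0.3 - - "POST /login HTTP/1.1" 302',
]
print(top_paths(logs, 2))  # ['/checkout', '/login']
```

Once the hot paths are known, existing functional scripts covering those paths become the natural candidates for reuse in later-stage performance runs.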

APM tools are also being used in functional testing to detect operational side effects. Remember, not all side effects show up as incorrect output. A system can return consistently correct data across the board yet produce unintended performance side effects, creating bottlenecks through extraordinary CPU usage or excessive disk access activity.
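One lightweight way to catch such side effects is to attach a performance budget to a functional check, so a test fails even when the output is correct. The helper below is a sketch with an assumed time-based budget; an APM-backed version would assert on CPU or I/O metrics instead.

```python
import time

def assert_within_budget(fn, budget_seconds):
    """Run `fn`, return its (functionally correct) result, and raise
    if it blew through the performance budget."""
    start = time.perf_counter()
    result = fn()
    elapsed = time.perf_counter() - start
    if elapsed > budget_seconds:
        raise AssertionError(
            f"took {elapsed:.3f}s, budget was {budget_seconds}s")
    return result

# Functionally correct AND within budget: both checks pass.
value = assert_within_budget(lambda: sum(range(1000)), budget_seconds=1.0)
print(value)  # 499500
```

The functional assertion and the performance assertion live in the same test, which is exactly the kind of asset reuse this trend is about.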

Reuse is king. Repurposing functional test assets saves money and increases the overall quality of the code sent to production.

Putting it All Together

Shift Left, testing as code, using more APM further down the release process, and repurposing testing assets for greater efficiency are just four of the performance testing trends making headlines this year. Check back next week as we dive into how the increased use of AI/APM, continued support for bimodal development, and Enterprise Resource Planning software round out the 2019 trend perspective.

Learn More about Performance Testing

Discover more load testing and performance testing content on the Neotys Resources pages, or download the latest version of NeoLoad and start testing today.


Bob Reselman 
Bob Reselman is a nationally-known software developer, system architect, test engineer, technical writer/journalist, and industry analyst. He has held positions as Principal Consultant with the transnational consulting firm, Capgemini and Platform Architect (Consumer) for the computer manufacturer, Gateway. Also, he was CTO for the international trade finance exchange, ITFex.
Bob’s authored four computer programming books and has penned dozens of test engineering/software development industry articles. He lives in Los Angeles and can be found on LinkedIn here, or Twitter at @reselbob. Bob is always interested in talking about testing and software performance and happily responds to emails (tbob@xndev.com).
