What Makes a Great Performance Tester in a DevOps Environment?

Performance Testers are the Ideal Change Agents for Implementing the Digital Transformation

With all this digital transformation going on, testing organizations are in a prime position to drive the process across culture, process, and technology, because their people are already embedded across the entire software development lifecycle. No matter how the technology and details of testing environments change over time, skills such as strong judgment, flexibility, and the desire to learn will always be valued. For performance testers and developers, this can mean having input into critical decisions, such as defining the optimum risk/test-coverage plan or evaluating the ROI of automating a part of the build process that is currently manual.

Great DevOps Teams Take Ownership of Performance

Another key area where performance testers and developers can show their value to the business is in designing applications for optimum performance from the start. In eCommerce use cases, for example, performance optimization and robust application design produce performance resilience: despite poor connections or heavy, unexpected loads, the application can fall back to a lower tier of GUI complexity and still give users at least SOME useful interaction rather than a complete crash.

To take SaaS applications as an example, as developers and performance testers (and as users, too!) we have come to expect ready availability of low-latency, high-bandwidth connections for the front-end application tiers, along with plenty of computing and database resources. However, the best developers and performance testers take the view that “their” applications have to be designed for the real world, not the laboratory.

Effective DevOps leaders take ownership of, and responsibility for, the complete end-user experience of “their” applications. They design in the ability for applications to fail gracefully, even under heavy load or when they encounter under-provisioned or oversubscribed resources at lower tiers. By taking this approach, great performance testers and developers work together to deliver the best possible user experience and contribute to the long-term success of the business.
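To make graceful degradation concrete, here is a minimal Python sketch. The function names, the simulated delay, and the two-second budget are all illustrative assumptions rather than a prescribed design:

```python
import concurrent.futures
import time

# A single shared pool; in a real service this would be sized for the load.
pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)

def fetch_rich_view(account_id):
    time.sleep(3)  # stand-in for a heavy rendering path under load
    return {"account": account_id, "view": "rich"}

def fetch_basic_view(account_id):
    # Lightweight fallback: less interactive, but always available.
    return {"account": account_id, "view": "basic"}

def render_dashboard(account_id, timeout_seconds=2.0):
    # Serve the rich view when resources allow; degrade rather than crash.
    future = pool.submit(fetch_rich_view, account_id)
    try:
        return future.result(timeout=timeout_seconds)
    except concurrent.futures.TimeoutError:
        return fetch_basic_view(account_id)

print(render_dashboard("acct-42"))  # times out, serves the basic view
```

The design point is that the fallback path is cheap and dependency-free, so it stays available precisely when the rich path is struggling.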

Automation is Expanding the Value of Performance Testers to the Business

Working harder on a specific task (e.g., writing scripts to accommodate edge-case scenarios or different versions of a mobile device) doesn’t scale the way that learning about, and investing time in, automation to make the whole testing process faster and easier does. With the ever-increasing velocity of delivery, it is more important than ever to be able to define what is relevant to test in app performance, analyze overall app performance, identify bottlenecks, and feed that data back into the development cycle.

Developers and performance testers often think of automation as automating the tough or tiresome tasks, such as writing scripts that reduce an arduous multi-step test setup. But you might not think about the benefits of automating even the simple tasks. If you set up “one-click” automation to restart a failed test, for example, you would want it to fold in the several manual checks you perform today, such as verifying that dependent database services are available. Even one-line commands can be worth automating if they carry important environment settings or flags you don’t want to forget.
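As an illustrative sketch of that idea (the host name, port, environment flags, and pytest invocation below are placeholders, not a prescribed setup), a one-click restart script might bundle the checks and settings together:

```python
import os
import socket
import subprocess
import sys

def dependency_available(host, port, timeout=3):
    # Cheap availability probe for a dependent service (here, a
    # placeholder database host) before spending time on a full run.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def restart_failed_test(test_name):
    if not dependency_available("test-db.internal", 5432):
        sys.exit("Dependent database unreachable; not restarting the test.")
    # The flags below stand in for the easy-to-forget settings worth
    # baking into even a "one-liner" automation.
    env = {**os.environ, "TEST_ENV": "staging", "CAPTURE_METRICS": "1"}
    subprocess.run(["pytest", test_name, "--maxfail=1"], env=env, check=True)

if __name__ == "__main__":
    restart_failed_test(sys.argv[1])
```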

Automating everywhere you can also makes it easier to plan work and to test that work in a safe environment. Even more important than testing changes safely, you want the work you do in testing to employ the same set of steps you take during the real production event. And be sure to consider the differences between your test and production environments: staging environments tend to have different performance characteristics than production environments, due to differences such as load and volume of data.

New Architectures Require New Testing Approaches

Once you accept the value of higher degrees of automation in performance testing, the sheer scope of automating the entire testing process may seem a bit overwhelming at first, especially if you are considering other strategic shifts as well, such as a move to Microservices architectures.

Microservices architectures are emerging as a response to the shortcomings of traditional monolithic applications, but they come with their own set of complexities and concerns, particularly around performance testing. Because of the nature of these architectures, a solid testing plan requires new approaches to confirm proper operation and continued availability under heavy load or in the face of resource failure. With the proper focus, DevOps organizations can make certain that their Microservices applications operate properly and perform well both under load and in “real-world” failure scenarios.

One way to approach the complexity of testing automation is to tackle one piece at a time, in a step-by-step fashion. For example, once you can reliably run your nightly testing through automation, move on to automating the next level of testing, and continue with the process. It is hard to represent “real-world” usage by testing a single service in isolation, and exercising an entire component through the UI can be a great way to complete the test coverage, but the ultimate goal is to gain a true user-centric perspective on a given realistic business transaction.

Consider a banking example: the decision has been made to take a Microservices architecture approach rather than build a monolithic application. When thinking about how to test this new app effectively, we might want to automate the loan application process through the app, using both valid and invalid data to exercise boundary conditions and ensure that the user always gets some useful information back. Nothing could hurt customer satisfaction more than real-world users frustrated at the bank for its lack of response to a loan application.
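A sketch of what those automated boundary checks might look like, written here as pytest cases against a placeholder HTTP endpoint; the URL, field names, status codes, and limits are invented for illustration:

```python
import pytest
import requests

BASE_URL = "https://bank.example.com/api"  # placeholder endpoint

@pytest.mark.parametrize("amount,term_months,expect_ok", [
    (5_000, 36, True),        # typical valid application
    (0, 36, False),           # boundary: zero amount
    (-100, 36, False),        # invalid: negative amount
    (5_000, 0, False),        # boundary: zero term
    (10_000_000, 36, False),  # above the product's assumed maximum
])
def test_loan_application_boundaries(amount, term_months, expect_ok):
    response = requests.post(
        f"{BASE_URL}/loans",
        json={"amount": amount, "term_months": term_months},
        timeout=5,
    )
    if expect_ok:
        assert response.status_code == 201
    else:
        # Invalid input should produce a useful message, never a crash.
        assert response.status_code == 400
        assert response.json().get("reason")  # actionable feedback
```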

If you take this step-by-step approach to building out automation, application performance testing can become more robust over time. Rather than trying to automate everything all at once, making incremental improvements to portions of the test process leaves the testing process easier and stronger overall. Start by automating a single, less complex task first, such as re-running a previous test using the same environment and parameters.
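For instance, a minimal re-run helper might record each run’s command line and environment to a small file and replay it on demand; the file name and schema here are illustrative assumptions:

```python
import json
import os
import subprocess

RUN_RECORD = "last_run.json"  # illustrative file name and schema

def save_run(command):
    # Record exactly what was run, and with which environment settings.
    with open(RUN_RECORD, "w") as f:
        json.dump({"command": command, "environment": dict(os.environ)}, f)

def rerun_last_test():
    # Replay the previous run with the same parameters and environment.
    with open(RUN_RECORD) as f:
        record = json.load(f)
    subprocess.run(record["command"], env=record["environment"], check=True)
```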

API Testing is More Important than Ever

While you’re considering how to automate your testing environment, don’t forget the importance of WHAT you’re ultimately going to test: you will probably want to test your APIs right alongside the application itself. Because APIs provide integration and openness for your applications, API test automation has the potential to significantly accelerate the testing and development process. It ensures consistency in testing and enables continuous improvement in application quality. Increasingly, testing and development teams are moving toward API test automation and integrating their testing tools with CI frameworks like Jenkins.
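As a small example of the kind of API test that slots naturally into a CI job, here is a pytest-style check against a placeholder endpoint; the URL, path, and half-second response budget are assumptions for illustration:

```python
import requests

BASE_URL = "https://api.example.com"  # placeholder endpoint

def test_loan_status_endpoint():
    # A functional check with a performance assertion folded in, cheap
    # enough to run on every commit from a CI job such as Jenkins.
    response = requests.get(f"{BASE_URL}/v1/loans/12345/status", timeout=5)
    assert response.status_code == 200
    assert "status" in response.json()
    # Fail the build if the API regresses past an agreed response budget.
    assert response.elapsed.total_seconds() < 0.5
```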

Including API tests in your app development process provides several benefits to the business that get passed down to customers in the form of high-performing, quality applications. Including API tests in your overall SDLC improves test quality, increases test coverage, and enables test reuse.

Test Quality

If you wait until post-development to build API tests, you end up confirming how the API is supposed to perform instead of exposing the faults that appear when it is used in other, similar scenarios. Conducting a comprehensive set of API tests exposes faults earlier in the SDLC, makes the APIs stronger in production use cases, benefits the DevOps team in the long run, and raises the overall quality of both the API and the application.

Test Coverage

Exposing all potential application performance issues and failures is critical to delivering a strong-performing product that builds customer engagement and stickiness. API testing during development can reveal issues with your API itself, with servers or other services, or with the network, issues that you might not otherwise discover until very late in the SDLC, when they are far more expensive to fix than if they had been caught early.
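For illustration, covering those unhappy paths might look something like the sketch below; the endpoint, payload, and expectations are placeholders:

```python
import pytest
import requests

BASE_URL = "https://api.example.com"  # placeholder endpoint

def test_malformed_payload_is_rejected_by_the_api():
    # A validation problem should surface as a clean 400 from the API,
    # not as a downstream database or service failure later on.
    response = requests.post(
        f"{BASE_URL}/v1/loans", json={"amount": "not-a-number"}, timeout=5
    )
    assert response.status_code == 400

def test_slow_dependencies_surface_as_timeouts():
    # Network problems should fail fast and visibly, not hang a test run.
    with pytest.raises(requests.exceptions.Timeout):
        requests.get(f"{BASE_URL}/v1/slow-report", timeout=0.001)
```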

Test Reuse

One of the best reasons to create and run API tests in the early stages of the SDLC is that the bulk of your testing is then already accounted for by the later stages. Reusing API tests across the development lifecycle keeps every SDLC stage focused on application performance, builds collaboration among teams, and provides a better, more accurate testing process overall.

Address Testing Failures Before They Become a Problem

If a test fails overnight and the reason is not immediately apparent, you may want to re-run it in the morning against the previous version of the code to see whether the failure was a “false positive.” This is a simple and logical approach to isolating the cause quickly. If, instead, you dismiss the failure as an isolated anomaly and deploy that nightly build anyway with a “wait and see” approach, you could run into the same failure again. Worse, you will then have lost two days of work before digging into the reason for the failure.
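A possible sketch of that morning triage, assuming a clean Git working tree and pytest as the test runner (both assumptions; adapt to your own toolchain):

```python
import subprocess

def failure_is_environmental(test_target, previous_ref="HEAD~1"):
    # Sketch only: assumes a clean working tree. Check out last night's
    # known-good revision and re-run the failing test there.
    subprocess.run(["git", "checkout", previous_ref], check=True)
    try:
        result = subprocess.run(["pytest", test_target])
    finally:
        subprocess.run(["git", "checkout", "-"], check=True)
    # If the old code fails too, suspect the environment or test data
    # (a "false positive"), not a regression in the new build.
    return result.returncode != 0
```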

The key to being able to perform these types of tasks easily is investing in elements of your testing environment that either have at least some automated capabilities now (with significant deliverables in this area on their roadmap) or are at least tightly integrated with other, more automated elements in the environment. As you build the right toolchain for your organization, pay attention to the amount of openness these elements deliver; that is, how much they can be “driven” by another machine (through APIs).
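For example, Jenkins can be driven remotely in this way. A sketch of triggering a parameterized job from a script follows; the server URL, job name, parameter, and credentials are placeholders, and details such as CSRF crumbs are omitted:

```python
import requests

JENKINS = "https://jenkins.example.com"  # placeholder server
AUTH = ("ci-bot", "api-token")           # placeholder credentials

def trigger_nightly_perf_tests(build_label):
    # One element of the toolchain "driving" another through its API.
    response = requests.post(
        f"{JENKINS}/job/nightly-perf-tests/buildWithParameters",
        params={"BUILD_LABEL": build_label},
        auth=AUTH,
        timeout=10,
    )
    response.raise_for_status()
```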

Getting Started

Take advantage of today’s best practices by testing early and often in your SDLC. Approach performance testing as a holistic effort rather than focusing only on load testing. As more DevOps and Agile-focused teams adopt “shift left,” what they are really doing is changing the granularity of what is tested at each stage of the SDLC as the application moves toward deployment in a production environment. Additionally, consider how and when testing APIs, Microservices, and components should be a part of your overall performance testing plan.

For example, early-cycle testing is focused on components; these tests can be automated as part of the nightly build process. Try testing a single component or Microservice first, then move on to testing integrated groups of components together (e.g., through the UI). These tests ensure that groups of components can hold up under more complex test and load scenarios than simple “smoke tests.” Then you can transition to more traditional system-wide performance testing in a pre-production environment.
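One lightweight way to stage that progression is to tag tests by scope so the nightly job can start small and grow; the marker names below are a suggested convention, not pytest built-ins:

```python
import pytest

# Suggested convention (register these markers in pytest.ini): tag tests
# by scope so the nightly job can expand as the build-out matures.

@pytest.mark.component
def test_loan_pricing_service_alone():
    ...  # exercise a single Microservice in isolation

@pytest.mark.integration
def test_loan_flow_through_ui_and_database():
    ...  # exercise an integrated group of components together

# Early in the build-out, the nightly job runs:  pytest -m component
# Once that is reliable, expand to:  pytest -m "component or integration"
```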

Don’t let all those passed performance tests make you lose sight of the fact that the actual users of your applications are real human beings who will interact with the application through many services, components, databases, networks, and maybe even third-party connected apps. You need to make sure the actual end-user experience will be satisfactory and that users can engage with the tasks at hand without frustration or failure. Ensuring that a particular Microservice performs well is, after all, just a means to achieving this goal, not the ultimate goal itself.

Choose a Best-of-Breed Approach to Build the Test Environment

Integrating best-of-breed Application Performance Management (APM) tools into your testing environment allows you to collect the detailed metrics on application components and servers that pinpoint, and help you understand, the causes of performance problems. This type of rich data can help identify the causes of performance bottlenecks in highly complex applications and give performance testers and developers the “big picture” context of what is going on with the app’s performance.

Consider the example of a retail bank that wants to improve engagement with its customers by developing and deploying a new app to process automobile loan applications, and how it might best achieve the goals of good app performance and quality. A customer applying for a loan and the resulting approval/denial response can be thought of as a single, logical operation on the data: a user transaction.

Understanding performance testing results for this application in the context of the complete end-to-end user experience requires a set of deep metrics from a comprehensive APM tool. These deep APM metrics can provide insight into application performance, help pinpoint bottlenecks, and possibly suggest app-resiliency workarounds.
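As a minimal stand-in for that kind of instrumentation, you can at least time each logical user transaction end to end and correlate the result with the per-component metrics your APM tool collects. The transaction name and print-based output here are illustrative:

```python
import time

def timed_transaction(name, operation, *args, **kwargs):
    # Time one logical user transaction end to end and emit the
    # measurement for correlation with deeper APM metrics.
    start = time.perf_counter()
    try:
        return operation(*args, **kwargs)
    finally:
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"transaction={name} elapsed_ms={elapsed_ms:.1f}")

# Usage: wrap the whole loan-application flow as one transaction, e.g.
# result = timed_transaction("auto_loan_application", submit_loan, payload)
```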

Gaining this deep level of insight into the application’s performance builds trust and confidence in the realism of testing results, and it can empower moves toward continuous delivery. If you are able to trust your automated testing process, you will be confident using it for minor updates, which in turn enables continuous deployment, testing in your production environments, and even daily updates in production. This level of trust in the testing process could also enable you to test major updates, such as architecture or infrastructure changes, within a specific testing campaign.

In today’s DevOps world, how do you know when you are “done” testing? When is the performance “good enough” to meet the requirements? The answer is, “it depends.” We might have a design goal that is simply “the app must be fast” and specifies a maximum response time, say 3 seconds. Are we “done” when performance testing data show that the app met that goal? This is not a trivial consideration.
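Part of why it is not trivial: an average under the goal can hide a long tail of slow responses, which is why many teams judge “done” by a percentile rather than a mean. A small sketch of such a check (the sample data and the 95th-percentile choice are illustrative):

```python
import statistics

def meets_response_goal(samples_ms, goal_ms=3000, percentile=95):
    # Judge "fast enough" by a percentile, not the average: a 3-second
    # goal met on average can still hide a long tail of slow requests.
    ranked = sorted(samples_ms)
    index = round(percentile / 100 * (len(ranked) - 1))
    p = ranked[index]
    print(f"mean={statistics.mean(samples_ms):.0f}ms p{percentile}={p:.0f}ms")
    return p <= goal_ms

# Example: 100 samples, mostly fast, with a slow tail.
samples = [800] * 90 + [3500] * 10
print(meets_response_goal(samples))  # False: the tail breaks the goal
```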

As a final suggestion, especially within DevOps teams: whatever testing environment you decide on, be sure you will be able to easily and quickly share test results, settings, analysis, and reports among performance testers, QA, developers, and business stakeholders.


Next week, in the final piece of this three-part series on the possibility of achieving both speed AND quality, we will take a look at “The Future of Performance Testing.”

Want a refresher on part one – “The Coming Digital Transformation”?
