#NeotysPAC – Shifting Left Performance Testing, by Amir Rozenberg


Shifting Left Performance Testing… or, How to Stop Discovering Delays on Launch Day 🙂



1. Introduction

The practice of performance testing deserves another look in the days of the agile SDLC. Trends driving the application space require increased in-sprint test activity via automation. Digital transformation is enriching and streamlining user experiences in applications, and client-side resources and computing power are increasingly leveraged: the backend is no longer the exclusive contributor to a good or poor end-user experience.


2. Breaking Non-Functional Testing

In almost every practice of non-functional testing (NFT), there is an aspect that can be broken out. Many types of NFT test cases are today conducted manually, AFTER the sprint. That clearly does not lend itself to efficient, on-time delivery of the next build.

One can view the practice of performance testing as a combination of two practices: load testing and responsiveness testing. While the applicability and feasibility of load testing in the test environment could be up for debate, responsiveness testing is not.

What is responsiveness testing? It is the practice of measuring trends in the response times of key transactions in the user flow, while offering insight into what might cause degradation in those measurements and opportunities to optimize. The main point about responsiveness testing is that it can easily be added to any functional test, in the context of a unit, smoke, or regression test. Imagine a developer, responsible for a multi-device native app or responsive web app, downloading a large image to a device over a cellular connection. A few minutes after the code is written, a few tests could run, revealing the impact on the user experience. Within minutes, developers can learn of their mistake and correct it. This enables highly efficient teams and developer awareness of performance impact.
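The idea of wrapping a responsiveness check around an existing functional test can be sketched as follows. This is a minimal illustration, not a specific tool's API: the budget value and the `fake_image_download` stand-in are assumptions, and in a real suite the measured transaction would be a UI step driven by a framework such as Selenium or Appium.

```python
import time

# Hypothetical per-transaction response-time budget (an assumption for this sketch)
RESPONSE_BUDGET_SECONDS = 2.0

def measure_transaction(transaction):
    """Run a transaction and return its elapsed wall-clock time in seconds."""
    start = time.perf_counter()
    transaction()
    return time.perf_counter() - start

def assert_responsive(transaction, budget=RESPONSE_BUDGET_SECONDS):
    """Fail the functional test if the transaction exceeds its budget."""
    elapsed = measure_transaction(transaction)
    assert elapsed <= budget, (
        f"transaction took {elapsed:.2f}s, over the {budget:.2f}s budget"
    )
    return elapsed

# Stand-in for "download a large image over a cellular connection"
def fake_image_download():
    time.sleep(0.05)  # simulated work

elapsed = assert_responsive(fake_image_download)
```

Because the check is just an assertion on elapsed time, it can ride along with any unit, smoke, or regression test, which is what makes the measurement cheap enough to run in-sprint.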

Figure 1: Breaking performance testing into in, and post-sprint activities

Just as performance testing can be broken into in- and post-sprint practices, similar thinking can be applied to security testing, compliance testing, and so on.

3. Responsiveness Testing

As mentioned, responsiveness testing is the practice of measuring and trending response times along a user flow. In that respect, it is possible to visually measure a desired step: for example, after login, how long does it take to see the balance of my bank account? Google Lighthouse does a very nice job of offering a “filmstrip” view of the user interface over time:

Figure 2: Google Lighthouse filmstrip and performance analysis for web pages

The visual measurement can be applied to both mobile-native apps as well as web pages.

In addition, it is possible to measure performance from a native timer perspective: the W3C Navigation Timing API offers page- and object-level timers that are very easy to analyze:
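To make the page-level timers concrete, here is a small sketch that derives common metrics from W3C Navigation Timing attributes. The attribute names (`navigationStart`, `responseStart`, etc.) are real Navigation Timing (Level 1) fields, normally collected in the browser; the numbers below are illustrative only, and the post-collection analysis is shown in Python for simplicity.

```python
def page_timing_metrics(t):
    """Derive common page-level metrics (ms) from W3C Navigation Timing
    attributes (Level 1 names, all epoch milliseconds)."""
    return {
        "ttfb": t["responseStart"] - t["navigationStart"],  # time to first byte
        "dom_ready": t["domContentLoadedEventEnd"] - t["navigationStart"],
        "page_load": t["loadEventEnd"] - t["navigationStart"],
    }

# Illustrative numbers only, not measured data
timing = {
    "navigationStart": 1_000,
    "responseStart": 1_250,
    "domContentLoadedEventEnd": 1_900,
    "loadEventEnd": 2_400,
}
metrics = page_timing_metrics(timing)
# metrics == {"ttfb": 250, "dom_ready": 900, "page_load": 1400}
```

Trending these derived numbers per build is what turns raw timers into the degradation signal described above.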

Figure 3: W3C Web Page Timers

Figure 4: W3C object-level analysis

Once these measurements are in, it is possible to offer logs that help the developer understand what might cause a performance spike. These can include device/app vitals (memory, CPU, etc.), crash logs, and lastly, HAR/PCAP files.
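A HAR file is just JSON, so triaging a spike can start with a few lines of analysis. The sketch below ranks requests by total time; the `log.entries[].time` and `request.url` fields are part of the standard HAR structure, while the URLs and timings here are made up for illustration.

```python
def slowest_entries(har, top_n=3):
    """Return the top_n slowest requests (url, total time in ms) from a HAR dict."""
    entries = har["log"]["entries"]
    ranked = sorted(entries, key=lambda e: e["time"], reverse=True)
    return [(e["request"]["url"], e["time"]) for e in ranked[:top_n]]

# Minimal illustrative HAR structure (real HAR files carry far more detail)
har = {"log": {"entries": [
    {"request": {"url": "https://example.com/app.js"}, "time": 420.0},
    {"request": {"url": "https://example.com/logo.png"}, "time": 1250.0},
    {"request": {"url": "https://example.com/api/balance"}, "time": 310.0},
]}}

print(slowest_entries(har, top_n=2))
```

A ranked list like this points the developer straight at the heaviest resource, which is often the large image or unoptimized asset behind a responsiveness regression.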

Figure 5: Example HAR file from a native app

4. Diversify Your Testing

So far we’ve discussed measuring and trending the application responsiveness to offer fast feedback to the developer in the context of performance degradation, so they can quickly correct issues.

The challenge is that in most cases, developers test applications in ideal conditions: perfect WiFi connectivity, no applications running in the background, and so on. That is far from the reality of the real user experience. Not only is real users' network connectivity suboptimal, it is constantly changing. Most end users don't even know how to disable applications running in the background.

Using test automation, it is possible to diversify the test environment to emulate different real-user conditions. Below, for example, are measurements for a real app under different conditions:
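To see why network diversity matters even before running real-device tests, a first-order back-of-the-envelope model (transfer time ≈ latency + payload / bandwidth) already predicts large differences. The profile numbers below are illustrative assumptions, not the measured values in Figure 6, and real emulation would be done with a network-conditioning tool in the test environment.

```python
# Illustrative network profiles (assumed values, not measured data)
NETWORK_PROFILES = {
    "wifi": {"latency_s": 0.01, "bandwidth_bps": 50_000_000},
    "4g":   {"latency_s": 0.05, "bandwidth_bps": 10_000_000},
    "3g":   {"latency_s": 0.20, "bandwidth_bps": 1_500_000},
}

def estimated_transfer_time(payload_bytes, profile):
    """First-order estimate: one round-trip of latency plus serialization time."""
    p = NETWORK_PROFILES[profile]
    return p["latency_s"] + (payload_bytes * 8) / p["bandwidth_bps"]

# A 2 MB image under each profile
for name in NETWORK_PROFILES:
    print(f"{name}: {estimated_transfer_time(2_000_000, name):.2f}s")
```

Even this crude model shows a single large image going from well under a second on WiFi to many seconds on 3G, which is exactly the kind of gap ideal-condition developer testing never surfaces.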

Figure 6: Native application response times, based on the network (seconds)

5. What About Load Testing?

Naturally, it is imperative to ensure the backend is ready for a load event. Here again, it is a good idea to measure front-end responsiveness while load is applied to the service APIs.

Figure 7: Load testing including front-end measurement

Specifically for real devices, one needs to slightly extend the load duration in order to gather sufficient statistical data to assemble a minimal data point.
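The combination described above — background load on the APIs while sampling a front-end transaction, run long enough to compute a stable percentile — can be sketched as below. Everything here is a stand-in: `api_call` and `front_end_transaction` simulate work with sleeps, and a real setup would use a load tool against the service APIs and a device-automation framework for the front end.

```python
import threading
import time
import random
import statistics

def api_call():
    """Stand-in for a backend API request under load (simulated with a sleep)."""
    time.sleep(random.uniform(0.001, 0.005))

def front_end_transaction():
    """Stand-in for a measured user-facing step; returns elapsed seconds."""
    start = time.perf_counter()
    time.sleep(0.002)  # simulated render/response work
    return time.perf_counter() - start

def run_load(workers=8, duration_s=0.2):
    """Apply API load on background threads while sampling the front end."""
    stop = time.monotonic() + duration_s

    def loop():
        while time.monotonic() < stop:
            api_call()

    threads = [threading.Thread(target=loop) for _ in range(workers)]
    for t in threads:
        t.start()
    samples = []
    while time.monotonic() < stop:
        samples.append(front_end_transaction())
    for t in threads:
        t.join()
    return samples

samples = run_load()
p95 = statistics.quantiles(samples, n=20)[-1]  # 95th percentile
print(f"{len(samples)} samples, p95 = {p95 * 1000:.1f} ms")
```

The reason to extend the run is visible in the last two lines: a percentile computed from only a handful of real-device samples is noise, so the duration must be long enough to collect a meaningful sample count.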

6. Putting It All Together

Performance testing, like other NFT practices, needs to be re-examined to adapt to modern agile development methodologies. Enabling measurement and analysis as part of in-sprint activity will drive awareness and improvement of app performance among developers. Performance testers can create guilds inside squads and help developers improve the overall quality of their apps.

Learn More about Amir Rozenberg’s presentation

Do you want to know more about this event? See Amir’s presentation here.
