#NeotysPAC – Performance — Cloning Automation from Users in the Past, by Leandro Melendez

Performance testing and load testing automation is still mostly done the same way it has been done since prehistoric times (about 20 years ago).

Nowadays, new tools and technologies have arisen that can revolutionize the way we create, maintain, and configure automation for performance and load testing, both leveraging and departing from the traditional methods.

I propose some techniques that could be used to evolve these processes, putting together resources and techniques that are already at our fingertips.

Let us first go over some of the traditional prehistoric best practices that are still in use and that will serve as departure points for the proposed implementation.

 

Prehistoric Best Practices

Although they are prehistoric in terms of “tech time,” many best practices are still effective. Even if their foundations are old and can be improved, they follow sensible steps that still work.

Even though these best practices have been around since prehistoric times, many performance testers still do not follow them. That is a shame, because they remain the recommended steps to get things right, and skipping them marks a tester as a newbie or misguided.

They may seem like common sense, but for some people these steps are totally unknown. Beware, they are key!

  1. Performance and load test cases. First we require a test case that is optimized for performance matters. No functional test cases, please.
  2. Multiple recordings. To identify the work that the automation requires, it is crucial to have multiple recordings of the same optimized steps — with only data inputs, credentials and such changing on each recording.
  3. Compare the differences, identifying which come from data inputs, server responses and client-generated values.
  4. Turn the recorded script into a working automation by creating correlations where needed, parameterizing inputs and client configuration values, and clearing up all the differences.
  5. Test-run the script for multiple iterations and concurrency, and shake it out with a small load test.

There you go: some simple and almost bulletproof best practice steps to ensure that your scripts run at their best, that they have all the correlations and parameters needed and that they can coexist. Ancient best practices that still work.
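Steps 2 and 3 above can be sketched in a few lines. The snippet below compares two recordings of the same flow, represented here as simplified name/value dicts (real tools capture full HTTP traffic), and flags the values that changed, i.e. the candidates for correlation or parameterization. The recording data and the helper name are invented for illustration.

```python
# Sketch: diff two recordings of the same flow to surface dynamic values.

def find_dynamic_values(recording_a, recording_b):
    """Return the names whose values changed between two recordings."""
    dynamic = {}
    for name in recording_a.keys() & recording_b.keys():
        if recording_a[name] != recording_b[name]:
            dynamic[name] = (recording_a[name], recording_b[name])
    return dynamic

# Two recordings of the same steps, with different credentials and session.
rec1 = {"username": "ana", "session_id": "A9F3", "page": "/checkout"}
rec2 = {"username": "luis", "session_id": "77C1", "page": "/checkout"}

for name, values in sorted(find_dynamic_values(rec1, rec2).items()):
    print(f"{name} changed: {values[0]} -> {values[1]}")
```

Everything that stays identical across recordings (like the page path here) can be left hard-coded; everything that differs needs attention in step 4.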

 

Challenges

Some of these steps pose challenges and are somewhat tedious, since almost every step is manual and time-consuming. Creating good test cases is tough if the project cannot provide SME support or just points you at a functional test case repository. The recording process takes time as well: recording the right way, making multiple recordings of the same process, and changing the parameters as needed.

Not to mention the correlation process, which may be the most challenging part for many: the hassle of identifying what needs to be correlated and parameterized, figuring out the boundaries or regular expressions needed, and dealing with credentials and security schemes. And the list goes on.
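To make the boundary hassle concrete, here is a minimal sketch of the classic correlation extraction: pull a dynamic server value out of a response body using left and right text boundaries, the way most load-testing tools do under the hood. The response body and the `csrf_token` field are made up for the example.

```python
import re

def extract_between(body, left, right):
    """Return the first value found between the given boundaries, or None."""
    match = re.search(re.escape(left) + r"(.*?)" + re.escape(right), body)
    return match.group(1) if match else None

# A hypothetical server response containing a value we must correlate.
response = '<input type="hidden" name="csrf_token" value="f00dcafe42">'
token = extract_between(response, 'name="csrf_token" value="', '"')
print(token)  # f00dcafe42
```

Finding the *right* boundaries for every dynamic value, across every request, is exactly the tedious part that eats scripting time.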

Even though they are very effective, all the traditional processes become time-consuming — slowing down the creation and execution of performance and load tests, and requiring specific information to be successful.

Gathering some of the key elements needed is in large part the source of the hassle: 

  • Reliable user flows for the performance test cases.
  • User input data to provide variable inputs.
  • Protocol communications to identify session values and correlations.
  • Identifying and implementing the correlations.

Each one is tough to gather. But that doesn’t need to be so anymore. All the technology and advances in processes and monitoring should enable us to move faster on these endeavors!

So, let’s move on to my proposed solutions.

 

Ms. Data

Computing power, cheap and vast storage, and new techniques are enabling multiple areas of IT that port easily over to the performance testing world. We just need to enlist two buddies that I will introduce here.

First, we have Ms. Data — a crucial enabler to the solution I propose. The key is to gather and store the activity and communication logs of user actions in the system, or recordings of user activity. 

A current challenge is that, for reasons such as security, overload and many more, many applications do not allow for the capture of the information that went to and from the user of the system. But the benefits here are great. Imagine turning on a capture of the information that passes through the wire between your end user and your server.

All that data would be very useful. Coupled with the data already captured by some activity trackers, the possibilities are endless. 

NOTE: I do not suggest capturing all that data all the time. A full day of a heavy production environment could be too much to store easily. But turning on the capture for, say, just 30 minutes would be more than enough!
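The time-boxed capture idea can be sketched very simply. The class below logs request/response exchanges only while the window is open and silently drops traffic afterwards, keeping storage bounded. The `record_exchange` hook is a hypothetical stand-in for wherever your proxy or middleware actually sits.

```python
import time

class CaptureWindow:
    """Log user/server exchanges, but only for a limited period."""

    def __init__(self, duration_seconds):
        self.start = time.monotonic()
        self.duration = duration_seconds
        self.log = []

    def active(self):
        return time.monotonic() - self.start < self.duration

    def record_exchange(self, request, response):
        if self.active():  # drop traffic once the window closes
            self.log.append({"request": request, "response": response})

# Capture for 30 minutes, then everything recorded afterwards is ignored.
window = CaptureWindow(duration_seconds=30 * 60)
window.record_exchange("GET /cart", "200 OK")
print(len(window.log))  # 1
```

In a real setup the log would go to durable storage rather than an in-memory list, but the principle is the same: a short, deliberate window instead of always-on capture.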

 

Enter Mr. ML

Machine learning is the second part of the solution I propose. A miracle of technology that could not be employed as widely as one would wish, ML can now be put to use because computing power has finally reached the point where it is practical.

Once you are sure you have enough data, you can set Mr. ML to go over all the information you stored, working toward specific goals and rewards, and get a great deal of results.

To start, you could set him to analyze just the utilization data. From it, ML could surface the most frequent user paths, flows and most frequently used elements in your solution. With all that info, you could even create highly efficient and realistic test cases and test scenarios — and on top of that, in a completely automatic way!
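As a toy version of that first pass, the sketch below mines logged per-session page sequences for the most frequent user path. Real mining would be far fancier, but even plain counting already reveals the top flows worth turning into performance test cases. The session data is invented.

```python
from collections import Counter

# Hypothetical logged sessions: each is the sequence of pages one user visited.
sessions = [
    ["/home", "/search", "/product", "/cart", "/checkout"],
    ["/home", "/search", "/product"],
    ["/home", "/search", "/product", "/cart", "/checkout"],
    ["/home", "/account"],
]

# Count identical full paths; the most common one is the prime test case.
path_counts = Counter(tuple(s) for s in sessions)
top_path, hits = path_counts.most_common(1)[0]
print(f"{hits} sessions followed: {' -> '.join(top_path)}")
```

From here, relative frequencies could also feed the load-scenario mix (e.g. half the virtual users run the checkout flow), which is exactly the realism the utilization data buys you.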

Now, if we let ML work on a large amount of logged communication between the user and the server, we could set it the goal of finding those differences we mentioned at the beginning. We could even use it to identify which values came from the server and need to be correlated, as well as which came from the user, be they data inputs or GUI-generated.

In the same way, it would be very simple to find, test and implement the most efficient correlations needed. At a given point they could even be implemented directly in each of the needed automations.
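The classification rule behind that idea can be sketched without any ML at all: a dynamic value that previously appeared in a server response and is echoed back in a later request is server-generated (correlate it), while one that never appeared in any response must have come from the user (parameterize it). ML would apply this at scale over messy real traffic; the names and traffic below are invented.

```python
def classify(dynamic_values, server_responses):
    """Decide, per dynamic value, whether to correlate or parameterize it."""
    seen_in_responses = " ".join(server_responses)
    decisions = {}
    for name, value in dynamic_values.items():
        if value in seen_in_responses:
            decisions[name] = "correlate"     # value came from the server
        else:
            decisions[name] = "parameterize"  # value came from the user
    return decisions

responses = ['{"session_id": "A9F3"}', "<html>Welcome!</html>"]
dynamic = {"session_id": "A9F3", "username": "ana"}
print(classify(dynamic, responses))
```

The session id shows up in a response first, so it gets correlated; the username never does, so it becomes parameterized input data.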

The cherry on top would be to automate and put it all together. You get automatic business process identification, scenario definitions, and multiple recordings, all from the logged communication data. It is just a matter of connecting the pieces and feeding in what ML identified, to get automatically generated automation, scenarios and even executions!

 

Conclusion

This proposal may sound a bit like sci-fi. But given where we are in these modern times, it is totally possible. It is just a matter of getting your hands on it!

Make sure you can capture lots of protocol communication from production, dev, test and everywhere you may be interested.

Let ML dive into all that and find everything that is needed, departing from the traditional best practices.

And finally, automate that output into the recorded automations. And even better, generate them automatically from the identified flows!

I really hope this way comes true. It would be game changing!

Take care amigos! Adios!

 

If you want to know more about Leandro’s presentation, the recording is already available here.
