#NeotysPAC – Many Shades of Automation Testing, by Alexander Podelko

By Alexander Podelko

 

Automation is an extremely vague term when applied to performance testing. Simply mapping functional testing notions and approaches onto it may be very misleading. There are multiple levels of automation in performance testing, and it is almost pointless to talk about “automation” there without clarifying what exactly we are talking about. Unfortunately, that happens quite often, as “automation” has become a buzzword nowadays. The question is not a binary “automate or not automate” – the question is what and how to automate in your specific context, considering the available tools and technologies.

It is not clear-cut even in functional testing – check Michael Bolton’s “Manual” and “Automated” Testing and The End of Manual Testing blog posts, which challenge the notion of “automated” testing. They go somewhat against the mainstream, but they make a lot of good points, and it does appear that the mainstream view is rather simplified.

It is even more complicated with performance testing. There is a danger of replacing holistic performance testing (as a part of performance engineering) with just running the same tests again and again automatically. While we may get closer to the bright promises of “automation” in the future (as described below), we are pretty far from that today, and such a replacement leaves huge gaps in mitigating performance, reliability, and scalability risks. So let’s consider the different meanings of “automation” and what they imply for performance testing.

Historic Meaning of Automation Testing

The first meaning of “automation” was simply using a testing tool in functional testing – as opposed to “manual” testing, when testers worked with the system directly. By that meaning, performance testing was always “automated”. The things you can do without a load testing tool or a harness are very limited and apply only in rather special cases.

Generic Meaning of Automation Testing

Another level is getting all the pieces together, including setting up and configuring the system under test and the testing environment. Here we have had a great breakthrough with the arrival of the Cloud and Infrastructure as Code. While these are not specific to performance testing, there is no reason not to use them. Such automation, in addition to saving effort, eliminates human errors in configuring systems – a typical cause of performance issues that are hard to figure out. Another item that may be added here is data generation. That part is a prerequisite for continuous testing – but it could (and should) be used in any kind of performance testing (when available/feasible).
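To make the data generation part concrete, here is a minimal Python sketch that produces a CSV of synthetic user accounts for load test scripts to consume. The field names, formats, and volume are purely illustrative assumptions and would need to match your actual system under test.

```python
# Minimal sketch of test data generation for a load test (hypothetical fields and volume).
import csv
import random
import string
import uuid

def random_name(length=8):
    """Return a random lowercase string to use as a synthetic user name."""
    return "".join(random.choices(string.ascii_lowercase, k=length))

def generate_users(count):
    """Yield synthetic user records; adjust the fields to your system under test."""
    for _ in range(count):
        name = random_name()
        yield {
            "user_id": str(uuid.uuid4()),
            "username": name,
            "email": f"{name}@example.com",
            "password": "".join(random.choices(string.ascii_letters + string.digits, k=12)),
        }

if __name__ == "__main__":
    # 10,000 accounts is an arbitrary number – size the data set to your test scenario.
    with open("users.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["user_id", "username", "email", "password"])
        writer.writeheader()
        writer.writerows(generate_users(10_000))
```

Most load testing tools can read such a file to parameterize virtual users, which keeps the data generation step repeatable as part of the automated environment setup.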

Today’s Meaning of Automation Testing

Then we may talk about automatic test scheduling and execution with simple result-based alerting. This is also a prerequisite for continuous testing.
The distinction I see here is between performance testing as a part of Continuous Integration / Deployment / Delivery (some performance tests are run for each code commit) and continuous performance tests which are run periodically (larger tests that may, for example, run once a day). The difference becomes very significant if we have multiple commits per day – so we need to build a hierarchy of tests to run (on each commit, daily, weekly, etc.); the details, of course, depend heavily on the specific context.
We definitely need tool support for that – but most advanced tools already have this functionality, so you can easily start such continuous performance testing. But today you still need to:

  • find an optimal set of tests for regression testing;
  • determine how and when to run them;
  • create and maintain scripts;
  • do troubleshooting and non-standard result analysis.

Considering iterative development, continuous performance regression testing does have significant value. However, except for a few trivial cases, it still leaves script creation and test maintenance in the hands of performance testers – so, as you add more tests (which usually means more scripts), you get more overhead to maintain and troubleshoot them. And running the same tests again and again solves just one task – making sure that no performance regression was introduced.
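As a minimal sketch of the simple result-based alerting mentioned above, the snippet below compares a run’s response-time percentile against a stored baseline and fails the build when the degradation exceeds a tolerance. The metric name, file format, and threshold are assumptions – in a real pipeline these numbers would come from your load testing tool’s results.

```python
# Minimal sketch of a result-based regression check (hypothetical metric and file format).
import json
import sys

# Allow this much relative degradation before failing the build (arbitrary choice).
TOLERANCE = 0.10

def load_percentile(path, metric="p95_response_time_ms"):
    """Read a single metric from a JSON results file exported by the load testing tool."""
    with open(path) as f:
        return float(json.load(f)[metric])

def main(baseline_path, current_path):
    baseline = load_percentile(baseline_path)
    current = load_percentile(current_path)
    change = (current - baseline) / baseline
    print(f"baseline={baseline:.1f} ms, current={current:.1f} ms, change={change:+.1%}")
    if change > TOLERANCE:
        print("Performance regression detected – failing the build.")
        return 1
    return 0

if __name__ == "__main__":
    # Usage: python check_regression.py baseline.json current.json
    sys.exit(main(sys.argv[1], sys.argv[2]))
```

Such a check catches regressions automatically, but it does nothing about script maintenance or about performance problems outside the scenarios you have already scripted – which is exactly the limitation discussed above.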

Looking into the Future

And then we get to much more sophisticated automation topics – which we are only just approaching and which nowadays are often grouped into the “Artificial Intelligence (AI)” category. They basically boil down to defining what we are testing, how we are testing it (scripting, etc.), and what we are getting out of it (analysis, alerting, etc., beyond simple if statements).
Some interesting developments in that direction can be found in Andreas Grabner’s Performance as Code blog post and presentation.

There are numerous startups trying to solve these problems with AI. My understanding is that they are still in the early stages and that current products apply only in rather special cases. It is a large separate topic in its own right – you may check, for example, Mark Tomlinson’s recent webinar Solving Performance Problems with AI as an introduction.

When we get to the point where this works in more generic and sophisticated cases, it will expand what we can “automate” and probably shift further what a performance engineer does. However, at present, while automation (in today’s and the generic meanings) is definitely here and should be embraced, it is just one piece of the whole performance puzzle, and the exact combination of pieces to use depends heavily on your specific context.

Learn More about the Performance Advisory Council

If you want to learn more about this event, see Alexander’s presentation here.
