I’ll admit it. I’ve never met an automation process that I didn’t like. I’ll automate anything, from scheduling the delivery of a pizza to my door to executing a mission-critical testing session. It seems I’m not the only one. More enterprises are getting into the automation game too.
But just because something can be automated doesn't mean that it should be. Coming from an automation enthusiast, that may sound like a paradox. Yet when you think about it, there is a certain logic to it.
Understanding the Limits of Test Automation
The overall value proposition of automation is that it increases the speed and reliability of testing. Spinning up a few thousand virtual users to execute a test script simultaneously against the pages of a web application is far more efficient than hiring the same number of human data entry operators to perform the same task, and it is far less error-prone. When it comes to data entry and result recording, automation is a necessity in today's climate of high user expectations. But when it comes to complex test situations, such as those found in performance testing scenarios, testing solely via automation may not be the best approach.
For instance, functional testing is an excellent candidate for automation. The scope of testing is relatively static. The function entry points under test are well known, as are the expected behaviors. The entire process is uniform, even when the functionality is updated: some of the verification logic might need to be altered, but the function entry points themselves change slowly. Having such tests run as a regular part of the CI/CD process is low maintenance.
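A functional test of this kind can be sketched in a few lines. The function under test here, calculate_shipping(), is hypothetical and exists only for illustration; the point is that the entry point and expected behaviors are fixed and well known, so the test is cheap to run on every build.

```python
# Minimal sketch of an automated functional test. calculate_shipping()
# is a hypothetical function under test, not a real API: a stable,
# well-known entry point with well-known expected behaviors.

def calculate_shipping(weight_kg: float, express: bool = False) -> float:
    """Hypothetical function under test: flat rate plus a per-kg charge."""
    base = 12.00 if express else 5.00
    return round(base + 1.50 * weight_kg, 2)

def test_standard_shipping():
    # Known entry point, known expected behavior.
    assert calculate_shipping(2.0) == 8.00

def test_express_shipping():
    assert calculate_shipping(2.0, express=True) == 15.00

if __name__ == "__main__":
    test_standard_shipping()
    test_express_shipping()
    print("all functional tests passed")
```

Because nothing about the entry point or the expected outputs changes from run to run, a CI/CD pipeline can execute tests like these on every commit with no human involvement.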
Performance Testing Can Be Difficult to Automate
Performance testing is different. Just setting up the test environment can be difficult and expensive. A comprehensive performance test can involve a variety of input parameters, and each configuration of those parameters produces a different range of responses. Defining the input parameters for a given scenario, and making sense of the results, goes well beyond clicking a button to record and verify user activity in the UI. Determining and configuring the parameters requires a significant amount of human intelligence.
Even identifying when to run a performance test is not as simple as creating a recurring event on a calendar. Human planning is needed. Consider the design and execution of a performance test meant to emulate activity on a football site on Super Bowl Sunday. It's going to be a lot different from the performance tests run during draft week, for instance. The scope of testing will vary, as will the makeup of the virtual user emulation requirements. Standardizing such activity is hard, and scenarios that are tough to institutionalize are likely to be problematic to emulate through automation. In such situations, it's better practice to rely on the combined work of human and machine intelligence.
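The parameter problem can be made concrete with a toy sketch. This is not a real load-testing tool; the request function below only simulates latency, and the user counts and think times are invented values. The point is that every number a human passes in is a judgment call that differs per event, which is exactly what resists calendar-driven automation.

```python
# Illustrative sketch only: the "requests" are simulated, and the
# scenario parameters (virtual user counts, think times) are invented
# to show that a human must choose them per event.
from concurrent.futures import ThreadPoolExecutor
import random
import time

def simulated_request(user_id: int, think_time_s: float) -> float:
    """Stand-in for a real HTTP call; returns a pretend latency in seconds."""
    time.sleep(think_time_s)           # emulate user think time
    return random.uniform(0.05, 0.30)  # simulated response time

def run_scenario(virtual_users: int, think_time_s: float) -> float:
    """Run one scenario's virtual users concurrently; return mean latency."""
    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        latencies = list(pool.map(
            lambda uid: simulated_request(uid, think_time_s),
            range(virtual_users)))
    return sum(latencies) / len(latencies)

# Human-chosen profiles: a big-event load vs. a quiet-week load.
peak_latency = run_scenario(virtual_users=50, think_time_s=0.01)
quiet_latency = run_scenario(virtual_users=5, think_time_s=0.05)
```

Deciding that "Super Bowl Sunday" means 50 users rather than 5, or that think time shrinks under excitement-driven traffic, is precisely the kind of judgment no recurring calendar job can make on its own.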
Automate When Appropriate
Automated testing is suitable when applied to the right situation. Sadly, once many companies reap their first automation reward, they want to automate everything. It's the old story: when the only tool you have is a hammer, everything looks like a nail.
The reality is that some testing situations lend themselves well to automation and some are better left to humans. The question becomes, how can we identify testing situations that are appropriate for automation?
Automation tends to be easy and practical when:

- The scope of behavior under test is limited – e.g., unit tests. A unit test makes a single assertion over a body of code and has a limited range of possible outcomes.
- The inputs and results can be subject to a high degree of standardization, as described above.
- Test execution can happen at any time. For example, the test can be run at random intervals yet still produce results within an expected range.
Once we go beyond these criteria, there is a good chance that the test cannot be implemented by automation alone. Human intervention is required.
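The first criterion can be illustrated with a single-assertion unit test. The normalize() helper below is hypothetical, invented purely for the example; what matters is that the test checks one limited behavior and yields the same verdict no matter when it runs.

```python
# Minimal sketch of the unit-test criterion: one assertion over a
# hypothetical normalize() helper. The outcome is deterministic, so
# the test can run at any time, on any schedule, and still pass.

def normalize(values):
    """Hypothetical function under test: scale values so they sum to 1."""
    total = sum(values)
    return [v / total for v in values]

def test_normalize_sums_to_one():
    # A single assertion over a limited range of outcomes.
    result = normalize([2, 3, 5])
    assert abs(sum(result) - 1.0) < 1e-9

test_normalize_sums_to_one()
```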
Putting it All Together
Test automation offers many advantages. In the right situations, automated testing is easier to plan and implement; tests can be executed quickly, and results can be recorded uniformly. However, more complex testing scenarios, in which timing is critical and a large number of input parameters need configuration, require human attention.
Enterprise adoption of test automation does not mean abandoning human involvement in the testing process. Sometimes, given the test requirements, human effort is simply the better fit. The key to incorporating test automation appropriately is to let the need dictate the decision.