I’m a big fan of test automation. To me, it’s the best way to get software out the door. Automation brings a degree of speed and accuracy to the testing process that in many cases surpasses human capability. This is particularly true when it comes to UI testing. A roomful of testers sitting at keyboards, entering data into the UI and recording the results, can be a bottleneck when implementing testing in today’s enterprise.
Automation is particularly useful for large-scale performance testing. It’s impractical to hire the hundreds, if not thousands, of testers required to deliver the data entry necessary to implement a real-world performance test adequately. So the conventional practice is to create a large number of virtual users (VUs), with each VU exercising the application’s UI in near simultaneity. The automation script completes the front-end work of entering input data, clicking buttons, and selecting items from lists and dropdowns. No human intervention is required. This testing method has become commonplace, as well it should be. It’s an efficient way to go.
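As a rough sketch of the idea, a load driver spins up N virtual users as concurrent workers, each running the same scripted UI interaction. The `exercise_ui` function below is a hypothetical stand-in for whatever the automation tool actually drives; real load tools manage VU pools for you, but the shape is the same.

```python
import threading

def exercise_ui(user_id: int, results: list) -> None:
    """Hypothetical stand-in for one virtual user's scripted UI session:
    enter input data, click buttons, select items from dropdowns."""
    # A real script would drive a browser or protocol layer here.
    results.append(user_id)

def run_virtual_users(count: int) -> int:
    """Launch `count` virtual users in near simultaneity and wait for all."""
    results: list = []
    threads = [
        threading.Thread(target=exercise_ui, args=(i, results))
        for i in range(count)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return len(results)  # number of VU sessions that completed

print(run_virtual_users(100))
```

Each thread here stands in for one VU; the driver only confirms that every session ran to completion.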
But as useful and conventional as the process is, there is a problem. When it comes to UI testing, test automation can distort the actual behavior of human interaction and thus make the testing inaccurate.
Machine Behavior is NOT Human Behavior
Figure 1 shows a simple login UI for a sample web application I’m writing.
Figure 1: Simple login UI
The actions required to test the UI are to enter data in the UserName and Password fields and then click the Login button. It’s a straightforward test that happens every day, one I can write in my sleep, and it shows. It turns out that I missed an important consideration: I’ve been writing tests that execute actions with machine behavior, not human behavior. The difference is making my tests inaccurate.
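In script form, the test is just three machine-speed actions. The sketch below uses a hypothetical in-memory `FakeLoginPage` in place of a real browser driver (with a tool like Selenium, the equivalent calls would be locating each element, sending keys, and clicking), but the structure is the same: fill UserName, fill Password, click Login.

```python
class FakeLoginPage:
    """Hypothetical stand-in for a browser-driven login page."""
    def __init__(self):
        self.fields = {"UserName": "", "Password": ""}
        self.logged_in = False

    def send_keys(self, field: str, text: str) -> None:
        self.fields[field] += text

    def click_login(self) -> None:
        # Pretend any non-empty credentials succeed.
        self.logged_in = all(self.fields.values())

def run_login_test(page: FakeLoginPage, user: str, password: str) -> bool:
    """The whole UI test: two text entries and one click, at machine speed."""
    page.send_keys("UserName", user)
    page.send_keys("Password", password)
    page.click_login()
    return page.logged_in

page = FakeLoginPage()
print(run_login_test(page, "bob", "s3cret"))
```

Note that nothing in the script pauses between actions; that is exactly the machine behavior the rest of this article is about.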
I took two measurements: one timing the automation script and the other timing myself doing the input. See the results in Table 1:
Table 1: A comparison of the time, in milliseconds, that the automation script and Bob’s manual entry each took to fill out a simple login form
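The measurement itself is easy to automate: wrap the fill routine in a timer and record the elapsed milliseconds. This is a sketch with hypothetical fill routines; the same wrapper times either the automation script or a human-paced version of it.

```python
import time

def time_fill_ms(fill_form) -> float:
    """Return the wall-clock time, in milliseconds, taken by `fill_form`."""
    start = time.perf_counter()
    fill_form()
    return (time.perf_counter() - start) * 1000.0

def scripted_fill() -> None:
    # Hypothetical machine-speed entry: effectively instantaneous.
    pass

def human_paced_fill() -> None:
    # Hypothetical human-paced entry: pause as a real typist would.
    time.sleep(0.05)  # 50 ms stands in for a measured human delay

machine_ms = time_fill_ms(scripted_fill)
human_ms = time_fill_ms(human_paced_fill)
print(human_ms > machine_ms)
```

The 50 ms figure is a placeholder; in practice you would substitute the times you actually measure at the keyboard.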
It turns out that a human needs almost 25% longer to complete the login form. And this is a simple form! Imagine how much longer it might take a human to submit a form with many input fields, lists, options, and validation rules. The automation script can whiz through the input tasks; I can imagine the human taking twice as long, if not longer.
Make Your UI Test Scripts Act Human
When you’re writing a performance test in which human behavior, such as data entry, is a critical factor in the test, you need to make sure that you are indeed accurately emulating the human behavior expected. Otherwise, your tests become distorted. I know mine was.
An automated UI script running under machine behavior can assault server-side logic in ways that are not attainable by human action, and the same holds on the client side. Machine behavior can create errors that are simply not possible when a human is doing the data entry. Thus, test reporting becomes inaccurate, and an inaccurate test process has little value.
In my case, I just wrote the scripts to enter the data. I never measured the average time a human needs to enter it, and I never adjusted my scripts to emulate that data entry time. Now I do. On a mission-critical performance test, I take the extra thirty minutes to measure how long it takes me (the self-proclaimed World’s Worst Typist) to perform the data entry. I’ll also try to measure two other human subjects doing the same entry. Then, once I have a sense of the actual time a person needs to perform actions on the web page, I adjust my scripts accordingly, adding waits or conditional accommodations where necessary.
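One way to add those waits is to derive a per-keystroke delay from the measured human time and jitter it slightly, so every virtual user doesn’t type in lockstep. The sketch below only generates the delay schedule; in a real script each delay would precede the corresponding keystroke. The 280 ms average is a made-up number, not a measurement; substitute your own.

```python
import random

def keystroke_delays(text: str, avg_ms: float, jitter_ms: float,
                     rng: random.Random) -> list:
    """Per-character wait times (ms) approximating a measured human typist."""
    return [max(0.0, rng.gauss(avg_ms, jitter_ms)) for _ in text]

rng = random.Random(42)  # seeded so the schedule is reproducible
delays = keystroke_delays("bob@example.com", avg_ms=280.0, jitter_ms=60.0,
                          rng=rng)
total_seconds = sum(delays) / 1000.0
print(len(delays))  # one delay per character typed
```

Seeding the generator makes a test run repeatable while still spreading the VUs’ keystrokes out the way real typists would.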
Putting it All Together
UI testing is a critical part of performance testing. An application can have amazing features, but if it doesn’t meet a user’s expectation regarding UI performance, it will just end up as another rarely tapped icon on a mobile device or rarely visited browser bookmark. However, intrinsic to the notion of the user interface is the understanding that there is a human driving the application. Therefore, performance testing needs to accurately reflect human behavior, including the time it takes an actual human to work the UI.
Good UI performance tests accommodate actual human behavior during test execution. This includes measuring the time it takes a human to input data into the UI and emulating that time in the test scripts. Accurate measurement is the foundation of all test activity, and accommodating human behavior in test automation is critical to providing the accuracy required to meet the testing demands of the modern enterprise.