Accommodating Human Behavior in Automated Testing

I’m a big fan of test automation. To me, it’s the best way to get software out the door. Automation brings a degree of speed and accuracy to the testing process that in many cases surpasses human capability. This is particularly true of UI testing. A roomful of testers sitting at keyboards, entering data into the UI and recording the results, quickly becomes a bottleneck in today’s enterprise.

Automation is particularly useful for large-scale performance testing. It’s impractical to hire the hundreds, if not thousands, of testers required to deliver the data entry necessary for a realistic performance test. So the conventional practice is to create a large number of virtual users (VUs), with each VU exercising the application’s UI in near simultaneity. The automation script does the front-end work of entering input data, clicking buttons, and selecting items from lists and dropdowns. No human intervention is required. This method has become commonplace, as well it should be. It’s an efficient way to go.

But as useful and conventional as the process is, there is a problem. When it comes to UI testing, test automation can distort the actual behavior of human interaction and thus make the testing inaccurate.

Machine Behavior is NOT Human Behavior

Figure 1 shows a simple login UI for a sample web application I’m writing.

Figure 1: Simple login UI

The test exercises the UI by entering data in the UserName and Password fields and then clicking the login button. It’s a straightforward test that happens every day, one I can write in my sleep, and it shows. It turns out that I missed an important consideration: I’ve been writing tests that execute actions in machine behavior, not human behavior. The difference is making my tests inaccurate.

Here’s what I mean. I went back to the automated test that exercises the UI and added a measurement that reports the time from the first keystroke entered in the UserName textbox to the click of the login button. In other words, I measured the amount of time it took the automation script to complete the login form. Then, I added some code to the JavaScript in the web page to measure and report the time it takes for any operator, human or automated, to fill in the fields on the login page and click the login button. Now I had a way to measure how much time it takes for a human to act.
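The page-side timer can be sketched like this. It's a minimal illustration, not the actual code from my page, and the names (`FormTimer`, `start`, `stop`, `elapsedMs`) are mine. In a browser, you would call `start()` from the UserName field's first keydown handler and `stop()` from the login button's click handler.

```javascript
// A minimal sketch of timing form completion. Works the same whether
// a human or an automation script is driving the form.
class FormTimer {
  constructor() {
    this.firstInputAt = null; // set on the first keystroke only
    this.submittedAt = null;  // set when the login button is clicked
  }
  start() {
    // Later keystrokes must not reset the clock.
    if (this.firstInputAt === null) this.firstInputAt = Date.now();
  }
  stop() {
    this.submittedAt = Date.now();
  }
  elapsedMs() {
    if (this.firstInputAt === null || this.submittedAt === null) return null;
    return this.submittedAt - this.firstInputAt;
  }
}

const timer = new FormTimer();
timer.start(); // first keystroke in UserName
timer.start(); // subsequent keystrokes are ignored
timer.stop();  // login button clicked
console.log(`Form completed in ${timer.elapsedMs()} ms`);
```

The same instrumentation reports on both operators, which is what makes the comparison in Table 1 possible.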

I took the two measurements: one timing the automation script, the other timing myself doing the input. The results are shown in Table 1:

Test Process          Time in Milliseconds
Automation Script     2398
Bob’s Manual Entry    3181

Table 1: A comparison of the time it takes to fill out a simple login form

It turns out that the automation script completed the login form almost 25% faster than I did; put another way, my manual run took roughly a third longer. And this is a simple form! Imagine how much longer a human might take to submit a form with many input fields, lists, options, and validation rules. The automation script can whiz through the input tasks; I can imagine a human taking twice as long, if not longer.
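Running the numbers from Table 1 (the helper name here is mine): the script finished in about 75% of the human time, which is the same as saying the human run took roughly a third longer.

```javascript
// Compare the two Table 1 timings (values taken from the article).
function percentSlower(machineMs, humanMs) {
  // How much longer the human took, relative to the script's time.
  return ((humanMs - machineMs) / machineMs) * 100;
}

const scriptMs = 2398; // automation script
const humanMs = 3181;  // Bob's manual entry

console.log(percentSlower(scriptMs, humanMs).toFixed(1) + '% slower');
// prints "32.7% slower"
```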

Make Your UI Test Scripts Act Human

When you’re writing a performance test in which human behavior, such as data entry, is a critical factor, you need to make sure that you are indeed accurately emulating the human behavior expected. Otherwise, your tests become distorted. I know mine were.

An automated UI script running with machine behavior can assault server-side logic in ways that are not attainable by human action; the same holds on the client side. Machine behavior can create errors that are simply not possible when a human is doing the data entry. Thus, test reporting becomes inaccurate, and an inaccurate test process has little value.

In my case, I just wrote the scripts to enter the data. I never measured the average time a human needs to enter it, nor adjusted my scripts to emulate that data-entry time. Now I do. On a mission-critical performance test, I take the extra thirty minutes to measure how long it takes me (the self-proclaimed World’s Worst Typist) to perform the data entry. I’ll also try to time two other human subjects doing the same data entry. Then, once I have a sense of the actual time it takes a person to perform actions on the web page, I adjust my scripts accordingly, adding waits or conditional accommodations where necessary.
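One way to sketch that adjustment is below. The numbers are assumptions standing in for measured values (a mean of ~180 ms per keystroke), and `sendKey` is a stand-in for whatever key-press call your automation tool provides, not any particular product's API. The jitter matters: it keeps hundreds of virtual users from all typing in machine-perfect lockstep.

```javascript
// Produce a human-paced wait for one keystroke. meanMs would come from
// timing real people; jitterMs spreads individual keystrokes around it.
function humanDelayMs(meanMs = 180, jitterMs = 60) {
  // Uniform jitter around the measured mean, clamped to stay positive.
  const jitter = (Math.random() * 2 - 1) * jitterMs;
  return Math.max(20, Math.round(meanMs + jitter));
}

// Type a string one character at a time, pausing like a person would.
// sendKey is a hypothetical async callback that presses one key.
async function typeLikeAHuman(text, sendKey) {
  for (const ch of text) {
    await sendKey(ch);
    await new Promise((resolve) => setTimeout(resolve, humanDelayMs()));
  }
}
```

A script using this approach spends roughly the same wall-clock time on the login form as the humans you measured, which is the point of the exercise.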

Putting it All Together

UI testing is a critical part of performance testing. An application can have amazing features, but if it doesn’t meet a user’s expectation regarding UI performance, it will just end up as another rarely tapped icon on a mobile device or rarely visited browser bookmark. However, intrinsic to the notion of the user interface is the understanding that there is a human driving the application. Therefore, performance testing needs to accurately reflect human behavior, including the time it takes an actual human to work the UI.

Good UI performance tests accommodate actual human behavior during test execution. This includes measuring the time it takes for a human to input data into the UI and emulating that time in the test scripts. Accurate measurement is the foundation for all test activity. Accommodating human behavior in test automation is critical to providing the accuracy required to meet the testing demands of the modern enterprise.

Learn More

Discover more load testing and performance testing content on the Neotys Resources pages, or download the latest version of NeoLoad and start testing today.


Bob Reselman 
Bob Reselman is a nationally-known software developer, system architect, test engineer, technical writer/journalist, and industry analyst. He has held positions as Principal Consultant with the transnational consulting firm, Capgemini and Platform Architect (Consumer) for the computer manufacturer, Gateway. Also, he was CTO for the international trade finance exchange, ITFex.
Bob has authored four computer programming books and has penned dozens of test engineering/software development industry articles. He lives in Los Angeles and can be found on LinkedIn, or on Twitter at @reselbob. Bob is always interested in talking about testing and software performance and happily responds to emails (tbob@xndev.com).
