Testing software that engages users must model how those users will interact with it in real-world cases. Working from assumptions about how users will behave is a recipe for disaster.
One result of performance testing is seeing how the application handles whatever stress you put on it. That “whatever” will be variable, asynchronous, and irregular, just as users are.
It’s always necessary to remember that application architecture, and app delivery in particular, is complicated. What is not apparent to the tester, or is hidden deep in the architecture, can still affect user experience. Additionally, other software running on the platform, the local environment, and the user’s ISP can all shape what happens in production.
Performance tests should simulate these kinds of factors as part of the testing regimen. Let’s take a look at some of the influences that should be involved in testing.
Geography

A user’s geographic location plays a significant role in the experience they have, and it bundles many factors that affect load simulation. The number of hops and the backbone speeds in the path between the website and the client system are just a few of the attributes wrapped up in the geography parameter. In its own right, geography can drive simulation of, for example, how many packets will routinely be dropped.
Geographically dispersed load testing means the ability to distribute load across many places. This forces your servers to handle traffic from a broader range of endpoints, mirroring what is expected in production.
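As a minimal sketch of the idea, per-region network behavior can be modeled inside the load generator itself. The region names, latency ranges, and drop rates below are illustrative assumptions, not measured values:

```python
import random

# Hypothetical per-region network profiles; the regions, latency ranges
# (in milliseconds), and drop rates are illustrative assumptions.
REGION_PROFILES = {
    "us-east":  {"latency_ms": (20, 60),   "drop_rate": 0.001},
    "eu-west":  {"latency_ms": (80, 150),  "drop_rate": 0.005},
    "ap-south": {"latency_ms": (150, 300), "drop_rate": 0.02},
}

def simulate_request(region):
    """Return (delivered, latency_ms) for one simulated request from a region."""
    profile = REGION_PROFILES[region]
    if random.random() < profile["drop_rate"]:
        return False, None  # packet dropped in transit
    low, high = profile["latency_ms"]
    return True, random.uniform(low, high)

# Spread simulated traffic across regions to mirror production geography.
results = [simulate_request(r)
           for r in random.choices(list(REGION_PROFILES), k=1000)]
```

In practice a distributed load-testing tool would run real generators in each region; the sketch only shows how geography becomes a parameter of the test rather than a constant.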
Devices and Browsers
A web browser can be a huge blind spot in the testing regimen. The increased use of client-side scripting requires monitoring that processing so you can accurately model performance. Reviewing how a browser renders a page can reveal, for example, how long the user must wait for individual steps to become active.
Similarly, you need to monitor the processing taking place on the other devices that may be used to interact with your software. Users connect with smartphones as well as PCs. Look for software changes on each device and monitor for differences, since these can impact performance.
Make no mistake: you will need to fix browser- or device-specific issues before users decide to abandon your product because it performs poorly. At that point, you will lose them, and they won’t feel bad about it.
User Scenarios

Recreating how users interact with a site is a critical piece of building a realistic load test. The basis of this is recording scenarios for analysis. The recording has to be parameterized so that variables are randomized and therefore represent what real users do most often.
Let’s say the user being recorded waits 10 seconds to click a button. This could be replaced by a parameter that randomly waits between 5 and 20 seconds, covering most user behavior.
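That parameterization can be sketched directly: the fixed 10-second wait from the recording is replaced with a randomized think time. The function and parameter names here are hypothetical, not from any particular tool:

```python
import random
import time

def randomized_think_time(low=5.0, high=20.0):
    # The recorded wait (10 s in the example) becomes a random value in
    # [5, 20] s so each virtual user behaves a little differently.
    return random.uniform(low, high)

def run_step(click_action, sleep=time.sleep):
    """Replay one recorded step: think, then click."""
    wait = randomized_think_time()
    sleep(wait)          # simulated user "thinking" before the click
    click_action()
    return wait
```

The `sleep` argument is injectable so the same step logic can run at full speed in a dry run and in real time during the actual load test.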
Google Analytics can provide a good view of parameter variability, since its data reflects what actual users do.
A tester needs to think holistically about how a user navigates through the app.
Some parts of the user’s experience may seem separate from the application itself, but all testing scenarios must be designed to include what the user will actually encounter. This means building exact test scenarios with all the elements involved, such as popups or interruptions.
A chat window is usually a small component of a typical web-based application and rarely gets tested, but it should be.
Infrastructure packages like the Java Message Service, or third-party services (e.g., ad networks), can also be involved. The delays each introduces may impact user experience.
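One way to account for such dependencies in a test is to wrap the call with a simulated delay. This is a sketch only; the delay bounds are assumptions, and a real test would measure the actual third-party service:

```python
import random
import time

def with_third_party_delay(action, min_ms=50, max_ms=800, sleep=time.sleep):
    """Run `action` after a randomized delay standing in for a third-party
    dependency (an ad network, a message broker). Bounds are assumed."""
    delay = random.uniform(min_ms, max_ms) / 1000.0
    sleep(delay)  # the dependency responds only after this delay
    return action(), delay
```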
Network Conditions

Monitoring network bandwidth and web application performance from multiple locations helps isolate problems in the network. However, the range of network speeds also has to be represented. Each user may be on a different network speed each time they use the app. While this may be difficult to model, incorporating randomized parameters that account for the network can help surface obvious problems.
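One simple way to randomize the network parameter is to draw each virtual user's bandwidth from a weighted set of tiers. The tiers and weights below are illustrative assumptions, not survey data:

```python
import random

# Hypothetical bandwidth tiers (Mbit/s) with assumed probabilities of a
# user being on each tier for a given session.
BANDWIDTH_TIERS = [(2, 0.1), (10, 0.3), (50, 0.4), (200, 0.2)]

def pick_bandwidth_mbps():
    """Draw a session bandwidth from the weighted tiers."""
    tiers, weights = zip(*BANDWIDTH_TIERS)
    return random.choices(tiers, weights=weights, k=1)[0]

def transfer_seconds(payload_bytes, mbps):
    # bytes -> bits, divided by line rate; ignores protocol overhead
    return (payload_bytes * 8) / (mbps * 1_000_000)
```

For example, a 1 MB response over a 10 Mbit/s connection takes about 0.8 seconds of raw transfer time, before any protocol overhead.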
Try to send requests precisely as your browser did when you recorded the scenario. Remember that different devices and browsers enforce different policies on how many concurrent requests they allow; a phone, for instance, will generally be more restrictive than a desktop or laptop.
Again, it may be necessary to make a variable mix of connection types available to the app during the test to create the most realistic load. The closer your simulation comes to the actual user experience, the better off you will be.
Larry Loeb has written for many of the last century’s dominant “dead tree” computer magazines, including BYTE Magazine (Consulting Editor) and the launch of WebWeek (Senior Editor). Additional works to his credit include a seven-year engagement with IBM DeveloperWorks and a book on the Secure Electronic Transaction (SET) Internet protocol. His latest entry, “Hack Proofing XML,” takes its name from what he felt was the commercially acceptable thing to do.
Larry’s been online since UUCP “bang” (where the world seemed to exist relative to DEC VAX) and has served as editor of the Macintosh Exchange on BIX and the VARBusiness Exchange. He lives in Lexington, Kentucky and can be found on LinkedIn here, or on Twitter at @larryloeb.