4 Ways to Get End Users Involved in Performance Testing

“You’ve got to start with the customer experience and work backwards to the technology.” —Steve Jobs

“Websites that are hard to use frustrate customers, forfeit revenue and erode brands.” —Forrester Research

“The only way to find out if a site works [with users] is to test it.” —Steve Krug

We build web and mobile apps so that we can interact with our customers and users. So it’s no wonder that one of the most important things you can do when building your site is to actually test it with those users and make sure it works well.

We call this User Acceptance Testing (UAT) or Beta Testing, and it’s long been a key stage in the waterfall software development process, typically performed as a final check right before you release the product. The agile methodology also teaches us to include users, although agile brings them to the table at nearly every stage of development. You could say that, in an agile process, users are considered partners in the creation of the app.

But UAT is often focused on functional usability: Does this button make sense here? Do you know where to click next? Is the workflow clear or frustrating? That kind of thing. So here’s our question:

Have you ever considered including users in performance testing?

It may not be the most obvious thing to think about, but the benefits are substantial: there is nothing quite like the feedback that a real user provides, even for performance. Here’s how to do it.

User Acceptance Testing: Purpose and Objective

There are three main goals of user acceptance testing:

  1. Ensure the application meets the needs of users, thereby reducing development and support costs after the launch.
  2. Spot problems missed by automated testing tools.
  3. Make sure the application supports day-to-day usage.

Each of these goals matters just as much to the app’s performance as to its functional usability.

In user acceptance testing, software is tested in the real world by actual human beings rather than by tools that simulate users. This type of testing can be done by a dedicated UAT team, by internal team members in other departments, or by the public. It’s often wise to include all three groups in UAT, perhaps expanding your circle of testers as you gain more experience with the app.

Applying the same approach to your performance test plan isn’t necessarily very different from what you’re already doing with user testing, but you do want to think about what you’re asking of those users from a performance perspective and how best to integrate them into the testing process. Here are some ways to do that.

Method 1: Have a Public Beta

Any web user is familiar with the concept of a public beta. Basically, you release software with a caveat: it may not work well, and it may be buggy. There will be general support for users, but the implicit understanding is that they run it at their own risk.

Companies like Apple and Google are no strangers to applying UAT to performance testing plans. They have dedicated UAT teams and release beta versions of their software to the public. They provide resources that make it easy for users to report problems via help tickets, community forums, or even phone calls and live chat. Then they incorporate standard performance testing and monitoring processes into the operation of that public beta.

Depending on your software and your users, people might be concerned about running beta-quality software, so you may need to offer incentives to entice participation. For example, Microsoft has allowed beta testers to purchase the completed version of its operating system at a significantly reduced cost.

Public betas are obviously a big deal and involve the whole department or company. But if you’re running one anyway, it can serve as the perfect platform for deploying all of your load testing and performance monitoring infrastructure in the context of live users.
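
If you go this route, the monitoring side can start small. Here’s a minimal sketch of per-request timing, assuming a Node.js/Express backend; the recordMetric helper is a hypothetical stand-in for whatever metrics client you actually use.

```typescript
// Minimal sketch of response-time monitoring for a public beta,
// assuming a Node.js/Express backend.
import express, { Request, Response, NextFunction } from "express";

const app = express();

// Hypothetical helper: forward a timing sample to your monitoring backend.
function recordMetric(name: string, valueMs: number, tags: Record<string, string>): void {
  console.log(`${name}=${valueMs.toFixed(1)}ms`, tags); // swap in a real metrics client
}

// Middleware: time every request served by the beta deployment.
app.use((req: Request, res: Response, next: NextFunction) => {
  const start = process.hrtime.bigint();
  res.on("finish", () => {
    const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
    recordMetric("beta.response_time", elapsedMs, {
      route: req.path,
      status: String(res.statusCode),
    });
  });
  next();
});

app.get("/", (_req, res) => res.send("beta build"));
app.listen(3000);
```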

Method 2: Hold a Panel or Private Beta

This method is much more manageable and can be executed within a performance team without involving the entire development group. Select a small number of people and ask them to join a private beta or a product panel. Stand up your pre-release software and periodically give them tasks that lead them through a directed experience.

With real users accessing the app, you can easily ask for feedback. You may gather it in a general way (How did the experience feel to you?) or make your questions specific (How long did it take you to complete the checkout process?). Keep people involved at several points in the process, and give them small thank-you gifts or discounts on the product in return for their help.

A private beta can be conducted at much less expense, in a shorter timeframe, and with far fewer interdependencies than a public beta. It’s a great option when you know exactly what feedback you are looking for and when you have access to a set of customers who are excited to help you out.

Method 3: Pop Up a Survey

If you already have lots of users coming through an existing app, you can run a performance test on your public servers by directing a small number of those users to a sandboxed version of your pre-release software. When they first enter the site, inform them that they are being directed to a newer version and that you’d like feedback about their experience with it. Then, at some point during their visit, pop up a quick survey and ask them to rate how the visit is going.
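
The traffic split itself can be quite simple. Below is a minimal sketch, assuming a Node.js/Express front door; the 5% share, the cookie name, and the sandbox URL are illustrative assumptions rather than recommendations.

```typescript
// Minimal sketch of directing a small fraction of traffic to a sandboxed
// pre-release build, assuming a Node.js/Express front door.
import express, { Request, Response, NextFunction } from "express";
import cookieParser from "cookie-parser";

const SANDBOX_URL = "https://beta.example.com"; // hypothetical sandbox host
const SANDBOX_SHARE = 0.05; // send roughly 5% of new visitors to the beta

const app = express();
app.use(cookieParser());

app.use((req: Request, res: Response, next: NextFunction) => {
  // Sticky assignment: once a visitor is bucketed, keep them there
  // so their whole session happens on one version.
  let bucket = req.cookies["perf-bucket"];
  if (!bucket) {
    bucket = Math.random() < SANDBOX_SHARE ? "sandbox" : "production";
    res.cookie("perf-bucket", bucket, { maxAge: 7 * 24 * 3600 * 1000 });
  }
  if (bucket === "sandbox") {
    // The sandbox build itself shows a banner telling users they're on
    // the newer version and that we'd like their feedback.
    return res.redirect(SANDBOX_URL + req.originalUrl);
  }
  next();
});
```

Keeping the assignment sticky per visitor matters: without it, a single session could bounce between versions and muddy the comparison.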

Don’t overload them with questions; sometimes a single question asking them to rate their experience on a scale of 0 to 5 is enough. You may also want to ask people on the regular, production version of the site to answer the same question so you can compare results.
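
The survey itself can be equally lightweight. Here’s a minimal browser-side sketch of a one-question pop-up; the /api/survey endpoint and the variant label are hypothetical names you would wire up to your own backend.

```typescript
// Minimal sketch of a one-question in-page survey (browser-side TypeScript).
function showPerformanceSurvey(variant: "beta" | "production"): void {
  const box = document.createElement("div");
  box.style.cssText =
    "position:fixed;bottom:16px;right:16px;padding:12px;background:#fff;" +
    "border:1px solid #ccc;box-shadow:0 2px 8px rgba(0,0,0,.2);z-index:9999";
  box.innerHTML = "<p>How is your visit going so far? (0 = awful, 5 = great)</p>";

  for (let rating = 0; rating <= 5; rating++) {
    const btn = document.createElement("button");
    btn.textContent = String(rating);
    btn.addEventListener("click", () => {
      // Record the rating along with which version the user saw,
      // so beta and production answers can be compared later.
      fetch("/api/survey", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ rating, variant, page: location.pathname }),
      });
      box.remove();
    });
    box.appendChild(btn);
  }
  document.body.appendChild(box);
}

// Ask partway through the visit rather than on arrival.
setTimeout(() => showPerformanceSurvey("beta"), 60_000);
```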

This is a great method for gathering quantitative user feedback, so you can evaluate users’ perceptions of the performance of the site using measurable data.

Method 4: Monitor Behavior and Metrics

Ready to secretly involve your users? Try this method. Deploy a new version of your product and direct some portion of users to it. Then compare performance-related metrics for this population against the baseline. No surveys, no pop-ups, no user interaction whatsoever. Just see whether behaviors or results improve with your performance enhancements.

Of course, this technique works best when you can focus on directly comparable tasks. For example, you could measure checkout completion in both versions to discover whether the new one produces a better outcome than the old one. The data gathered tells you whether your changes were an improvement.
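
As one way to make that comparison, here’s a minimal sketch that applies a two-proportion z-test to checkout completion rates for the two cohorts; the counts below are illustrative placeholders, not real data.

```typescript
// Minimal sketch: compare checkout completion between the baseline
// and the new version using a two-proportion z-test.
interface Cohort {
  startedCheckout: number;
  completedCheckout: number;
}

function compareConversion(baseline: Cohort, variant: Cohort): void {
  const p1 = baseline.completedCheckout / baseline.startedCheckout;
  const p2 = variant.completedCheckout / variant.startedCheckout;

  // Pooled proportion and standard error of the difference.
  const pooled =
    (baseline.completedCheckout + variant.completedCheckout) /
    (baseline.startedCheckout + variant.startedCheckout);
  const se = Math.sqrt(
    pooled * (1 - pooled) * (1 / baseline.startedCheckout + 1 / variant.startedCheckout)
  );
  const z = (p2 - p1) / se;

  console.log(`baseline: ${(p1 * 100).toFixed(1)}%, variant: ${(p2 * 100).toFixed(1)}%`);
  // |z| > 1.96 corresponds to significance at roughly the 95% level.
  console.log(`z = ${z.toFixed(2)} (${Math.abs(z) > 1.96 ? "likely" : "not clearly"} significant)`);
}

compareConversion(
  { startedCheckout: 4800, completedCheckout: 3120 }, // baseline cohort (placeholder)
  { startedCheckout: 5100, completedCheckout: 3519 }  // new-version cohort (placeholder)
);
```

The same idea extends to other behavioral metrics, though continuous measures such as response-time percentiles call for a different statistical test than proportions.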

Put UAT First, or Come in Last

While automated tools definitely have their benefits and contribute much to the performance testing process, there’s nothing more valuable than the experience and feedback of real users. You’ll be able to catch problems early and save time; if you’ve ever tried to fix problems after a program is completely developed, you know what a nightmare that can be. And even though user acceptance testing involves only a sample of your users, its findings can be extrapolated to improve the experience for everyone. Remember, the greats treat UAT as a priority. Why not join them?

