The Thin Line Between Performance Testing and Performance Monitoring

On October 16, Apple had a major set of announcements. The world finally saw a new batch of iPads along with the imminent activation of Apple Pay in iOS 8.1, and Apple made its newest Mac operating system, OS X Yosemite, available for download. Within hours, Apple enthusiasts around the world started installing the new desktop OS.

During this time, something amazing happened. Nothing.

That’s right. Users downloaded, bloggers blogged, fanboys surfed, and no one was blocked from accessing the content they needed. Things went much more smoothly this time than they did just a month earlier, when the iPhone 6 launch crashed Apple’s website and live stream.

Not even a company like Apple is immune to website crashes. But the best way to make sure your website performs more like the Yosemite launch and less like the iPhone 6 launch is to be smart, anticipate what may happen, and employ the right combination of preparation and execution around website performance. That’s what we’ll cover in this post.

Testing and Monitoring: Who Does What

In most organizations, these two functions are owned by completely different groups of people. Performance engineers in the QA organization own testing, while the Operations team is responsible for monitoring. Smaller shops may combine these functions into a single DevOps role, but this is typically outgrown as the organization evolves.

In QA, testers are responsible for developing a set of test cases that cover a multitude of user paths through the system. Using a test automation system, they execute thousands of tests against every build to ensure that the application behaves the way it’s expected to. The same philosophy applies in load testing – testers write scenarios that exercise both system behavior and user experience under various degrees of load.
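
To make this concrete, here’s a minimal, tool-agnostic sketch of what a load-test scenario boils down to: a scripted user path replayed by a pool of concurrent virtual users while response times are recorded. The host, paths, and user count below are purely illustrative assumptions, not anyone’s real configuration.

    # load_scenario.py -- minimal load-test sketch (illustrative only)
    # Assumes a hypothetical site at BASE_URL and an example user journey.
    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    BASE_URL = "https://www.example.com"            # assumption: replace with your site
    USER_PATH = ["/", "/product/123", "/cart"]      # assumption: one scripted user journey
    VIRTUAL_USERS = 25                              # assumption: degree of concurrency

    def run_user(_user_id):
        """One virtual user walks the scripted path and returns per-step timings."""
        timings = []
        for path in USER_PATH:
            start = time.time()
            with urlopen(BASE_URL + path, timeout=10) as resp:
                resp.read()                         # consume the body, like a browser would
            timings.append((path, time.time() - start))
        return timings

    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
            results = list(pool.map(run_user, range(VIRTUAL_USERS)))
        for path in USER_PATH:
            samples = [t for run in results for p, t in run if p == path]
            print(f"{path}: avg {sum(samples) / len(samples):.2f}s over {len(samples)} requests")

A real load test would layer on ramp-up, think time, and pass/fail criteria, but the core idea is the same: the scenario encodes a user path, and the tooling measures how the system behaves as concurrency grows.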

Meanwhile, the Operations team owns the monitoring function. They identify specific metrics that provide insight into the overall health of a live, running production system and set thresholds that must not be exceeded. For example: maxed-out CPU, a filled-up disk, a clogged network, a deadlocked database… any number of problems can bring a site to its knees. Alongside these infrastructure checks, Operations typically also runs real-user monitoring, which measures the experience actual visitors are having on the site.
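
The mechanics behind those thresholds are straightforward. The sketch below, using only the Python standard library, samples a couple of resource metrics and reports when an assumed threshold is crossed; a real monitoring stack would gather far more signals, sample continuously, and route alerts, but the principle is the same. The threshold values here are assumptions for illustration.

    # health_check.py -- sketch of threshold-based health monitoring (illustrative only)
    import os
    import shutil

    # Assumed thresholds; real values come from your own capacity planning.
    MAX_LOAD_PER_CPU = 0.8      # 1-minute load average per core
    MAX_DISK_USED = 0.90        # fraction of disk space in use

    def check_health():
        """Return a list of alert messages for any threshold that is exceeded."""
        alerts = []
        load_1m, _, _ = os.getloadavg()                    # Unix-only load average
        if load_1m / os.cpu_count() > MAX_LOAD_PER_CPU:
            alerts.append(f"CPU load high: {load_1m:.2f} across {os.cpu_count()} cores")
        usage = shutil.disk_usage("/")
        if usage.used / usage.total > MAX_DISK_USED:
            alerts.append(f"Disk nearly full: {usage.used / usage.total:.0%} used")
        return alerts

    if __name__ == "__main__":
        for line in check_health() or ["All thresholds OK"]:
            print(line)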

But even these definitions are evolving, as the concept of synthetic monitoring bridges the gap between proactive testing and reactive monitoring. Here you create synthetic (virtual) users that execute a variety of automated tests in the live production environment. What’s nice about synthetic users is that they can execute complex interactions across many layers of technology, mimicking real user behavior. If your synthetic users are having a good experience, you can be reasonably confident that your real users are as well.
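
A synthetic probe can be surprisingly simple at its core: a scripted transaction executed against production on a schedule, with each step checked against a response-time budget. The sketch below assumes a hypothetical site, journey, and per-step budget; a real synthetic monitoring product would run probes from multiple locations and feed the results into your alerting system.

    # synthetic_check.py -- sketch of a synthetic-monitoring probe (illustrative only)
    import time
    from urllib.error import URLError
    from urllib.request import urlopen

    BASE_URL = "https://www.example.com"                  # assumption: your production site
    TRANSACTION = ["/", "/search?q=ipad", "/cart"]        # assumption: one scripted journey
    MAX_STEP_SECONDS = 2.0                                # assumption: per-step response budget

    def run_synthetic_user():
        """Walk the transaction like a real user and flag slow or failing steps."""
        for path in TRANSACTION:
            try:
                start = time.time()
                with urlopen(BASE_URL + path, timeout=10) as resp:
                    resp.read()                           # fully load the step
                elapsed = time.time() - start
            except URLError as exc:                       # covers HTTP errors and timeouts
                return f"FAIL {path}: {exc}"
            if elapsed > MAX_STEP_SECONDS:
                return f"FAIL {path}: {elapsed:.2f}s exceeds {MAX_STEP_SECONDS}s budget"
        return "PASS: synthetic user completed the transaction"

    if __name__ == "__main__":
        print(run_synthetic_user())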

Advancements like synthetic monitoring allow teams to identify and respond to problems much more quickly – often before they become real problems for users. However, that means your testing and monitoring teams need to be working more in sync than ever.

Surely I Don’t Need All of This

Well, if you don’t mind losing money or aggravating your users, you might be able to skimp. But for any meaningful web-based business, it’s critical to incorporate strong testing, real-user monitoring, and synthetic monitoring practices into your process. Otherwise, you’re taking on too much risk.

If you don’t monitor your website, you may be losing business without even knowing it. Users today are much more likely to leave a non-functioning site than to complain about it. Imagine if your shopping cart were down and no one could check out. Your monitoring systems ensure that your key site functions are up and running all the time.

On the other hand, if you don’t test your site, the results could be catastrophic. You’ll undoubtedly run into that one day when a well-respected blogger writes a great story about you, or your marketing team runs a promotion that performs way above expectations. Those are the days that matter. And if you’ve never tested those scenarios, you’ll never know what problems are lurking beneath the surface.

The best outcome is to make sure that you are both testing and monitoring, and that the two functions work together smoothly.

Coordinate a Blended Approach

Here’s a common failure scenario:

  • Your operations team is monitoring a particular transaction in the production environment
  • They detect a problem: something failed or crashed
  • The operations people gather some data and send it back to the test team
  • The test team can’t reproduce the failure, so nothing changes

Has this happened to you? Often, the culprit is that the two teams are monitoring and testing different aspects of the system. Communication is poor, so problems are left unresolved. And they will impact your users again.

Performance testers should be working with the Operations team to properly tune the production environment, based on their knowledge of the internal workings of the apps. They should advise Operations on where to install monitors, because they know how the app is likely to behave. And they should learn from Operations so they can build a more realistic test environment, one that simulates what’s actually happening in production.

Both teams should regularly interact to review key metrics and reports and agree upon the critical user paths, use cases, and metrics that should be monitored. Everyone should understand how users are entering the site, what they do along the way, and how they execute a business transaction.

Get Your Tools To Work Together

You should also look for ways to get your tools working together across performance testing and performance monitoring. This means:

Coordinate user paths in test scenarios and synthetic monitoring. Ideally, you want to export your test cases and import them into a synthetic monitoring system so you can execute the same use cases in production (see the sketch after this list).

Create complex interactions and transactions. User experience is the ultimate metric. Set up testing and monitoring functions to focus on providing and maintaining the best user experience.

Capture monitoring profiles and feed them to QA. Issues observed in production can be provided to performance engineers to reproduce and fix prior to the next system release.

Match KPIs in testing and production environments to business drivers. Align all your resources behind the business drivers that really matter, and set up your systems, reports, dashboards, and alerts to reinforce this behavior.
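
One lightweight way to get that coordination, sketched below under the assumption that both your load-test scripts and your synthetic monitors can consume a simple data file: define each critical user path once, in one place, and have both sides replay it. For instance, the load-test and synthetic-probe sketches earlier in this post could read their journeys from this file instead of hard-coding them. The file name, format, and paths are illustrative.

    # shared_paths.py -- sketch: one user-path definition shared by testing and monitoring
    import json

    # Assumed critical journeys, agreed on by QA and Operations together.
    USER_PATHS = {
        "checkout": ["/", "/product/123", "/cart", "/checkout"],
        "search":   ["/", "/search?q=ipad"],
    }

    if __name__ == "__main__":
        # Export the journeys so load tests and synthetic monitors replay
        # exactly the same steps in pre-production and in production.
        with open("user_paths.json", "w") as f:
            json.dump(USER_PATHS, f, indent=2)
        print(f"Wrote {len(USER_PATHS)} shared user paths to user_paths.json")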

Different Functions, But Tightly Related

Hats off to Apple for the smooth roll-out of a major release. It’s not always easy to get QA and Operations working smoothly together, but when they do, everybody wins. To learn more about how you can do the same, check out Neotys’ products and see how putting the right tools and processes in place will give you the best-performing environment.
