
7 ways to build a robust testing in production (TiP) practice


Here’s a short anecdote: a QA team maintained a test environment separate from their production environment, but they had gone to great lengths to ensure it matched production in every respect. It used the same hardware, and all the software was identical down to the patch level. They had a strong process to ensure that any change made to one environment was mirrored in the other.

Perhaps you can see where this story is going. Sure enough, the environments turned out to not be identical. In fact, there was some BIOS setting that differed on the servers, which resulted in the production machines running slower than the test machines. As a result, the overall production system couldn’t handle the volume of traffic that the test system could, and one day, the whole live site unexpectedly crashed due to a load level it was supposed to be able to manage.

What’s the point of this story? Any difference between testing and production environments, no matter how small, can have a serious impact.

And let’s face it — test environments are almost never maintained with the same consistency as in our anecdote. Yet, overall, production systems largely behave the way we expect them to. Why?

One reason has been a growing practice of testing in production (TiP). Essentially, once code is released into the production environment, it is put through a battery of tests to make sure it works. These tests continue as part of ongoing operations, and we’re alerted when there are problems. TiP is steadily becoming a standard and critical part of any modern web development organization.

TiP: Testing in production

QA professionals’ jobs are changing, especially when it comes to cloud or web-based apps. Instead of finding bugs in a particular release of software, the job of a tester is to be the guardian and steward of the entire development process, ensuring that defects are identified and removed before they get to the production environment.

That’s why TiP is so important. It’s not that it takes the place of traditional testing, but rather it enhances it with a set of test procedures that just make sense to do in the production environment. As the story above illustrated, it can be very difficult to create and maintain a test environment that’s truly an exact clone of production — so much so that there is a class of tests that simply don’t make sense to execute in any environment other than production.

TiP provides a structured way of conducting tests using the live site and real users — because for those tests, that is the only way of getting meaningful results.

There are a number of different types of TiP that any software tester should know about. Here’s a summary of some of the most important ones.

Canary testing

Back in the days before PETA, coal miners would bring a caged canary into the mines with them. If there was a sudden buildup of dangerous gas such as carbon monoxide, the fragile canary would succumb before the humans, providing an early warning system for the miners. Put simply, dead bird = danger.

In TiP, canary testing refers to the process of deploying new code to a small subset of your production machines before releasing it widely. It’s kind of like a smoke test for SaaS. If those machines continue to operate as expected against live traffic, it gives you confidence that there is no poisonous gas lurking, and you can greenlight a full deployment.
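To make the idea concrete, here’s a minimal sketch in Python. The traffic share, threshold, and function names are illustrative assumptions rather than any particular tool’s API: a small slice of live traffic goes to the canary hosts, and the full rollout only proceeds if their error rate stays low.

```python
import random

# Hypothetical example: route a small slice of live traffic to canary hosts
# and compare their error rate against a threshold before a full rollout.
CANARY_TRAFFIC_SHARE = 0.05   # 5% of requests hit the new build
ERROR_RATE_THRESHOLD = 0.01   # abort the rollout above 1% errors

def pick_pool() -> str:
    """Decide whether a request goes to the canary or the stable pool."""
    return "canary" if random.random() < CANARY_TRAFFIC_SHARE else "stable"

def canary_is_healthy(canary_errors: int, canary_requests: int) -> bool:
    """Greenlight the full deployment only if the canary error rate stays low."""
    if canary_requests == 0:
        return False  # not enough signal yet
    return canary_errors / canary_requests <= ERROR_RATE_THRESHOLD

# Example: 3 errors out of 1,000 canary requests -> 0.3% error rate, proceed.
print(canary_is_healthy(canary_errors=3, canary_requests=1000))
```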

Controlled test flight

A canary test targets a subset of your machines; a controlled test flight targets a subset of your users. In this kind of TiP, you expose a select group of real users to software changes to see if they behave as expected. For example, let’s say your release involves a change to your app’s navigation structure. You’ve gone through your usability tests but want to do a little better than that before everyone sees the change.

That’s where a controlled test flight comes in. You make the change but only expose it to a specific slice of your users. See how they behave. If things go as expected, you can roll the change out to the wider audience.
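Here’s a rough sketch of how that gating might look. The `sees_new_navigation` flag and the 10% rollout figure are hypothetical; hashing the user ID simply keeps each user’s experience stable across visits.

```python
import hashlib

# Hypothetical example: expose a new navigation layout to a fixed slice of
# real users. Hashing the user ID keeps the assignment stable across visits.
ROLLOUT_PERCENT = 10  # show the change to roughly 10% of users

def sees_new_navigation(user_id: str) -> bool:
    """Return True if this user is in the test flight cohort."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100          # map the user into 0..99
    return bucket < ROLLOUT_PERCENT

print(sees_new_navigation("user-42"))   # same user, same answer every time
```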

A/B split testing

Sometimes you aren’t exactly sure what users will prefer, and the only way to know is to observe their behavior. A/B split tests are very common in web-based apps because it’s a great way to use behavioral data to make decisions. In this case, you are developing two (or more) experiences — the “A” experience and the “B” experience — and exposing an equivalent set of users to each experience. Then you measure the results.

A/B testing is an incredibly powerful tool when used properly, because it truly allows a development organization to follow its users. It does involve more work and coordination, but the benefits can be substantial when done properly.
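Here’s a minimal sketch of the mechanics, with made-up variant names and numbers: users are assigned deterministically to A or B, and you compare the conversion rate of each group.

```python
import hashlib

# Hypothetical example: split users evenly between an "A" and a "B"
# experience, then compare conversion rates for each group.
def assign_variant(user_id: str, experiment: str = "checkout-button") -> str:
    """Deterministically assign a user to variant A or B."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

def conversion_rate(conversions: int, visitors: int) -> float:
    return conversions / visitors if visitors else 0.0

# Illustrative numbers only: in this made-up run, B converts better.
print(assign_variant("user-42"))
print(conversion_rate(120, 5000), conversion_rate(155, 5000))
```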

Synthetic user testing

Synthetic user testing involves the creation and monitoring of fake users that interact with the real site. These users follow predefined scripts to execute various functions and transactions within the web app. For example, they could visit the site, navigate to an eCommerce store, add some items to their cart, and check out. As the script executes, you track relevant performance metrics for the synthetic user so you know what kind of experience your real users are having.

Synthetic monitoring is a key component of any website’s application performance monitoring strategy.
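Here’s a small illustrative script of that idea. The URLs and journey steps are placeholders for your real app’s flow, and it uses the requests library simply to time each step; in practice those timings would feed your monitoring system.

```python
import time
import requests

# Hypothetical example: a scripted "synthetic user" that walks through a
# simple journey and records how long each step takes.
STEPS = [
    ("home page", "https://example.com/"),
    ("store", "https://example.com/store"),
    ("checkout", "https://example.com/checkout"),
]

def run_synthetic_journey() -> None:
    for name, url in STEPS:
        start = time.monotonic()
        response = requests.get(url, timeout=10)
        elapsed_ms = (time.monotonic() - start) * 1000
        # In practice these numbers would feed your monitoring system.
        print(f"{name}: HTTP {response.status_code} in {elapsed_ms:.0f} ms")

if __name__ == "__main__":
    run_synthetic_journey()
```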

Fault injection

Here’s an interesting, and perhaps unsettling, idea: create a problem in your production environment, just to see how gracefully it’s handled. That’s the idea behind fault injection. You have built all this infrastructure to make sure you are protected from specific errors. You should actually test those processes.

Netflix is famous among testing circles for its Chaos Monkey routine. This is a service that will randomly shut down a virtual machine or terminate a process. It creates errors that the service is supposed to be able to handle, and in the process has drastically improved the reliability of the application. Plus, it keeps the operational staff on its toes.
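Here’s a toy sketch of the fault-injection idea (not Netflix’s actual Chaos Monkey code): pick a random instance from a hypothetical fleet, “terminate” it, and check that the service still responds. The fleet list and the terminate/health-check functions are placeholders.

```python
import random

# A toy sketch of fault injection: pick one instance at random from a fleet
# and "terminate" it, then verify the service still answers. The instance
# list and the terminate/health-check functions are hypothetical placeholders.
FLEET = ["web-01", "web-02", "web-03", "web-04"]

def terminate(instance: str) -> None:
    print(f"terminating {instance} (placeholder for a real cloud API call)")

def service_is_healthy() -> bool:
    # Placeholder: in practice, hit a health endpoint or check error rates.
    return True

def chaos_round() -> None:
    victim = random.choice(FLEET)
    terminate(victim)
    assert service_is_healthy(), f"service degraded after losing {victim}"
    print("service survived the failure, as designed")

chaos_round()
```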

Recovery testing

As with fault injection, you want to know that your app and organization can recover from a serious problem when called upon to do so. Some procedures are rarely exercised in production, like failing over to a secondary site or restoring from a previous backup. Recovery testing exercises these processes.

Run fire drills for your app. Select a time when usage is low and put your environment through the paces that it is supposedly designed to handle. Make sure that your technology and your people are able to handle real problems in a controlled way, so you are confident they will be handled properly when it’s truly a surprise.
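One way to keep a drill honest is to time each step against a target. The sketch below assumes placeholder failover and restore functions and made-up time targets; the point is simply to record whether each recovery procedure finishes within the window you have promised.

```python
import time

# A minimal fire-drill runner: each step is a placeholder you would replace
# with a real failover or restore procedure, timed against a target.
def fail_over_to_secondary() -> None:
    time.sleep(0.1)  # placeholder for the real failover procedure

def restore_latest_backup() -> None:
    time.sleep(0.2)  # placeholder for the real restore procedure

DRILL = [
    ("fail over to secondary site", fail_over_to_secondary, 300),  # 5 min target
    ("restore latest backup", restore_latest_backup, 900),         # 15 min target
]

for name, step, target_seconds in DRILL:
    start = time.monotonic()
    step()
    elapsed = time.monotonic() - start
    status = "OK" if elapsed <= target_seconds else "MISSED TARGET"
    print(f"{name}: {elapsed:.1f}s (target {target_seconds}s) {status}")
```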

Data-driven quality

Finally — and this may go without saying — put in place systems that will help your QA team receive and review operational data to measure quality. Make sure that testers have access to logs, performance metrics, alerts, and other information from the production environment, so they can be proactive in identifying and fixing problems.
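As a tiny illustration of the idea, the sketch below assumes a simple made-up log format and alert threshold: it computes an error rate from production log lines and flags it for the QA team when it crosses the line.

```python
# Assumed log format and threshold, for illustration only: parse production
# logs, compute an error rate, and flag it when it crosses a threshold.
LOG_LINES = [
    "2023-07-27T10:00:01 INFO  GET /store 200",
    "2023-07-27T10:00:02 ERROR GET /checkout 500",
    "2023-07-27T10:00:03 INFO  GET /cart 200",
]

ERROR_RATE_ALERT_THRESHOLD = 0.05  # alert above 5% errors

errors = sum(1 for line in LOG_LINES if " ERROR " in line)
error_rate = errors / len(LOG_LINES)

if error_rate > ERROR_RATE_ALERT_THRESHOLD:
    print(f"ALERT: error rate {error_rate:.1%} exceeds threshold")
else:
    print(f"error rate {error_rate:.1%} is within normal bounds")
```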

Conclusion

Testing in production can be an extremely valuable tool in your QA arsenal, when used properly. Sure, there are always risks of testing with live users, but let’s face it — there are risks to not testing with live users as well. However, if you build the right procedures, TiP can result in a huge boost to your app’s overall quality.

This blog was originally published in 2015 and was refreshed in July 2021.
