Load Testing IoT Apps: It’s a Matter of Policy

A few weeks ago I bought a wireless printer. Along with the purchase came an offer from the manufacturer to sign up for a service that, for a monthly fee, sends me replacement ink cartridges automatically.

How does the manufacturer know when to send me ink? A web client that communicates directly with the manufacturer over the Internet is built into the printer. I registered the machine with the manufacturer. Then, the printer notifies the manufacturer every time a page is printed. The printer also reports its current ink level to the manufacturer. When the ink runs low, the manufacturer sends out a refill. The cost of the service depends on how many pages I print in a given month. If I print fewer than 15 pages, the ink gets sent to me for free. At 300 pages a month, I pay $9.99. (For what it’s worth, the manufacturer assures me that the content of my printing is never collected.)

For all intents and purposes, my printer has a mind of its own. It’s now part of the Internet of Things (IoT).

So, what does this have to do with load testing IoT?

The Problem Inherent in IoT Testing

Load testing software is a well-known practice. Mainly, it’s code testing code. You write code that spins up some virtual users, exercises the system under test, and gathers the results. Everything is apparent and standardized. Testing software can be accomplished automatically, with little or no human intervention.
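To make the "code testing code" idea concrete, here is a minimal sketch of that pattern: a handful of virtual users hammer a target concurrently while latencies are collected and summarized. The `call_service` function is a hypothetical stand-in for a real request to the system under test; in practice it would be an HTTP call or similar.

```python
import random
import statistics
import threading
import time


def call_service():
    """Hypothetical stand-in for one request to the system under test."""
    time.sleep(random.uniform(0.01, 0.05))  # simulated response time


def virtual_user(n_requests, latencies, lock):
    """One virtual user: issue requests and record each latency."""
    for _ in range(n_requests):
        start = time.perf_counter()
        call_service()
        elapsed = time.perf_counter() - start
        with lock:
            latencies.append(elapsed)


def run_load_test(n_users=5, requests_per_user=10):
    """Spin up virtual users, wait for them to finish, gather results."""
    latencies, lock = [], threading.Lock()
    threads = [
        threading.Thread(target=virtual_user, args=(requests_per_user, latencies, lock))
        for _ in range(n_users)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    latencies.sort()
    return {
        "requests": len(latencies),
        "mean_s": statistics.mean(latencies),
        "p95_s": latencies[int(0.95 * len(latencies)) - 1],
    }


print(run_load_test())
```

Everything here is observable in software, which is exactly why it automates so well: no human has to watch anything, and the results are gathered by the same program that generated the load.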

Load testing IoT devices is different altogether. Take my printer, for example. It seems the only real way I can load test the printer, which is now primarily an IoT device, is to send some number of pages to the machine via a script that identifies the printer on the network and then pings it with print commands. I can pace the virtual users. I can pace the printing (e.g., one page per second, two per second, etc.). I can even monitor the device if it publishes a way for me to access its print queue or page count. But what do I do when the paper runs out? It seems I need to have a human on hand. It might look like a trivial use case, but I think not.
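A sketch of that paced printer test might look like the following. `SimulatedPrinter` is an invented stand-in for the real device (a real test would drive the printer over IPP or a vendor API), and the out-of-paper exception models the exact point where the automated test hits the physical world and stalls until a human intervenes.

```python
import time


class SimulatedPrinter:
    """Hypothetical stand-in for a network printer. A real test would
    talk to the device over IPP or a vendor-specific API instead."""

    def __init__(self, paper=20):
        self.paper = paper        # sheets in the tray
        self.page_count = 0       # pages printed so far

    def print_page(self):
        if self.paper == 0:
            # The physical-world failure: only a human can reload paper.
            raise RuntimeError("out of paper")
        self.paper -= 1
        self.page_count += 1


def paced_print_test(printer, pages, pages_per_second):
    """Send print commands at a fixed pace; stop when the device stalls."""
    interval = 1.0 / pages_per_second
    printed = 0
    for _ in range(pages):
        try:
            printer.print_page()
            printed += 1
        except RuntimeError as err:
            print(f"paused after {printed} pages: {err}")
            break
        time.sleep(interval)  # pacing, e.g., one or two pages per second
    return printed


printer = SimulatedPrinter(paper=20)
print("pages printed:", paced_print_test(printer, pages=30, pages_per_second=50))
# The tray holds 20 sheets, so the 30-page run stalls at 20 pages.
```

The pacing loop automates cleanly; the `RuntimeError` branch is where automation ends and a person with a ream of paper begins.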

Printers can easily be compared to household appliances such as smart refrigerators. How do I even go about creating a test plan for such a device?

Things get tough when we move to life-and-death scenarios such as driverless vehicles. I wonder, how is Tesla load testing its cars? Or, how do you load test all the traffic lights in a municipal traffic system? Load testing gets even harder when each of those traffic lights becomes an edge device that can make decisions on its own. Throw in the fact that outside of cell phones and gaming systems, there’s not a lot of interface standardization – real or virtual – allowing access to a given device. Automating the load test process at the device level requires a significant amount of custom work that is not reusable. Every test becomes unique and, hence, time-consuming to prepare.

So what’s to be done?

Why Policy Counts

Given the technical disparity among IoT devices and related applications, conducting enterprise-wide verifiable, consistent IoT load testing, at least for now, is a matter of explicit policy.

You can think of policy as a set of rules governing an organization’s activity. Typically, the pattern is that the organization establishes policy then prescribes the procedures required to support it. Such a system is necessary when no standard code of conduct exists.

The value of a policy is that it establishes a valid rule for conduct, regardless of the implementation of that conduct. For example, imagine the following policy for a fictitious IoT traffic light company:

“… all traffic lights will be operational at least 98% of the time, and data emissions will be 99% accurate on a network of at least 1,000 lights.”

Once such a policy is defined, it’s up to managers and engineers downstream to devise plans in support of it. In the traffic light scenario, the supporting procedure might be to set up 1,000 traffic lights on an internal network, inject light-changing behavior into it, and then hire a few thousand people (over eight-hour shifts) for a month to observe light behavior. Or, the test might dedicate a video camera to each light. Later, as the technology matures and traffic lights can report their own behavior, load testing might be no more than injecting behavior into the network of IoT-enabled traffic lights and having the lights report back what they did.
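Whatever the procedure, the policy itself reduces to a small, checkable rule. Here is a minimal sketch of that check, assuming hypothetical fleet measurements (uptime hours, message counts) collected over a reporting window; the function name and parameters are illustrative, not from any real system.

```python
def meets_policy(uptime_hours, total_hours, accurate_msgs, total_msgs,
                 n_lights, min_uptime=0.98, min_accuracy=0.99, min_lights=1000):
    """Check fleet measurements against the fictitious policy thresholds:
    at least 98% operational, 99% accurate data, on at least 1,000 lights."""
    uptime = uptime_hours / total_hours
    accuracy = accurate_msgs / total_msgs
    return (n_lights >= min_lights
            and uptime >= min_uptime
            and accuracy >= min_accuracy)


# Example: a 30-day window (720 hours per light) across 1,000 lights.
# 708,000 / 720,000 hours up = 98.3% uptime; 995,000 / 1,000,000 = 99.5% accuracy.
print(meets_policy(uptime_hours=708_000, total_hours=720_000,
                   accurate_msgs=995_000, total_msgs=1_000_000,
                   n_lights=1000))  # → True
```

Notice that this check doesn’t care whether the numbers came from thousands of human observers, video cameras, or self-reporting lights. That’s the point: the procedure can evolve with the technology while the policy stays fixed.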

The critical thing to understand is that policy provides a verifiable standard for testing, one that allows test practices to evolve reliably as the given technology matures. But policy without procedure is meaningless, and procedure without policy is random and potentially ineffective. In the fictitious traffic light case, were a tragedy to occur, you would not want to be the one explaining whether procedures were followed in the absence of a policy framework. Policy counts. It’s the standard that bonds organizational behavior over time and across groups.

How to Move Forward

Consider the load testing of my printer again. The device was probably subjected to significant load testing. The manufacturer is a mature company. It’s been making devices since 1939, including life-and-death medical devices. Thus, I have a hard time believing the manufacturer doesn’t have a stringent set of testing policies and procedures built into the fabric of its business. The testing policy was probably in place before the first printer came off the assembly line. As for the actual testing procedures, that’s another story. Maybe there were thousands of people feeding paper into thousands of printers. Maybe robots were in attendance, or the company used virtual printers to emulate ink consumption. How the testing was executed is interesting and worth further investigation. But no matter the testing procedures, I’ll wager there is a clearly defined policy in place. The company has been in operation for the better part of a century. It’s the way they do business.

Can the same be said of all those IoT companies that are pushing the envelope technologically? I wonder how many of those small startups have design meetings in which the question of testing policy is discussed, or whether these discussions typically start and end with, “We’ll leave it to QA to do the testing.”

The intriguing thing about establishing policy, particularly testing policy, is that it’s incredibly cost-effective. The investment required is mostly one of time and thought. The ROI can be significant, especially where IoT can affect the life and limb of those using it.

Creating good policy takes practice. A policy should be strict enough to ensure that the supporting procedures produce consistent, verifiable results, yet flexible enough to accommodate change as the business grows.

Operating via policy is the sign of a company that plans to stay in business for a long time. The act of defining and supporting a thoughtful, consistent policy, at all levels, provides a compass by which organizations can move forward safely and reliably. It’s no different for IoT device and application load testing, which can be a fast-paced, seat-of-the-pants undertaking until a universal set of standards and protocols emerges. The tests will either support the policy or not. The companies that continue to use policy as their behavioral guide will evolve.

Learn More about Load Testing IoT

Discover more load testing and performance testing content on the Neotys Resources pages, or download the latest version of NeoLoad and start testing today.


Bob Reselman 
Bob Reselman is a nationally-known software developer, system architect, test engineer, technical writer/journalist and industry analyst. He has held positions as Principal Consultant with the transnational consulting firm, Capgemini and Platform Architect (Consumer) for the computer manufacturer, Gateway. Also, he was CTO for the international trade finance exchange, ITFex.
Bob’s authored four computer programming books and has penned dozens of test engineering/software development industry articles. He lives in Los Angeles and can be found on LinkedIn here, or on Twitter at @reselbob. Bob is always interested in talking about testing and software performance and happily responds to emails (tbob@xndev.com).
