How Performance Testers Can Help Protect & Secure IT

In August of 2018, Bob Diachenko, Director of Cyber Risk Research at Hacken.io, found that over 2 million Mexican citizens had their healthcare data leaked due to a security vulnerability in a system’s database. At the same time, he found that the data of 93 thousand users of a popular babysitting app, Sitter, was also exposed. As if this isn’t scary enough, in November, Marriott Corporation revealed that over the previous four years hackers had broken into its reservation system and stolen the private data of over 500 million customers. That’s more than the populations of Russia, Germany, France, the United Kingdom, Italy, and Spain combined!

Given the security breaches that have occurred at places like Facebook (50 million profiles), Equifax (143 million), Yahoo (3 billion), eBay (145 million) and, surprisingly, Adult Friend Finder (412 million), you’d think that companies would learn from one another. Many have, and yet, many have not. Data theft is still an all too common occurrence on the digital landscape. That’s the bad news. The good news is that security has become a company-wide concern, not something relegated to a few compliance officers sitting in isolation in the data center. Companies have come to understand that everybody has a role in making sure that their digital infrastructure is secure and protected. This includes test practitioners and QA personnel.

Granted, implementing a comprehensive approach to security needs to come down from above. But, there are steps that testing and QA teams can take to help make their work safer and more secure. The following five practices describe how these teams can help protect and secure their company’s data and digital infrastructure.

Know the Playing Field

Having a solid understanding of distributed computing is critical when it comes to secure performance testing. Test practitioners will benefit from having a basic knowledge of networking. They should understand where machines are located and how they interact with one another over the Internet using IP addresses, ports, and DNS names. Understanding how machines are accessed remotely, via Secure Shell (SSH) between Linux computers and WinRM under Windows, is an essential insight. Also, testers should possess the basics of Transport Layer Security (TLS), for example, using SSL certificates to facilitate machine access.
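To make this concrete, here is a minimal pre-flight sketch in Python (the host name is a hypothetical placeholder) of the kind of checks a tester might run before a test: resolve the target’s DNS name, confirm the port is reachable, and inspect the TLS certificate it presents.

```python
import socket
import ssl

HOST = "test-env.example.com"  # hypothetical test target
PORT = 443

# 1. DNS: can the host name be resolved to an IP address?
ip_address = socket.gethostbyname(HOST)
print(f"{HOST} resolves to {ip_address}")

# 2. TCP: is the port reachable?
with socket.create_connection((HOST, PORT), timeout=5) as sock:
    # 3. TLS: does the host present a certificate the default trust store accepts?
    context = ssl.create_default_context()
    with context.wrap_socket(sock, server_hostname=HOST) as tls_sock:
        cert = tls_sock.getpeercert()
        print("Certificate subject:", cert["subject"])
        print("Valid until:", cert["notAfter"])
```

Running a check like this up front catches environment problems, such as a stale DNS entry, a firewalled port, or an expired certificate, before they masquerade as test failures.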

Testers (and the scripts they write) need to be able to contact test environments to perform work. Also, they must have a firm grasp of how the environment is constructed to ensure that the proposed testing plan can execute effectively. Secure testing is more than merely providing access credentials. Test designers need a keen awareness of the risks and vulnerabilities unique to each computing environment, making sure the tests behave appropriately. For example, a common mistake is to have testers assume root privileges when testing directly on a computer. This is fundamentally bad practice. Instead, the tester should work with system admins and security personnel to create users and groups dedicated to the scope of testing activity required. Something as simple as creating dedicated roles for testers in the computing environment will go a long way toward ensuring a more secure operation.
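One simple safeguard along these lines is to have the test harness refuse to run as root at all, forcing the use of a dedicated, least-privilege test account. A minimal sketch, assuming a Linux test host:

```python
import os
import sys

def assert_not_root():
    # os.geteuid() returns 0 when the effective user is root (Linux/macOS)
    if os.geteuid() == 0:
        sys.exit("Refusing to run tests as root; use the dedicated test account.")

assert_not_root()
# ... proceed with the test run under the restricted account
```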

These days more testing activities are taking place in the cloud, so cloud architecture knowledge is necessary too, e.g., being able to distinguish a private cloud from public providers like AWS and Google Cloud. Recognizing how these work together in a hybrid solution is equally useful, particularly if the performance testing scope crosses between public and private cloud instances.

Steer Clear of Production Data

Working with real-world data is essential for conducting reliable performance testing. Getting data that is real-world can be a challenge. Sometimes the size of the datasets needed to emulate production scenarios can be huge (terabytes, if not petabytes). In such cases, a test that was supposed to take an hour turns into an impediment, as test data procurement can take hours, if not days. So, there are occasions (to meet a deadline, for example) when an organization will try to save time by running against production data. (Usually, this happens with read-only data or data that is dedicated to a fictitious user.) The logic: the risk is minimal, especially if the test is conducted during off-hours. Sadly, this thinking is flawed. The potential for incurring significant risk is real.

To get at production data, production credentials must be used to access the datastore. This means that highly sensitive access information is shared with a party that is temporary and likely unknown. I am cringing at the thought of the danger this creates. For all a system administrator knows, the test might be saving access information to disk for later use. Or, maybe the access credentials are being shared in the report that’s issued after the test run. There’s no way to tell.

The safest way to ensure production data is secure is to use it solely for production purposes. If your tests need fast access to data that emulates production, a good approach is to use modern data virtualization technologies such as Delphix or Denodo. Database virtualization allows you to quickly get at testing datasets that closely emulate data in production without incurring the security and performance risks.
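Where data virtualization isn’t an option, synthetic data generation is another way to stay off production entirely. Here is a minimal sketch using the open-source faker Python library; the record schema shown is hypothetical:

```python
# pip install faker
from faker import Faker

fake = Faker()

def make_patient_record():
    # Hypothetical schema; every field is generated, none comes from production
    return {
        "name": fake.name(),
        "email": fake.email(),
        "address": fake.address(),
        "date_of_birth": fake.date_of_birth().isoformat(),
    }

# Build a realistic but entirely fictitious dataset
test_data = [make_patient_record() for _ in range(10_000)]
print(test_data[0])
```

Because every record is fabricated, a leaked test dataset exposes no one, and no production credentials ever leave the production environment.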

If you find yourself tempted to use production data to save time, resist the urge, even in a “just this once” scenario.

Practice DevSecOps

The growing practice in the DevOps community is to give security personnel an equal seat at the table in the software development lifecycle – known formally as DevSecOps. A tenet of DevSecOps is to bring experienced security personnel into all aspects of the cycle earlier; this includes testing.

In a healthy DevSecOps environment, security personnel are seen as teachers and best-practice champions rather than approval-giving safety cops. One of the potentially overlooked perks of this practice is that the knowledge and habits shared by the security representative tend to rub off on others. Indirectly, each member of the team has the opportunity to improve and enforce his or her own security best practices.

The goal of DevSecOps is to weave security into all aspects of software development, making all those who touch the product competent security practitioners.

Beware the Fast Release Cycle

There is a growing trend to use production as a testing environment. It’s understandable given the ever-shorter release cycles most companies demand. A/B testing is probably the best example of this.

For example, a company has an idea for a new feature. The feature is implemented and subjected to “just enough” testing to make sure it won’t wreak havoc in production. With A/B testing, the feature is released to a portion of the customer base. If the feature gains traction in the limited release, the user base is increased. If it’s a dud, the code is rolled back. No harm, no foul, right? Well, not quite.
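To see how such a rollout typically works, here is a minimal sketch of a percentage-based gate; the function and feature names are hypothetical:

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    # Hash the user id together with the feature name so each feature
    # gets a stable, independent bucket assignment per user.
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent

# Release the hypothetical "new_checkout" feature to 5% of users
if in_rollout("user-42", "new_checkout", 5):
    ...  # serve the new feature
else:
    ...  # serve the existing behavior
```

Dialing `percent` up widens the exposed user base, which is exactly where the trouble described below begins.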

As the user base increases, so too does the potential for unintended risks. The new feature could introduce side effects that might go unnoticed until it’s too late: unintentional I/O blocking on the backend, for example, or a script injection vulnerability that ultimately puts the entire app at risk.

Using production as a test platform is always a gamble. The reality is that many companies are going to be hesitant to slow down a release cycle in deference to more pre-release testing. This means that test practitioners have to adopt an approach that accepts the fast-release imperative. Hence, more attention needs to be given to making pre-release testing faster. Also, testers need to come up with ways to identify emerging risks on the production targets in real time. Testers might not be able to prevent testing in production entirely, but they can ensure that problems (actual and potential) are spotted quickly. It’s a matter of planning (and monitoring).

Monitoring is essential for identifying undesirable behavior as soon as possible. Not only do issues need to be uncovered quickly, they also need to be reported immediately, within a process that facilitates fast reaction to danger. Waiting on a manual rollback won’t cut it. An automated response to a threat is crucial.
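As a sketch of what such an automated response might look like, the loop below polls an error-rate metric and triggers a rollback when it crosses a threshold; the metrics endpoint and rollback script are hypothetical placeholders:

```python
import json
import subprocess
import time
import urllib.request

METRICS_URL = "http://metrics.internal/api/error_rate"  # hypothetical endpoint
THRESHOLD = 0.05  # roll back if more than 5% of requests are failing

def current_error_rate() -> float:
    with urllib.request.urlopen(METRICS_URL, timeout=5) as resp:
        return json.load(resp)["error_rate"]

while True:
    if current_error_rate() > THRESHOLD:
        # Automated response: trigger the rollback without waiting on a human
        subprocess.run(["./rollback.sh"], check=True)  # hypothetical script
        break
    time.sleep(30)  # poll every 30 seconds
```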

The demand for fast release cycles isn’t going away. We could be seeing more testing activity in production. With regularly shrinking release cycles, the test practitioner is going to have to execute with greater awareness, better preparation, and an extraordinary sense of vigilance.

Practice Compliance

There’s a reason regulations such as PCI-DSS and GDPR exist. The organizations publishing these have spent millions of dollars and thousands of person-hours to develop them. They have experts on staff who understand security in depth. These experts also understand how organizations work, be they large or small. It’s not as if the regulators sat around throwing rules against the wall at random. Time was spent thoughtfully creating policies and procedures with cost and benefit in mind.

While it might be a bother to implement the procedures that standard security regulations require, there is a bright side to the effort. You don’t have to reinvent the wheel. The practices are well defined. All you need to do is know about and comply with them. Of course, improvement is always useful. Once you adopt the security regulations relevant to your business, you can continually enhance the base practices (make a better wheel, if you will). Who wants to spend hours redefining something that’s already been standardized? Just follow the regulations and focus on execution.

Those opposed to security regulations point out that breaches have occurred despite the time and money spent on compliance. It’s an argument with some merit. But overall, the world is better off where regulations are in place. Just imagine what it would be like without them.

The simple fact is that compliance, even among test practitioners, is a quick and easy way to protect and secure the IT infrastructure.

Putting it All Together

One of the more popular memes on the IT landscape is Conway’s Law, which contends that “… organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations.”

Conway’s Law is particularly relevant to protecting and securing an IT environment. If an organization and its employees are diligent about the protection and security of the company’s digital infrastructure, then the systems it designs will be well protected and secure. If the company has a lax or haphazard approach to security, the systems it maintains will be too. This applies to the organization in its entirety and to the departments and groups inside it, QA included. Secure testing is the result of safe behavior.

Learn More

Discover more load testing and performance testing content on the Neotys Resources pages, or download the latest version of NeoLoad and start testing today.

 

Bob Reselman 
Bob Reselman is a nationally-known software developer, system architect, test engineer, technical writer/journalist, and industry analyst. He has held positions as Principal Consultant with the transnational consulting firm, Capgemini and Platform Architect (Consumer) for the computer manufacturer, Gateway. Also, he was CTO for the international trade finance exchange, ITFex.
Bob has authored four computer programming books and has penned dozens of test engineering/software development industry articles. He lives in Los Angeles and can be found on LinkedIn, or on Twitter at @reselbob. Bob is always interested in talking about testing and software performance and happily responds to emails (tbob@xndev.com).
