The Road to DevOps – Infrastructure as Code, CI/CD, Jenkins & NeoLoad

There’s a joke running around corporate IT. It goes like this: Implementing DevOps is akin to getting to heaven. A few companies can get there by grace alone. The rest have to work harder at it.

Making DevOps a part of a company’s culture requires new ways of thinking and working. Gone are the days of waterfall development, in which work groups are organized into silos separated from one another. DevOps needs groups to work together transparently and cooperatively. There is no more tossing parcels of finished work “over the fence” from development to testing and on to release. DevOps is about doing the job continuously and seamlessly. Under DevOps, the software development lifecycle (SDLC), from programming to release, is approached as a set of unified activities in which all personnel are equal, continuous participants. Workgroups are made up of product managers, project managers, developers, testers, system admins, and release personnel. They work together under the Agile methodology. The goal is to make continuously improving software that meets the evolving needs of the user community. In other words, make great software that counts!

An essential aspect of Agile is approaching the SDLC as ongoing, time-boxed sprints in which small sets of features are released quickly. DevOps embraces the continuous integration/continuous delivery (CI/CD) paradigm to achieve fast release cycles. CI/CD is the process of using automation to test and release code as soon as possible after a developer checks code into source control. Human intervention in the CI/CD process, other than for approval events, becomes the exception rather than the rule.

There are tons of products in play dedicated to making CI/CD work. For example, there are source control management tools such as Git, along with the hosting services built around it, GitHub and Bitbucket, which developers use to manage feature implementation and source code versioning. There are integrated development environments (IDEs) such as Visual Studio Code, Eclipse, IntelliJ, PyCharm, Android Studio, and Xcode that allow developers to seamlessly integrate their coding activity with source control management. The goal is to make development rapid. Many IDEs inform developers when they’re making coding errors. Some IDEs will even suggest ways to make the code better. Also, all the popular IDEs support unit testing at the programmer level. Unit testing is a common practice in Agile.

One of the mainstay products for running CI/CD is Jenkins. Jenkins is an automation tool that DevOps personnel use to implement the soup-to-nuts deployment of software. Jenkins can be configured to get the code from a source code repository such as GitHub, build it, and run the unit tests that accompany the code. Also, Jenkins can run functional and load tests against the code using a testing plugin such as the one provided by NeoLoad. Then, if all is well, Jenkins can be configured to work with cloud providers such as Amazon Web Services (AWS) to provision computing environments into which the new code is deployed. Jenkins deploys the new code into the cloud computing environment and then tests it there.
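
To make that concrete, here is a minimal sketch of the kind of steps a Jenkins job automates, written as plain shell commands. The repository URL, build tool, and deploy script are hypothetical stand-ins for whatever your project actually uses:

# Get the latest code from source control (hypothetical repository)
git clone https://github.com/example-org/webapp.git
cd webapp

# Build the application and run the unit tests that accompany the code
# (assumes a Maven project; mvn package runs the unit tests by default)
mvn clean package

# If the build and tests pass, push the artifact to the target environment
# (deploy.sh is a hypothetical stand-in for your deployment step)
./deploy.sh target/webapp.war staging

In practice, each of those commands becomes a step in a Jenkins job, and Jenkins handles the sequencing, logging, and failure handling.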

CI/CD does a lot of work that, given the demand for rapid release, is beyond the capabilities of any group of manual testers and release personnel. But this does not mean that the need for human interaction is eliminated. Quite the contrary. There is an important role for IT personnel in the CI/CD process. An automated process is only as good as the logic behind it. Writing the logic that drives CI/CD is the work of IT personnel in the world of DevOps. But getting there usually requires a transformation.

The critical transformation that companies new to DevOps need to make is to change the way IT personnel approach the company’s technical infrastructure. The approach used in the world of DevOps is to adopt Infrastructure as Code.

Understanding Infrastructure as Code

Over the last ten years, there has been a significant change in the way IT works. In the old days, when it came time for a company to increase its computing capacity, it usually called up a hardware vendor and ordered some new computers and networking equipment. The hardware arrived, and someone from the systems team installed the new equipment in the server room.

Then, one day, somebody (usually the CFO) noticed that a lot of the hardware was being underutilized. For example, only 10% of disk capacity across all the computers was being used, or CPU utilization hovered around 40% for 20 hours a day, hitting full capacity only in short bursts. Underutilized hardware was a waste of money. Thus, virtualization technology emerged.

Hardware virtualization (a.k.a. “virtual machines”) represents a computer as software. The benefit of virtual machines is that you can put a lot of them on a single piece of hardware, making more efficient use of that hardware. Recently, another type of virtualization has appeared: containers. A container is more lightweight than a VM. Also, containers take less time to create than virtual machines. Virtual machines and containers have become so prominent these days that few IT personnel at the enterprise infrastructure level interact with hardware directly. There is still physical hardware in the desktop and mobile environments, but when it comes to servers, most hardware is virtualized.
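
The difference in weight is easy to see from the command line. Here’s a rough comparison, assuming Docker and the Google Cloud SDK are installed (the resource names are hypothetical):

# Start a container: typically up and running in seconds
docker run -d --name web nginx

# Create a full virtual machine: provisioning typically takes minutes
gcloud compute instances create web-vm --zone=us-west1-a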

Another trend in corporate IT emerged along with virtualization: cloud computing. Companies such as Amazon, Google, Microsoft, and IBM did some analysis and determined they could make money by setting up large data centers full of hardware, putting virtualization provisioning technology on top, and selling VMs, storage, containers, and other forms of computing as a service to their customers. It’s akin to electricity. In the old days, a company might have its own power plant, or a city might have one. Then one day a company builds a massive power generator at Niagara Falls and sells that electricity to power companies throughout the region.

The result: today we live in a world in which most enterprise infrastructure is virtualized. Once everything is virtualized, infrastructure manipulation is done via code. When you need to provision new capacity, instead of hitting up your old hardware vendor, you write something like this:

gcloud container clusters create simple-cluster --num-nodes=1 --zone=us-west1-a
Listing 1: The code to create a single-node (VM) container cluster on Google Cloud

Not only will you write code to create virtual machines and containers, but you’ll also write code to provision the networking around those devices. You’ll also write code for security. In fact, a current trend in cloud computing is to do away with virtualized hardware altogether in favor of pure computing services like AI, machine learning, and custom functions. Developers write only the intelligence. The work of DevOps is to transfer that intelligence into the cloud. The cloud provider does the heavy lifting of provisioning the virtual hardware.
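
For example, a couple more lines of gcloud can define a network and a firewall rule around a cluster like the one in Listing 1. The network and rule names below are hypothetical:

# Define a network for the cluster to live in (hypothetical name)
gcloud compute networks create webapp-net --subnet-mode=auto

# Express the security policy as code: allow inbound HTTPS traffic only
gcloud compute firewall-rules create allow-https --network=webapp-net --allow=tcp:443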

As you can see, now that hardware has receded from the day-to-day work of IT personnel, all that remains is the code that makes it function. From this comes the notion of Infrastructure as Code. However, Infrastructure as Code is not limited to machine provisioning; the concept also plays a significant role in the CI/CD process, particularly around testing.

Deployment and Testing the DevOps Way

As mentioned, deployment tools such as Jenkins have an essential role in the CI/CD process. Jenkins and similar tools are the engines that drive deployment automation. They’re powerful, but they’re not magical. To use them effectively in the CI/CD process, planning is required. This means that all points along the CI/CD path need to accommodate automation whenever possible. For example, using Jenkins to get, build, and deploy an update to a web application is useful. But having the deployment come to a screeching halt while QA personnel manually run UI tests against the application’s web pages defeats the spirit of DevOps. It would have been better to have the UI tests prepared beforehand using an automated UI testing tool such as Selenium. Jenkins has a Selenium plugin that allows UI testing to be built into the deployment process.
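
Once the UI tests exist as code, running them is just one more automated step in the pipeline. A minimal sketch, assuming a Maven project whose Selenium tests live in a hypothetical test class named WebAppUITest:

# Run the Selenium UI suite as an automated pipeline step
# (WebAppUITest is a hypothetical test class name)
mvn test -Dtest=WebAppUITest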

The same is true of load testing. You can use NeoLoad’s Jenkins plugin to run load tests that have been orchestrated using the NeoLoad UI. But instead of requiring a tester to sit at a computer, fire up NeoLoad, and run tests against the deployment target, Jenkins does it automatically. And Jenkins makes it so test results can be reviewed within the Jenkins UI.
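
Under the hood, this amounts to launching a pre-orchestrated NeoLoad scenario without the GUI. The sketch below shows roughly what that looks like from the command line; the project path and scenario name are hypothetical, and the exact options can vary by NeoLoad version, so check the documentation:

# Launch a pre-orchestrated NeoLoad scenario headlessly
# (project path and scenario name are hypothetical)
NeoLoadCmd -project /opt/tests/webapp.nlp -launch "LoadTest_500users" -noGUI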

Figure 1: The Jenkins NeoLoad Plugin displays test results as part of the build and deployment

In fact, test results can be stored by Jenkins. Storing results under Jenkins means that a test engineer can write code to do further analysis. Or, Jenkins can leverage the NeoLoad plugin to execute follow-up behavior that is intrinsic to NeoLoad’s native test orchestration, such as sending email when an alert fires.
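
That further analysis can be as simple as pulling an archived report out of Jenkins and scanning it. A quick sketch, with a hypothetical job name, artifact path, and credentials (Jenkins exposes archived build artifacts over HTTP in this form):

# Fetch an archived test report from the last successful build (hypothetical names throughout)
curl -s -u user:apitoken \
  "https://jenkins.example.com/job/webapp-deploy/lastSuccessfulBuild/artifact/results/report.xml" \
  -o report.xml

# A trivial first pass at analysis: count the failure entries in the report
grep -c "<failure" report.xml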

The important thing to remember is that it’s all run by automation, and that the automation, as well as the infrastructure in which it operates, is defined in code.

Putting it All Together

DevOps works! There’s a reason why. The demand for more software, delivered at ever-faster rates, has not abated. Companies in the know realize that the way to get more quality software into the hands of customers is to embrace the cultural unification and process automation that are essential to DevOps.

Few companies that adopt DevOps have been disappointed. Adopting DevOps can be a challenge for mature companies that have hardened development processes. But be advised: adopting DevOps is not an all-or-nothing undertaking. It can be implemented incrementally. Some companies take the first step by incorporating the feature branching and pull request processes readily available under GitHub. Others focus first on implementing a CI/CD tool such as Jenkins and then, over time, expanding the number of events in the deployment process that Jenkins controls. Other companies focus on something as simple as writing automated tests under Selenium. Baby steps work! Many companies need such flexibility to grow into a fully automated CI/CD process. What’s important is to make a commitment to the DevOps sensibility and then learn the skills, on an enterprise-wide basis, that go with accepting Infrastructure as Code. After that, it’s just a matter of taking a unified approach to pursuing continuous improvement toward the common goal that unites us all: making quality software that counts!

Learn More

Discover more load testing and performance testing content on the Neotys Resources pages, or download the latest version of NeoLoad and start testing today.

 

Bob Reselman 
Bob Reselman is a nationally known software developer, system architect, test engineer, technical writer/journalist, and industry analyst. He has held positions as Principal Consultant with the transnational consulting firm Capgemini and as Platform Architect (Consumer) for the computer manufacturer Gateway. Also, he was CTO for the international trade finance exchange ITFex.
Bob’s authored four computer programming books and has penned dozens of test engineering/software development industry articles. He lives in Los Angeles and can be found on LinkedIn or on Twitter at @reselbob. Bob is always interested in talking about testing and software performance and happily responds to emails (tbob@xndev.com).
