API Testing Under Continuous Integration and Continuous Delivery

Load testing an Application Programming Interface (API) within a Continuous Integration/Continuous Delivery (CI/CD) framework is more than having a testing tool fire off a bunch of requests against some URLs and comparing response times and results. Today’s modern APIs go far beyond simply retrieving data from a database and then delivering the results as XML or JSON; they leverage the power of cloud-based computing to operate at web scale. As such, load testing methods need to support both modern API architectures and continuous deployment practices.

To explain load testing APIs under modern CI/CD, we start with a reference example of what we might be testing. Then we’ll look at how microservices extend that paradigm. Finally, we’ll examine ways to implement useful load testing within a CI/CD pipeline.

NeoLoad is a performance load testing tool designed to let any user run API performance tests. It is fast and easy to use and to learn. To learn how to use NeoLoad for API testing, read our white paper Using NeoLoad for Microservices, Component, and API Testing.

Understanding APIs and Microservices

The idea that APIs and Microservices are one and the same is a misconception. You can make a perfectly viable API without ever implementing a microservice and vice versa. The sections that follow provide a fundamental explanation of APIs and microservices. We’ll use the concepts described therein later on when describing how to do load and performance testing upon each, separately and together.

Representing an Application as a Service using an API

An API is an abstract representation of the services that make up an application. For example, imagine we have a Blogging application that allows a subscriber to create an article with photos that can be read on a web site. The application also allows subscribers to comment on a given article. (See Figure 1, below.)

blog publication structure

Figure 1: An example Blog application utilizes the resources article, photo, subscriber and comment, as expressed in ERD notation.

We could easily implement the application for end users as a client-side Blog Editor tool that is rendered on a web page or within a native mobile app. The Blog Editor has the intelligence to create, store, publish and manage blogs embedded directly in the application.

This is a “monolithic” approach because the application is built in one large piece. Re-using parts of the application, testing parts of it in isolation, or deploying a single small change will be challenging to impossible. Making the same functional change to both the web and mobile applications requires changes and re-testing in two codebases. Abstracting the essential application features into an API that is used by both web and native mobile clients makes implementing change easier: only one codebase needs to be updated. The user interface changes may still need to happen in two places, but the code that represents the essential services of the Blog Editing application resides in a single place.

We’ll use this example of an API in order to limit confusion later.

When it comes to implementing the API, things get interesting. The current popular approach is to represent each resource as a URL. For example, in our Blog Editor application we can have a URL for working with articles, one for photos, another for comments and another for subscribers. The code behind all these URLs can reside in a single server (a PHP server, for example).
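To make the resource-per-URL idea concrete, here is a minimal sketch of the Blog API’s routing table in Python. The paths, handlers, and return shapes are illustrative assumptions for this example, not part of any real implementation:

```python
# Hypothetical resource-to-URL mapping for the example Blog API.
# Each resource from Figure 1 gets one URL; in the monolithic
# approach, all handlers live in a single server codebase.

def list_articles():
    return {"articles": []}

def list_photos():
    return {"photos": []}

def list_comments():
    return {"comments": []}

def list_subscribers():
    return {"subscribers": []}

ROUTES = {
    "/v1/articles": list_articles,
    "/v1/photos": list_photos,
    "/v1/comments": list_comments,
    "/v1/subscribers": list_subscribers,
}

def dispatch(path):
    """Send a request path to the handler for that resource."""
    handler = ROUTES.get(path)
    if handler is None:
        return 404, {"error": "not found"}
    return 200, handler()
```

The point of the sketch is the structure, not the handlers: every resource is reachable through one well-known URL, which is exactly what a load test will later target.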

But there is a problem. Most commercial-grade APIs will have hundreds, if not thousands or even millions, of clients making calls to the URLs that make up the API. When demand gets high, the servers can get overtaxed and performance will degrade. To address this risk, a load balancer is placed in front of many identical instances of the API server. The load balancer sends requests to the servers according to available capacity. If one server is operating at its maximum threshold, the request is sent to another server. Some system designers will go so far as to automatically provision additional servers and wire them into the load balancer should all available servers be operating at maximum load.

Figure 2, below, shows a typical API architecture that is designed to scale to meet usage demands, as we’ve just described.

API architecture

Figure 2: An API architecture that is built to scale will provision identical copies of a server in order to meet the usage demands upon a given API. (1) Requests go to a load balancer (2) which in turn sends each request to a server instance that has the capacity to handle it (3).
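As a rough illustration of this routing-by-capacity idea, here is a toy Python model (a sketch of the concept, not how any real load balancer is implemented) that sends each request to the least-loaded instance and “provisions” a new one when every instance is saturated:

```python
class LoadBalancer:
    """Toy capacity-aware load balancer: route each request to the
    instance with the most spare capacity, and autoscale by adding
    an instance when every server is at its maximum load."""

    def __init__(self, max_load_per_server, initial_servers=2):
        self.max_load = max_load_per_server
        # In-flight request count per server instance.
        self.loads = [0] * initial_servers

    def route(self):
        # Pick the least-loaded instance.
        idx = min(range(len(self.loads)), key=lambda i: self.loads[i])
        if self.loads[idx] >= self.max_load:
            # All instances saturated: provision a fresh one.
            self.loads.append(0)
            idx = len(self.loads) - 1
        self.loads[idx] += 1
        return idx

    def finish(self, idx):
        """Mark a request on instance `idx` as completed."""
        self.loads[idx] -= 1
```

Real load balancers use health checks and far more sophisticated policies, but the model captures why identical server instances are required: any instance must be able to answer any request.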

Provisioning multiple instances of an API server behind a load balancer is a clever way to meet demand, but it does create another problem: the application server now becomes a monolithic bottleneck. For example, if a new feature (convert a color photo to black and white) needs to be implemented in the code behind /v1/photos, that new code will need to be propagated to all the instances of the servers behind the load balancer.

While this sort of update might seem like no big deal, it is. Remember, that one URL is but one in a set of many. Companies will often dedicate an entire development team to supporting and upgrading the API. Also, the responsibility for keeping the API up and running might be assigned to a separate systems team.

Having a single development team, and maybe a systems team, responsible for all the services that make up an API is a risk. A jack of all trades is rarely a master of any. There might not be people on staff with enough expertise in the details of /v1/photos to make the necessary changes quickly. And, hopefully, the internals of /v1/photos are well encapsulated so that changes there have no side effects on the other URLs that make up the API.

We can hope for the best, but given the monolithic nature of the API, we might not get what we hope for. When faced with a monolithic architecture, whether in an API or a standalone application, the risks are high. The way to reduce risk is to break up the monolith. The trend in modern API architecture is to use microservices.

Segmenting an API into Autonomous Microservices

A microservice is a discrete resource that is well encapsulated, autonomous and responsible for its own well-being. Figure 3, below, shows the example Blog API refactored so that each URL for a given resource uses a distinct microservice.

microservice based architecture for API Testing

Figure 3: In a microservice-based architecture, a URL in an API (3) forwards a request to a URL associated with a resource that is completely autonomous and responsible for its own well-being (4).

As mentioned above, a microservice is completely autonomous. Autonomy means that the microservice has everything it needs to do its work. For example, returning to the feature request that the service /v1/photos be able to convert a color photo to black and white, all the logic required to perform the operation resides within the microservice. If more worker processes are required to carry out the operation, how and when those workers are implemented is the business of the microservice. Keeping the service available 24/7 is the responsibility of the microservice. If the microservice needs to use its own load balancer, then so be it. All that matters is that the microservice meets the conditions of its service level agreement.

The autonomous nature of microservice implementation has implications not only for software design, but also for the way a company organizes its development staff. Forward-thinking companies understand that a microservice is a distinct product. A microservice will have its own staff, development cadence and deployment pipeline. Thus, in addition to having dedicated programming, QA and deployment staff, a microservice team will include project and product managers. Some staff members might be full time, others part time, but the important thing is that the microservice has the resources necessary to operate autonomously and provide ongoing value to its consumers. This means that the microservice needs to have the tools and processes in place to support the full gamut of testing, including load testing.

Implementing API Testing

As mentioned at the beginning of this article, performance testing a modern API is more than firing some HTTP requests off at a set of URLs. In order to implement effective performance and load testing in the real world, you need a very clear understanding of the scope of testing, from expected demand to expected performance. Also, in terms of an API, you need to define the access points against which load testing will be executed. Defining access points is important because, unlike full-scale application testing, API testing is better suited to taking the Shift Left approach, in which testing occurs early and often upon discrete access points, in short intervals. Finally, you need to identify the tools that will be used to do load testing within the given CI/CD process.

Let’s take a look at the details.

Set Testing Scope

What and how you do performance/load testing depends on the purpose of the testing session. The purpose of some testing is to make sure that the code is good enough to move on to the next stage of the software development lifecycle (SDLC): development to QA, for example. Other types of testing are more stringent: full-scale regression testing right before deployment to production, for instance. Testing takes time, sometimes a lot of time. Thus, you want to make sure that you are investing the right amount of time and energy in testing, as is appropriate to the goals of the test session.

For example, the goal of testing code as it moves through a CI/CD pipeline is to make sure the code works at a functional level: all unit tests and smoke tests passing, for example. Also, you want to ensure that the code does not introduce any blatant performance impact. Thus you want to implement a scope of load testing that makes “just enough” calls to uncover glaring performance bottlenecks. The testing needs to be fast, yet adequate. So creating a test that runs under a hundred virtual users (VUs) against the API will suffice.
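A pipeline-stage smoke load test along these lines can be sketched in plain Python. The function name, thresholds, and the `call` placeholder are all illustrative assumptions; in practice a dedicated tool such as NeoLoad would drive the real requests:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def smoke_load_test(call, virtual_users=50, requests_per_user=10,
                    p95_budget_ms=200.0):
    """Run a small, fast load test: a few dozen virtual users each
    make a handful of calls, and the 95th-percentile latency is
    checked against a budget. `call` stands in for one API request."""
    latencies_ms = []

    def user():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            call()
            latencies_ms.append((time.perf_counter() - start) * 1000)

    # Each virtual user runs concurrently on its own thread.
    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        for _ in range(virtual_users):
            pool.submit(user)

    p95 = statistics.quantiles(latencies_ms, n=20)[-1]  # 95th percentile
    return {"requests": len(latencies_ms),
            "p95_ms": p95,
            "passed": p95 <= p95_budget_ms}
```

A CI step would run this after deploy-to-test and fail the build when `passed` is false, which keeps the gate fast while still catching glaring regressions.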

Full-scale regression testing, on the other hand, takes a lot of time. Such testing is necessary but, given its expense, it needs to be administered judiciously. For example, performing a full regression test that fires up thousands of virtual users, each making hundreds of requests, upon every code commit imposes a burden on the testing process that the benefit of the result does not justify. Thus, full-scale regression is best done before release to production. This is the most efficient place in the SDLC to invest the significant time required to ensure that the code going into production is safe and performant.

Use the Shift Left approach for your API testing

Shift Left testing is a school of software and system testing that promotes the notion of testing as early as possible in the software development life cycle. The term, Shift Left, refers to the idea of moving test activity toward the beginning of a project plan. (Progress in a project plan chart is illustrated as left-to-right movement; hence, “shifting left” means moving closer to the beginning.)

The benefit of using Shift Left is that most issues in API testing can be uncovered early in the development process, provided testing actually takes place there. The rule of thumb in software development is that fixing issues early in the development process is significantly less expensive than letting issues linger toward the end.

One of the best ways to keep the cost of software development down is to follow the philosophy of the Shift Left movement: test early and test often.

To learn how to execute load tests with a Shift Left approach, watch our recorded webinar 3 Keys to Performance Testing at the Speed of Agile.


Under CI/CD, there will be releases to production several times a day by every team! This calls for micro API performance tests that demonstrate that the change-under-test does not degrade in its own performance or degrade the overall system.

Put Testing Responsibility on the Owner of the API

Load testing is divided into two parts: how to test and what to test. The notion of what to test becomes a bit tricky with API testing. As you read above, it’s quite conceivable for a seemingly standalone API to be nothing more than an API gateway that serves as a facade for many other APIs in the background. These background APIs might be microservices internal to the company, or they might be located at a variety of external domains. Hence, what to test requires some analysis.

Of course, we can and should always perform load testing upon the API in the forefront. But there is a good case to be made that deeper testing might be warranted: that constituent APIs need to be tested too. However, we need to be careful not to make deeper testing so complex and costly that it becomes a folly in which the expense of testing outweighs the benefit.


NeoLoad enables developers and testers to ensure the performance of their APIs. It fits smoothly into your CI/CD pipeline and enables you to analyze your APIs’ non-regression performance trends, within NeoLoad and/or your Continuous Integration server. Read how to use NeoLoad for Microservices, Component, and API Testing.


So then, what is to be done?

The answer is a matter of policy. The essential operational rule is that any team publishing an API is responsible for providing a general service level agreement (SLA) to its consumers, performing API testing to ensure the SLA guarantee is met, and then making those performance results available. This is feasible when all the APIs in play reside within the company. Third-party APIs require a bit more work.

One way to address the need is to have an independent test suite that performs load testing upon third-party APIs at realistic intervals. Most third-party APIs charge per request, with a cap imposed when the request limit is exceeded. Thus, load testing a third-party API every hour, for example, becomes costly and impractical. Rather, it’s up to a company’s technical authority to determine the frequency at which load testing third-party APIs can be performed in a cost-effective manner. Once the policy for testing third-party APIs to ensure performance is set, it must be followed. The performance quality of any company’s API is only as good as its weakest constituent.
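Choosing that frequency is largely arithmetic. This back-of-the-envelope helper (all volumes and prices are made up for illustration) shows why hourly load tests against a pay-per-request API add up quickly:

```python
def monthly_test_cost(requests_per_test, tests_per_day,
                      price_per_1k_requests, days=30):
    """Rough cost of load testing a pay-per-request third-party API,
    to help pick a sustainable test frequency. Figures illustrative."""
    total_requests = requests_per_test * tests_per_day * days
    return total_requests / 1000 * price_per_1k_requests
```

At a hypothetical $0.50 per 1,000 requests, a 10,000-request load test run hourly costs $3,600 per month, while the same test run once a day costs $150; that gap is exactly the trade-off the technical authority has to weigh.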

Choose the Right Tools

The final consideration for implementing performance/load testing in a CI/CD pipeline is choosing the right tools. Tools save time and money, provided the company chooses a tool that is right for the job. Not all testing tools are appropriate for every phase of testing in the CI/CD pipeline. For example, you don’t need an extreme load testing tool for running tests in the early stages of the pipeline. Conversely, a tool that is designed to run UI tests as a single virtual user is of limited value for pre-production testing. Finding and using the tools that are appropriate for the scope of testing at hand is essential.


NeoLoad is the load testing platform designed for continuous testing. Because it is simple and automated, it supports your API testing requirements. NeoLoad also supports your need to run complex, large load tests on fully assembled applications. Learn how to do continuous performance testing with NeoLoad.


It is therefore essential that you identify load testing tools that are compatible with your CI/CD technology. If you choose a tool that does NOT integrate into the CI/CD process, human beings will need to intervene to make the tool work. Clearly, having a human do work because automation can’t, due to systemic limitations, defies the spirit of CI/CD.

Having the right tool, at the right time is critical for effective performance testing.

Putting It All Together

In today’s world, in order for an API to be commercially viable, it needs to operate at web scale. This means supporting millions of users at very high levels of service. And as the Internet of Things continues to grow, eventually to become the predominant consumer of API services on the Internet, the performance levels that APIs will need to support will go well beyond today’s norms. Being able to implement an automated API testing strategy that is appropriate to all CI/CD pipelines in the Software Development Life Cycle, using the best tools available, is not a nice-to-have; it’s critical. A company’s APIs are only as good as the testing it performs. A company cannot test good performance into an API, but if it doesn’t test, it will never know whether it’s there.

Keep Me Informed