
5 challenges associated with performance testing an SAP application

Author:

Guest Contributors

Date: Feb. 25, 2021

By Bob Reselman, Software Developer and Technology Journalist

Enterprise resource planning (ERP) applications are the lifeblood of large organizations. They tie all the activities of the enterprise together — from payroll to purchasing and sales — into a central digital platform, allowing all members of the company to work together intelligently and cohesively.

Initially, ERPs were big monolithic applications that were expensive to purchase, set up, and maintain. However, over the years the cloud has become pervasive and has put its stamp on application architecture design. The big, imposing software emblematic of the old-school ERP has been transformed into an aggregation of smaller applications. This trend is particularly evident with SAP, one of the largest ERP vendors in the world.

SAP has transformed from a single one-size-fits-all application model to a collection of cloud-based components released as the SAP Cloud Platform. While the changeover from a monolithic application to the platform-as-a-service (PaaS) model has its advantages, it’s not challenge-free, particularly when it comes to load testing practices.

The methodologies for load testing monolithic applications like SAP (commonly called SAP load testing) don’t necessarily apply directly to load testing cloud-based applications. Therefore, companies moving to the SAP Cloud Platform from prior ERP solutions need to make adjustments. The most significant challenges include:

  • Test data availability
  • Use case description accuracy
  • Test infrastructure size
  • SAP Basis team interaction
  • Capacity management

Availability of test data

SAP, like other ERPs, takes care of many aspects of data management behind the scenes. For example, SAP will automatically generate an invoice number that is immutable and unique. While this makes things easier in most situations, it can cause a problem when testing, particularly when running the same tests multiple times within a single session. A unique invoice number that is expected to be available throughout a testing session might become invalid as invoice data is regenerated across scenarios within the session.

This is not an unusual dilemma, but it is one that must be addressed. Consistent test data needs to be available to analyze test results reliably.

One way to address the problem is to accept the system-generated data and pre-generate enough useful test data that overwrites do not occur within the scope of the testing session, as shown in the sketch below.
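As a rough illustration, here’s a minimal Python sketch of that first approach. The record structure and the session-scoped reference scheme are invented for this example, not an SAP API:

```python
import uuid
from dataclasses import dataclass


@dataclass
class TestInvoice:
    """One synthetic invoice record for a single testing session."""
    session_id: str  # ties the record to this test session
    reference: str   # unique per record, so repeated runs never collide
    amount: float


def build_invoice_pool(session_id: str, size: int) -> list[TestInvoice]:
    """Pre-generate enough unique records that no scenario in the
    session has to reuse (and risk overwriting) another's data."""
    return [
        TestInvoice(
            session_id=session_id,
            reference=f"{session_id}-{i:06d}-{uuid.uuid4().hex[:8]}",
            amount=100.0 + i,
        )
        for i in range(size)
    ]


# One pool per session; each scenario draws from its own slice.
pool = build_invoice_pool(session_id=uuid.uuid4().hex[:8], size=10_000)
```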

An alternative is to back up the database containing the data sets of interest and restore it before each test run.
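In sketch form, that restore-before-each-run pattern might be orchestrated as follows. The `restore-db` and `run-load-test` commands are placeholders for whatever database tooling and test driver your environment actually provides:

```python
import subprocess

SNAPSHOT = "perf_baseline"  # backup taken once, before the first run


def restore_snapshot(name: str) -> None:
    """Return the test database to its known-good state.
    Placeholder command: substitute your DBA-approved restore tooling."""
    subprocess.run(["restore-db", "--snapshot", name], check=True)


def run_scenario(name: str) -> None:
    """Placeholder for invoking one load test scenario."""
    subprocess.run(["run-load-test", "--scenario", name], check=True)


# Every scenario starts from identical data, so results stay comparable.
for scenario in ["create_invoice", "post_payment", "reverse_invoice"]:
    restore_snapshot(SNAPSHOT)
    run_scenario(scenario)
```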

Regardless of the approach chosen, remember that dependable testing is predicated on the accuracy and availability of the test data.

Accurate description of the use case

The value of a test is only as good as the use case behind it, even in the simplest scenarios. The clearer the use case definition, the more comprehensive the test design and, ultimately, the more reliable the test results. For example, imagine describing the following use case:

Use Case: A user will be able to successfully log in to the system upon entering login credentials.

Seems simple enough, right? Not so much. What exactly are the login credentials that are supposed to be entered? To assume that the credentials are just username and password is a risky play. What if the login process supports multi-factor authentication? In such a case, the login credentials might very well be a username, password, and SMS-delivered code. Unless such details are documented in the use case, the test designer must take a leap of faith.
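One low-cost way to pin such details down is to encode the credential structure directly in the test assets. The sketch below is illustrative only; the field names and the SMS step are assumptions, not a description of any particular SAP login flow:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class LoginCredentials:
    """The exact credential structure for the login use case.
    Documenting this up front spares the test designer a discovery hunt."""
    username: str
    password: str
    sms_code: Optional[str] = None  # present only when MFA is enabled


def login_steps(creds: LoginCredentials) -> list[str]:
    """Expand the use case into concrete, testable steps."""
    steps = [
        f"enter username '{creds.username}'",
        "enter password",
        "submit login form",
    ]
    if creds.sms_code is not None:
        # The MFA detail the vague use case left to chance.
        steps += ["enter SMS-delivered code", "submit MFA form"]
    return steps
```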

The result of the above scenario: the test designer is going to have to take the time to track down the engineer or subject matter expert most knowledgeable on login credential structure. Then, once the information is known and subsequently documented, the test designer can create a test that aligns with the details of the login credentials. This discovery process takes valuable time. An argument can be made that it’s wasted time at that.

The person creating the use case was aware of the login credential structure when he or she documented it. It would have taken only a small amount of added effort to include the exact format of the login credentials, along with a method for discovering the login data. In the absence of this step, important information relevant to the use case was left to chance.

Assumptions cost money. When it comes to implementing useful SAP performance testing, providing a clear use case is one of the best ways to ensure that reliable testing is being conducted in an efficient, cost-effective manner.

Sizing of the test infrastructure

For large-scale system testing, one size never fits all. Different tests will require different volumes of data. The scope of the test data will vary too. Many test practitioners tend to define a single dataset for all tests, leaving it on autopilot from there.

Remember, the more data in your test, the longer the test takes to execute, and that time comes at a cost. Test practitioners therefore want to ensure the dataset is sized appropriately for the testing being conducted. Proper component testing requires data that is appropriate to the operation under test. For example, an address validation component needs only address data. Overlaying unnecessary data adds no value to the test process; if nothing else, it introduces an undue burden on test execution.
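To make the address-validation example concrete, a component test might carry nothing beyond the fields the component actually reads. A minimal sketch, with invented field names:

```python
from dataclasses import dataclass


@dataclass
class Address:
    """Only the fields the address-validation component reads:
    no invoices, no customers, no order history."""
    street: str
    city: str
    postal_code: str
    country: str


# A handful of focused records beats a production-sized dataset for
# this component: less setup, faster runs, same coverage.
test_addresses = [
    Address("123 Main St", "Springfield", "62704", "US"),
    Address("10 Downing St", "London", "SW1A 2AA", "GB"),
    Address("", "Berlin", "10115", "DE"),  # invalid case: empty street
]
```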

Event timing also matters. Consider how much more test data and how many more virtual users would be needed to emulate system load during the Super Bowl versus a retest a month after the season ends.

Don’t forget about the test infrastructure size. This matters too. An infrastructure that is too big will incur an unnecessary expense, yet one that is too small puts test validity at risk. The trick is to make sure that the test infrastructure is just the right size to meet the need at hand.
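A back-of-the-envelope calculation can keep both the load profile and the infrastructure honest. The numbers below are illustrative assumptions, not SAP guidance:

```python
# Illustrative sizing figures; substitute your own measurements.
PEAK_USERS = 1_000_000       # e.g., Super Bowl-level traffic
OFF_PEAK_USERS = 50_000      # e.g., a month after the season ends
USERS_PER_GENERATOR = 5_000  # what one load-generator node can sustain


def generators_needed(virtual_users: int) -> int:
    """Smallest number of load-generator nodes for a given user count."""
    return -(-virtual_users // USERS_PER_GENERATOR)  # ceiling division


print(generators_needed(PEAK_USERS))      # 200 nodes for the peak test
print(generators_needed(OFF_PEAK_USERS))  # 10 nodes for the off-peak retest
```

Rerunning the sizing for each test keeps the infrastructure just the right size: big enough for the peak scenario, without paying peak-sized bills for the off-season retest.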

Interaction with the SAP Basis team

No single person knows it all. This is particularly true with a product as large as SAP. I think about the analogy of the space shuttle. You’d be hard-pressed to find a person who knows every detail of the technology, from propulsion to onboard life support systems. It’s just too much knowledge for one individual to master. NASA has learned to accept this and goes to great lengths to make training and education available so that knowledge can be shared among teams. A propulsion engineer with a concern about life support is not expected to know the details. However, the engineer is likely to build a topical knowledge base through collaboration with the life support team. Likewise, the life support team is expected to accommodate the inquiring engineer.

The same philosophy applies when you work with SAP, particularly for performance testing. Test designers can’t go it alone, nor should they be expected to. Having an open, accommodating channel to the SAP Basis team is critical for creating useful, reliable performance tests. Test designers don’t know it all, but they should be expected to obtain the knowledge they need for their role. A company functioning effectively supports clear and continuous communication among test designers, practitioners, and SAP experts.

Capacity management

What about capacity? It must count, right? There are few things more daunting than planning a performance test that involves emulating the actions of a million virtual users, only to find that the test systems cannot support the work. Reasons can vary: there might not be enough CPU to accommodate the user volume, or network latency might be so high that merely storing the test results becomes a herculean effort.

As systems grow and the scope of testing expands, accounting for size and capacity is a critical test planning consideration. However, for many companies, test planning focuses on the nuts and bolts of test logic and coverage; where and how those tests will run is far too often an afterthought, if thought about at all. Meanwhile, every system has limits that should be known. When testing unexpectedly exceeds a system’s limits, the value of the testing is undermined.

Making capacity planning part of the overall test plan is critical. If the capability to run a test is not available, you’re wasting your testing effort. Making sure your test systems have the horsepower to conduct testing efficiently and reliably is a mainstay for effective test administration.
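One practical way to bake capacity into the plan is a pre-flight check that refuses to start a test the hardware can’t carry. Here’s a minimal sketch using the third-party psutil library; the thresholds are assumptions you would tune to your own environment:

```python
import psutil  # third-party: pip install psutil

# Assumed minimums for this test plan; tune to your environment.
MIN_CPUS = 16
MIN_FREE_MEM_GB = 32


def preflight_capacity_check() -> None:
    """Fail fast if the load generator can't support the planned test,
    rather than discovering mid-run that the results are invalid."""
    cpus = psutil.cpu_count(logical=True) or 0
    free_gb = psutil.virtual_memory().available / 1e9
    if cpus < MIN_CPUS:
        raise RuntimeError(f"Need {MIN_CPUS} CPUs, found {cpus}")
    if free_gb < MIN_FREE_MEM_GB:
        raise RuntimeError(f"Need {MIN_FREE_MEM_GB} GB free, found {free_gb:.1f}")


preflight_capacity_check()  # run before spinning up virtual users
```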

Putting it all together

SAP has proven to be a valuable asset that helps companies make their business operations more efficient and profitable. The power that SAP brings to enterprise resource planning should not be overlooked. The technology is a game changer. But as with any system, making sure that SAP is performing to a company’s expectation requires appropriate testing. Otherwise, it’s guesswork and hope, and few businesses prosper on hope alone. Implementing test strategies that address the five performance testing challenges described here helps ensure that a company’s SAP installation meets expectations reliably and efficiently. The data that well-designed tests produce allows a company to evolve from guesswork to intelligent, data-driven planning — a cornerstone of growth and prosperity.

This post was originally published in October 2018 and was most recently updated in July 2021.

Bob Reselman’s profile on LinkedIn

