#NeotysPAC – Strengthening Your Performance Testing Policy, by Bruno Da Silva

[By Bruno Da Silva, Leanovia]

By the time your web application goes into production, it will likely have gone through several rounds of tests during development. But no matter how many unit and regression tests you’ve run, there’s only one way to know whether your app can handle realistic user load. Performance testing lets you validate that your application has the capacity to perform under various loads and conditions.

IT Performance 101

Before diving further into performance testing, and how to get the best from the practice, I’d like to briefly remind you of the drivers of IT performance.

IT performance boils down to the balance of three aspects:

  • Application: How your application is architected and developed. Experience shows that bad performance may come from improper design and development.

 What is the application supposed to do? Is the implementation adapted? Are the design patterns/frameworks used correctly? Are the components designed to scale? Is there a single point of failure or bottleneck in your architecture?

  • Resources: What your application relies on to ensure its working order. Experience also tells us that poor performance can result from insufficient environment sizing.

 How are clustering and high-availability implemented? Are you limited by network capacity or virtualization? Can your system handle a heavy load from all your applications at the same time?

  • Load: How your application and resources are used. Experience shows that usage patterns are another common cause of poor performance.

 Are use cases/user profiles identified for your application? What load peaks are you facing? Are you testing a new application or an existing one?

Performance Testing Strategy

Testing for performance stability is challenging because it is hard to predict the load you will face in production. Whatever the case, there is always a point at which your application will start behaving poorly or even crash, so you run the risk of failing to process an unexpectedly heavy load. However, you can identify your bottlenecks in advance and use that knowledge to build a highly available and resilient system.

I recommend you consider incorporating the following phases when performance testing:

  • Load Testing: Evaluates whether the system complies with its performance requirements under a defined load.
  • Stress Testing: Evaluates the system, or one of its components, under heavy load. This is essential for identifying potential bottlenecks and for informing resource capacity planning.
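To make the load-testing idea concrete, here is a minimal sketch of driving a target with a fixed concurrency and collecting latency percentiles. The `send_request` function is a hypothetical stand-in that simulates a service call; in practice you would replace it with your real HTTP client code, and a dedicated tool would handle ramp-up, pacing, and reporting for you.

```python
import statistics
import threading
import time

def send_request():
    """Hypothetical stand-in for a real HTTP call; replace with your client code."""
    time.sleep(0.01)  # simulated service latency

def run_load_test(concurrent_users, requests_per_user):
    """Drive the target at a fixed concurrency and collect per-request latencies."""
    latencies = []
    lock = threading.Lock()

    def user():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            send_request()
            elapsed = time.perf_counter() - start
            with lock:  # protect the shared list across worker threads
                latencies.append(elapsed)

    threads = [threading.Thread(target=user) for _ in range(concurrent_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return latencies

latencies = run_load_test(concurrent_users=10, requests_per_user=20)
p95 = statistics.quantiles(latencies, n=100)[94]  # 95th-percentile latency
print(f"requests: {len(latencies)}, p95 latency: {p95 * 1000:.1f} ms")
```

A stress test follows the same skeleton, but increases `concurrent_users` step by step until the latencies or error rates degrade, which is where the bottleneck reveals itself.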

Load Test Design Best Practices (two approaches)

  • Shift left: Continuously test performance during application development. This builds performance in from the earliest stage and guarantees that all use cases are included in final testing. If you design load test scenarios only at the end of development, chances are you will miss something.
  • Shift right: Improve your load tests with production feedback. Information based on real usage is valuable insight that helps you build better load tests and check for performance stability in future releases during regression testing. It is also a great opportunity to check whether your application and infrastructure need further adjustment.

By integrating load tests into your development cycle, you will see immediate improvements in delivery and efficiency.

To do so, you will need to monitor your environment carefully. This is essential because it gives you a good baseline when analyzing test and production behavior. Moreover, it lets you measure the impact of an application on shared components, which again aids the capacity planning effort.

Data Generation Best Practices

Data is key when executing load tests. It helps you build more accurate use cases/user profiles. That is why you should consider data from the earliest stage of development.

When planning your scenarios, try to identify the activity you want to simulate. There are several types of activity phases; the following are typical:

  • Initialization: The first uses of the application, when the database is not yet loaded and many record creations are likely to occur.
  • Campaign: The database is now loaded (likely impacting performance), and usage consists mostly of reads and updates, with some creations.

When generating datasets and database dumps, you want to make sure your user profiles are realistic enough for your tests. Based on my experiences, here are a few tips to consider:

  • Double-check that the rules you implement for generating data are correct. Failing to do so will prevent you from identifying performance issues that stem from specific functional cases.
  • Make sure profile variety reflects reality. Some users impact the database more heavily because they have many relationships; vary the number and size of files as well; and remember that a first-time visitor and a returning user will not have the same impact on the cache.
  • Use reference datasets across your tests for better consistency. It is good to keep data up to date as the application evolves, but it should remain reusable when checking for performance stability.
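The tips above can be sketched in code. This hypothetical generator (field names and value ranges are illustrative, not from the article) varies user profiles and uses a fixed seed so the same dataset can be regenerated as a stable reference across regression runs:

```python
import random

def generate_users(count, seed=42):
    """Generate a reproducible set of varied user profiles.

    Field names and ranges are illustrative; adapt them to your schema.
    A fixed seed yields the same dataset on every run, giving you a
    stable reference when checking for performance stability.
    """
    rng = random.Random(seed)
    profiles = []
    for i in range(count):
        profiles.append({
            "id": i,
            # vary profile types so cache behavior differs across users
            "type": rng.choice(["fresh_visitor", "returning_user", "power_user"]),
            # vary relationship counts so heavy users stress the database
            "relationships": rng.randint(0, 500),
            # vary attachment sizes (KB) to exercise file handling
            "file_size_kb": rng.choice([0, 16, 256, 4096]),
        })
    return profiles

dataset = generate_users(1000)
print(sum(1 for p in dataset if p["type"] == "power_user"), "power users")
```

For complex domain rules, a dedicated generation library or tool is usually a better fit, but the seeding principle carries over unchanged.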

If the application is already live, you might want to generate your database dumps from production data. However, data protection laws may prevent you from using that data as-is, in which case you need to anonymize it first.

You have to identify personal data and substitute dummy data for it. It is then essential to automate this process so you save time in the future. Doing so will allow you to build a reliable reference dataset for production-like testing.
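As a sketch of such an automated substitution step, the snippet below replaces personal fields with deterministic dummy tokens. The field names are hypothetical, and hashing like this is pseudonymization rather than full anonymization, so check it against the data protection rules that apply to you; the deterministic mapping has the useful property that relationships between records survive the substitution.

```python
import hashlib

PERSONAL_FIELDS = ("name", "email", "phone")  # illustrative field names

def anonymize(record):
    """Replace personal fields with deterministic dummy tokens.

    The same input value always maps to the same token, so joins and
    relationships between records remain intact after substitution.
    """
    out = dict(record)
    for field in PERSONAL_FIELDS:
        if field in out:
            token = hashlib.sha256(str(out[field]).encode()).hexdigest()[:12]
            out[field] = f"{field}_{token}"
    return out

row = {"id": 7, "name": "Alice", "email": "alice@example.com", "plan": "pro"}
print(anonymize(row))
```

In a real pipeline this function would run over every table in the dump as part of the automated export, so each refresh of the test database is anonymized the same way.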

Performance in DevOps

As a consultant, I recognize that for most clients, IT performance is not a priority and only becomes a concern once a performance failure is detected. This makes it quite difficult to implement performance governance according to best practices. Thankfully, I have had the chance to develop performance within a DevOps context at Leanovia. Here are the categories and goals I try to achieve when addressing performance.

Performance Testing

Find the appropriate tool for load/stress testing according to your needs. Favor tools that let you parameterize and reuse your test framework, so you save time when testing an application under development or a similar one.


Finding the right monitoring tool is just as critical, as it fuels the performance analysis of your load tests. This is invaluable to your knowledge base: you can compare your tests with production cases and address them more efficiently.

Continuous Performance

Ensure performance from the start. Capitalize on your automated performance tests to avoid performance regression. Continuously update monitoring, so nothing goes undetected.


Automate application deployment for better consistency between tests. Experience shows you are more efficient and less error-prone when you can restore the environment easily.

Last, and most important, containerize your injectors so you can create instances on demand rather than bothering with tedious installations.

Data Generation

Capitalize on the data used in your load tests as a reference during regression testing. Don’t forget to automate data generation, either with the proper tool or by developing reusable scripts for more complex data.

Bruno Da Silva

After IT studies at the UTC in Compiègne (France), Bruno Da Silva joined Leanovia for an epic journey. Passionate about IT performance, he works as a consultant addressing challenging customer issues. In less than a year, he obtained NeoLoad and Dynatrace certifications to work with state-of-the-art load testing and monitoring technologies. The adventure of becoming a Performance Ninja Warrior is just beginning.

He also loves chocolate, video games, and skiing; not necessarily in that order.

Learn More

Do you want to know more about this event? See Bruno Da Silva’s presentation here.
