#NeotysPAC – Performance Test Automation Beyond Frontier, by Stijn Schepers

At the beginning of February, I was honored to represent Accenture at the Performance Advisory Council (PAC) in Chamonix, France. Bringing Performance Engineering to the top of Western Europe's highest mountain (Mont Blanc) must have been challenging for Neotys to organize, but they did an awesome job! It was hard to stay focused with such a breathtaking view.

I was privileged to be selected as one of the 12 speakers and to kick off the event. My topic was Performance Test Automation beyond Frontiers.

DevOps is a blessing for a Performance Engineer

DevOps has been a blessing for a Performance Engineer. Finally, a Performance Engineer can add value. The days when load testing was done at the end of the SDLC are – hopefully – finally over. With classical “waterfall” delivery models, performance testing was done way too late in the life cycle, with a primary focus on fully integrated, end-to-end testing. There was not much time to optimize systems and to improve the non-functional aspects of a solution. With DevOps, this has changed completely! Performance testing starts when features are being designed, by assessing the risk these features pose of degrading performance. Developers write unit tests to profile their code. Automated load tests are executed against web services. End-to-end load tests are a final validation of the performance. APM tooling (e.g. Dynatrace, AppDynamics, New Relic) provides us with valuable insights into the performance impact of these new features in production. With DevOps, performance testing shifts in two directions: left towards the design phase (risk assessments, unit testing) and right towards production (APM, synthetic monitoring).

During the first PAC in Scotland, I presented in great detail how a move to a DevOps delivery model has changed the way performance testing happens: https://www.neotys.com/blog/pac-performance-testing-devops/

Automate wisely: analyse your results based on raw data

DevOps is also about continuous automation. A performance engineer should automate load testing as much as possible. But to what extent can we automate performance testing? Can we automate the analysis of performance measurements and use the outcome of that analysis to pass or fail a build? How can we automatically test the latest version of the application and measure the performance difference with the previous version? And, based on this difference, how can a framework make an automated decision to pass or fail a build?

I don’t know of any commercial tool that can automate load testing in such a way that we can fully trust the automation framework to make the decision to pass or fail a build. When you analyse your performance results, it is absolutely crucial to analyse the raw data (every measurement of request response time) and not just the averages. Averages hide system bottlenecks. Therefore my team took up the challenge of building a new, innovative framework that not only automatically executes load tests but also analyses the results based on the raw data. This framework is called RAF, the Robotic Analytical Framework.
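To see why averages are dangerous, consider this minimal Python sketch (with made-up response times): a couple of slow requests barely move the average, while the 95th percentile of the raw data exposes the bottleneck.

```python
import statistics

# Hypothetical raw response times (in seconds) for one transaction:
# most requests are fast, but a couple of them hit a slow code path.
raw_times = [0.21, 0.24, 0.22, 0.25, 0.23, 0.22, 0.24, 0.21, 3.8, 4.2]

mean = statistics.mean(raw_times)
p95 = statistics.quantiles(raw_times, n=100)[94]  # 95th percentile

print(f"average : {mean:.2f}s")   # ~0.98s -- looks perfectly acceptable
print(f"95th pct: {p95:.2f}s")    # ~3.98s -- the bottleneck the average hides
```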

RAF, the best friend of a Performance Engineer

RAF is unique in that it enables a Performance Engineer to automatically analyse the raw data of a load test and to drive continuous delivery. The picture below explains how the framework works.

  1. A NeoLoad NoGUI test is automatically launched from Jenkins.
  2. NeoLoad exports the raw data to a file share.
  3. RAF automatically polls the file share to find the results file with the raw data. RAF transfers this data – together with runtime data (RunID, version number) – into a MySQL database, which becomes a centralized repository of all the load test results (see the sketch after this list).
  4. Based on the type of test (e.g. a regression test), RAF analyses the raw data using predefined Validation Rules and smart algorithms. The analysis is done using the raw data, the error count and the throughput. The output of the validation is a Test Execution Score (T.E.S).
  5. Based on the value of the Test Execution Score (T.E.S), the build is automatically passed or failed.
  6. RAF performs clean-up steps.
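As a rough illustration of steps 2 and 3, the Python sketch below polls the file share and copies the raw measurements into MySQL. The file naming scheme, CSV column names, table layout and the mysql-connector-python driver are all assumptions made for the example, not RAF's actual implementation.

```python
import csv
import glob
import time
import mysql.connector  # assumed driver; any MySQL client library would do

RESULTS_DIR = "/mnt/loadtest-share"   # hypothetical file share mount point
POLL_INTERVAL = 30                    # seconds between polls

def wait_for_raw_results(run_id):
    """Poll the file share until NeoLoad has exported the raw-data file."""
    pattern = f"{RESULTS_DIR}/{run_id}_raw_*.csv"   # hypothetical naming scheme
    while True:
        matches = glob.glob(pattern)
        if matches:
            return matches[0]
        time.sleep(POLL_INTERVAL)

def load_into_repository(csv_path, run_id, app_version):
    """Copy every raw measurement, plus runtime metadata, into the MySQL repository."""
    conn = mysql.connector.connect(host="raf-db", user="raf",
                                   password="***", database="raf")
    rows = []
    with open(csv_path, newline="") as fh:
        for rec in csv.DictReader(fh):   # assumed columns: transaction, response_time, error
            rows.append((run_id, app_version, rec["transaction"],
                         float(rec["response_time"]), int(rec["error"])))
    cur = conn.cursor()
    cur.executemany(
        "INSERT INTO raw_results (run_id, version, transaction_name, response_time, error) "
        "VALUES (%s, %s, %s, %s, %s)", rows)
    conn.commit()
    conn.close()

# Example usage (run IDs and versions would normally come from Jenkins):
# path = wait_for_raw_results("run_42")
# load_into_repository(path, "run_42", "1.7.3")
```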

Automation frameworks are a set of tools

BI Tooling

Frameworks like RAF need to solve complex issues in a simple and intuitive way. Therefore RAF is built in Python and is easily configurable for the application you want to test. The prime goal of RAF is to drive load test automation and analytics. Tableau is used as the data visualisation tool. It is important to create dashboards that visualize the test results in a comprehensive way.

The dashboard below consists of different graphs. The prime info is the Test Execution Score (T.E.S). A high score means that the performance, throughput and error rate are in line with previous execution runs. When the score is 70 or higher, the build is PASSED. When the score is lower than 70, the degradation in performance is not acceptable and the build is FAILED. The secondary info provided is the raw data, a trend line, the error count, the throughput and a percentile graph.
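The scoring rules and weights below are purely illustrative assumptions (the real Validation Rules live inside RAF), but they show how a T.E.S could be derived from the raw data, error count and throughput, and how the threshold of 70 can drive a Jenkins pass/fail decision through the exit code.

```python
import sys
import statistics

PASS_THRESHOLD = 70   # a score below this fails the build

def p95(times):
    """95th percentile of a list of raw response times."""
    return statistics.quantiles(times, n=100)[94]

def test_execution_score(current, baseline):
    """Toy scoring: start at 100 and deduct points for regressions in
    response time (p95 of the raw data), error count and throughput."""
    score = 100
    if p95(current["times"]) > 1.1 * p95(baseline["times"]):
        score -= 40            # p95 regressed by more than 10%
    if current["errors"] > baseline["errors"]:
        score -= 30            # more errors than the previous run
    if current["throughput"] < 0.9 * baseline["throughput"]:
        score -= 30            # throughput dropped by more than 10%
    return score

# Example data; in RAF these would be read from the MySQL repository.
baseline = {"times": [0.20, 0.30, 0.25, 0.22, 0.28], "errors": 0, "throughput": 120.0}
current  = {"times": [0.21, 0.33, 0.26, 0.24, 0.30], "errors": 0, "throughput": 118.0}

tes = test_execution_score(current, baseline)
print(f"T.E.S = {tes}")
sys.exit(0 if tes >= PASS_THRESHOLD else 1)   # a non-zero exit code fails the Jenkins build
```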

For a performance engineer, response times are by far the most important measurement. Response Times are directly linked to End User and Digital Experience.

APM Tooling

Don’t reinvent the wheel: extend your automation framework with software solutions that are already in place. Extending an automation framework with APM tooling is a logical next step. APM tooling provides you with resource utilization (heap, CPU, memory, …) and with additional performance insights (deep-dive capabilities). Additionally, Health Rules can be defined that alert the Test Engineer (by email, WhatsApp or SMS) when APM metrics do not conform to the baseline. As an example, AppDynamics provides a framework called Dexter, which is ideal for extending the automation framework with APM features. For more information, check out Dexter’s documentation: https://github.com/Appdynamics/AppDynamics.DEXTER/wiki
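Here is a minimal sketch of such a health-rule check; the thresholds, metric names and addresses are made-up assumptions, and in practice the metrics would come from the APM tool itself (e.g. via Dexter exports or its REST API).

```python
import smtplib
from email.message import EmailMessage

# Illustrative health rules -- in practice these come from the APM baseline.
HEALTH_RULES = {
    "cpu_busy_pct":  85.0,
    "heap_used_pct": 90.0,
}

def check_health(metrics):
    """Return a list of health-rule breaches for a dict of APM metrics."""
    return [f"{name} = {value} (limit {HEALTH_RULES[name]})"
            for name, value in metrics.items()
            if name in HEALTH_RULES and value > HEALTH_RULES[name]]

def notify_engineer(breaches):
    """E-mail the Test Engineer when a rule is breached (SMTP details are placeholders)."""
    msg = EmailMessage()
    msg["Subject"] = "RAF: APM health rule breached during load test"
    msg["From"] = "raf@example.com"
    msg["To"] = "perf-engineer@example.com"
    msg.set_content("\n".join(breaches))
    with smtplib.SMTP("mail.example.com") as smtp:
        smtp.send_message(msg)

# Example: metrics collected from the APM tool during the load test.
metrics = {"cpu_busy_pct": 91.2, "heap_used_pct": 74.0}
breaches = check_health(metrics)
if breaches:
    notify_engineer(breaches)
```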

Take-aways

Performance Test Automation is not a myth. It is a must-do to speed up performance testing and to uplift the quality and consistency of testing. By automating performance tests, performance engineers can spend more time assessing designs and architectural solutions, and coaching graduates to become performance engineers. I believe that the biggest bottleneck in Performance Engineering is not technology BUT a lack of knowledgeable and experienced engineers who understand the profession. These senior engineers should focus on building clever automation frameworks so they have more time for coaching and mentoring.

The PAC at Chamonix was a magical event. The scenery of the Mont Blanc inspired the speakers to get the most out of the event!

Learn More about the Performance Advisory Council

Want to learn more about this event? See Stijn’s presentation here.
