Neotys Performance Advisory Council Experts went PAC to the Future


[By Henrik Rexed]

Back in 2017, while brainstorming how to organize a global event aligned with the Performance Advisory Council, we had this crazy idea to host a virtual webinar over a full 24-hour period (starting in New Zealand and wrapping up on the West Coast of the USA). Twenty speakers and 800 attendees did us the honor of participating in the initiative, for which we are thankful. So, it is no surprise that we decided to renew it in 2019.

Before we go into detail about the recent Virtual PAC, let’s discuss the objectives of the Neotys Performance Advisory Council. “The mission is simple: to create an event dedicated to performance, where people can train, share, and learn,” according to Henrik Rexed, Partner Solution Evangelist, Neotys. “The goal is to prepare and present best-practice recommendations so that attendees can return to their workstations with an improved skill set. The virtual event is designed explicitly for practitioners thirsty for a hands-on tutorial,” contends Henrik.

As you may know, each year we have two PAC formats: the physical PAC, where we bring 12 people to a single location, and the virtual platform, which enables more speakers over a longer period.

What’s with the “PAC to the Future” concept anyway?

In our space, methodology and technology are changing at the speed of light, and automation is ever-present. When we started preparing for the conference, we knew we needed to travel to the future of performance to validate what we’re doing today on these topics. So, in turn, we framed the event as a “PAC to the Future” perspective. We’re not talking about revisionist history here. However, we do think of it as a revisitation of a time when a great professor invented the flux capacitor. The view includes three dimensions and, when you look back at it, a few mistakes too.

At that time, the three dimensions were response time, performance modeling, and infrastructure. Sadly, if you look at the movies, the future was there, but it was different from our present and indeed our future (look at Marty McFly’s hoverboard: it is entirely different from the ones we see in our streets today). Our view is that Doc Brown was way too optimistic. Herein lies the emotional part of the movie – we need to be more pessimistic and realistic, introducing two extra dimensions.

The user experience is one of them. Sure, response time is important, but many people have started talking about browser-based measurement: what happens in the browser is becoming more critical. Meanwhile, HTTP/3’s promised improvement of the user experience is a hot topic for the future.

The second dimension is performance modeling. Most of us have been building models to reflect reality or to plan capacity. These practices are still valid, but we think they are starting to be forgotten. It makes sense to recall how to model appropriately, or how to build a test that reflects reality, and it is going to be necessary, especially if we start to automate.

Infrastructure has changed and, in fact, becomes a cost consideration if not handled appropriately. We’re using the Cloud more, taking advantage of elastic infrastructure, which is excellent. The main difference now is that if we deploy lousy code, it will consume more infrastructure, making it more expensive to run and more complex to maintain, indirectly impacting cost. For a performance engineer, cost is an integral part of the picture.

The last of the dimensions is automation. Lots of speakers will focus on this – CI/CD is the fifth pillar of our flux capacitor.

What is it like to jump into this DeLorean time machine?

We’ve been preparing our DeLorean for a few weeks now, in hopes that it can travel to a realistic future and bring new and useful ideas to our community.

In Auckland, New Zealand, we met three wonderful speakers: Stephen Townshend (who, once again, kicked off the Virtual PAC session), Srivalli Aparna, and Philip Webb.

The DeLorean then brought us to South Asia (Bangalore), where Reuben Rajan George (Accenture) got us going, followed by Hemalatha Murugesan (Infosys) and Uma Malini Natarajan & Hari Krishnan Ramachandran (Cognizant).

Our journey then took us to Europe, where a great group of experts presented to our attendees.

The next location was the East Coast of the United States, where we met three practitioners: Scott Moore (live from Florida), SK Duraisamy, and Alexander Podelko. The 24-hour marathon culminated on the West Coast, where Federico Toledo, Leandro Melendez, Mark Tomlinson, and Antoine Toulme wrapped things up for us.

Considering that we have five dimensions, we also had five locations. Here’s what our flux capacitor looked like:

What did we learn after having delivered the most recent Neotys virtual PAC?

For starters, as Joe Colantonio would say, all our speakers were awesome.

From our flux capacitor, let’s highlight a few of the presentations from each pillar:


Automation

Stephen Townshend, Joerek Van Galen, Bruno Audoux, Federico Toledo, and Rob Davies talked about performance testing within a pipeline.

If you’re interested in finding the best recipe for adding continuous performance testing to a pipeline, I’d recommend checking out Joerek Van Galen and Federico Toledo, who explain, step by step, everything you should consider. PAC members made it clear: writing tests as code makes adoption easier within the dev team, using Taurus, Gatling, or, of course, NeoLoad as code.
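As a rough illustration of what “performance testing as code” can look like in a CI job, here is a minimal Python sketch. The `fetch` stub, the `P95_BUDGET_MS` threshold, and the pass/fail rule are assumptions for the example, not the approach of any specific tool mentioned above:

```python
# Minimal sketch of performance testing as code in a CI pipeline.
# fetch(), P95_BUDGET_MS, and check_slo() are hypothetical names for this
# example; a real project would use Taurus, Gatling, or NeoLoad as code.
import time
from concurrent.futures import ThreadPoolExecutor

P95_BUDGET_MS = 500  # assumed service-level objective for the sketch

def fetch(_):
    """Stand-in for one HTTP request; swap in a real call here."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulated network/server latency
    return (time.perf_counter() - start) * 1000  # elapsed milliseconds

def percentile(samples, pct):
    """Nearest-rank percentile of a list of samples."""
    ordered = sorted(samples)
    index = min(len(ordered) - 1, int(round(pct / 100 * len(ordered))))
    return ordered[index]

def run_load_test(virtual_users=10, iterations=50):
    """Fire `iterations` requests from a pool of `virtual_users` workers."""
    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        return list(pool.map(fetch, range(iterations)))

def check_slo(samples):
    """The CI gate: fail the build when p95 exceeds the budget."""
    return percentile(samples, 95) <= P95_BUDGET_MS

samples = run_load_test()
print("p95 ms:", round(percentile(samples, 95), 1), "pass:", check_slo(samples))
```

The point of the gate function is that a pipeline step can turn its boolean into a non-zero exit code, making performance a first-class build criterion.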


Response Time

Stijn Schepers and Srivalli Aparna dedicated their presentations to results analysis. Stijn highlighted the Robotic Analytic Framework (RAF), built with Tableau. RAF computes a performance score for each test and compares scores across tests to detect any potential regression.

Meanwhile, Aparna introduced RStudio, an R-based alternative to Tableau where you write code to manipulate the data and generate a report. From my side, I thoroughly enjoyed discovering RStudio through this presentation, and I hope to find enough bandwidth to use it within a project.
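To make the scoring idea concrete, here is a toy Python sketch of a RAF-style comparison between two test runs. The score formula and the regression tolerance are invented for illustration and are not Stijn's actual framework:

```python
# Toy sketch of a RAF-style scoring comparison between two test runs.
# The score formula and the regression tolerance are invented for this
# example; the real framework is built on Tableau dashboards.
import statistics

def score(run_ms):
    """Lower median latency and lower variance yield a higher score."""
    p50 = statistics.median(run_ms)
    spread = statistics.pstdev(run_ms)
    return max(0.0, 100.0 - p50 / 10.0 - spread / 5.0)

def detect_regression(baseline_ms, candidate_ms, tolerance=5.0):
    """Flag a regression when the candidate's score drops beyond tolerance."""
    return score(baseline_ms) - score(candidate_ms) > tolerance

baseline = [120, 130, 125, 128, 122]   # ms, a healthy run
candidate = [240, 260, 250, 255, 245]  # ms, a clearly slower run
print(detect_regression(baseline, candidate))  # True: regression flagged
```

Reducing each run to a single score is what makes automated test-to-test comparison practical, since a pipeline can act on one number instead of a full report.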

We also had the chance to see Christoph Neumueller from Dynatrace present one of the open-source projects built for thread dump analysis. Thread dump analysis can be painful and requires various tools, depending on the OS. Crash Dump analysis is a Docker-based solution that provides a web portal to run and visualize the thread dump analysis of any system. To fully automate this task, Crash Dump also exposes an API layer.
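For a flavor of what such an analyzer does under the hood, here is a hedged Python sketch that groups JVM threads by state from jstack-style output; it illustrates the idea only and is not the project's implementation:

```python
# Illustration of the kind of parsing a thread dump analyzer performs:
# group JVM threads by state from jstack-style output. This is only a
# sketch of the idea, not the Dynatrace project's implementation.
import re
from collections import Counter

STATE_RE = re.compile(r"java\.lang\.Thread\.State: (\w+)")

def summarize_thread_dump(dump_text):
    """Count threads per state (RUNNABLE, BLOCKED, WAITING, ...)."""
    return Counter(STATE_RE.findall(dump_text))

sample_dump = """\
"worker-1" #12 prio=5
   java.lang.Thread.State: RUNNABLE
"worker-2" #13 prio=5
   java.lang.Thread.State: BLOCKED (on object monitor)
"worker-3" #14 prio=5
   java.lang.Thread.State: BLOCKED (on object monitor)
"""
print(summarize_thread_dump(sample_dump))
```

A spike in BLOCKED threads between two dumps is exactly the kind of signal such a summary surfaces without reading the dump line by line.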


Infrastructure

This pillar was covered primarily by Stefano Doni, who presented the results of using Akamas to tune Cloud configuration for both performance and cost. After a brief explanation of the Akamas concept (an AI-driven auto-tuning system), Doni shared the results of optimizing MongoDB on AWS: Akamas detected the instance type that delivers the best performance at the best price.
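The kind of search such a tuner automates can be sketched as picking the cheapest configuration that still meets a latency objective. The catalog, prices, and latency figures below are made up for the example:

```python
# Illustrative sketch of the optimization an auto-tuner automates: pick
# the cheapest instance type whose measured latency meets the SLO.
# The catalog, prices, and latencies are invented for this example.
CATALOG = [
    # (instance type, hourly price USD, measured p95 latency ms)
    ("m5.large",   0.096, 180),
    ("m5.xlarge",  0.192, 110),
    ("r5.xlarge",  0.252,  95),
    ("c5.2xlarge", 0.340,  70),
]

def best_instance(slo_ms):
    """Cheapest instance whose measured p95 meets the SLO, else None."""
    eligible = [row for row in CATALOG if row[2] <= slo_ms]
    return min(eligible, key=lambda row: row[1])[0] if eligible else None

print(best_instance(120))  # m5.xlarge: cheapest option under a 120 ms SLO
```

Real tuners explore a much larger space (instance families, JVM flags, database settings) and measure rather than look up, but the objective function is the same trade-off between performance and cost.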


User Experience

Scott Moore and Mark Tomlinson led these discussions:

Scott Moore shared the results of comparing HTTP/1.1, HTTP/2, and the upcoming HTTP/3 (QUIC protocol). He explained his approach and the differences between the HTTP versions. The results show advantages and disadvantages depending on the end user's network conditions: HTTP/3, in particular, should improve the experience of users facing network constraints.
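A deliberately simplified toy model helps show why the transport matters under packet loss: with a single in-order TCP connection (HTTP/1.1 style), a retransmission stall delays everything queued behind it, while independent QUIC streams (HTTP/3) only delay the affected stream. The response times and stall duration below are invented numbers:

```python
# Toy model of head-of-line blocking under packet loss. With one
# in-order TCP connection (HTTP/1.1 style), a retransmission stall
# delays every queued response; with independent QUIC streams (HTTP/3),
# only the lost stream pays the stall. All numbers are illustrative.
def serialized_time(responses_ms, stall_ms):
    """Single in-order connection: the stall delays the whole tail."""
    return sum(responses_ms) + stall_ms

def independent_streams_time(responses_ms, stall_ms, lost_index):
    """Per-stream recovery: only the affected stream is delayed."""
    return max(
        t + (stall_ms if i == lost_index else 0)
        for i, t in enumerate(responses_ms)
    )

responses = [50, 40, 60, 30]  # four page resources, in ms
print(serialized_time(responses, 200))              # 380
print(independent_streams_time(responses, 200, 1))  # 240
```

The model ignores bandwidth sharing and congestion control entirely; its only purpose is to show how independent recovery bounds the damage of a single loss.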

Mark Tomlinson took the time to explain the X concept in web performance to the performance testing community. It was a great topic that will help performance engineers who usually don’t have the knowledge or the opportunity to work on client-side performance.

Performance Modeling

Alexander Podelko and Hemalatha Murugesan represented this topic. Alexander Podelko did an excellent job reminding us of the concepts of performance modeling and capacity planning, covering the mathematical models and the different tools available for capacity planning. If you want to learn more about performance modeling, see Podelko’s presentation.
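As a small taste of that classic material, here is a sketch of the M/M/1 queueing model often used as a first capacity-planning approximation, where mean response time is R = S / (1 - U) for service time S and utilization U; the figures are illustrative only:

```python
# Classic capacity-planning sketch: the M/M/1 queueing model, where the
# mean response time is R = S / (1 - U), with service time S and
# utilization U = arrival_rate * S. The figures are illustrative only.
def utilization(arrival_rate, service_time_s):
    """Fraction of time the server is busy."""
    return arrival_rate * service_time_s

def mm1_response_time(arrival_rate, service_time_s):
    """Mean response time; undefined once the system saturates."""
    u = utilization(arrival_rate, service_time_s)
    if u >= 1:
        raise ValueError("system saturated (U >= 1)")
    return service_time_s / (1 - u)

# A 50 ms service at 10 requests/s gives U = 0.5, so R = 100 ms.
print(mm1_response_time(10, 0.05))
```

Even this simplest model captures the key planning insight: response time grows non-linearly as utilization approaches 100%, which is why capacity headroom matters.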

Of course, we also had presentations on the core principles of performance testing. My favorite was Leandro’s, a reminder of the 80/20 (Pareto) rule and the logic to follow to avoid negatively impacting the overall cost of your load testing project.


In case you missed it, don’t worry. Every video and presentation is now available on our website. You can also follow our LinkedIn page for all pertinent PAC information.
