I will open this post with a yes/no question. It is aimed at people with previous experience on load testing projects. If you are new to load tests and have not had enough experience to answer, don't worry: you will face this eventually.
Have you ever been asked to create load automation for a process with poor performance in plain sight?
This was more or less a trick question, as I already know the answer. Most probably, you responded with a resounding yes! Multiple times, yes!

Why have so many performance engineers experienced this? The reason (and the title of this post) is what I mean by racing, and crashing, your performance. Many organizations and teams want to rush headfirst into load automation even though their solution is not performing well under manual, single-user interactions.
The questions come
To an experienced performance engineer, the situation I asked about represents a flawed request. Sadly, it is not a once-in-a-lifetime experience; it is a request that happens all too often, and it raises several questions in the mind of any experienced performance engineer.
- If poor performance is already visible by manual single user interaction, why bother load testing it?
- Why bother creating automation that may be useless after the performance issue is fixed?
- Why is this happening in so many teams and organizations?
For novice performance scripters, these questions may not come up at all. They are often told to start scripting and slamming load onto the system to measure its performance. Again, even if the performance is already bad. Or even mildly bad.
This approach can be compared to a racing stunt. Professionals may be able to pull it off, and even they crash here and there. When you are new to racing, stunts are complicated. They are also not a good idea when you are supposed to be winning races.

On top of that, if you just rush into the race as things are, you most probably don't know the status of the car. How are the tires? Is there enough air in them? Does the car have enough gasoline? Is the oil filled?
Worse, following my opening question, you can even see that the tires are a bit flat (poor performance). Why in the world would you be asked to race the car?
This flawed approach of slamming the gas pedal straight into load tests exists because of the common (and wrong) belief that performance and load are the same thing, and that load testing is the only way to learn the response times of your processes.

In other words, the only way to know the status of the tires, gas, oil, suspension, and so on is to race the car. We all know that is ridiculous. Right?
Now, time for the answers.
The mighty Thor
One big reason this keeps happening in teams and organizations is the lack of a performance assurance culture. These organizations may have only a load testing culture: all this time, they have been doing load tests thinking that load testing is performance assurance.
This situation is known as the "Man with a hammer" syndrome, or the Law of the instrument. Because the only tool you have at hand is a hammer (load testing tools), you treat everything as if it were a nail (load tests).
It could as well mean that you only learned how to hammer. Then you will proceed accordingly with every task.
Don't treat all your performance assurance needs like nails. Please do not. You have screws to be screwed, welds to be welded, USBs to be plugged, teeth to be brushed. Do not treat them all as nails! Especially your teeth!
It is a dessert
Another big reason is the massive misconception in the industry that performance assurance should happen at the very end of the SDLC (again, just through load tests). That is an incredibly wrong idea, especially in these agile days.
Load tests can happen at the end. Yes. But only after you have polished all the performance metrics and created self-sustainable artifacts, before attempting dangerous racing stunts. The leave-load-until-the-end approach will only delay your project and probably waste resources, not to mention increase the risk of crashes. And you will crash for sure.
Performance assurance should be done from day one of your project. You should know how each piece performs at every step, in the same way that the parts of a car are always tested. Even before mounting a tire, you must know it is balanced, has air, has pressure sensors, has no holes, and so on.
The only way to measure
Moving on. The next reason arose from a genuine (but ancient) need: measuring response times used to be possible only through automation, done with external devices that called the process and timed the response. Automations.

But that is a stone-age approach. It is like bringing your broken car to the mechanic: pinpointing the source of the problem was often tedious, as the mechanic had to, almost literally, jump inside the car to disassemble it and dig out the problem.
Nowadays, there are modern sensors everywhere that let your mechanic simply plug in a scanner and tell you what is broken. The car itself may even show you the problem on the dashboard using those sensors.
In the same way, we nowadays have beautiful things called APMs. More on those below.
This post has been full of problems and wrong approaches, so you may be wondering by now: what is the right way, then?

Well, there are multiple approaches and steps that you could take. It is not mandatory to do them all, but the more you implement, the better your coverage of performance issues will be. It is also recommended to adopt the steps gradually and incrementally, in the order below.
This goes in line with doing preparation and measurement before racing into load tests. Many steps should happen first, each, ideally, in a specific order. So, let's begin, in order.
Production line inspections
In the same way that factories for car parts do, our software assembly line should have performance checks as well, right after the part is produced.
Factories have implemented checks on the production lines that build the car parts. As soon as a part is ready, it must pass some quality verifications. Necessary checks are done here, nothing fancy: visual inspections, quick measurements, and checklists of quality gates. At this point, there are even automated verifications done through robots or scanners to ensure each piece complies with quality standards. The part itself could even have a checking device integrated.
In the same way, as soon as developers finish any piece of code, basic performance measurements should be conducted. These could include response time limits, database reads, connection limits, and so on. This is a task that should be assigned to developers as quality gates, providing instructions, tools, and guidelines on how to comply. Most SDKs nowadays come with performance metric analysis integrated into the debugging tools.
Another approach is to automate performance readings on your CI platform. As soon as a developer submits code, a gate can require basic response time metrics before allowing the check-in. Or, going further, the CI can execute the code and gather the metrics itself.
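As a minimal sketch of such a CI gate, a script like the one below could run in the pipeline and fail the build when a basic response-time budget is exceeded. The function name, the budget value, and the simulated workload are all hypothetical; substitute your real code path and the limits your team agrees on.

```python
import statistics
import sys
import time

# Hypothetical budget agreed with the team, in milliseconds.
RESPONSE_TIME_BUDGET_MS = 200

def process_order():
    """Stand-in for the code path being gated; replace with a real call."""
    time.sleep(0.01)  # simulate 10 ms of work

def median_response_ms(fn, samples=20):
    """Time several executions and return the median in milliseconds."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        fn()
        timings.append((time.perf_counter() - start) * 1000)
    return statistics.median(timings)

if __name__ == "__main__":
    median_ms = median_response_ms(process_order)
    print(f"median response time: {median_ms:.1f} ms")
    if median_ms > RESPONSE_TIME_BUDGET_MS:
        sys.exit(1)  # a non-zero exit fails the CI job
```

The median is used instead of a single sample so one slow warm-up run does not fail the build for no reason.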
The opportunities are endless here, folks!
Test each new piece
Once you produce a tire (rim, rubber, and sensors), you must spin it, make sure it is balanced, verify that it can roll on its own, check that it holds air, and so on. You put each produced tire on a quick spin device that places it under a little bit of stress, just to make sure it can take some struggle on its own.

You could even test each piece on its own up to stress or breaking points, just to make sure that, once real pressure comes to them, assembled on the car and in the race, they can take the heat. After that, you can even put four tires together and spin them on an axis to make sure they will not hit each other or cause any unwanted vibrations.
In the same way, when each piece of software is released, we should create automation that can be run every time a new part is released. This automation should be treated as part of the "done" criteria for each module, function, or class. It must be possible to execute it and gather metrics for single executions, repeated executions, and low stress levels. If the need arises, each code component could even be stressed to its breaking point in iterations and concurrency, to ensure it can handle the load.
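A per-component check like this could be sketched as a small harness that runs the component once, then repeatedly, then under a little concurrency, and reports the timings. Everything here is illustrative; the entry point and the iteration and thread counts are assumptions you would tune per module.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def timed_ms(fn):
    """Run fn once and return its duration in milliseconds."""
    start = time.perf_counter()
    fn()
    return (time.perf_counter() - start) * 1000

def component_check(fn, iterations=50, threads=4):
    """Gather timings for a single run, repeated runs,
    and a low-stress concurrent run of one component."""
    single = timed_ms(fn)
    repeated = [timed_ms(fn) for _ in range(iterations)]
    with ThreadPoolExecutor(max_workers=threads) as pool:
        concurrent = list(pool.map(lambda _: timed_ms(fn), range(iterations)))
    return {
        "single_ms": single,
        "repeated_avg_ms": sum(repeated) / len(repeated),
        "concurrent_avg_ms": sum(concurrent) / len(concurrent),
    }
```

Running `component_check(my_module_entry_point)` on every release of the module, and comparing the three numbers against the module's budget, makes the check part of the "done" criteria rather than a one-off effort.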
After each piece is installed on the car, more testing actions take place. The assembly line checks that the part has a good grip where it is mounted, that it works well with the rest, and even, visually, that it looks as it should.
Once the team has automation in place and has tested the pieces each on their own, we can proceed to test them together, at the same time. Again, incrementally: first aligning all the automation to start at the same moment, with only one thread and one iteration each.

Then we can increase the load a bit by running only one thread per automation and letting them iterate for a while. Then we can increase the concurrency of each a bit. Remember, these are low-stress tests just to check that performance does not degrade while the pieces work concurrently.
Don’t push them too hard (yet).
Speedometer. Gas gauge. Odometer. Indicator lights. You will install all of those and more on the car. You should not need a race to find out that something is wrong with your car; you should have that information on an indicator. Even for mechanical failures, those great things called scanners can be plugged into the car to find out what is going on, even at low speeds.
In the same way, APMs (Application Performance Management tools) have come to help and save the day in terms of performance for application development, control, and error detection.
They are near the bottom of this list, but they should be one of the first efforts implemented, in all the environments of the solution: from Dev to Prod, passing through test, stage, pre-prod, and so on. This will incrementally enable earlier detection. APMs provide capabilities to detect misconfigurations among the environments and degradations, and even close the production loop for Ops-Dev optimization.
Given that you now have dashboards on the car, it is safer to take it out for a ride. Not for a race; not yet, at least. This is just a test drive, one that was going to be done anyway. Only now you have scanners and dashboards on the car, so you will be able to detect any issue while just driving it!
With APMs in place, you can leverage other testing practices while receiving performance metrics at the same time. Let your unit test, functional test, UAT, and other teams do their thing while the APM gathers the performance metrics. Some of those practices can even count as a very realistic small-scale load test (especially UAT, where you have a bunch of real-life, in-the-flesh users banging on the system as it is supposed to be used).
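Real APMs instrument your code through agents and ship metrics to a backend automatically, but the idea of piggybacking on other test runs can be illustrated with a hand-rolled timing decorator. Everything here, the `metrics` list, the `monitored` decorator, and the `checkout` function, is a hypothetical stand-in, not any vendor's API.

```python
import functools
import time

# In a real APM, an agent ships these samples to a backend automatically;
# this in-process list is only a stand-in for illustration.
metrics = []

def monitored(fn):
    """Record the duration of every call, APM-style, without
    changing the function's behavior or return value."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            metrics.append((fn.__name__, (time.perf_counter() - start) * 1000))
    return wrapper

@monitored
def checkout():
    """Hypothetical business function exercised by functional or UAT runs."""
    time.sleep(0.005)  # simulate real work

checkout()
print(metrics[-1][0], f"{metrics[-1][1]:.1f} ms")
```

Because the measurement happens inside the decorator, the UAT or functional testers never change how they work; the timings accumulate as a side effect of their normal runs.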
Fix as detected
In all our car assembly and preparation examples, we included multiple checkpoints. They are supposed to raise big red flags at each step of our racing car assembly process. In real-life car production, as soon as a problem is detected in a part, it is fixed. There and then. Not later, not after assembling the defective part, and certainly not during or after the race.
In the same way, issues detected using all the recommendations above should be fixed right away. Letting them travel further along the SDLC can bring big problems, make them harder to fix, and, most importantly, make them exponentially more expensive to fix.

As a general recommendation, fix each defect as soon as it is detected; it is faster, cheaper, and safer there. Not in a hardening sprint, not at the end of the sprint, and not in QA environments. God forbid in production!
Ready to race!
Well, by now, if you have applied all the recommendations above, your car is ready to race! You can now test its limits, check how fast it can go, and see how narrowly it can take the curves at max speed.

You can now start automating for load. Do all the big-time load tests you want. You can be confident that you are now testing the upper limits of the solution instead of crashing at every little attempt.
Learn More about the Performance Advisory Council
Want to see the full conversation? Check out the presentation here.