Performance testing has come a long way from the days of conducting tests at the end of the software development life cycle (SDLC) or once things go wrong in production. And we have the DevOps and Agile culture to thank for it. This culture has allowed us to:
- Conduct performance testing early. You have developers running unit-level performance tests before checking in their code.
- Provide performance feedback quickly. This helps address performance issues before they become a major problem and potentially require re-architecting the whole application.
- Involve everyone from the start so performance-related decisions can be made quickly.
However, with this shift, I have also observed different misconceptions, such as:
- If you have experience using automation tools such as Postman, Selenium, etc., then you can conduct performance testing.
- The performance test framework needs to be extended to incorporate other types of test frameworks because we want functional testers to run performance tests. For example, adding the Karate framework to the performance test framework.
- Performance testing can be conducted quickly because that’s what the vendor said about their tool.
In other words, they think the whole activity revolves around the performance test tool or the framework. In their eyes, that is all there is to performance testing. However, performance testing is much more than just the tool or the framework.
There are different activities involved in conducting load testing. The complexity of these activities can vary based on what type of application you are testing, who is involved, and the level of expertise.
Application Simulation Model
- Designing an application simulation model (ASM)
- Knowing concepts such as pacing and think time and how they influence the design
- Deciding how many threads/virtual users are required to generate load
- Knowing how the application will be used by customers or how the external systems will consume your services
- Learning how to capture performance requirements and goals
- What if the client has none? How do you go about collecting them?
- Can you use some kind of heuristics/oracles to help you with that?
- Defining performance requirements in user stories that are measurable
- Determining what kind of data to use in performance testing
- Consumable vs. non-consumable data
- Data skewness
- How much data is required?
- For your script and type of test runs
- Other application components, such as databases
- The effort required to set up the data
- Slicing and dicing the results data to observe different patterns and making sense of them
- Different visualizations to use for analysis vs. reporting
- Understanding key mathematical principles with a focus on basic statistics, especially with respect to sample sizes
- An added benefit is knowing how to analyze heap dumps, thread dumps, and garbage collection logs.
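Several of the points above, pacing, think time, and deciding how many virtual users are needed, come together in Little's Law: concurrency equals arrival rate multiplied by the time each user spends in the system. A minimal sketch of that calculation, where all the throughput and timing numbers are illustrative assumptions rather than values from any real test:

```python
import math

def required_virtual_users(target_rps, response_time_s, think_time_s, pacing_s=0.0):
    """Estimate virtual users via Little's Law:
    concurrency = arrival rate x time in system.

    One scripted iteration occupies a virtual user for the response time
    plus think time plus any extra pacing delay, so the number of users
    needed to sustain the target rate is rate x iteration time, rounded up.
    """
    iteration_time_s = response_time_s + think_time_s + pacing_s
    return math.ceil(target_rps * iteration_time_s)

# Hypothetical targets: 2 requests/s, 1.5 s response time,
# 3 s think time, and 1.5 s pacing -> 6 s per iteration.
users = required_virtual_users(target_rps=2.0, response_time_s=1.5,
                               think_time_s=3.0, pacing_s=1.5)
print(users)  # 2 req/s x 6 s per iteration -> 12 virtual users
```

The same arithmetic also works in reverse: given a fixed thread count, it tells you what pacing to configure to hit a target throughput without over-driving the system.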
Architecture and Environment
- Understanding the differences between test and production environments
- How to model tests when the test environment is not sized the same as production
- What if external systems are not available? Stubbing external systems, and what latency to implement for those stubs
- What if they have a different architecture (e.g., clusters, load-balanced sets, or single instances)?
- What metrics to capture and report? Do you even have monitoring in place to capture the necessary metrics?
- Understanding what each of these metrics means and how to interpret them
- Knowing about different types of systems, protocols and architectures
- Knowing about your application
- How many deliberate failures should be incorporated in the test (e.g., failed logins or payments)?
- Do you know if your code is performance-optimized to handle failures?
- How do you expect the end user will be using the application?
- What components are available for performance testing now?
- Different channels accessing the system (e.g., mobile, desktop), and emulating network latencies to mimic requests from these channels
- Learning to consider the appropriate duration for test runs
- What should be a good enough sample size?
- What if you are running a test in production?
- Do you need to notify anyone?
- Who needs to be on standby in case something goes wrong?
- What matters to end users, business and IT/Ops?
- Learning about different perspectives on “response times” that are to be measured (i.e., is the response time determined by particular data appearing on the screen, or the ability to interact with the screen?)
- Validating test results against production
- Understanding what is happening in production
- Replicating monitoring from production into the test environment and vice versa
- Soft skills (communication, listening, writing skills)
- Front-end performance
- Tuning (database, application, web server, load balancer)
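To make the stubbing point above concrete: when an external system is unavailable, a stand-in service that returns a canned response after an artificial delay lets the test still exercise realistic downstream latency. A minimal sketch using only Python's standard library; the 200 ms delay and the JSON body are illustrative assumptions, and in practice you would base the delay on latency measured against the real system:

```python
import json
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

STUB_LATENCY_S = 0.2  # hypothetical: p50 latency measured from the real downstream
CANNED_BODY = json.dumps({"status": "OK"}).encode()

class StubHandler(BaseHTTPRequestHandler):
    """Answers every GET with a canned payload after an artificial delay."""

    def do_GET(self):
        time.sleep(STUB_LATENCY_S)  # emulate the downstream system's latency
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(CANNED_BODY)))
        self.end_headers()
        self.wfile.write(CANNED_BODY)

    def log_message(self, *args):  # keep the load-test console quiet
        pass

def start_stub(port=0):
    """Start the stub on a daemon thread; return (server, actual_port)."""
    server = ThreadingHTTPServer(("127.0.0.1", port), StubHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]

# Demo: hit the stub once and confirm the configured delay is observed.
server, port = start_stub()
t0 = time.perf_counter()
body = urllib.request.urlopen(f"http://127.0.0.1:{port}/").read()
elapsed = time.perf_counter() - t0
server.shutdown()
```

Pointing the test scripts at the stub's port instead of the real endpoint keeps the rest of the scenario unchanged, while the configurable delay lets you explore how sensitive the system under test is to downstream slowness.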
Knowing how to drive a car does not make someone a mechanic. Similarly, knowing a performance testing tool does not mean someone is doing performance testing; it is just the start of the journey. There is more to performance testing than meets the eye, and it is not just the tools, the scripting, or the test framework.
Takeaway points from this post are:
- Knowing a performance testing tool or how to script does not equate to performance testing.
- The performance testing world is much more than just a performance testing tool.
- Have a mentor or group of people who can guide you in the field of performance testing/engineering. Read books/articles/blog posts to help you expand your knowledge of performance testing.
- Surround yourself with people who have different technical and non-technical knowledge, and learn from them. If you want to learn about databases, find a DBA and learn from them, or read articles on the subject.
- Performance testing and engineering is an ever-evolving field. There is always something new to learn, whether it is tuning MySQL databases or presenting information in a way that makes sense.
If you want to know more about Harinder’s presentation, the recording is already available here.