Context-Driven Performance Testing
Instead of a single way of doing performance testing, we now have a full spectrum of tests that can be run at different moments – so deciding what and when to test has become a non-trivial task that depends heavily on the context.
The recent revolution in software development – including agile / iterative development, cloud computing, continuous integration, and more – has opened new opportunities for performance testing, such as early and continuous performance testing, and has affected its role in performance engineering. However, performance testing in general, and specific performance testing techniques in particular, should be considered in full context – including environments, products, teams, issues, goals, budgets, timeframes, risks, etc. The question is not which technique is better – the question is which technique (or which combination of techniques) to use in a particular case (or, in more traditional wording, what the performance testing strategy should be). The term context-driven seems a great fit here – in its classical form as described at http://context-driven-testing.com/.
Drastic changes in the industry in recent years – agile development and cloud computing probably most of all – have significantly expanded the performance testing horizon. Where there used to be essentially a single way of doing performance testing (with everything else considered rather exotic), we now have a full spectrum of tests that can be run at different moments, so deciding what and when to test has become a non-trivial task that depends heavily on the context.
For example, the purpose of continuous performance testing is, basically, regression performance testing: checking that no unexpected performance degradations have happened between runs, and verifying expected performance changes against the established baseline. It may start early (although that may be a bigger challenge at very early stages) – and should probably continue as long as changes are being made to the system. It may be done at the component level or at the system level (considering that not all functionality of the system is available in the beginning). Theoretically, it could even take the form of full-scale, realistic system-level tests – but that doesn’t make sense in most contexts.
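To make the regression idea concrete, here is a minimal sketch of a baseline comparison check such as one might run in a continuous integration pipeline. The operation names, baseline numbers, and the 10% tolerance are illustrative assumptions, not values the article prescribes.

```python
# Hypothetical baseline of per-operation response times (milliseconds),
# established from a previous, accepted test run.
BASELINE_MS = {"login": 120.0, "search": 250.0, "report": 900.0}
TOLERANCE = 0.10  # allow up to 10% degradation before flagging (an assumption)

def find_regressions(current_ms, baseline_ms=BASELINE_MS, tolerance=TOLERANCE):
    """Return {operation: degradation_ratio} for operations that got slower
    than the baseline by more than the allowed tolerance."""
    regressions = {}
    for op, baseline in baseline_ms.items():
        current = current_ms.get(op)
        if current is None:
            continue  # operation not measured in this run
        degradation = (current - baseline) / baseline
        if degradation > tolerance:
            regressions[op] = degradation
    return regressions

# Example run: "search" degraded by 20%, exceeding the 10% tolerance.
run = {"login": 118.0, "search": 300.0, "report": 910.0}
print(find_regressions(run))  # {'search': 0.2}
```

A real setup would feed measurements from the load-testing tool of choice and might also alert on unexpected improvements, since those can indicate a broken test rather than a genuine gain.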
Moreover, performance testing is not the only way to mitigate performance risks – there are other approaches too, and the dynamics of their usage change over time. So the art of performance engineering is to find the best strategy for combining different performance tests with other risk-mitigation approaches, optimizing the risk-mitigation-to-cost ratio for, of course, the specific context.
Alex Podelko has specialized in performance since 1997, working as a performance engineer and architect for several companies. Currently, he is a Consulting Member of Technical Staff at Oracle, responsible for performance testing and optimization of Enterprise Performance Management and Business Intelligence (a.k.a. Hyperion) products.
Alex periodically talks and writes about performance-related topics, advocating tearing down silo walls between different groups of performance professionals. His collection of performance-related links and documents (including his recent articles and presentations) can be found at http://www.alexanderpodelko.com. He blogs at http://alexanderpodelko.com/blog and can be found on Twitter as @apodelko. Alex currently serves as a director for the Computer Measurement Group (CMG, http://cmg.org), an organization of performance and capacity planning professionals.