[By Limor Leah Wainstein]
This post is the second in a three-part series on how to properly test the software you develop. The series began by discussing general best practices and strategies for effective software testing and gave a brief overview of modeling software tests. The focus now shifts to conducting a risk analysis during software testing and measuring the right test metrics. The next post will complete the series by covering load testing and trend analyses.
Conduct A Risk Analysis
A risk analysis is an important part of software testing that involves examining an application’s source code for code violations that threaten the stability, security, or performance of the system. Testers analyze the code both for risks within the code itself and for risks between units that must interact inside the application. This is particularly important for complex applications built with multiple frameworks and languages, because it’s often in these interactions that most of the risk lies. A risk analysis helps to identify hidden software flaws that pose serious threats to how the software functions before it’s released into production. Since software flaws found in production are extremely costly to fix, a risk analysis is imperative for understanding exactly what can go wrong with the application before it goes live.
In terms of the overall testing effort, a risk analysis assists with prioritizing what should be tested, so that the risks with the potential to cause the most harm can be proactively reduced: those risks should be tested earliest and most frequently. Two approaches for thinking about software risks are:
- Effect: Consider the impact of possible outcomes or events that might occur.
- Probability: Measure the likelihood of different undesirable outcomes occurring.
With these two criteria in mind, you can then create a simple risk assessment table that highlights where to focus efforts for a risk mitigation plan. You can also think about causes of risks by stating a particularly undesirable outcome that might arise in an application and considering all possible events that could lead to such an outcome.
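The risk assessment table described above can be sketched in a few lines of code. This is a minimal illustration, not a prescribed format: the application areas, the 1–3 scoring scale, and the use of probability × impact as a combined risk score are all assumptions made for the example.

```python
# Sketch of a simple risk assessment table: each entry scores an area of the
# application by probability and impact (both on an assumed 1-3 scale), then
# sorts by the combined score so the riskiest areas are tested first.
# The area names and scores below are purely illustrative.

risks = [
    {"area": "payment processing", "probability": 2, "impact": 3},
    {"area": "report export",      "probability": 3, "impact": 1},
    {"area": "user login",         "probability": 1, "impact": 3},
]

# Combined risk score: likelihood of the bad outcome times its effect.
for r in risks:
    r["score"] = r["probability"] * r["impact"]

# Highest-scoring risks come first in the test plan.
for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f'{r["area"]}: risk score {r["score"]}')
```

Even a toy table like this makes the prioritization argument concrete: a moderately likely defect in a high-impact area outranks a very likely defect in a low-impact one.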
Measure the Right Test Metrics
Regardless of how you test your applications, it’s important to measure the testing effort. Keeping track of the right test metrics leads to improvement in software testing efforts and, by extension, higher software quality, because good metrics provide actionable information and feedback for development teams.
Some examples of test metrics used to drive improvement in testing efforts include:
- Code complexity—the more complex a codebase is, the greater the risk of something going wrong with that application. Code complexity refers to a broad set of measurements, of which cyclomatic complexity is the most commonly used.
- Escaped defects—ideally, a good testing effort finds defects before the software is released. Measuring this metric per release or per unit of time helps to ensure continuous improvement in testing and development processes, because teams can closely scrutinize escaped defects and prevent the same issues from recurring in future releases.
- Defect cycle time—this metric measures how long it takes to fix a defect from the moment the testing team finds the problem until the moment it’s resolved. To ensure quicker release times in fast-paced modern development teams, defects must be resolved swiftly.
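Two of the metrics above, escaped defects and defect cycle time, are straightforward to compute once the underlying data is tracked. The sketch below shows one way to do so; the release names, defect counts, and dates are hypothetical, and the field names are assumptions made for the example.

```python
from datetime import date

# Illustrative per-release data: defects caught by testing vs. defects that
# escaped into production. The numbers are made up for the example.
releases = {
    "v1.0": {"found_in_testing": 40, "escaped": 8},
    "v1.1": {"found_in_testing": 35, "escaped": 3},
}

# Escaped-defect rate: share of all known defects that slipped past testing.
for name, r in releases.items():
    total = r["found_in_testing"] + r["escaped"]
    rate = r["escaped"] / total
    print(f"{name}: escaped-defect rate {rate:.1%}")

# Defect cycle time: days from a defect being found to its resolution.
defects = [
    {"found": date(2018, 5, 1), "resolved": date(2018, 5, 4)},
    {"found": date(2018, 5, 2), "resolved": date(2018, 5, 9)},
]
cycle_days = [(d["resolved"] - d["found"]).days for d in defects]
print(f"average cycle time: {sum(cycle_days) / len(cycle_days):.1f} days")
```

Tracking these numbers release over release is what turns them into actionable metrics: a falling escaped-defect rate or shrinking cycle time is direct evidence that the testing process is improving.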
It’s important to note how test metrics and a test strategy are inextricably linked. The insights gleaned from previous testing efforts in the form of test metrics should be used to improve upon test strategies for future software releases, ensuring that the same issues don’t crop up again in the testing effort.
Key Takeaways
- Conducting a full risk analysis is a non-negotiable part of any successful software testing effort. Risks with the highest probability and the most severe impact should be tested early and often to avoid serious issues in production, which can cost a lot of money to fix.
- Test metrics can improve testing processes, identify errors and inefficiencies, and lead to better risk mitigation with future testing efforts.
- Any worthwhile metric should provide actionable insights that can lead to improvements in future software testing efforts.
About Limor Leah Wainstein:
Limor is a technical writer and editor focused on technology and SaaS markets. She began working in Agile teams over ten years ago and has been writing technical articles and documentation ever since. She writes for various audiences, including on-site technical content, software documentation, and dev guides. She specializes in big data analytics, computer/network security, middleware, software development, and APIs.