Digital transformation has been adopted by organizations across the globe, allowing them to streamline business processes, accelerate innovation, and pursue new revenue sources. It has compelled companies to focus on essential organizational competencies that promote ideation, collaboration, and flexibility. In short, digital transformation has allowed businesses to replace legacy, low-scale, low-leverage point solutions with nimbler tools that transform enterprise offerings, help organizations deliver value faster, and achieve better business results than ever before.
So what’s the problem? And as a tester, why should you care?
Inherent Risks of Digital Transformation – Technical Failure and Revenue Loss
In today’s competitive, global, and digital world, customers demand ever more value, personalized communication, targeted messaging, and excellent user experiences. Customer demand is driving organizations to work feverishly at being more agile, customer-centric, and efficient as they strive to capture every opportunity to increase revenue. Amazon and Google achieve their differentiation via a relentless focus on quality of service. Protecting competitive advantage and market dominance stems from a commitment to excellence – elegance in application design, optimized speed and performance, and extreme usability. A key priority in this battle for dominance is application performance – performance undergirded by a relentless focus on load testing.
Brand offerings must be strategically tested to minimize errors, delays, and the crashes resulting from traffic spikes and connectivity lapses. Strategic testing, however, does not imply limited testing. In fact, the application under test must be exposed to comprehensive, across-the-board test interaction with every application layer. Applications must maintain quality benchmarks while ensuring exemplary user experiences and delivering promised value. When offerings are not adequately tested, companies not only lose revenue; they suffer a loss of goodwill, must contend with social media outrage, and must invest effort and resources to win back the confidence of frustrated customers.
Consider a real-world example. Two days before Christmas, a friend decided to buy a Walmart gift card. He attempted to purchase the card five times on Walmart’s site. Each time, he input his credit card number, completed the transaction, and received a confirmation notification. And each time, within 30 minutes, he received an email from Walmart declaring that they were unable to process the transaction. Frustrated, my friend finally gave up and purchased a Target gift card using the same credit card he used on the Walmart site. The operation went through on the first try.
This scenario drives home several points:
- Walmart lost a $100 transaction and an undetermined amount of additional revenue from the card’s intended recipient.
- It’s unlikely that my friend was the only prospective gift card buyer that day; many last-minute shoppers were contemplating a Christmas gift card purchase. Walmart lost that revenue too.
- My friend and undoubtedly many other gift card shoppers were warmly welcomed by Walmart’s competitors, vendors who were just a few clicks away, ready and eager to process any and all gift card transactions.
Realistic Test Strategies Reduce the Risk of Application Failure
Testers formulate realistic test strategies to ensure sufficient testing coverage within the timeframe allotted for a test. QA teams understand that it’s not possible to test 100% of any application. Therefore, their test strategy must address the application facets and components that pose the most risk. This sounds simple, but in the world of digital transformation, formulating a test strategy can itself be fraught with risk.
Risk creeps into applications from many directions. Consumers in the digital economy continually raise the bar for the user experience and provide real-time feedback directly to businesses, while demanding faster transactions and more efficient, personalized digital experiences. On the business side, initiatives to expand reach and increase monetization opportunities introduce more complexity into enterprise architectures and ecosystems by incorporating analytics, mobile devices, and the Internet of Things. Add in fluctuating market forces and the advent of new technologies, and the emphasis on application performance becomes even more intense. As test timeframes shrink, testers need to prioritize testing the application components that support transactions over those that support, say, customer profile management. The first directly supports revenue; the second arguably affects only the user experience.
Neotys has observed that QA teams can identify 75-90% of defects and performance issues using only 10-15% of their most effective test scenarios. However, for this to work in practice, testers must have access to reliable builds, have sufficient application architecture awareness (something often easily addressed in agile and collaborative environments), and have access to the appropriate tools for diagnosis. And of course, they need to craft realistic yet comprehensive test strategies.
The Essence of a Realistic Testing Strategy
How the word “realistic” gets defined depends on whom you speak with and where they work. Testing strategies vary by organization, depending on the Application Under Test (AUT), resource and tool availability, and desired business goals. The reasons for this are many. The business needs to define its quality benchmarks and designate which pillars of quality are most critical to the brand. Further, team dynamics influence testing process flexibility and speed. For example, agile environments demand that load tests be executed at the beginning of the development process and that the application proceed through continuous testing. Other development environments may not stipulate this requirement.
There are several fundamental facets of a realistic test strategy.
Design Realistic Tests
Testers need to understand how software applications should respond to real-world scenarios; this insight provides the basis for successful performance test design, helping teams prioritize which areas of the application are most risky. As part of this process, testers must consider the device types, environments, anticipated load, and data types that must be supported within the application ecosystem. They then need to align this understanding with their preproduction environments to determine which scenarios can be tested in preproduction versus production.
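To make this concrete, here is a minimal sketch of weighting simulated user journeys to mirror real-world traffic, so a load test exercises the application the way actual users do. The journey names and percentages are illustrative assumptions, not real measurements.

```python
import random

# Hypothetical user journeys, weighted to reflect assumed production traffic.
# A realistic test drives virtual users through these paths in proportion.
USER_JOURNEYS = {
    "browse_catalog": 0.50,   # most visitors only browse
    "search_and_view": 0.30,
    "add_to_cart": 0.15,
    "checkout": 0.05,         # rare but revenue-critical: highest test priority
}

def pick_journey(rng: random.Random) -> str:
    """Select a journey for one simulated virtual user, weighted by frequency."""
    journeys, weights = zip(*USER_JOURNEYS.items())
    return rng.choices(journeys, weights=weights, k=1)[0]

# Simulate the journey mix for 10,000 virtual users.
rng = random.Random(42)
counts = {journey: 0 for journey in USER_JOURNEYS}
for _ in range(10_000):
    counts[pick_journey(rng)] += 1
print(counts)
```

Note the tension this illustrates: checkout is the rarest path but the one whose failure costs revenue, so a risk-based strategy tests it heavily even though the traffic model keeps its share small.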
Consider Key Performance Targets
Service Level Targets
- Availability or “uptime”: The proportion of time an application is accessible to the end user.
- Response Time: How long it takes for the application to respond to a user request, typically measured from the moment the request is sent to the moment the response is received.
- Throughput: Measures the rate of application events (e.g., the number of web page views within a specified period).
- Utilization: The portion of an application resource’s capacity that is in use. This has many parameters relating to the network and servers, such as network bandwidth, system memory, etc.
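In practice, these targets become pass/fail gates for a load test run. The following sketch checks measured values against a set of service-level targets; the threshold numbers are assumptions for illustration, not recommended standards.

```python
# Illustrative service-level targets (thresholds are assumptions).
TARGETS = {
    "availability_pct": 99.9,    # minimum acceptable uptime
    "response_time_ms": 500,     # maximum acceptable response time
    "throughput_rps": 200,       # minimum requests handled per second
    "cpu_utilization_pct": 80,   # maximum sustained CPU use
}

def evaluate(measured: dict) -> dict:
    """Return pass/fail per target: 'minimum' metrics fail when undershot,
    'maximum' metrics fail when exceeded."""
    return {
        "availability_pct": measured["availability_pct"] >= TARGETS["availability_pct"],
        "response_time_ms": measured["response_time_ms"] <= TARGETS["response_time_ms"],
        "throughput_rps": measured["throughput_rps"] >= TARGETS["throughput_rps"],
        "cpu_utilization_pct": measured["cpu_utilization_pct"] <= TARGETS["cpu_utilization_pct"],
    }

# Hypothetical results from one load test run.
results = evaluate({
    "availability_pct": 99.95,
    "response_time_ms": 620,   # exceeds the 500 ms target -> fails
    "throughput_rps": 240,
    "cpu_utilization_pct": 72,
})
print(results)
```

A run like this one passes on availability, throughput, and utilization but fails the response-time gate, which is exactly the kind of signal that should block a release.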
Define and Quantify Performance Metrics
- Expected response time (time required to send and receive a request-response)
- Average latency time
- Average load time
- Anticipated error rates
- Peak activity (users) at specified points in time
- Peak number of requests processed per second
- CPU and memory utilization requirements to process each request
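Several of the metrics above can be computed directly from a request log. Here is a self-contained sketch using a tiny synthetic log; the timestamps, latencies, and success flags are made-up values for illustration.

```python
import statistics
from collections import Counter

# Synthetic request log: (timestamp_sec, latency_ms, succeeded).
requests = [
    (0.1, 120, True), (0.4, 180, True), (0.7, 95, True),
    (1.2, 400, True), (1.5, 150, False), (1.8, 210, True),
    (2.3, 130, True), (2.6, 160, True), (2.9, 500, False),
]

# Average latency across all requests.
latencies = [latency for _, latency, _ in requests]
avg_latency_ms = statistics.mean(latencies)

# Error rate: fraction of requests that failed.
error_rate = sum(1 for _, _, ok in requests if not ok) / len(requests)

# Peak requests per second: bucket timestamps into whole-second windows
# and take the busiest window.
per_second = Counter(int(ts) for ts, _, _ in requests)
peak_rps = max(per_second.values())

print(f"avg latency: {avg_latency_ms:.1f} ms")
print(f"error rate: {error_rate:.1%}")
print(f"peak rps: {peak_rps}")
```

The same bucketing approach extends to peak concurrent users, and averages are usually supplemented with percentiles (e.g., `statistics.quantiles`) since a handful of slow outliers can hide behind a healthy mean.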
Be Mindful of the User Experience
The people using the application may be geographically distributed, which can affect bandwidth, data transfer efficacy (dropped packets), and latency. User behavior describes how users interact with your application. Understanding these behaviors and the paths users take to navigate through workflows is an essential underpinning of a realistic test strategy. Further, users’ preferred devices will vary and possess different hardware, firmware, and operating systems. Realistic performance test design needs to take all these factors into account.
As businesses journey through the cultural and process changes of digital transformation, risk permeates all areas of application development. To maintain service levels, availability, capacity, and response time, QA teams must focus on application speed and performance. Whenever testing places too little emphasis on these areas, the risk of application failure and revenue loss increases dramatically. To ensure application success, QA teams need to craft test strategies that prioritize performance testing and run it earlier in the development cycle.