The Differences Between Mobile and Web Performance Testing

If you are testing your mobile applications the same way you’ve been testing typical web applications, you’re missing a big part of the picture – and your users may be the ones who suffer.

Mobile applications don’t perform the same way as their web-based counterparts. The user experience is shaped by additional factors such as device capability and network conditions. If you are building a performance testing strategy for your mobile website or native mobile application, don’t simply migrate your existing test plans to a mobile environment – it’s not that simple. There are significant differences between performance testing mobile and web applications, and you need a plan that gives those factors the attention they deserve.

Mobile apps require specialized testing strategies along three key vectors that don’t typically apply to traditional web-based apps. In this article, we want to equip you with the knowledge to build a suitable set of test plans and make sure your users get an excellent experience regardless of their device – PC or mobile.

The Device Makes a Difference

Web servers detect which device a user has (typically from the User-Agent header) and will often send different content – or redirect users to a completely separate mobile version of the site. In addition, responsive web applications are designed to adjust their look, feel, and behavior based on the size of the screen being used. Couple this with the wide variety of devices on the market and the number of combinations explodes.

This makes performance testing your web or mobile application on different devices extremely important, because each device renders the application’s content differently – which can affect performance in major ways.

You need a clear performance testing plan to understand how these variations affect your application’s performance characteristics. Your mobile performance testing plan needs to account for factors including processing power, screen size, bandwidth, platform, and parallel connections. For example, devices support varying numbers of parallel connections to servers, which can greatly affect the performance of the application.
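To make the parallel-connection point concrete, here is a minimal back-of-the-envelope sketch. All numbers are hypothetical, and real browsers reuse and pipeline connections, so treat this as an illustration only:

```python
import math

def page_load_estimate(num_resources: int, parallel_connections: int,
                       time_per_resource: float) -> float:
    """Rough page load time when a fixed pool of parallel connections
    fetches resources in rounds: each round costs one fetch time."""
    rounds = math.ceil(num_resources / parallel_connections)
    return rounds * time_per_resource

# A page with 60 sub-requests at 0.2 s each: an older device limited to
# 4 parallel connections vs. a newer one that supports 8 (hypothetical).
old_device = page_load_estimate(60, 4, 0.2)  # 15 rounds -> 3.0 s
new_device = page_load_estimate(60, 8, 0.2)  #  8 rounds -> 1.6 s
print(old_device, new_device)
```

Even with everything else held equal, the device that supports fewer parallel connections nearly doubles the estimated load time – which is exactly why connection limits belong in your device test matrix.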

To manage this variety, you may want to strategically introduce mobile emulators into your performance testing plan. Emulators can be extremely useful when recording test scenarios for native mobile apps. Although emulators come close to the real environment, they cannot entirely replace real devices. For example, you may want to know what the application does to CPU usage and battery consumption under load – something emulators cannot capture. For more information on this subject, please check out our post on Mobile Emulators.

Key takeaways:

  • Test a wide variety of devices with different screen sizes – not just for functional differences, but for performance differences as well
  • Select a range of physical devices to test various levels of processing power, platform, and parallel connections
  • Incorporate both emulators and real devices into your mix to cover as much device variety as possible

Always Consider the Network

Today’s mobile devices generally access the server over networks that are slower than those used by desktop computers. Network conditions have a significant effect on the user experience, and the effect may be more or less pronounced depending on the application. Network characteristics including bandwidth, latency, and packet loss have a huge impact on client response times and on the way the server is loaded. By emulating different network conditions in a test lab environment, you can forecast the effects of changes in network connectivity on the application’s performance. Doing so also allows you to discover application issues in the development cycle, therefore reducing costs.

For example, low bandwidth increases the time it takes to download a resource, which results in higher page load times. And when clients stay connected longer, front-end servers hold sockets longer, load balancers have more active TCP sessions, and application servers use more threads.

Mobile networks have limited bandwidth and high latency compared to WiFi and broadband. Since latency adds time to every request, and web pages are composed of many sub-requests, the time required to load a page on a mobile device depends heavily on latency. In fact, because of the physical limitations of networks, latency can become a bigger bottleneck than bandwidth – even connections with very high bandwidth can’t get around latency issues.
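The bandwidth and latency effects above can be sketched with simple arithmetic. This is a deliberately simplified model – hypothetical numbers, requests fully serialized, one round trip each – meant only to show how latency comes to dominate:

```python
def load_time(num_requests: int, bytes_per_request: int,
              bandwidth_bps: float, rtt_s: float) -> float:
    """Rough serialized page load time: every request pays one round trip
    of latency plus its transfer time at the available bandwidth."""
    transfer_s = num_requests * bytes_per_request * 8 / bandwidth_bps
    latency_s = num_requests * rtt_s
    return transfer_s + latency_s

# The same 50-request, 40 KB-per-resource page on two links (illustrative):
broadband = load_time(50, 40_000, 50_000_000, 0.020)  # 0.32 s transfer + 1.0 s latency
mobile    = load_time(50, 40_000, 5_000_000, 0.120)   # 3.2 s transfer + 6.0 s latency
print(f"broadband: {broadband:.2f} s, mobile: {mobile:.2f} s")
```

On the mobile link, the latency term (6.0 s) outweighs the transfer term (3.2 s) – which is why cutting round trips often helps mobile users more than adding bandwidth does.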

Limiting bandwidth and simulating latency and packet loss during a load test allows you to check that all of your users, including mobile users, will get the best user experience and acceptable response times while ensuring your servers won’t have problems under load.
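Packet loss compounds this: one widely used rule of thumb, the Mathis et al. approximation, bounds steady-state TCP throughput by latency and loss rate regardless of the link’s nominal bandwidth. The sketch below uses hypothetical numbers:

```python
import math

def tcp_throughput_bound(mss_bytes: float, rtt_s: float, loss_rate: float) -> float:
    """Mathis et al. approximation of steady-state TCP throughput (bytes/s):
    throughput <= (MSS / RTT) * (C / sqrt(p)), with C ~= 1.22."""
    C = 1.22
    return (mss_bytes / rtt_s) * (C / math.sqrt(loss_rate))

# 1460-byte segments, 100 ms RTT, 1% packet loss (hypothetical mobile link):
bound_bps = tcp_throughput_bound(1460, 0.100, 0.01) * 8
print(f"~{bound_bps / 1_000_000:.1f} Mbit/s ceiling")
```

Under those conditions a single TCP connection tops out at roughly 1.4 Mbit/s – even on a link advertised at many times that speed – which is why simulating loss and latency in a load test exposes problems that raw bandwidth tests never will.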

Key takeaways:

  • Test across a varying range of connection speeds with different signal strengths
  • Incorporate simulated latency and packet loss to see how applications behave and recover
  • Incorporate regional or geographic-based testing for best results

User Expectations are High

According to a survey conducted by EffectiveUI, a majority of the 780 individuals surveyed said they would abandon a mobile app if it is slow or difficult to use. Similarly, if a web application is too slow – even by 400 milliseconds, according to Google engineers – users will abandon your website.

When you load test your application with the user experience in mind, you have to take into account a new set of failure scenarios. That means resetting the criteria for what your users consider good and bad performance. Will your users tolerate a thirty-second delay on their device, or will they switch applications?

Take into account the responsiveness on the device. Is the content being downloaded and accessed quickly enough to keep users happy? If so, you are heading in the right direction and taking into consideration the end user experience as well as the performance of the application.
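One way to encode those reset criteria in a load test is a per-platform response-time budget. Everything here – the budget values, the helper name, the 95th-percentile choice – is a hypothetical sketch, not a prescribed standard:

```python
# Hypothetical per-platform response-time budgets (seconds).
BUDGETS = {"desktop": 2.0, "mobile": 3.0}

def passes_budget(platform, response_times, percentile=0.95):
    """True if the given percentile of measured response times
    falls within the platform's budget."""
    ordered = sorted(response_times)
    idx = min(int(len(ordered) * percentile), len(ordered) - 1)
    return ordered[idx] <= BUDGETS[platform]

samples = [0.8, 1.1, 1.4, 1.9, 2.2, 2.6, 2.9, 3.4, 1.2, 1.0]
print(passes_budget("mobile", samples))  # the 3.4 s outlier blows the budget
```

Using a percentile rather than an average keeps a handful of fast responses from masking the slow experiences that actually drive users away.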

Key takeaways:

  • Conduct usability tests with real users to understand how they perceive the difference in performance on PCs and mobile devices
  • Adjust your acceptance criteria to incorporate faster user expectations on mobile devices
  • Compare your mobile site responsiveness to benchmarks

Always understand your application from the end user experience and design test scenarios accordingly. It only takes one bad experience to make a customer leave, so don’t let it happen to your business. If you have questions about testing the performance of your apps for mobile users, contact our performance experts at Neotys.
