An efficient way to highlight performance variations between two sets of test results is to plot the statistics for the same item (page, request, Container) in both tests simultaneously.
This provides a visual comparison of the application's behavior under different scenarios, or following a modification (e.g. an update or optimization).
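The comparison described above can be sketched in a few lines. This is a conceptual illustration only, not NeoLoad's API; the item name, sample values, and variable names are all hypothetical.

```python
# Hypothetical response times (seconds) for the same page "/login"
# measured in two separate test runs.
run_a = [0.42, 0.45, 0.40, 0.43]   # baseline test
run_b = [0.61, 0.58, 0.65, 0.60]   # after the application update

avg_a = sum(run_a) / len(run_a)
avg_b = sum(run_b) / len(run_b)

# Relative change between the two runs for this item.
delta_pct = (avg_b - avg_a) / avg_a * 100
```

Plotting both series on the same chart, or reporting `delta_pct` per item, makes a regression after the update immediately visible.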
An efficient way to pinpoint performance problems is to filter the test results. The aim is to limit the test statistics to the items (request, page, Container, Virtual User, Population, and so on) that exhibit the problem.
For example, you may narrow down the statistics to a specified time period during the test run; this displays the statistics as if the test had been carried out over that time period only.
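The time-period filter can be sketched as follows. Again, this is a conceptual illustration, not NeoLoad's API; the sample tuples and the `stats_in_window` helper are hypothetical.

```python
# Each sample is (elapsed_seconds, response_time_seconds); values are made up.
samples = [(10, 0.4), (50, 0.5), (120, 1.8), (150, 2.1), (240, 0.6)]

def stats_in_window(samples, start, end):
    """Average response time over samples whose timestamp lies in [start, end]."""
    window = [rt for t, rt in samples if start <= t <= end]
    return sum(window) / len(window) if window else None

overall_avg = stats_in_window(samples, 0, 300)    # whole test
window_avg = stats_in_window(samples, 100, 200)   # suspect period only
```

Restricting the statistics to the 100-200 s window isolates the slow interval that the whole-test average would otherwise dilute.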
NeoLoad provides two advanced statistics:
Statistics that show anomalies, such as a significant rise in response times or the occurrence of errors, can be correlated with variations in the readings from certain performance Monitors.
These correlations usually provide an explanation for the performance slowdown and a clue to the root cause of the problem, be it merely a server setting or the overload of one of the main resources (memory, for example).
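The kind of correlation described above can be sketched with a plain Pearson coefficient. This is a conceptual illustration, not NeoLoad's correlation feature; the per-interval readings are hypothetical.

```python
# Hypothetical per-interval readings: response times alongside a
# monitored counter (server memory use).
response_times = [0.4, 0.5, 0.9, 1.6, 2.2]
memory_used_mb = [510, 540, 700, 910, 1080]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(response_times, memory_used_mb)
# An r close to 1 suggests the slowdown tracks the memory overload.
```

A strong coefficient does not prove causation, but it points the investigation toward the overloaded resource.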
After multiple test runs, the volume of data can become difficult to manage. It is therefore important to add a short description to each test before running it. This description is included in every test summary and in the generated reports.
The Results Manager allows the user to delete the results of a previous test session or use them to generate a report (XML, HTML, or PDF).