When database performance is measured as part of automation testing, the results are interesting and useful.
Here are the metrics collected while running a single test case:
- Number of SQL statements
- Broken down separately into select, insert, delete, and update counts
- Lock wait time
- Database server CPU time and percentage
- Number of rows read, and number of rows in the database during the test case
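As a rough illustration of the first two metrics, a test harness can wrap the database connection and tally statements per SQL verb. The sketch below uses Python's sqlite3 module so it is self-contained; the wrapper class and names are illustrative, not part of any product.

```python
import sqlite3
from collections import Counter

class CountingConnection:
    """Wrap a DB-API connection and tally executed statements by SQL verb."""
    def __init__(self, conn):
        self._conn = conn
        self.counts = Counter()

    def execute(self, sql, params=()):
        verb = sql.lstrip().split()[0].lower()  # "select", "insert", ...
        self.counts[verb] += 1
        return self._conn.execute(sql, params)

db = CountingConnection(sqlite3.connect(":memory:"))
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
db.execute("SELECT name FROM users")
db.execute("UPDATE users SET name = ? WHERE id = ?", ("bob", 1))
print(dict(db.counts))  # {'create': 1, 'insert': 1, 'select': 1, 'update': 1}
```

A per-test-case snapshot of these counters, compared to a baseline, is the essence of the approach described in this article.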
Why is accuracy an asset?
In a large enterprise application, small code changes are usually reflected in many different services.
For example, new functionality is added to the user interface, but the required information is not available from the current service. The developer adds a new lookup to a shared service by adding a new SQL statement to extract the data. In local testing everything works quickly and conveniently, and in the acceptance test environment the automation tests and other checks look green, because the tests use a small test database.
However, it is not as simple as that.
The service change made for the new column is, however, poorly implemented: the code iterates over lists and executes several unnecessary SQL statements against the database.
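The classic shape of such a poorly implemented change is the N+1 pattern: one statement to fetch a list, then one more statement per item. A minimal sketch (using sqlite3 and invented tables) shows how the statement count grows with the data, while a JOIN stays at one statement:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users  VALUES (1, 'a'), (2, 'b'), (3, 'c');
    INSERT INTO orders VALUES (1, 1, 10.0), (2, 2, 20.0), (3, 3, 30.0);
""")

# Anti-pattern: one statement to list users, then one more per user (N+1).
n_plus_one = 1
users = conn.execute("SELECT id FROM users").fetchall()
for (uid,) in users:
    conn.execute("SELECT total FROM orders WHERE user_id = ?", (uid,))
    n_plus_one += 1

# Fixed version: a single JOIN, one statement regardless of row count.
joined = conn.execute(
    "SELECT u.id, o.total FROM users u JOIN orders o ON o.user_id = u.id"
).fetchall()

print(n_plus_one, "statements vs 1 for the same", len(joined), "rows")
```

With three users the difference is 4 statements versus 1; with a production-sized table it becomes thousands versus 1, which is exactly the kind of change a statement-count metric catches.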
Problems only begin to surface in production: several batch jobs become slow, and web services are detected to be slow as well.
Troubleshooting begins, and several days can easily pass before the fault is found. In many cases, a comprehensive performance test is not carried out for such a small change, so the problems first appear in production. Comprehensive performance testing of an enterprise application usually requires a lot of calendar time, and correcting the findings delays the entire delivery schedule.
When we add database monitoring to the application’s automation tests, we can immediately catch changes in the number of SQL statements executed by different services. In this example, the change to the shared service affected many different use cases.
Another similar problem occurs when a developer changes a SQL statement that is in continuous use, for example by adding a new subquery to it. The select may then start scanning the index several times. This type of problem cannot be found in small test databases, because the database engine reads the entire table quickly; the slowness only shows up when the application is tested in a production-like environment.
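The effect is easy to see in a query plan. As a small illustration (SQLite’s EXPLAIN QUERY PLAN is used here for self-containment; a production engine’s plans look different), a predicate on an indexed column produces an index SEARCH, while a predicate the index cannot serve falls back to a full SCAN:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b INTEGER)")
conn.execute("CREATE INDEX idx_a ON t (a)")

# Indexed column: the planner uses a SEARCH via idx_a.
indexed = conn.execute("EXPLAIN QUERY PLAN SELECT * FROM t WHERE a = 1").fetchall()
# Unindexed column: the planner falls back to a full SCAN of t.
scanned = conn.execute("EXPLAIN QUERY PLAN SELECT * FROM t WHERE b = 1").fetchall()

print(indexed[-1][-1])  # e.g. "SEARCH t USING INDEX idx_a (a=?)"
print(scanned[-1][-1])  # e.g. "SCAN t"
```

On a tiny test database both plans finish instantly, which is why the regression stays invisible until the rows-read metric, or a production-like data volume, exposes it.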
More than half of application performance bottlenecks originate in the database, but most application teams have little or no visibility into database performance.
A better chance of success
When we add database monitoring to the application’s normal automation tests as well as its performance tests, we immediately catch changes in the number of rows read by different services compared to the situation before the change. In large enterprise projects, database monitoring as part of performance testing significantly speeds up the correction of performance issues, giving the entire project a better chance of success.
What does mySuperMon bring to Neoload?
mySuperMon is a leader in use-case-based database performance monitoring, and Neoload is a leader in load testing. While Neoload puts extra load on the application, it does not have complete details about the database for the specific use case. That is the important information mySuperMon provides to Neoload, so users can check the live details and act on them.
Along with these details, mySuperMon provides complete statistics to Neoload, so users can see the full picture behind a spike, together with the related events.
Let’s go through the architecture of the mySuperMon integration with Neoload. We will look at the architectural diagram first and then see, step by step, how to configure it.
In the diagram above, we configure the application context and then use it in the webhooks that call mySuperMon.
The application context is set through the Neoload GUI, and the webhook uses that information.
The Neoload webhook communicates with the mySuperMon API via Start, Stop, and Run Details calls, and stores the returned information on the Neoload dashboard.
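As a sketch of that flow, the Start, Stop, and Run Details calls can be thought of as three HTTP requests carrying the application context. The endpoint paths, host name, and payload fields below are assumptions made for illustration, not the documented mySuperMon API:

```python
def build_webhook_request(action, app_context):
    """Build the request a Neoload webhook could send to mySuperMon.

    The endpoint paths and payload fields are illustrative assumptions,
    not the real mySuperMon API contract.
    """
    endpoints = {
        "start":   "/api/recording/start",
        "stop":    "/api/recording/stop",
        "details": "/api/run/details",
    }
    return {
        "method": "POST",
        "url": "https://mysupermon.example.com" + endpoints[action],
        "json": {
            "application": app_context["application"],
            "usecase": app_context["usecase"],
        },
    }

ctx = {"application": "orders-service", "usecase": "user_search"}
for action in ("start", "stop", "details"):
    print(build_webhook_request(action, ctx)["url"])
```

The key point is that the same application context set in the Neoload GUI travels with every call, so mySuperMon can attribute the recorded database activity to the right use case.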
Now let’s see how to set up the mySuperMon plugin in Neoload.
Add the mySuperMon plugin to Neoload.
The mySuperMon team will provide you with the plugin to configure in the Neoload GUI.
Copy the provided jar file into the PROJECT_FOLDER/lib/extlib folder. The new plugin then appears in the Neoload GUI under Action -> Database -> mySuperMon.
Now create a new user path by right-clicking User Paths -> Create a User Path -> New User Path.
A popup opens; enter the name “mySuperMon”, and a new user path named “mySuperMon” is created, along with three folders.
Now, within the action, drag and drop SendMySuperMonContext into place.
This sets the context of your application.
Webhooks send the start-recording and stop-recording calls.
The webhooks communicate with the mySuperMon service, which is available on Docker Hub; just search for mysupermon there.
When the setup is done, database metrics can be defined in Neoload SaaS, and mySuperMon continuously compares each execution to a baseline run.
If the deviation is higher than the defined threshold, an event is created.
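The comparison logic can be sketched as follows. The 20% threshold and the event fields are invented for illustration; the rows-read figures match the example discussed later in this article:

```python
def deviation_event(metric, baseline, current, threshold_pct=20.0):
    """Return an event dict when the metric deviates from the baseline run
    by more than the threshold percentage, otherwise None.
    Threshold value and event shape are assumptions, not product behavior."""
    if baseline == 0:
        return {"metric": metric, "status": "not in baseline"}
    pct = (current - baseline) / baseline * 100.0
    if pct > threshold_pct:
        return {"metric": metric, "baseline": baseline,
                "current": current, "deviation_pct": round(pct, 1)}
    return None

# Rows read jumping from 1,040 to 1,440,240 clearly exceeds any threshold.
event = deviation_event("rows_read", 1040, 1440240)
print(event)
# A ~5.8 % change stays under the 20 % threshold: no event.
no_event = deviation_event("rows_read", 1040, 1100)
print(no_event)  # None
```

Comparing every run to a baseline instead of an absolute limit is what makes the check robust across use cases of very different sizes.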
New event in Neoload SaaS
In Neoload SaaS, the user can see the highest durations.
mySuperMon also introduces an innovative way of comparing SQL statements against the baseline run.
Users can easily see what has changed from the baseline and get to the root cause fast.
The rows-read value has increased due to a new subquery: the old value was 1,040, the new value 1,440,240.
Yellow indicates queries that were not present in the baseline run.
A graphical explain plan is also available, along with a suggestion to add a new index.
Too often, performance is only repaired afterwards, in firefighting mode. In addition, the indicators reveal much more: for example, the number of sorts, the number of failed SQL statements, the number of commits and rollbacks, lock wait times, and so on.
As AppDynamics puts it, “more than half of application performance bottlenecks originate in the database.” With this solution, teams have full visibility into database performance during tests.
What does mySuperMon do?
The Product Analysis Service gives you and your organization the following: on the first run, you see the performance status of the selected test cases. The product lists, for example, the top 10 best and worst performing use cases with comparisons. Subsequent runs catch the test cases that have changed compared to the previous run. With the product’s help, poorly performing services can be analyzed further, and the actual SQL statements can be watched with even more accurate monitors.
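The period-over-period comparison behind such a report can be sketched like this; the use-case names and durations are invented toy data:

```python
def analyze(previous, current, top_n=3):
    """Compare use-case durations (seconds) between two test periods.

    Returns the worst performers of the current period and the use cases
    whose duration changed versus the previous period. A toy sketch of
    this kind of report, not the product's actual algorithm.
    """
    worst = sorted(current.items(), key=lambda kv: kv[1], reverse=True)[:top_n]
    changed = {name: (previous.get(name), duration)
               for name, duration in current.items()
               if previous.get(name) != duration}
    return worst, changed

prev = {"login": 1.2, "search": 2.0, "checkout": 3.1}
curr = {"login": 1.2, "search": 8.5, "checkout": 3.1}
worst, changed = analyze(prev, curr)
print(worst)    # "search" is now the worst performer
print(changed)  # only "search" changed versus the previous period
```

Only the use cases that changed need a closer look, which is what keeps the follow-up analysis effort small even for a large test suite.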
Contact me and I’ll tell you more.
CEO & Founder
If you want to know more about Mikko’s presentation, the recording is already available here.