
In a recent webinar, Neotys Performance Specialist Henrik Rexed answered questions about load and performance testing as it relates to Agile. You can watch the original webinar here. Below you will learn the background of performance testing in Agile, the challenges that arise when testing application performance in an Agile environment, and best practices and automation tips.

What is the best practice for load test analysis?

In Agile, when you have different sprints, the idea is to measure your SLAs (service level agreements) – which will be pass or fail – and compare them over time. If an iteration has major issues, start with the response times or the HTTP response codes and go deep on the architecture. Involve the developers, in collaboration mode, to go deep into the code and see where the problem lies.
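As a rough illustration (not part of the webinar tooling), here is a minimal Python sketch of that per-sprint pass/fail comparison; the transaction names, thresholds, and sprint numbers are hypothetical placeholders you would replace with your own exported results.

```python
# Minimal sketch: compare per-sprint response-time results against
# pass/fail SLA thresholds. All names and numbers are illustrative.

SLA_MS = {
    "login": 500,       # max acceptable average response time, in ms
    "search": 800,
    "checkout": 1200,
}

# Hypothetical per-sprint averages, e.g. exported from your load test tool.
sprint_results = {
    "sprint-14": {"login": 420, "search": 760, "checkout": 1100},
    "sprint-15": {"login": 430, "search": 950, "checkout": 1150},
}

for sprint, timings in sprint_results.items():
    for transaction, avg_ms in timings.items():
        status = "PASS" if avg_ms <= SLA_MS[transaction] else "FAIL"
        print(f"{sprint} {transaction}: {avg_ms} ms -> {status}")
```

Tracking those PASS/FAIL flags sprint after sprint is what lets you spot the iteration that introduced a regression and only then dig into the architecture and code.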

What is the cost of doing performance testing every day?

This is a very good question. It really depends – if you go down to the user stories, where you will need to maintain each of those scenarios, it will probably cost more in Agile. But if you have the logic to focus on components – REST, web services, etc. – then the maintenance will be very low. You have builds every day or every night, but the interfaces rarely change, so the updates are very minor. Once you have set up continuous integration, you need at least one performance engineer, but the cost is insignificant compared to the old-fashioned waterfall performance testing approach.
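To make the "component focus" idea concrete, here is a minimal sketch of the kind of small, interface-level check that could run on every build. The endpoint URL and request counts are assumptions; a real setup would use a load testing tool such as NeoLoad rather than hand-rolled code.

```python
# Minimal sketch of a component-level check for every CI build: hit one
# REST endpoint (hypothetical URL) with a handful of concurrent requests
# and report timings.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

ENDPOINT = "http://localhost:8080/api/orders"  # hypothetical component URL
REQUESTS = 20
CONCURRENCY = 5

def timed_call(_):
    start = time.perf_counter()
    with urllib.request.urlopen(ENDPOINT, timeout=10) as resp:
        resp.read()
    return (time.perf_counter() - start) * 1000  # duration in ms

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    durations = list(pool.map(timed_call, range(REQUESTS)))

print(f"avg={sum(durations) / len(durations):.0f} ms, "
      f"max={max(durations):.0f} ms")
```

Because the test targets a stable interface rather than a full user journey, it needs almost no maintenance as the implementation behind it changes from build to build.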

Is it interesting to include CPU monitoring in continuous integration? How do you analyze this, in addition to response time results?

With continuous integration you often have an environment that has more monitoring, and it is often good to know if you are consuming more or less CPU. But it can also be seen as an SLA, so it depends on the SLAs you are setting on the application. If CPU usage is part of the SLA, then yes, it is probably something you could share on the continuous integration server.
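If you do treat CPU as an SLA, the check can be as simple as the sketch below: sample utilization on the test machine while the load test runs and fail the build if it exceeds a threshold. This assumes the third-party psutil package is available, and the 80% limit and 30-second window are purely illustrative.

```python
# Minimal sketch: sample CPU usage during a test window and treat it as a
# pass/fail SLA alongside response times. Threshold and duration are examples.
import psutil

CPU_SLA_PERCENT = 80
samples = []

for _ in range(30):                          # roughly 30 seconds of sampling
    samples.append(psutil.cpu_percent(interval=1))

peak = max(samples)
avg = sum(samples) / len(samples)
print(f"CPU avg={avg:.1f}%, peak={peak:.1f}% -> "
      f"{'PASS' if peak <= CPU_SLA_PERCENT else 'FAIL'}")
```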

How do you build efficient SLAs?

The idea is that you need to explore a new application to see exactly how each component behaves. From there you start building your SLAs. It's very important to set up your SLAs at the beginning of the project. Maybe the architect has already defined them, or you can review them at the start of the project and follow them from there.
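One simple way to turn that exploration into concrete numbers – purely a sketch, with made-up component names and baseline timings – is to derive each threshold from an observed percentile plus some headroom:

```python
# Minimal sketch: derive SLA thresholds from exploratory baseline timings by
# taking the 90th percentile per component and adding a safety margin.
from statistics import quantiles

baseline_ms = {
    "search-service": [310, 290, 350, 420, 305, 333, 298, 360, 310, 400],
    "cart-service":   [120, 150, 110, 130, 145, 160, 125, 118, 140, 135],
}

MARGIN = 1.2  # 20% headroom over the observed 90th percentile

for component, timings in baseline_ms.items():
    p90 = quantiles(timings, n=10)[-1]   # 90th percentile cut point
    print(f"{component}: proposed SLA = {p90 * MARGIN:.0f} ms")
```

However you derive them, the point is the same: agree on the thresholds early so every sprint can be judged against the same pass/fail line.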

In the case of client-server architecture, what is the best tool to measure the impact of changes made on the server on the client?

If you are in an Agile environment, normally you are well aware of the builds, and if you are working at the component level, you are relying on the interface, which won't change much. So you don't necessarily need those kinds of tools. But if you are looking at user stories, then yes, you definitely need tools to monitor changes.

Do you think the key problem to the integration of performance testing into Agile is to convince people that the switch is worth it?

I think it’s all about changing people’s mentality. Yes, you are right – people think involving performance testing at the beginning is a risk because it is hard to set up and difficult to plan. People always think about testing a user journey, a use case involving a lot of components, and in that case it is true that you need to wait until the end because of the components. But if you present the option of switching to component testing, they will see that it’s not that expensive. You also have to ask yourself: is this a new application or an existing one? This logic makes sense once you have automation. Always automate when you have something in existence; if you don’t have anything yet, you need to go into exploratory mode.

Want to get more technical?

For a deeper technical dive into Agile and performance testing, watch Henrik’s latest webinar Continuous Performance Testing with NeoLoad. You’ll see how these best practices and automation tips come together in a real-life scenario.
