Load Testing 101


Load testing can be a specialist trade. There is a wealth of knowledge and tools out there in the market, and it is a very important part of making sure your architecture is resilient in the face of a huge flood of requests.

So, where do we start? Basically, load testing is separated into two different categories:

Car Squeeze

In a car squeeze load test, you try to cram as many users into the system as possible, slowly, one at a time, until the response time gets so bad the app/web is unusable. For this style of load test, you set your number of users as high as possible but the user spawn rate as low as possible.

Something like this:

[Screenshot: Locust settings with a very high number of users and a low spawn rate]
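The car squeeze ramp can be sketched in plain Python: add one simulated user at a time and stop once latency is no longer acceptable. The server here is a toy in-process latency model, purely an assumption for illustration — in a real test, Locust would be driving actual HTTP requests.

```python
# Toy car-squeeze ramp: add one "user" at a time until latency is unacceptable.
# The latency model (base + contention penalty) is a made-up stand-in for a real server.

MAX_ACCEPTABLE_MS = 2000  # response time beyond which the app counts as unusable


def simulated_latency_ms(concurrent_users: int) -> float:
    """Pretend server: latency grows sharply once users exceed its comfort zone."""
    base = 50.0
    capacity = 200  # users the toy server handles comfortably
    overload = max(0, concurrent_users - capacity)
    return base + overload ** 2 * 0.5  # contention penalty beyond capacity


def car_squeeze(spawn_limit: int = 1000) -> int:
    """Ramp users one at a time; return the last count with acceptable latency."""
    for users in range(1, spawn_limit + 1):
        if simulated_latency_ms(users) > MAX_ACCEPTABLE_MS:
            return users - 1
    return spawn_limit


if __name__ == "__main__":
    print(car_squeeze())  # maximum users before the toy server falls over
```

The number printed is the per-instance ceiling you carry into capacity planning.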

With this type of load test, you will learn, for one particular instance type, the maximum number of users a cloud instance can hold before the response time goes sky high. So if the marketing team tells you to expect a certain number of users, you just take their number, divide it by your per-instance maximum, and you roughly know how many instances you need for a system that should theoretically hold.
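That back-of-envelope sizing looks like this (the numbers below are illustrative assumptions, not measurements — plug in your own load-test results):

```python
import math

# Illustrative inputs: swap in your own figures.
expected_users = 50_000          # the number marketing gives you
max_users_per_instance = 2_600   # found via the car squeeze test

# Round up: a fractional instance can't serve anyone.
instances_needed = math.ceil(expected_users / max_users_per_instance)
print(instances_needed)
```

Leaving some headroom on top of this result is prudent, since the car squeeze measures the point just before things break, not a comfortable operating level.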

Great Sale Rush

Imagine the Great Singapore Sale with many shoppers camping for the iPhone 7. Once the time is reached, the doors open and all the mad shoppers flood in. Now imagine a load test where you try to squeeze as many requests through the narrow load balancer door as possible, all at once.

Such an extreme squeeze can have unforeseen consequences, like a complete collapse of the servers under the sudden huge load. Dealing with this scenario is a perennial concern for system architects. One way is to throttle users so that the incoming load gets evened out over time. If you want to test this on Locust, you should try a setting like the one below. Not for the shallow-pocketed, though: you need a dozen CPU-intensive instances to pull this off.

[Screenshot: Locust settings with a very high number of users and a very high spawn rate]
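The throttling mentioned above is often implemented as a token bucket: admit requests at a steady rate, allow a bounded burst, and push back on everything beyond that. Here is a minimal sketch (the rate and capacity values are arbitrary assumptions; in production this usually lives in the load balancer or an API gateway, not in application code):

```python
import time


class TokenBucket:
    """Even out a burst of arrivals: at most `rate` admissions per second,
    with bursts up to `capacity` allowed."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if this request may proceed, False if it should wait."""
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # reject (or queue) until tokens refill
```

A rejected caller can be queued, delayed, or shown a "please wait" page; the point is that the servers behind the door only ever see a smoothed-out stream.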

Do remember to activate your autoscaling, so you can check whether the balancers can spawn instances fast enough to cope with the sudden surge of users wanting to come in. It could be that the instances take too long to spawn and cause a performance loss, or that the instances spin up fast enough but the app within each instance takes a long time to warm up.

Do use Monit to ensure uptime of services (e.g. nginx), so that any moment a daemon wants to take a break, Monit will wake it up.
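A minimal Monit stanza for nginx looks something like the following. The pidfile and service paths are assumptions that vary by distro, so adjust them to your setup:

```
# /etc/monit/monitrc snippet — paths below are assumptions, adjust for your distro
check process nginx with pidfile /var/run/nginx.pid
  start program = "/usr/sbin/service nginx start"
  stop  program = "/usr/sbin/service nginx stop"
  if failed host 127.0.0.1 port 80 protocol http then restart
  if 5 restarts within 5 cycles then timeout
```

The health check hits the local port directly, so Monit restarts nginx even when the process is alive but no longer answering.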

So what else do I have to look at in load testing?

Well, users can be heavily biased towards certain pages that have attractive promotions, or a certain API call can be unusually frequent for a long period of time (like rankings in gaming). Hence you need to run through scenarios where, for instance, a certain page or API call has a high probability of 0.8, which is a very high frequency. When running those tests, check your cache (memcached, redis) for any hot key issues, and also check the database for any performance hits.
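A biased traffic mix like that can be sketched with a weighted random choice. The page names and the 0.8/0.1/0.05/0.05 split below are hypothetical, there only to show the shape of such a scenario:

```python
import random

# Hypothetical traffic mix: the hot promotion page draws probability 0.8.
PAGES = ["/promo-sale", "/home", "/product", "/checkout"]
WEIGHTS = [0.8, 0.1, 0.05, 0.05]


def pick_page(rng: random.Random) -> str:
    """Choose the next page a simulated user hits, biased toward the hot page."""
    return rng.choices(PAGES, weights=WEIGHTS, k=1)[0]


if __name__ == "__main__":
    rng = random.Random(42)  # fixed seed so the run is reproducible
    hits = [pick_page(rng) for _ in range(10_000)]
    hot_share = hits.count("/promo-sale") / len(hits)
    print(f"hot page share: {hot_share:.2f}")  # should land close to 0.8
```

With 80% of traffic converging on one page, the same few cache keys get hammered, which is exactly how hot key problems in memcached or redis surface.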
