Introduction:

We frequently hear the term "performance" when discussing software, or indeed anything we purchase, such as cars and smartphones. We always expect the performance of anything we buy to be good, because we care about the product's efficiency and lifespan. In this blog, we will learn about the workload model approach.

Even when many products on the market are nearly identical, we choose the best one based on how it performs, which shows how important performance is. Let's see how to test our application's performance to make it the best one and to grow our customer base. This kind of testing is "performance testing". By simulating a production load, the workload model approach can be used to performance-test the system.

What is performance testing of an application?

Performance testing is a type of software testing that has many subtypes based on how load is distributed during the test. We follow the workload model approach because it is one of the standard models for load testing. Let's understand the workload model approach in a clear and easy way in the upcoming sections.

Performance testing process

What is the workload model approach?

The workload model is used to determine the user load across the various user profiles of an application, which is then tested and simulated in a distributed manner. It also helps to build real-life user scenarios in order to replicate the production load at two or three times its current level. By doing this, we can make sure the application remains reliable under upcoming demand.

“Problems are common, Approach makes the difference!”

What are the steps in the workload modeling approach?

  • Firstly, collect the production load for the different user profiles, including the API request hits and controller actions.
  • Secondly, identify the test scenarios based on the production usage statistics.
  • Thirdly, set up the data for assets and devices based on the production load.
  • In addition, set the configuration and pre-conditions based on the requirements.
  • Add think time for the process to take place, and add pre-processors for the samplers.
  • Insert assertions and post-processors based on requirements such as response code, response time limit, processing scripts, etc.
  • Add multiple user profiles with user counts similar to production to mimic real-time user actions.
  • Finally, validate the test results by monitoring performance parameters such as response time, latency, throughput, etc.
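The first two steps above can be sketched in code: turning raw production request counts into a per-scenario share of the simulated load. The module names and hit counts below are hypothetical examples, not real production data.

```python
# Sketch: derive each scenario's share of the simulated load from
# production request counts (hypothetical numbers for illustration).
production_hits = {
    "login": 5200,
    "search": 18300,
    "checkout": 2100,
}

total = sum(production_hits.values())

# Percentage of the total test load each scenario should receive.
load_share = {name: round(100 * hits / total, 1)
              for name, hits in production_hits.items()}

print(load_share)
```

The resulting percentages then drive how many virtual users each scenario gets in the test plan.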

Why the workload model, and how does it differ from normal load testing?

Workload models are frequently used to determine a system's availability: they essentially increase the current load to match the anticipated future demand, which enables us to predict whether the application will remain available in the future.

Simply taking the real-time user count, increasing it uniformly, and then testing is not a proper way of testing: predictions made from that kind of result would not tell us the application's true availability for the next 6 or 12 months. To rectify this, we use the workload model approach, which scales each user profile according to its actual share of the production load.

Absence of performance testing

How is it helpful for our application?

Basically, in the workload model approach we mimic the real-time user load once we identify the modules with high load: we apply a higher load to the heavily used modules and a lower load to the less used ones. For this, we first collect user-load statistics for all the modules before testing.
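The idea of weighting heavily used modules more can be sketched with a simple weighted choice. The module names and weights here are hypothetical; in practice they would come from the collected statistics.

```python
import random

# Hypothetical modules with weights proportional to their production usage.
modules = ["dashboard", "reports", "settings"]
weights = [70, 25, 5]  # percent of simulated traffic per module

random.seed(42)  # fixed seed so the sketch is deterministic

# Simulate 1000 virtual-user actions distributed by weight.
actions = random.choices(modules, weights=weights, k=1000)
counts = {m: actions.count(m) for m in modules}
print(counts)
```

Tools like JMeter and Locust express the same idea through thread-group ratios or task weights.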

We use various tools such as JMeter, IoTIFY, and Locust to load-test our applications, which include web applications, mobile applications, and IoT devices. We set up our load-test simulations with different tools for different applications, based on each tool's capabilities. Integrating Grafana with JMeter also helped us a lot in visualizing test results in real time.
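JMeter can write its results to a CSV (.jtl) file, which a small script can summarize before the numbers go into a dashboard. The column names below follow JMeter's default CSV result format (`elapsed`, `success`), but the sample rows are made up:

```python
import csv
import io
import statistics

# Minimal sample in JMeter's default CSV result format (made-up numbers).
jtl = io.StringIO("""timeStamp,elapsed,label,responseCode,success
1700000000000,120,Login,200,true
1700000001000,340,Search,200,true
1700000002000,95,Login,200,true
1700000003000,510,Checkout,500,false
""")

rows = list(csv.DictReader(jtl))
elapsed = [int(r["elapsed"]) for r in rows]
errors = sum(1 for r in rows if r["success"] != "true")

print("avg response time (ms):", statistics.mean(elapsed))
print("error rate (%):", 100 * errors / len(rows))
```

In a real run you would read the .jtl file from disk instead of the inline sample.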

The main objective is to identify a testing workflow that is common to all types of applications and platforms. However, the main challenge is collecting the KPIs (key performance indicators) for our application, although this is usually one-time work. Once the KPIs are collected, we can start creating scripts to cover all the scenarios derived from them.

Can different applications be tested with it?

In this article, we have shown how to do performance analysis on an application, and not only for a specific kind such as a web or mobile application: it applies to both in a distributed manner.

In that sense, the workload model approach suits all kinds of applications and helps us perform the testing needed to find potential performance gaps. So the best way to start a performance analysis is to generate a workload model for different kinds of scenarios on different applications, accumulating the necessary information from the production environment, which helps us understand the business steps and end-to-end workflows.

Can this be scaled up for future requirements?

Once an application is deployed, the user count may vary by day, week, or year, and understanding these variations is not optional. For instance, an e-commerce site will have a steady load on ordinary days but may reach peak load during special offers or festival seasons.

One of the major steps in the workload model is to retrieve past data (6 – 12 months) and logs from production. Those data are fed into log analyzer tools (such as WebLog Expert, Nagios, Graylog, and the Elastic Stack) to mine the necessary statistics.

From these, a number of variables are calculated, including the peak hour, peak day/week, business flows, and workload, along with statistics from the logs. Using those past data and analysis results, we can scale up our web application for future load requirements by adding more assets and users to the existing scripts.
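Scaling the model can be sketched as projecting the observed peak load forward by an assumed growth rate. The observed peak and the 10% monthly growth figure below are purely illustrative assumptions, not measured values.

```python
# Project a future peak-hour user load from past statistics.
current_peak_users = 120   # peak observed in 6-12 months of logs (hypothetical)
monthly_growth = 0.10      # assumed 10% month-over-month growth
months_ahead = 12

# Compound the growth rate to get the target load for the test.
projected = current_peak_users * (1 + monthly_growth) ** months_ahead
print(f"target user load for the test: {round(projected)}")
```

The projected figure becomes the user count fed back into the existing scripts.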

“Plan before you leap”

How to check the availability of an application?

The workload model approach succeeds by focusing on different sets of environments, numbers, and volumes according to the production load.

Non-functional requirements are collected along with the functional requirements; for the distributed load these include response time, user load, etc. We also collect CPU utilization and memory usage statistics.

| Persona | Roles | User Count | Transactions in Peak Hour |
| --- | --- | --- | --- |
| Scenario1 | UserRole1 | 27 | 40 |
| Scenario2 | UserRole2 | 18 | 20 |
| Scenario3 | UserRole3 | 24 | 35 |
| Scenario4 | UserRole4 | 6 | 8 |
| Scenario5 | UserRole5 | 34 | 70 |

Workload distribution based on user load

In this table, "Persona" refers to the user scenarios or user actions in the application, "Roles" refers to the type of role performing each scenario, "User Count" is the number of concurrent users for that role, and "Transactions in Peak Hour" is the number of transactions executed during the peak hour.
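One common way to use such a table is to compute pacing, the delay each virtual user waits between iterations so that a role produces exactly its peak-hour transaction count. A standard load-testing formula is pacing (seconds) = (user count × 3600) / transactions per hour; a sketch using the Scenario1 figures (27 users, 40 transactions):

```python
# Pacing so that N users together produce T transactions in one hour:
# each user completes T/N iterations per hour, i.e. one every N*3600/T seconds.
def pacing_seconds(user_count: int, transactions_per_hour: int) -> float:
    return user_count * 3600 / transactions_per_hour

# Scenario1: 27 users generating 40 transactions in the peak hour.
print(pacing_seconds(27, 40))  # seconds between iterations per user
```

The same function applies to each row to configure per-profile pacing in the test plan.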

Availability: Measure how often the application is available for use.

Availability = (MTBF / (MTBF + MTTR)) × 100

| Name | Definition | Calculation |
| --- | --- | --- |
| Mean Time Between Failures (MTBF) | The average time the application runs before it fails to respond to requests. | Hours / Failure Count |
| Mean Time To Recovery (MTTR) | The average time required after a failure to recover and start responding to requests again. | Repair Hours / Failure Count |

Availability table.
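The availability formula above can be applied directly in code. The operating hours, failure count, and repair hours below are hypothetical figures chosen only to illustrate the calculation.

```python
# Availability = MTBF / (MTBF + MTTR) * 100
def availability_percent(mtbf_hours: float, mttr_hours: float) -> float:
    return mtbf_hours / (mtbf_hours + mttr_hours) * 100

# Hypothetical month: 720 operating hours, 2 failures, 1.5 total repair hours.
mtbf = 720 / 2   # Hours / Failure Count = 360 h
mttr = 1.5 / 2   # Repair Hours / Failure Count = 0.75 h
print(round(availability_percent(mtbf, mttr), 3))
```

A result close to 100% indicates the application stays responsive for almost the entire measurement window.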

Reference links:

  1. For more blogs refer to the engineering blog.

Authors:

THIRUKUMARAN S S

Senior Quality Engineer

Rently Software Development

PAVITHRA D

Senior Quality Engineer

Rently Software Development

PAVITHRA T

Associate Quality Engineer

Rently Software Development

PRAVEEN KUMAR V

Associate Quality Engineer

Rently Software Development

KIRUBHAKARAN R

Junior Quality Engineer

Rently Software Development
