Performance Test Plan Template

A performance test plan template that can be used as is or adapted to suit your project's performance requirements.

1. Purpose

This section provides a high-level overview of the performance testing approach to be followed for the <PROJECT> project. It should be presented to all relevant stakeholders and discussed in order to gain consensus.

2. Introduction

As part of the delivery of <PROJECT>, the solution is required to meet the acceptance criteria in both functional and non-functional areas. The purpose of this document is to provide an outline for non-functional testing of the <PROJECT> solution.

This document covers the following:

  • Entry and Exit Criteria
  • Environmental Requirements
  • Volume and Performance Testing Approach
  • Performance Testing Activities

3. Entry Criteria

The following work items should be completed or agreed beforehand in order to proceed with the actual performance testing activities:

  • Non-functional test requirements document provided by <CLIENT>, with quantified NFRs where possible
  • Critical use-cases functionally tested, with no critical bugs outstanding
  • Architectural design diagrams approved and available
  • Key use-cases have been defined and scoped
  • Performance test types agreed
  • Load injectors set up
  • Any required data set up, e.g. an appropriate number of users created in <DATASTORE>

4. Exit Criteria

The performance testing activity will be completed when:

  • The NFR targets have been met and performance test results have been presented to the team and approved.

5. Environmental Requirements

The performance tests will be run against a stable version of the <PROJECT> solution that has already passed functional testing. They will be performed on a dedicated, production-like environment (pre-prod?) assigned for performance testing, with no deployments to that environment during the course of the performance testing.

5.1 Load Injectors

There will be one or more dedicated “load injectors” set up to generate the required load for performance testing. A load injector could be a single VM, or multiple VMs, each running an instance of JMeter to initiate the requests.

5.2 Test Tools

Test tools used for Volume and Performance testing will be:

5.2.1 JMeter

An open-source load-testing tool, predominantly used for volume and performance testing.

5.2.2 Splunk

Splunk will be used for logging. (Another tool could be used; this needs to be confirmed with the performance testing team.)

6. Volume and Performance Testing Approach

The <PROJECT> solution should be performant enough to handle the following load criteria.

N.B. The numbers in the following table are samples only; real values should be inserted once finalized in the <CLIENT> NFR document.

6.1 Target Service Volumes

Hourly targets are derived from the current solution for [Y2019]. Other ‘example’ values have been cleared from the plan template.

Since hourly peak values are not high, they will be taken as the target for the fixed-load test. The scaling factor is currently TBD.

6.2 Number of Users

Performance testing will run with a maximum of 1000[?] users. The users will be created in <DATASTORE> beforehand and will be accessible via the <PROJECT> Login API. Each request will log in with a different userID.
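The per-request user rotation can be sketched as follows. This is an illustrative Python sketch, not JMeter configuration (JMeter would typically achieve the same with a CSV Data Set Config); the user-naming scheme and pool size are placeholder assumptions.

```python
from itertools import cycle

def make_user_feeder(user_count=1000, prefix="perfuser"):
    """Yield a userID per request, wrapping around once the pool is exhausted."""
    users = [f"{prefix}{i:04d}" for i in range(1, user_count + 1)]
    return cycle(users)

# With a pool of 3 users, five requests reuse users after the third.
feeder = make_user_feeder(user_count=3)
print([next(feeder) for _ in range(5)])
```

The same wrap-around behaviour is what JMeter's CSV data recycling provides when the test issues more requests than there are pre-created users.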

6.3 Assertions

JMeter will be used to execute the performance testing scripts. Within the scripts, assertions will check the above metrics, along with some basic functional checks to ensure correct responses are received for each request.
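The kind of per-request checks the JMeter assertions will encode can be sketched as below; this is illustrative Python, not JMeter syntax, and the 500 ms threshold is a placeholder to be replaced by the agreed NFR value.

```python
RESPONSE_TIME_NFR_MS = 500  # placeholder; real value comes from the <CLIENT> NFRs

def assert_sample(status_code, elapsed_ms, body, expected_fragment):
    """Return the list of assertion failures for a single sampled request."""
    failures = []
    if status_code != 200:
        failures.append(f"unexpected status {status_code}")
    if elapsed_ms > RESPONSE_TIME_NFR_MS:
        failures.append(f"response time {elapsed_ms}ms exceeds {RESPONSE_TIME_NFR_MS}ms")
    if expected_fragment not in body:
        failures.append("functional check failed: expected fragment missing")
    return failures

# A healthy sample produces no failures.
print(assert_sample(200, 120, '{"customerId": "123"}', "customerId"))  # []
```

In JMeter these correspond to a Response Assertion (status and body content) plus a Duration Assertion (response time).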

6.4 Load Profiles

The load profiles should be designed to mimic a typical day’s traffic to the <CLIENT> site. Please note that the traffic is apportioned and limited to the Customer Identity and Access Management part of the site, i.e.

  • Login
  • Register
  • Reset Password
  • Forgot Password
  • Set Customer
  • Get Customer

Below is an example profile for a day:

[Figure: sample day load profile]
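One way to express such a profile is as hourly rate steps with the total traffic apportioned across the CIAM endpoints. The sketch below is a hypothetical example; all rates and the endpoint mix are placeholders to be replaced with the finalized <CLIENT> figures.

```python
# Hour -> total requests per second from that hour onward (step changes).
HOURLY_RPS = {0: 5, 6: 20, 9: 60, 12: 80, 17: 100, 20: 40, 23: 10}

# Fraction of total traffic per endpoint; the shares must sum to 1.0.
ENDPOINT_MIX = {
    "Login": 0.40, "Register": 0.10, "Reset Password": 0.05,
    "Forgot Password": 0.05, "Set Customer": 0.15, "Get Customer": 0.25,
}

def endpoint_rps(hour):
    """Split the total rate for a given hour across the endpoints."""
    step = max(h for h in HOURLY_RPS if h <= hour)  # most recent step at/before hour
    total = HOURLY_RPS[step]
    return {name: total * share for name, share in ENDPOINT_MIX.items()}

print(endpoint_rps(17)["Login"])  # Login share at the 17:00 peak
```

Each step can then be mapped onto a JMeter thread group's ramp and hold settings.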

6.4.1 Baselining

The first course of action is to establish a baseline. Using only 1 user, we will run a simulation for a period of time (e.g. 5 mins) to get an average response time for each endpoint. This also verifies that, with only 1 user, we are actually able to achieve the peak requests per second.
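The baselining computation itself is just a per-endpoint average over the sampled window, as in this minimal sketch (the response times shown are illustrative, not measured values):

```python
# Single-user response-time samples per endpoint, in milliseconds (illustrative).
samples = {
    "Login": [110, 95, 130, 105],
    "Get Customer": [60, 70, 65, 55],
}

# Baseline = mean response time per endpoint over the sampled window.
baseline = {endpoint: sum(times) / len(times) for endpoint, times in samples.items()}
print(baseline)  # {'Login': 110.0, 'Get Customer': 62.5}
```

These baseline figures become the reference point when analysing the multi-user test results.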

6.4.2 Load Testing

After the baseline metrics are gathered, the same simulation, which simulates a load profile, is run with an increased number of users to test against the target volumes. The idea of the load test is to exercise the system under a typical day’s load, simulating the ramp-ups, daily peaks, and ramp-downs.

6.4.3 Stress Testing

The aim of stress testing is to find the breaking point of the system, i.e. at what point the system becomes unresponsive. If auto-scaling is in place, the stress test will also be a good indicator of the point at which the system scales and new resources are added. For stress testing, the same simulation used for load testing is run, but with a higher-than-expected load.
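Deriving the stress load from the agreed load-test profile can be as simple as applying a multiplier, as sketched below; the 1.5x factor is an assumption, not a figure from the NFRs.

```python
def stress_load(load_rps, factor=1.5):
    """Scale each endpoint's load-test rate by a stress factor above the expected peak."""
    return {endpoint: rps * factor for endpoint, rps in load_rps.items()}

# Placeholder load-test rates per endpoint (requests per second).
print(stress_load({"Login": 40.0, "Get Customer": 25.0}))
```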

6.4.4 Spike Testing

Spike testing introduces a significant load on the system in a relatively short period of time. The aim of this test is to simulate, for example, a sales event, when a large number of users simultaneously access their accounts.

6.4.5 Soak Testing

Soak testing will run a load test for an extended period of time. The aim is to reveal any memory leaks, unresponsiveness, or errors during the course of the soak test. We would typically use 80% of the load (as used for load testing) for 24hrs, and/or 60% of the load for 48hrs.
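The soak loads follow directly from the agreed load-test rate, as this small sketch shows; the 100 rps peak is a placeholder to be replaced by the finalized figure.

```python
load_test_rps = 100  # placeholder peak rate from the load test

# (percentage of the load-test rate, duration in hours)
soak_profiles = [(80, 24), (60, 48)]

for pct, hours in soak_profiles:
    rps = load_test_rps * pct // 100
    total_requests = rps * hours * 3600
    print(f"{pct}% load = {rps} rps for {hours}h -> {total_requests:,} requests total")
```

The request totals are worth computing up front: they drive the size of the pre-created user pool and the log storage needed for the run.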

6.4.6 Saturation Point Testing

In saturation point testing, we keep increasing the load steadily to determine the point at which the system becomes unresponsive, i.e. the breaking point of the system in terms of load.
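The stepped procedure can be sketched as follows. This is a hedged illustration: `run_at_load` is a hypothetical stand-in for executing the JMeter simulation at a given user count, and the step sizes and 5% error-rate threshold are placeholder assumptions.

```python
def find_saturation_point(run_at_load, start_users=100, step=100, max_users=5000,
                          max_error_rate=0.05):
    """Increase load in steps; return (last healthy user count, breaking user count)."""
    last_healthy = None
    users = start_users
    while users <= max_users:
        error_rate = run_at_load(users)  # execute one test step, observe error rate
        if error_rate > max_error_rate:
            return last_healthy, users   # system broke at this step
        last_healthy = users
        users += step
    return last_healthy, None            # never saturated within the tested range

# Toy stand-in: the system starts failing above 800 users.
print(find_saturation_point(lambda u: 0.01 if u <= 800 else 0.20))  # (800, 900)
```

The granularity of `step` bounds how precisely the breaking point is located; a second pass with a smaller step around the reported range narrows it further.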

7. Performance Testing Activities

The following activities are suggested to take place, in order, to complete performance testing:

7.1 Performance Test Environment Build

  • The load injectors should have enough capacity and should be manageable remotely. The location of the injectors should also be agreed.
  • Real-time monitoring and alerting mechanism should be in place and should cover the application, the servers as well as the load injectors.
  • Application logs should be accessible.

7.2 Use-Case Scripting

  • The performance testing tool to be used is JMeter
  • Data requirements for the use-cases to be scripted have been discussed

7.3 Test Scenario Build

  • Agree the type of test to be executed (load, stress, etc.)
  • Agree the load profile/load model for each test type (ramp-up/down, steps, etc.)
  • Incorporate think time into the scenarios

7.4 Test Execution and Analysis

The tests should be executed in the following order:

  • Baselining Test
  • Load Test
  • Stress Test
  • Spike Test
  • Soak Test
  • Saturation Point Test

Ideally, 2 test runs of each test type will be performed. After each test run, the application may be fine-tuned to improve its performance, and then another test cycle will commence.

7.5 Post-Test Analysis and Reporting

  • Capture, back up, and archive all relevant data and reports.
  • Determine success or failure by comparing the test results to the performance targets. If the targets are not met, the appropriate changes should be made and another test execution cycle will commence. It is unknown how many execution cycles will be needed in order to meet the agreed targets.
  • Document and present the test results to the team.