Application Performance Testing: Level Setting Performance Expectations


Performance testing is a form of software testing that provides insight into the readiness of an application and the capabilities of its underlying hardware. 

 

Performance Testing is conducted for a variety of different reasons, including:  

  • Exercising the infrastructure of an application to uncover poor performance 
  • Profiling application characteristics/resource utilization under various conditions 
  • Providing input for capacity planning and/or resource demand management for an elastic cloud-based environment 
  • Validating SLAs (service level agreements)  
  • Optimizing performance of an application 
  • Isolating performance issues of software  
  • Uncovering resource contention 
  • Identifying poor application page load or transaction response times, and much more 

 

An application is typically performance tested prior to production deployment, which helps identify performance issues early.

Issues that are not caught and resolved in a timely fashion may disrupt business operations. In severe cases, application outages due to performance issues can cause significant losses. 

 

The purpose of performance testing is to build confidence in the system architecture’s ability to meet the performance expectations of current and future demands, as defined by the Subject Matter Experts (SMEs).   

 

Most application teams struggle with setting preliminary performance expectations. This is often a larger challenge for teams deploying new applications than for teams upgrading a mature application.     

Teams working with existing applications and deploying enhancements to production can look at historic production performance data for guidance. This is typically done with the understanding that historic data has limited value. For new applications, however, there is no valid source of information or basis for reference on performance. Performance testing helps close the information gap in both situations. 

 

Determining performance expectations is an integral part of the performance testing process. Meetings with stakeholders to discuss performance expectations also serve to educate them about performance testing. Multiple conversations take place throughout the course of the project to address performance expectations and to quantify them into tangible, measurable performance test requirements. Once defined, validation of these requirements is essentially what provides the momentum to drive the performance test project forward. 

 

1. Test Planning 

The test planning stage of the project begins with discussing and setting initial application performance expectations, before any performance tests are created. 

 

Profiling an application’s performance can require a whole gamut of tests. Asking questions about areas of concern or application pain points, and understanding critical, high-user, or high-volume transactions, are key to designing good performance tests. A lot of planning and preparation is involved at this stage. Reviewing the application’s nonfunctional requirements is a good starting point for this process.   
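As an illustration of how preliminary expectations can be quantified into measurable targets, the sketch below captures hypothetical transaction-level goals that a team might agree on in the test plan. The transaction names and threshold values are assumptions for illustration only, not figures from any real test plan.

```python
# A minimal sketch of capturing performance expectations as measurable targets
# during test planning. All names and numbers below are hypothetical.
from dataclasses import dataclass


@dataclass
class PerformanceExpectation:
    transaction: str               # critical or high-volume transaction under test
    p95_response_time_s: float     # 95th percentile response time target, in seconds
    target_throughput_tps: float   # expected transactions per second at peak load
    max_error_rate: float          # acceptable error rate (0.01 == 1%)


# Example expectations a team might review and sign off on in the test plan
EXPECTATIONS = [
    PerformanceExpectation("login", 2.0, 25.0, 0.01),
    PerformanceExpectation("search_catalog", 3.0, 60.0, 0.01),
    PerformanceExpectation("checkout", 4.0, 10.0, 0.005),
]
```

Expressing expectations this way makes them easy to validate automatically against results from later test runs.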

 

Tip: It is recommended not to shortchange this part of the process, as teams need to spend the time understanding and signing off on the performance expectations defined in the test plan. 

 

 

2. Initial Round of Tests 

Tests executed at the start of the project also assist in setting performance expectations. This occurs after test development and test calibration are complete. The activity is akin to drawing a line in the sand for future performance evaluation: metrics generated from these initial tests are used as a reference for comparing performance in future tests. 
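A minimal sketch of how metrics from this initial, calibrated run could be recorded as a baseline for later comparison follows; the metric names, values, and file path are illustrative assumptions.

```python
# A minimal sketch of saving baseline metrics from the initial test run so that
# later runs can be compared against them. Names, values, and the file path are
# assumptions for illustration.
import json
from datetime import datetime, timezone

baseline = {
    "captured_at": datetime.now(timezone.utc).isoformat(),
    "test_id": "initial-calibrated-run",  # hypothetical identifier
    "metrics": {
        "login": {"p95_s": 1.8, "throughput_tps": 22.4, "error_rate": 0.002},
        "checkout": {"p95_s": 3.1, "throughput_tps": 9.7, "error_rate": 0.004},
    },
}

# Persist the baseline so future test analysis can load and compare against it
with open("baseline_metrics.json", "w") as fh:
    json.dump(baseline, fh, indent=2)
```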

 

 

3. Execute Different Types of Tests/Analyze Results  

Once the initial round of testing is complete, the goal is to get a reading of performance with well-thought-out increments of load or configuration changes. The application’s current capabilities are evaluated during these tests and presented to the team for review. The purpose of running these tests is to capture the behavior of the application and to answer the question of WHERE WE ARE today in terms of application performance. The metrics generated from the tests provide feedback to help determine next steps.   
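As one possible way to apply load in well-defined increments, the sketch below uses the open-source Locust load testing tool to ramp users up in steps. The endpoint, step sizes, durations, and user counts are assumptions for illustration, not values from this article.

```python
# A minimal sketch of a stepped load profile using Locust (https://locust.io).
# Run with, for example: locust -f stepped_load.py --host <application URL>
# All numbers and the target endpoint are illustrative assumptions.
from locust import HttpUser, task, between, LoadTestShape


class BrowseUser(HttpUser):
    wait_time = between(1, 3)  # simulated think time between requests

    @task
    def load_home_page(self):
        # Hypothetical high-volume transaction identified during test planning
        self.client.get("/")


class SteppedLoad(LoadTestShape):
    """Increase load in well-defined increments so each step can be analyzed."""

    step_users = 50       # users added per step (assumed)
    step_duration = 300   # seconds per step (assumed)
    max_users = 500       # ceiling for the ramp (assumed)
    time_limit = 3600     # total test duration in seconds (assumed)

    def tick(self):
        run_time = self.get_run_time()
        if run_time > self.time_limit:
            return None  # stop the test once the full profile has run
        current_step = int(run_time // self.step_duration) + 1
        users = min(current_step * self.step_users, self.max_users)
        return users, users  # (target user count, spawn rate)
```

Each step can then be analyzed as a separate data point when reviewing how response times and throughput change with load.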

 

[Figure: Performance testing graphic]

 

4. Troubleshoot Issues/Isolate Bottlenecks  

At this stage, teams have all the information needed to determine WHERE WE ARE in terms of performance, with a sense of direction on WHERE WE NEED TO BE.  A visual view of this is shown above. 

 

If application performance falls short of projections, the focus of the project shifts to discovering the source of contention. Discussions on how to mitigate or resolve issues to meet previously defined expectations occur at this stage. Multiple factors may throttle the performance of an application. Development and infrastructure teams often gain insight into the application and may even generate new requirements to validate against in future tests. Uncovered performance defects may require the team to focus on replicating specific conditions to isolate the issue and to request additional performance validation tests. 
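To help isolate where troubleshooting effort should go, a simple post-processing script can flag transactions whose response times fall short of expectations. The sketch below assumes a results export in CSV form with transaction and elapsed_ms columns; the column names and the threshold are illustrative assumptions.

```python
# A minimal sketch of flagging potential bottlenecks from raw test results.
# Assumes a "results.csv" file with "transaction" and "elapsed_ms" columns;
# both names and the threshold below are hypothetical.
import csv
import statistics
from collections import defaultdict

THRESHOLD_P95_MS = 3000  # assumed 95th percentile response time expectation

samples = defaultdict(list)
with open("results.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        samples[row["transaction"]].append(float(row["elapsed_ms"]))

for name, timings in samples.items():
    p95 = statistics.quantiles(timings, n=20)[18]  # 95th percentile cut point
    if p95 > THRESHOLD_P95_MS:
        print(f"Potential bottleneck: {name} p95={p95:.0f} ms")
```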

 

Tip: This phase can go on for a long time if the end state is not well defined. Teams may run an endless number of tests with a variety of configurations to try to meet projections or test requirements. It is essential to get buy-in from the application teams on what plan B or C looks like if performance falls far short of defined expectations. Depending on the project, this process may be challenging and requires much care, as there is a risk of scope creep at this stage. 

 

Bonus Tip: It cannot be emphasized enough that the level of COLLABORATION with your teams is directly proportional to the value derived from any performance testing engagement. This is fundamental for any performance test project.  

 

 

5. Final Test Execution and Review of Performance Metrics 

After the development team has made all the performance improvements and changes, a final test is executed to determine whether the application meets expectations. This step locks down the final performance of the application. During the results review meeting(s), findings and conclusions are shared. This is the formal stage where teams agree on and sign off on the application’s performance. The metrics generated from these final tests set expectations for future tests.   
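A minimal sketch of how the final sign-off could be supported by an automated comparison of measured metrics against the agreed expectations follows; all transaction names and numbers here are hypothetical.

```python
# A minimal sketch of a final sign-off gate: compare measured metrics from the
# last test run against the agreed expectations and report pass/fail per
# transaction. All names and values are illustrative assumptions.
agreed_targets = {
    "login":    {"p95_s": 2.0, "max_error_rate": 0.01},
    "checkout": {"p95_s": 4.0, "max_error_rate": 0.005},
}

measured = {
    "login":    {"p95_s": 1.7, "error_rate": 0.003},
    "checkout": {"p95_s": 4.4, "error_rate": 0.004},
}

all_passed = True
for txn, target in agreed_targets.items():
    result = measured[txn]
    ok = (result["p95_s"] <= target["p95_s"]
          and result["error_rate"] <= target["max_error_rate"])
    all_passed &= ok
    print(f"{txn}: {'PASS' if ok else 'FAIL'} (p95={result['p95_s']}s)")

print("Sign-off ready" if all_passed else "Expectations not met")
```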

 

Setting and validating performance test expectations is a key part of any performance testing project. WHERE WE ARE and WHERE WE NEED TO BE are the two questions at the center of the application performance expectation level-setting exercise. These two questions are asked and answered cyclically at different points during performance testing and essentially drive the project forward.   

 

Khatija Ali