Core Conversion Misstep #2: No One Knows if the New Core is Actually Faster

This post is the second in a series about things that are frequently overlooked or forgotten when a credit union is considering a core conversion. Subsequent posts will be published weekly, each highlighting one item in depth. Make sure to follow along with the series on Olenick.com.

You are working furiously to plan for the biggest project of the year (decade, perhaps?) and may have already tackled a test and defect management repository decision. This installment in our series on often-overlooked components of a core conversion covers performance testing. If your members and employees expect the same response times from your new system – or better! – performance testing needs to be part of your project plan.

WHY PERFORMANCE TESTING IS IMPORTANT TO YOUR CONVERSION

More than likely, your new core provider committed to improved performance, meaning faster execution of the calls made to your core. That commitment may even have influenced your decision to select a particular provider.

The member experience is king, so you are likely expecting online and mobile banking speeds to improve once your conversion is complete. Credit union employees, too, are often clamoring for improved efficiency, making faster systems one of the biggest opportunities in your conversion project.

It’s likely that your new core will deliver faster response times and improved stability, particularly if your existing core is built on outdated technology, but clear expectations can only be set for members and employees if you validate through testing.


DEFINING PERFORMANCE TESTING

Generally speaking, performance testing is used to evaluate how your new core will respond to varying workloads. At a minimum, you want to know three things:

  • First, that your new system will continue to deliver the same responsiveness to your employees and members under a typical workload.
  • Second, that the software remains stable through the normal ebb and flow of transaction volume. This can be coupled with application server monitoring to ensure the infrastructure is sized appropriately for the expected usage level.
  • And third, what the upper threshold is for transaction volume. That threshold, along with your organization’s growth projections, will help you proactively implement critical infrastructure, application, and database changes, preventing future performance issues.


When people think of “testing”, they typically think of functional testing: a small number of people executing a large number and variety of test cases, with the results manually evaluated and recorded as pass or fail.

Performance testing, by contrast, involves a large number of simulated users (mimicking your members or employees) executing a small number of test cases. This type of testing groups test execution together to simulate a “day in the life” of your core system as it receives and responds to thousands of calls per day.
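To make this concrete, below is a minimal sketch of simulated users written for Locust, a popular open source load testing tool. The endpoint paths, task weights, and wait times are hypothetical placeholders; your core’s API and traffic mix will differ.

```python
from locust import HttpUser, task, between

class MemberUser(HttpUser):
    """One simulated member; Locust runs hundreds of these concurrently."""
    wait_time = between(1, 5)  # seconds of "think time" between calls

    @task(3)  # weighted 3x: balance inquiries dominate typical traffic
    def share_balance_inquiry(self):
        self.client.get("/api/shares/balance")  # hypothetical endpoint

    @task(1)
    def recent_card_transactions(self):
        self.client.get("/api/cards/transactions?days=7")  # hypothetical endpoint
```

Run headless from the command line – for example, `locust -f member_load.py --headless -u 500 -r 25 --run-time 30m --host https://core-test.example.com` – to simulate 500 concurrent members against a test host.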

Core performance testing strives to answer the following three questions:

  1. Can the core handle peak transaction volume without reduced responsiveness or instability?
  2. Can the core handle the sudden application and sudden removal of high volume?
  3. What is the maximum capacity of the system, in terms of transactions per minute?


To answer those questions, you will want to compare the performance test results from the new core to data from your old core. The cleanest way to do this is to run the same performance tests on both cores. However, time and expense often limit an organization’s ability to run testing on both applications. In that case, you can compare the new core’s performance data to SLA data from the old core.

If your comparison shows improved performance in the new system, you can confidently expect that your members and employees will see faster response times.
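As a simple illustration of that comparison, the sketch below reads a hypothetical CSV export of response times from the new core’s performance tests and checks each function’s 95th percentile against old-core SLA targets. The file name, column names, function names, and thresholds are all placeholders.

```python
import csv
from collections import defaultdict

# Illustrative p95 response-time targets (seconds) taken from the old core's SLAs.
SLA_P95_SECS = {
    "share_balance_inquiry": 2.0,
    "card_transaction_lookup": 1.5,
    "fee_reversal": 3.0,
}

def p95(values):
    """95th-percentile response time (nearest-rank method)."""
    ordered = sorted(values)
    return ordered[int(0.95 * (len(ordered) - 1))]

# Hypothetical export from your test tool: columns "function,response_secs".
samples = defaultdict(list)
with open("new_core_results.csv") as f:
    for row in csv.DictReader(f):
        samples[row["function"]].append(float(row["response_secs"]))

for function, target in SLA_P95_SECS.items():
    observations = samples[function]
    if not observations:
        continue  # no data collected for this function
    observed = p95(observations)
    verdict = "meets SLA" if observed <= target else "MISSES SLA"
    print(f"{function}: p95 {observed:.2f}s vs target {target:.2f}s -> {verdict}")
```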


Ramp Up Scenario

Objective: Measure performance and capacity as the system responds to steadily increasing load. This type of scenario is good for baselining performance and determining capacity.

Spike Scenario

Objective: Measure performance and capacity as the system responds to the sudden application and removal of load. A good “stress” of the system.

Long Duration Scenario

Objective: Measure performance and capacity as the system responds to a steady load over a relatively long time. This type of scenario is best used for identifying memory leaks.
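If your tool supports custom load shapes, the three scenarios above can be expressed as simple stage schedules. Below is a sketch using Locust’s LoadTestShape class; the durations, user counts, and spawn rates are hypothetical and should be derived from your own baseline transaction volumes. Note that Locust runs at most one shape class per locustfile; the three classes appear together here only for comparison.

```python
from locust import LoadTestShape

class RampUpShape(LoadTestShape):
    """Ramp Up: step the load upward to baseline performance and find capacity."""
    stages = [  # (end of stage in seconds, concurrent users)
        (300, 100),
        (600, 250),
        (900, 500),
    ]

    def tick(self):
        run_time = self.get_run_time()
        for stage_end, users in self.stages:
            if run_time < stage_end:
                return users, 25  # (user count, users spawned per second)
        return None  # schedule exhausted; stop the test

class SpikeShape(LoadTestShape):
    """Spike: sudden application, then sudden removal, of heavy load."""
    def tick(self):
        t = self.get_run_time()
        if t < 120:
            return 50, 50    # quiet baseline
        if t < 300:
            return 500, 200  # sudden spike
        if t < 420:
            return 50, 200   # sudden removal back to baseline
        return None

class LongDurationShape(LoadTestShape):
    """Long Duration: steady load held for hours to surface memory leaks."""
    def tick(self):
        return (200, 25) if self.get_run_time() < 8 * 3600 else None
```

Each shape pairs with a user class like the MemberUser sketch shown earlier.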


HOW TO PLAN

There are a few best practices to be aware of before you begin planning.

  1. If you can performance test both cores, you’ll need to make sure your performance test tool is compatible with both applications.
  2. You cannot run meaningful performance tests while functional test cases are still failing. Plan for performance testing to occur after you have completed a fair amount of your functional testing, by which point a base level of application stability has been achieved.
  3. The environment you test in should be nearly identical to production and not used for other testing purposes. (This is one of the hardest best practices to follow.)
  4. You should have monitoring in place in the environment you are testing, along with the ability to save logs.


After considering the best practices above, you are ready to create a performance test plan.

Step ONE: Make a list of the functions you want to test. Credit unions commonly test member-facing applications and functionality that is most critical to day-to-day operations.

Favor the functionality that is used most frequently: if it suffers responsiveness issues, many people and processes could be negatively impacted. Also review the processes that have drawn negative performance feedback in the past and consider adding those to your list. Now, prioritize your list.

Performance testing should be narrowly focused on a small subset of the application’s functions, so be critical when making and prioritizing your list. Ideally, limit your performance testing to three or four functions.

Step TWO: Define the information you need to collect. If you have Service Level Agreements (SLAs) in place for the processes you plan to test, gather the pass/fail criteria. Prioritize the data you want to obtain, such as response time, error rate, CPU usage, and throughput.
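One way to keep those criteria actionable is to record them in a machine-readable form that your scripts and reports can check automatically. A minimal sketch, with every function name and threshold a placeholder:

```python
# Hypothetical pass/fail criteria per function, assembled from SLAs and
# stakeholder input. Every name and number here is illustrative.
PERFORMANCE_CRITERIA = {
    "share_balance_inquiry": {
        "p95_response_secs": 2.0,   # response time target
        "max_error_rate": 0.01,     # at most 1% failed calls
        "min_throughput_tpm": 600,  # transactions per minute
    },
    "fee_reversal": {
        "p95_response_secs": 3.0,
        "max_error_rate": 0.005,
        "min_throughput_tpm": 60,
    },
}

def meets_criteria(function, p95_secs, error_rate, throughput_tpm):
    """Return True when an observed result satisfies every criterion."""
    c = PERFORMANCE_CRITERIA[function]
    return (p95_secs <= c["p95_response_secs"]
            and error_rate <= c["max_error_rate"]
            and throughput_tpm >= c["min_throughput_tpm"])
```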

Step THREE: Identify your test data. Each process you test will need data for the system to interrogate – for example, a share balance or a recent debit card transaction. Make sure the test account or person you use has the attributes you are testing.

Some test cases require keying data into a workflow to complete. For example, if you are testing a fee reversal process, your workflow may require you to key in the requested reversal amount.

Pay careful attention to how frequently your test data can be used. If a test member account can only be used once (e.g., there is only one fee to reverse on the account), you will need to identify or create more test accounts.
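A simple way to enforce that rule during test execution is a pool that hands out each single-use account exactly once. This is a sketch under the assumption that your test accounts are exported to a CSV file; the file and column names are illustrative.

```python
import csv
import threading

class TestAccountPool:
    """Hands out single-use test accounts (e.g., accounts with exactly one
    reversible fee) so no two virtual users consume the same account."""

    def __init__(self, csv_path):
        with open(csv_path, newline="") as f:
            self._accounts = list(csv.DictReader(f))  # e.g., member_id, fee_id
        self._lock = threading.Lock()  # virtual users run concurrently

    def take(self):
        with self._lock:
            if not self._accounts:
                raise RuntimeError("Account pool exhausted; provision more test data")
            return self._accounts.pop()

pool = TestAccountPool("fee_reversal_accounts.csv")  # hypothetical file
```

A fee reversal test would call pool.take() once per iteration; when the pool runs dry, the test stops with an error rather than silently reusing spent data.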

Step FOUR: Add your plan to the overall project plan. Performance testing is a project within the core conversion project. Document your testing tasks, estimate the time it will take to execute them, identify the skill sets and resources you need, and understand the dependencies between tasks. Incorporate this into the larger project plan. If management is unwilling to go live without performance testing being complete, add it to the project’s critical path.

Step FIVE: Reserve resources. While technical resources are scarce when a complex project is in flight, they will need to be involved in your performance testing. Plan for resources to provide architecture diagrams and answer questions, define test criteria, approve the test plan, grant access to the environment, troubleshoot the issues that will inevitably arise, and much more. If a database snapshot is required before executing testing, a database analyst will be needed.


SELECTING A TESTING TOOL

If your credit union does not already have a performance testing tool, you can find both open source and commercial load testing tools on the market. There are clear pros and cons to each type. An open source tool will be free to use and is often a good choice for light testing needs. You will not, however, have any technical support or training on how to use the tool, and you may find it less intuitive. Instructions may be available online, but they may not be accurate or current.

Commercial testing tools will offer robust functionality and are more likely to be easy to use. You’ll also have access to technical support and training. But that comes at a cost. These tools often have high licensing and training costs, and the load that you can test is often limited by the type of license you purchase.

During your evaluation process, you may need to do a proof of concept. Your proof of concept should evaluate the tool’s functionality and validate compatibility with your core. If you’ve chosen to test the old core too, you will evaluate compatibility with it as well.

Whether you choose an open source or a commercial testing tool, you will want to compare more than cost. Evaluate the tool’s reporting functionality, debugging capabilities, ease of scripting, and accuracy.
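A weighted scorecard can make that comparison explicit. The sketch below is purely illustrative – the criteria, weights, and 1-to-5 ratings would come from your own proof of concept.

```python
# Hypothetical weighted scorecard for comparing candidate load testing tools.
WEIGHTS = {"reporting": 0.25, "debugging": 0.20, "scripting_ease": 0.25,
           "accuracy": 0.20, "cost": 0.10}

SCORES = {  # 1-5 ratings from your proof of concept; all values illustrative
    "open_source_tool": {"reporting": 3, "debugging": 3, "scripting_ease": 3,
                         "accuracy": 4, "cost": 5},
    "commercial_tool":  {"reporting": 5, "debugging": 4, "scripting_ease": 4,
                         "accuracy": 4, "cost": 2},
}

for tool, ratings in SCORES.items():
    total = sum(weight * ratings[criterion] for criterion, weight in WEIGHTS.items())
    print(f"{tool}: weighted score {total:.2f}")
```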

Most importantly, you should know what your credit union’s performance test strategy is and align your tool selection with it. Performance testing can end up being a large investment in licensing expense (if you go down that path) and in staff training time. If your choice aligns with your organization’s long-term strategy, you are more likely to see a positive return.

If this seems overwhelming, there are organizations that can assist with creating your credit union’s performance test strategy, aid in making educated tool-selection decisions, and help evaluate current performance testing processes. 


WHAT TO DO WITH PRODUCTION

By now, you may have done a proof of concept, selected a new tool, trained users, defined your test strategy and data, executed tests, and digested the results. Excellent! You can now confidently set expectations for the user experience with your new core.

However, the biggest payoff is in the final step: monitoring performance in your production environment and alerting technical staff when an SLA is missed.

Even if you did not have application SLAs before beginning your performance testing venture, the results you obtained allow you to establish them. Revisit the prioritized list of functions you already created to identify your most critical processes. Then gather IT, the application owner, and the process owner for each function and gain consensus on reasonable SLAs.

IT will also want to consider monitoring CPU, memory, and I/O for all servers. A metric scorecard can be used to document goal versus actual results.

At this point, you should establish a process for monitoring the functions in your new core’s production environment. When an SLA is missed, indicating that there is something causing a slowdown in response times, an alert should be sent to support, so they can quickly investigate the root cause. Early identification can prevent significant issues from becoming visible to your employees and members.
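A lightweight way to implement that alerting is a synthetic-transaction probe that periodically times a call against each monitored function and raises an alert on an SLA miss. The sketch below assumes a hypothetical balance-inquiry endpoint and uses a log message as the alert; a real deployment would feed your monitoring or paging platform instead.

```python
import logging
import time
import urllib.request

BASE_URL = "https://core.example.com"     # hypothetical production host
SLA_SECS = {"/api/shares/balance": 2.0}   # per-function SLA thresholds (seconds)

logging.basicConfig(level=logging.INFO)

def probe():
    """Time one synthetic call per monitored path and flag SLA misses."""
    for path, limit in SLA_SECS.items():
        start = time.monotonic()
        try:
            with urllib.request.urlopen(BASE_URL + path, timeout=limit * 5) as resp:
                ok = resp.status == 200
        except OSError:  # covers connection failures and HTTP errors
            ok = False
        elapsed = time.monotonic() - start
        if not ok or elapsed > limit:
            logging.error("SLA MISS %s: %.2fs (limit %.2fs)", path, elapsed, limit)
        else:
            logging.info("OK %s: %.2fs", path, elapsed)

while True:
    probe()
    time.sleep(60)  # one probe cycle per minute
```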

System responsiveness will be one of the first changes your employees and members notice in your new core, and the impact of a negative first impression can be long-lasting. Investing time and resources in validating that the vendor’s commitments are tangible will be a significant short-term win for your project. Over the long term, knowing the limits of your new core will help you scale your infrastructure more efficiently – a win for your organization.

In addition to a test repository and performance testing, an often overlooked component of a successful core conversion is validating the integrity of the core data. Sound tedious? Next in this blog series, we will examine the risk an organization bears when it allocates too few resources to ensuring the data is correct, and how to mitigate that risk.


This article’s author calls Olenick her professional home away from home. Olenick is a global software testing firm with headquarters in Chicago, IL. You can learn more about her on LinkedIn.

Special thanks to John Hazel and Mike Willett for contributing to the content of this blog post. Learn more about them on LinkedIn.

Author: Sharon Mueller, Credit Union Practice Lead 
