5 Frequent Core Conversion Missteps: The Things You Probably Forgot
This blog is the sixth and final post in the series about things that are frequently overlooked or forgotten when a credit union is considering a core conversion. To read about each misstep in more depth, visit Olenick Expertise.
Credit union core conversion projects are enormously complex. By the time the project kicks off, well over a year has likely been spent debating the benefits of one system over another, estimating the many expenses that will be incurred, worrying about how integrated systems will be affected, and wondering how to best prepare your staff for the change.
In our experience, no matter how many weeks, months, and years are spent planning, five conversion components are often overlooked. We call them missteps.
This blog series lays out the details of those five missteps – what they are, the risks that are introduced, and how to thoughtfully incorporate each into your overall plan.
A crucial component of a complex project with many hundreds, if not thousands, of test cases is a repository. While basic tools such as Excel are often successfully used for small test efforts, a conversion project needs more.
A robust tool will detail your test case steps, capture test execution results, track test progress, and help your test manager identify testing risks. In addition, when used effectively, the right tool can save your team countless hours of inefficiency.
There are many repository options available that offer a full suite of testing functionality and robust reporting. You will want to consider using a tool that also has defect management functionality embedded. While separate tools can be used to track testing and defects, there are many advantages to a consolidated system. Read the detailed post to see pros and cons of separate and consolidated repository tools.
The right tool for your credit union depends on a number of factors and their priority in your organization. You’ll weigh things like whether mobile support is important to your team, the ability to monitor incomplete test cases (an indicator of testing risk!), and cost.
To avoid buying more tool than you need, or too little, consider your credit union’s Quality Assurance (QA) strategy. You want your tools and resources to operate seamlessly in your credit union’s Quality Engineering ecosystem. Take your QA life beyond the core conversion project into account to make the best decision for your credit union.
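One lightweight way to compare repository tools against your priorities is a weighted scoring pass. The sketch below is purely illustrative: the tool names, criteria, weights, and scores are hypothetical placeholders, not recommendations, and your own criteria should come from the strategy discussion above.

```python
# Hypothetical weighted-scoring sketch for comparing test repository tools.
# Criteria, weights, and scores are illustrative placeholders.

CRITERIA_WEIGHTS = {
    "mobile_support": 2,      # is mobile access important to your team?
    "risk_reporting": 3,      # can it flag incomplete test cases?
    "defect_management": 3,   # embedded defect tracking?
    "cost": 2,                # lower cost scores higher
}

# Scores from 1 (poor) to 5 (excellent) per criterion, per candidate tool.
candidates = {
    "Tool A": {"mobile_support": 4, "risk_reporting": 5, "defect_management": 5, "cost": 2},
    "Tool B": {"mobile_support": 2, "risk_reporting": 3, "defect_management": 4, "cost": 5},
}

def weighted_score(scores: dict) -> int:
    """Sum each criterion score multiplied by its organizational weight."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

# Rank candidates from highest weighted score to lowest.
for name, scores in sorted(candidates.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores)}")
```

Adjusting the weights is where your credit union’s priorities actually enter the decision; the arithmetic just keeps the comparison honest.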
There are lots of reasons you’ve chosen to do a core conversion, and improved speed for employees and members is probably on the list. Your new core has to be faster, right? You’ll only know for sure if you performance test.
Unlike functional testing, which is often a small group of testers running a large variety of test cases, performance testing simulates a large number of users executing a small number of test cases. The goal is to identify the limitations of your core as it relates to key transactions.
Performance testing should answer three questions: 1. Does the new core respond the same as, worse than, or better than your old core? 2. Is it stable when volume varies? 3. How high can transaction volume get before you see instability?
To answer those questions, you need to create a test plan. Creating one involves five steps, the first of which is listing the functions you want to test. Overall, performance testing should be narrowly focused, so be critical about what you include. Check out the detailed blog post for best practices and recommendations.
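The core idea, many simulated users hammering a small set of transactions while you watch response times, can be sketched in a few lines. The `submit_transaction` function below is a stub standing in for a real core API call, and the user counts are arbitrary; in practice you would use a dedicated load-testing tool rather than raw threads.

```python
# Minimal load-test sketch: many simulated users, few transaction types.
# submit_transaction is a STUB for a real core transaction call.
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def submit_transaction(txn_type: str) -> float:
    """Stub for one core transaction; returns elapsed seconds."""
    start = time.perf_counter()
    time.sleep(0.01)  # placeholder for the real call (deposit, withdrawal, ...)
    return time.perf_counter() - start

def run_load(txn_type: str, users: int) -> dict:
    """Run the same transaction from `users` concurrent threads and summarize timings."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        timings = list(pool.map(submit_transaction, [txn_type] * users))
    idx = min(int(len(timings) * 0.95), len(timings) - 1)  # approximate 95th percentile
    return {
        "transaction": txn_type,
        "users": users,
        "median_s": statistics.median(timings),
        "p95_s": sorted(timings)[idx],
    }

# Ramp volume up to find where response times become unstable.
for users in (10, 50, 100):
    print(run_load("share_deposit", users))
```

Ramping the user count and comparing the median and tail latencies at each step is what answers the stability and breaking-point questions above.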
There are many open source and commercial tools available for performance testing. In addition to cost, you’ll want to consider the functionality and technical support that you need, how easy it is to script inside the tool, and what your credit union’s performance test strategy is.
Last, a fully-realized performance test strategy involves taking the learnings from performance testing and applying those forward to your production environment. It’s great to know what your new core’s limitations are, but unless you are using that information to prevent production issues and monitor and alert IT staff when something goes awry, all you’ve done is collect data.
Your project has myriad complexities to track, and it may seem like the last thing you need to worry about is whether the data in the core is correct. It’s actually one of the first things you should worry about.
Core systems have thousands of rows and columns of table data, and many integrated applications move data in and out of those fields. In addition, human beings decide the system data mapping, leaving you vulnerable to mistakes. While many mistakes can be caught during end-to-end testing, a thorough test plan must explicitly validate data accuracy.
There are two things you need to test at this stage: first, that all of the old core data has moved to the right field in the new core, and second, that your new core is looking at the right fields when displaying information.
To plan for core data testing, you will need data dictionaries for the old and new cores, a mapping exercise, and a test strategy. You will want to test many different scenarios, accounting for members, accounts, debit and credit cards, and transactions.
You do not want to discover that the “annual percentage yield” and “annual percentage rate” fields were flipped during the mapping exercise. Just because the data is correct on your primary savings share account does not mean it’s correct on other shares, so plan to test thoroughly.
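A mapping mix-up like that is exactly what an automated field-comparison pass catches. The sketch below assumes a hypothetical old-to-new field map and made-up record values; real field names come from your vendors’ data dictionaries and your mapping exercise.

```python
# Sketch of a field-mapping validation pass.
# FIELD_MAP and the record values are hypothetical examples.

# old-core field -> new-core field, from your mapping exercise
FIELD_MAP = {
    "APY": "annual_percentage_yield",
    "APR": "annual_percentage_rate",
    "MBR_NAME": "member_name",
}

def validate_record(old_record: dict, new_record: dict) -> list:
    """Return a list of (old_field, old_value, new_value) mismatches."""
    mismatches = []
    for old_field, new_field in FIELD_MAP.items():
        old_val = old_record.get(old_field)
        new_val = new_record.get(new_field)
        if old_val != new_val:
            mismatches.append((old_field, old_val, new_val))
    return mismatches

# A flipped APY/APR mapping shows up immediately:
old = {"APY": "2.02", "APR": "2.00", "MBR_NAME": "J. Smith"}
new = {"annual_percentage_yield": "2.00",  # oops: the APR value landed here
       "annual_percentage_rate": "2.02",
       "member_name": "J. Smith"}
print(validate_record(old, new))  # two mismatches: APY and APR were swapped
```

Run the same comparison over every converted record, covering members, accounts, cards, and transactions, and log each mismatch as a defect.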
For more information on how to plan for this type of testing, read the detailed blog post here.
You can begin core data testing as soon as a test environment instance of the new core has been installed, and a copy of your production data has been loaded. As with any other testing, log your results and defects, and then retest when fixes are provided.
Downstream testing, the portion of testing that validates the accuracy of data that is sent from your core to another application or database, is as important as any other aspect of testing. Just like with core data testing, thousands of fields have been remapped and modified to suit your new core. You must check the data as it moves between systems.
Downstream testing looks at two things: first, that your core data made it to the correct field in the secondary application, and second, that the data is formatted correctly.
Some organizations underestimate the importance of formatting, but if you found that account numbers were being displayed in scientific notation, 2.54×10¹¹ instead of 254397684343, in a field visible to members, you would have a significant production issue to resolve.
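A format check like that is easy to automate. The sketch below validates that an account-number field survived the transfer as a plain digit string; the field values are made-up examples, and your own rules (lengths, check digits, leading zeros) would extend the pattern.

```python
# Downstream format check: verify an account-number field arrived as a
# plain digit string rather than scientific notation (a common side effect
# of routing data through a spreadsheet or a floating-point column).
import re

ACCOUNT_NUMBER_PATTERN = re.compile(r"^\d+$")

def is_valid_account_number(value: str) -> bool:
    """True only for a plain, unformatted digit string."""
    return bool(ACCOUNT_NUMBER_PATTERN.match(value))

print(is_valid_account_number("254397684343"))  # a clean digit string passes
print(is_valid_account_number("2.54E+11"))      # scientific notation fails
```

The same one-function pattern works for dates, currency amounts, and any other field whose formatting is member-visible.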
To identify the items to test, consider the data that is sent to third parties and members and your reporting databases. There are many to think about – statements that are mailed to members, invoices, and teller and ATM receipts are just a few. Our blog post on this topic outlines the steps for planning downstream testing.
Your IT department can help you identify all of the instances where data is sent outside of your organization. You’ll also need to work with each business unit to see if they are importing data from your core.
Fortunately, much of your downstream testing can be built into your existing end-to-end test cases by adding one or two steps to each. Friendly reminder: don’t forget to log your defects as you go.
The most often overlooked component of a core conversion is the creation of a regression test suite.
Regression testing is likely to be a final step in your test plan before you go live in production. It’s also done when your core is upgraded, has hotfix or enhancement code applied, and when you make changes to it (e.g. establishing a new integration). Regression testing performs a crucial role – it is used to verify that the previously tested software still performs as expected after changes are made.
You will create at least two regression test suites at the end of your project. Fortunately, this work is not done from scratch. You’ll identify a subset of the conversion test cases you ran and package them together. Your robust test tool can make this process very easy and store the new test suites for future execution.
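If your test tool supports tagging, packaging those subsets can be as simple as filtering on a tag. The sketch below is a hypothetical illustration; the test-case names, IDs, and tags are invented, and a real repository tool would do this selection for you.

```python
# Sketch of building regression suites by tagging conversion test cases.
# Test-case IDs, names, and tags are illustrative.

conversion_test_cases = [
    {"id": "TC-001", "name": "Open share account", "tags": {"critical", "member"}},
    {"id": "TC-045", "name": "Post ATM withdrawal", "tags": {"critical", "transaction"}},
    {"id": "TC-230", "name": "Print teller receipt", "tags": {"downstream"}},
    {"id": "TC-311", "name": "Annual statement layout", "tags": {"downstream", "statement"}},
]

def build_suite(cases: list, required_tag: str) -> list:
    """Select the subset of conversion test cases carrying a given tag."""
    return [c for c in cases if required_tag in c["tags"]]

critical_suite = build_suite(conversion_test_cases, "critical")
downstream_suite = build_suite(conversion_test_cases, "downstream")
print([c["id"] for c in critical_suite])    # the always-run critical suite
print([c["id"] for c in downstream_suite])  # run when downstream feeds change
```

Tagging during the conversion project, while each test case is fresh, is what makes this packaging step nearly free at the end.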
Regardless of the type of change that is made to your core, you should always run a regression suite that tests critical functionality. Check out the blog post for the two types of test suites and the test cases to include in each.
The regression suite you run depends on the type of change being made to your core. Your regression suite will cover the vast majority of the testing you need to do for that change, but there are sure to be a small number of test case edits and new test cases to author.
For regression test suites to save your Quality Assurance team a lot of time in the future, they will need occasional maintenance. As business processes change and defects are corrected, someone should update the regression suites so they remain accurate.
Regression test suites are best created right after your conversion. While the team’s energy may be low at the end of this exhausting project, their memories are recent, making decision-making more effective and efficient. Be sure to plan for this important final step in your conversion project.
For more information on these five core conversion missteps or to discuss other ways to ensure successful testing of your upcoming software project, contact Olenick.
Olenick is a global software testing firm with headquarters in Chicago, IL.