Test Data Management: Trials, Tribulations, and Tools

I’m going to make a bold assertion: Our testing is only as good as our test data.

Actually, this should not be a bold assertion; it should be self-evident: virtually all modern business applications are data-centric. Without good test data we cannot properly test all the features of the software, nor can we verify that the software handles data correctly.


Finding, creating, and managing valid test data is a constant irritant on every testing project. It is not surprising, then, that over the last decade or so the care, feeding, and protection of test data has become a discipline within the industry, under the name of Test Data Management (TDM). A web search on the term will show that much is being written on the need for, principles of, and best practices for effective TDM, and that several enterprise software vendors have developed tools for automating, managing, and securing test data.


For the tester in the field, however, this may seem somewhat academic. These principles and tools (which are quite expensive) do not appear to have penetrated the wider market. I certainly have never seen a project that used them, and an informal survey of co-workers failed to turn up anybody who has. Usually, we are happy to find ourselves on projects that even use Test Planning & Management and Defect Tracking tools, and that practice Test Management at all. The question of data is, at best, an afterthought.


So, here’s the dilemma: There’s a discipline of TDM, but your project doesn’t practice it. There are sophisticated platforms for TDM, but your project doesn’t use one. There are best practices for TDM, but your project has no idea what they are. There are legitimate concerns about the security of Personally Identifiable Information (PII) embedded in test data, but your project is clueless. There are ways of simulating and mocking interconnected systems that are not available in the testing environment – order fulfillment, customer history, credit card processing – but your project doesn’t have the tools, and besides, no one understands how that works anyway. There are ways to validate some testing results quickly and efficiently by querying the database rather than looking at the AUT, but your project does not give your team access to the database.
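
To make that last point concrete, here is a minimal sketch of what database-side validation might look like, assuming read-only access to a copy of the test database. Every table, column, file, and value name here is hypothetical.

```python
# A minimal sketch of validating a result via the database instead of the
# AUT's UI. Assumes read-only access to a (hypothetical) SQLite copy of the
# test data; the table, column, and expected values are illustrative only.
import sqlite3

def order_total(order_id):
    # Open the copy read-only so the check cannot contaminate test data.
    with sqlite3.connect("file:test_copy.db?mode=ro", uri=True) as conn:
        row = conn.execute(
            "SELECT total FROM orders WHERE id = ?", (order_id,)
        ).fetchone()
        return row[0] if row else None

# After a test places order 1042 through the UI, one query confirms the
# record landed with the expected total, with no clicking through screens.
assert order_total(1042) == 99.95
```

One query like this can replace minutes of navigating the AUT, and it checks what was actually written, not just what the UI displays.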


Unfortunately, as with so many other IT concepts, disciplines, and buzzwords – like ‘Agile’, ‘Continuous Integration’, or even ‘Effective Project Management’ – TDM is, for many of us, more of an ideal than a reality. We must make do with what we have and what our employers or clients can afford, limited, sometimes, by their prejudices and preconceptions.


In the real world in which we are often required to work, test data management often begins, and ends, with a copy of the production database. There may be a person, often on the development team, who is responsible for occasionally copying production data into the testing environment… or perhaps they keep a backup of that database and periodically wipe the environment clean and restore the system to that baseline.
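
For illustration, here is a minimal sketch of what such a wipe-and-restore script might look like, assuming a PostgreSQL test database and a baseline captured with pg_dump’s custom format. The database and file names are hypothetical.

```python
# A minimal sketch of a "reset to baseline" script. Assumes PostgreSQL, that
# the standard dropdb/createdb/pg_restore utilities are on the PATH, and that
# a baseline was captured earlier with: pg_dump -Fc testdb > baseline.dump
# All names here are hypothetical.
import subprocess

DB = "testdb"               # hypothetical test database name
BASELINE = "baseline.dump"  # hypothetical baseline snapshot file

def reset_to_baseline():
    # Drop and recreate the database so no stale or contaminated data survives.
    subprocess.run(["dropdb", "--if-exists", DB], check=True)
    subprocess.run(["createdb", DB], check=True)
    # Reload the known-good baseline.
    subprocess.run(["pg_restore", "--dbname", DB, BASELINE], check=True)

if __name__ == "__main__":
    reset_to_baseline()
```

Even a simple script like this makes the reset repeatable, which is half the battle: the less painful the refresh, the more often it actually gets done.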


However, there can be serious concerns when using a copy of production data:

1) Refreshing or resetting the database is often a complicated process, and is therefore not done as frequently as it needs to be, so the data may be seriously out of date. The refresh/reset itself can be destructive, rolling back or even removing data that we have been using in our testing and possibly undoing weeks or even months of careful data husbandry. And production data does not, of course, include data associated with changes and features that have not yet been deployed to production – which is exactly what we are trying to test.


2) The above point may also impact development. Production data is not going to reflect, nor be compatible with, the new features developers are in the process of implementing. This means that neither developers nor testers are incentivized to refresh the environment and flush out stale, contaminated data, and for the same reason: doing so complicates our jobs.


3) Failed tests can spoil data: creating invalid records, breaking referential integrity, leaving behind orphaned elements, and perhaps rendering the data set unusable. This is an especially serious problem when development and testing share the same database – and since database resources can sometimes be scarce, this far-from-best practice is all too common. Developer unit testing gone bad can damage a database far more than a failed functional, integration, or acceptance test. (A simple orphan-detection query, sketched just after this list, can help reveal this kind of damage.)


4) Cloned production data is sometimes (usually) incomplete, because the production database links to, and depends on, other systems and databases that are not themselves part of the testing environment. This is especially true of external sources and services, like credit card processors. The clone is also likely to be missing precisely the type of data that most needs testing: outliers and edge conditions. Such data is rare by nature and can be exceedingly hard to find in any data set.
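
As promised in point 3, here is a minimal sketch of an orphan-detection check, assuming a hypothetical schema in which every order row must reference an existing customer. Table, column, and file names are illustrative only.

```python
# A minimal sketch of detecting orphaned records – rows whose foreign key
# points at nothing. Assumes a (hypothetical) SQLite copy of the test data
# with `orders.customer_id` expected to reference `customers.id`.
import sqlite3

ORPHAN_CHECK = """
    SELECT o.id
    FROM orders AS o
    LEFT JOIN customers AS c ON c.id = o.customer_id
    WHERE c.id IS NULL   -- order rows with no matching customer
"""

with sqlite3.connect("test_copy.db") as conn:
    orphans = conn.execute(ORPHAN_CHECK).fetchall()

if orphans:
    print(f"{len(orphans)} orphaned order(s) found: {orphans}")
```

Run after a test cycle goes sideways, a handful of checks like this can tell you whether the data set is still trustworthy before you spend another day testing against it.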


Another problem with production data (or copies of production data) is an issue that is coming more and more to the forefront: Personally Identifiable Information (PII).

Unless effort has been made to obfuscate or mask the data – that is, to somehow alter it so that it no longer contains recognizable names, addresses, account numbers, etc., without breaking referential integrity or otherwise rendering the data useless – production data may be loaded with PII. Not only may we be at risk of violating enterprise security policies, we may be at risk of violating the law! Even taking a screenshot that includes a name and address may be problematic if we then include it in an email, or even a defect report.
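
For illustration, here is a minimal sketch of one common masking approach: deterministic hashing, which replaces real values with fake but consistent tokens, so the same input always maps to the same output and joins on masked columns still line up. The salt, field names, and token format are all hypothetical.

```python
# A minimal sketch of deterministic masking. Hashing each real value (with a
# secret salt) yields a stable fake token, so a customer masked in one table
# matches the same masked customer in another – referential integrity holds.
import hashlib

SALT = b"per-project-secret"  # hypothetical; keep it out of source control

def mask_value(value, prefix):
    digest = hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:10]
    return f"{prefix}_{digest}"

# The same real name masks to the same token wherever it appears:
print(mask_value("Jane Q. Public", "name"))    # a stable, unrecognizable token
print(mask_value("Jane Q. Public", "name"))    # identical to the line above
print(mask_value("jane@example.com", "email")) # a different, equally stable token
```

Real masking tools do far more than this (format-preserving fakes, realistic addresses, and so on), but the consistency requirement shown here is the part that is easiest to get wrong by hand.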


The biggest challenge with production data, however, can be simply finding in it the data you need. I call this searching and sifting of data – often using the only tool at hand, the Application Under Test itself – Data Mining. It is not to be confused with the process of analyzing large data sets looking for statistical patterns; think of that form of data mining as the equivalent of conventional tunnel or pit mining. Our data mining is trying to find nuggets – an individual name or address, perhaps – and is therefore more analogous to panning for gold. And it sometimes ends up being just as random, and ultimately just as fruitless.


This metaphor is not entirely accurate, however, because the prospector with his pan is usually happy to find anything. For us it’s not enough to merely find a nugget; we need to find a specific nugget. It must be the right size, shape, and weight. In data terms, that might mean a person of a specific gender, a specific age, and a specific marital status, who lives in a specific region and drives a specific model of car manufactured in a specific year.
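
If you do have direct (read-only) access to the data, that hunt can collapse into a single query. Here is a minimal sketch against an entirely hypothetical schema; every table, column, and parameter value is illustrative.

```python
# A minimal sketch of "finding the right nugget" by query rather than by
# panning through the AUT's screens. The schema is hypothetical: a `persons`
# table joined to a `vehicles` table via vehicles.owner_id.
FIND_NUGGET = """
    SELECT p.id, p.full_name
    FROM persons AS p
    JOIN vehicles AS v ON v.owner_id = p.id
    WHERE p.gender = ?
      AND p.age BETWEEN ? AND ?
      AND p.marital_status = ?
      AND p.region = ?
      AND v.model = ?
      AND v.model_year = ?
    LIMIT 5
"""

# With parameters like these, one query does what might otherwise take hours
# of searching through the AUT (all values are, again, purely illustrative):
params = ("F", 30, 40, "married", "Midwest", "Accord", 2015)
```

The catch, of course, is that a query like this presumes both access and a schema you understand – and, as the next paragraph explains, we often have neither.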


The tools we have available for doing this mining may be limited, or non-existent. The application under test (AUT) may be a resource… or not. Having at least read access to the back-end data storage is an advantage, though your testing team may not always have it. And since we are often dealing with distributed systems that call on multiple linked systems, you may not have access to all the data you need in order to find a record matching a specific requirement. A list of customers registered on an e-commerce site, along with their order history, is not likely to be stored in the same database – or even the same type of database – as the inventory being offered and sold on the site. Even if the AUT is brand new, it may be linked to, and dependent on, legacy systems. Many large e-commerce sites rely for order fulfillment on order-entry systems dating back decades, some still running on mainframe-type architectures. As far as our access is concerned, that data may as well be on the Moon.


There will be times when we are compelled to ask someone else to find data for us. An insurance quoting system I worked on had no capability to search for existing customers, because such functionality was inappropriate for an application of that nature. If we needed a customer like the one I described a few paragraphs ago, we had to ask a business user who had access to other systems – CRM, billing, order entry – to find one for us. We might wait days for a suitable customer to come back. We might wait forever and never see a customer come back. Or we might be sent half a dozen customers, none of which fully met our criteria, because our ‘contact’ either did not understand ‘the ask’ (especially if we were asking for something like existing policies of a particular type, or insurance claim history) or understood it, simply could not find a match, and so sent ‘close enough’. Meanwhile, we had not been able to test – and might never be able to test – because no customer could be found, or because ‘close enough’ was not.


So, we have a lot not to like: issues that make our lives very complicated. But these are precisely the sort of issues we need to contend with if we’re going to come up with an effective test data management strategy. The good news? These are precisely the sort of issues that commercial test data management tools are intended to deal with. The bad news? Well, as I alluded to near the beginning of this piece, those tools are expensive, relatively new with limited market penetration, and (if my own experience and that of many of my friends and co-workers is any indication) not likely to be available to you on your current project – and probably not on your next one either.


So, having dealt in this piece with our struggles, in future pieces in this series I’ll try to offer some relief in the form of suggestions and strategies for a non-state-of-the-art world. Stay tuned!



Daniel Kurtz    

