Defect Management: Metrics and Trends
Reporting Metrics and Identifying Trends
Finally! You have set up your defect repository, trained everyone on it, and are running your triage meetings – now you just need to report on the results of defect management within your organization. How best to do that? Well, we need to answer a few questions first:
- What Is Your Goal?
- Who Is Your Audience?
- How, and How Often, Should You Distribute?
- What Do You Present?
I think once these questions are answered, you will have a full picture of why you want to add this powerful tool to your arsenal of defect management.
What Is Your Goal in Sending Out These Metrics?
This has a fairly simple answer: it is to inform AND improve, or rather to inform IN ORDER to improve. Your goal is to give key decision makers information about the progress of the application development effort as reflected in the resolution of the defects being found during testing. You are doing this so they have the chance to “nip in the bud” any issues the data exposes. By presenting the defect data in an easily digestible form – either snapshots of the data as it exists today or its progression (positive or negative) over time – you give them the information they need to identify and address issues, fix processes, or at least call out weak parts of the development process so that the right people can begin working on a resolution.
Who Is Your Audience for These Metrics?
Generally (as indicated above), your main audience is the Senior Management, who can make decisions to change process and fix issues within the development lifecycle. However, you do have other audiences for specific metrics:
- Department Heads
- For a department head to address issues within their department, a breakdown of metrics by department can be distributed. Each department then gets a focused look at defect progress, both comparing their statistics to other departments’ and simply showing what is happening with defect creation within their application development team. This goes beyond identifying that their department is doing better or worse than others (possibly an indication of better or worse development practices); it may also trigger a simple Eureka moment of “Whoa! I didn’t realize we had so many Open/Critical/Old Defects – I better get on that and get it fixed!”
- Developers/Dev Leads
- Developer leads and their teams need to know how they are doing. Are they fixing their assigned defects in a timely manner? Are their fixes being retested and rejected again, and if so, how frequently? How often are defects being found in their code, and is this above or below an acceptable level? If above an acceptable level, can processes be put into place to reduce them, such as specific education, better code reviews or more unit testing?
- Testers/Test Leads
- Test Leads and their teams also need to know how they are doing. They might be interested in how many defects are being logged each day, though that is more a measure of the proficiency of the development organization. Rather, the test team should be more concerned with how quickly they are turning around defects assigned to them as fixed – verifying and closing them. These are metrics that THEY should care about, and Test Leads should step in if defects are sitting in a “Ready to Test” status for too long. Test Leads should also pay attention to the number of defects that appear in the application once it is in production – an indication of how many defects were NOT detected during the testing effort. If many slip through, the Test Leads need to determine what changes to the testing practices and processes will tighten up the net and keep defects from escaping into the wild.
How, and How Often, Should You Distribute Metrics?
There are two distribution cadences that will not surprise you: daily and weekly. However, the right choice also depends on your audience. For instance, for your Senior Management and probably your department heads, you should be reporting everything we talk about in the next question (#4) daily; for your Dev team and Test team, you might not want to overwhelm them with details, and only send out their metrics on a weekly basis.
Metrics need to go out to the decision makers – the people with the ability to act on negative trends quickly – every day. If you wait until the end of the week, any trends that have been developing throughout the week may be more difficult to resolve.
As for how to distribute them, there are multiple ways; present the options to your audience(s) and get their feedback. They might include:
- Dropping them in a SharePoint or other shared document repository
- This has the advantage of there being a permanent history of all metrics for a project, accessible by anyone who wants to look at them, and reduces the clutter of everyone’s email inbox.
- Add them as an attachment to a daily email
- The next best way to go. People can request to be added or removed from the email list if they desire, and when they receive the email, they can either store the email in a folder specifically for the metrics for the project or can download the attachment from each email and store that on their local or cloud drive.
- Add them inline in a daily email
- The next best option. This will allow people the option of the email as above, but then they are left only with the choice to save the email in their email application in a folder for metrics for the project, as there is nothing to download from the email to store.
- Creating them in the Defect Repository itself and letting people view them on their own
- Many, if not all, defect repository tools have built-in metrics. However, there are at least two downsides to this option. First, they are usually quite limited in the quality of the charts, the ease of creating them, and especially the ease of customizing them to fit your consumers’ needs. The second, and to me more critical, downside is that they are impermanent. They are live and up to date, changing with the data in the system (a good thing), but that makes every metric fleeting and not easily reviewed later, if at all. Except for charts of data over time, you will lose yesterday’s data and not be able to go back and see it.
- Printing them out and sharing them in person
- Okay, just don’t. Save the trees! If you need to plan a rare, in-person meeting with the Sr. Management and present the metrics to them because there are major issues that need to be addressed, and you can’t display the metrics on a big screen in front of everyone, then make it as pretty as you can to impress. But otherwise, save the trees and keep it all electronic.
What Should You Be Presenting to Your Audience?
Here is the meat of the blog – WHAT metrics are you actually creating and sharing with your audience(s)? This goes way back to the second blog in this series – setting up your defect repository correctly in the beginning, making sure you are capturing the information you need in order to report meaningful metrics to those in charge. Below are some ideas of what you might present to all parties interested to get their feedback on what they want to see. In some cases, they will be blown away by your list of options – and others may want to see additional metrics. This is why, way back in blog #2, I suggested thinking ahead on this with the following:
“SO, now you have wracked your brain on fields you think you might need, it is time for you (if you haven’t already) to talk to the business owners and other Senior Management, asking THEM what they want to see, the data they want to capture, and what reports and charts will be of interest and use for them. This may be department specific, specific to the company, or simply a need that this department head has. I would suggest that you have a list of tables/charts ready for them, so you can give examples of metrics you can deliver and what they would look like. This may guide them into selecting metrics that you know are valuable and may also spark ideas in their heads on other metrics they might want to see.”
What do you need to think about when creating your metrics? Questions to ask include:
- How is the metric meaningful to the organization?
- Does the metric potentially identify problem areas in the process that could be improved?
- Is the metric simple, understandable, logical, and repeatable?
- Does the metric provide timely information?
- Is the metric taken over time to identify trends?
- Is the actual data comparable to the expected data?
- Is the metric unambiguous?
- Is the data easy to collect?
Note that not ALL of these need to be true for any given metric; rather, run through this list when developing a metric to see whether it checks one or more of these boxes.
Most metrics for defects will fit one of the following types:
- The count of defects (by severity, app location, LOB, developer, defect type, etc.) uncovered to date during this testing effort
- Turnaround time of defects – time to start of refactoring, time to refactor, time to test after fix, total time from Open to Close, count of defects past SLA
- The count of defects (by severity, app location, LOB, developer, defect type, etc.) over time, tracked and reported daily from the start of the testing effort to today
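As a concrete sketch of the first two types, assume your repository can export defects with fields like team, severity, opened date, and closed date (these field names are illustrative, not from any specific tool):

```python
from collections import Counter
from datetime import date

# Hypothetical defect records, as might be exported from a defect repository.
# In practice you would read these from a CSV export or the tool's API.
defects = [
    {"id": 1, "team": "Team 1", "severity": "Critical",
     "opened": date(2024, 5, 1), "closed": date(2024, 5, 4)},
    {"id": 2, "team": "Team 2", "severity": "Medium",
     "opened": date(2024, 5, 2), "closed": None},  # still open
    {"id": 3, "team": "Team 1", "severity": "High",
     "opened": date(2024, 5, 3), "closed": date(2024, 5, 10)},
]

# Count of defects by (team, severity) -- the "snapshot" style of metric.
counts = Counter((d["team"], d["severity"]) for d in defects)

# Turnaround time in days (Open to Close) for resolved defects.
turnaround = {
    d["id"]: (d["closed"] - d["opened"]).days
    for d in defects
    if d["closed"] is not None
}

print(counts[("Team 1", "Critical")])  # 1
print(turnaround)                      # {1: 3, 3: 7}
```

The same grouped counts feed directly into the bar charts discussed below; only the grouping key changes (team, severity, status, defect type, and so on).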
Here are some examples of common defect metrics. By no means a comprehensive list, just some ideas to start your brain thinking:
A basic type – count how many defects of each Severity are open for each Team. That is, show me the teams that are having the most difficulty. Below we see that Teams 2 and 10 have the MOST open defects, but most are Medium Severity. However, Teams 1 and 5 have the most CRITICAL Severity defects, and Teams 5 and 6 have the most Critical and High Severity open defects combined.
Similar to the last one, we have the Teams at the bottom and the counts are of Defect Status for each Team. This might answer the question of which team has the most defects still not started (“Assigned” rather than “Fix in Progress”, for instance – Team 2), and which team has the most defects sitting there waiting to be tested (“Ready to Test” – Team 8, followed by Teams 3 and 5).
The count of Open and Closed defects by Severity, showing us that MOST of our defects are only of Medium Severity, and that about half of our defects are still open.
For this graph, we are looking at how many defects, of what Severity, are how many days past the agreed-upon SLA. If ALL defects were fixed within their SLA, this chart would be blank. However, it shows that a LOT of defects are 11 or more days past their SLA, and that the oldest defects just keep stacking up without being addressed and closed.
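The underlying calculation is simple. A minimal sketch, assuming a per-severity SLA agreement expressed in days (the SLA values here are invented for the example):

```python
from datetime import date

# Hypothetical SLA: days allowed from Open to Close, per severity.
SLA_DAYS = {"Critical": 3, "High": 7, "Medium": 14, "Low": 30}

def days_past_sla(severity: str, opened: date, today: date) -> int:
    """Days an open defect is past its SLA (0 if still within SLA)."""
    age = (today - opened).days
    return max(0, age - SLA_DAYS[severity])

today = date(2024, 6, 1)
print(days_past_sla("Critical", date(2024, 5, 20), today))  # 12 days old, SLA 3 -> 9
print(days_past_sla("Medium", date(2024, 5, 25), today))    # 7 days old, SLA 14 -> 0
```

Bucketing the results (1–5 days past, 6–10, 11+) by severity gives exactly the aging chart described above.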
Below is a very informative chart, counting how many defects are open or closed each day and the number of open defects remaining. This shows that more defects are being opened than closed each day, so the number of open defects is increasing.
This graph counts the number of defects by determined or suspected defect TYPE, which tells us WHERE in the process the most defects are being introduced. In this case, you can see that the Requirement Gathering, Coding, and Data Integration processes seem to have plenty of room for improvement worth researching. However, you might also notice that by far most defects have not yet had a determination made as to their cause. THAT points to a need for process improvement in how defects are filled out, or in how the cause of a defect is determined.
This is another set of data over time, showing how defects are assigned to each team day by day. This particular graph shows that Teams 3 and 6 had a spike of defects and continue to have the most open defects, and that Team 2 has consistently maintained only a small number of open defects.
Another one over the time period of the testing effort. This simplifies the data: we have this many Open defects each day and this many Closed defects, increasing every day (as more and more defects are fixed and closed), with the goal of the Closed line eventually meeting the third line – the total number of defects logged to date.
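Those three lines are just running totals of the daily open/close counts. A sketch with invented daily numbers:

```python
from itertools import accumulate

# Hypothetical daily counts of defects opened and closed during the test effort.
opened_per_day = [5, 8, 6, 4, 7]
closed_per_day = [1, 3, 4, 6, 5]

# Cumulative totals give the three lines of the chart.
total_logged = list(accumulate(opened_per_day))  # defects logged to date
total_closed = list(accumulate(closed_per_day))  # defects closed to date
still_open = [o - c for o, c in zip(total_logged, total_closed)]

print(total_logged)  # [5, 13, 19, 23, 30]
print(total_closed)  # [1, 4, 8, 14, 19]
print(still_open)    # [4, 9, 11, 9, 11]
```

When the Closed line converges on the Logged line (still_open approaching zero), the testing effort is winding down; if still_open keeps growing, as in the chart above, defects are being opened faster than they are being closed.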
A simple one again. This counts the number of currently open defects (as of the socialization of this chart), broken down by Severity.
These are just a handful of the many different charts of data you can display. Keep in mind that:
1) You are making charts for the quick and easy understanding of the current defect situation and trends of defect data, so the appropriate people are informed and can make observations and act if needed.
2) You should vet them throughout development, so formats and layouts are agreed upon. If the CIO wants to see the data in a Pie chart rather than a Bar chart, or wants the wedge colors a specific way because they will understand it better, make it so. THAT is your goal: that they understand it.
3) They need to be easily distributed.
4) You should be able to create these charts from current raw data from the defect repository, quickly and without error. It is preferable to create them automatically with the push of a button, so they can be generated at a moment’s notice.
5) They should be easily modified. That is – the data behind each chart and the charts themselves should be able to be changed at a moment’s notice. I assure you, you will be making lots of changes to them. People will see them and ask for minor changes, alternate charts, different colors, bigger font, or the same thing but for THEIR project. Prepare to be very popular.
So, there it is – Reporting Metrics and Identifying Trends in a nutshell.
Determine WHY you are creating metrics, WHO needs to see them, HOW OFTEN they need to be sent out, and WHAT you should be creating for your audience.
That wraps up my entire Defect Management series of blogs. I hope you have enjoyed them and learned from them, so you feel more comfortable instituting good Defect Management practices and processes within your own organization.