Aggregating Results with Test Management Tools

July 4, 2012

In many cases we’ll need our test management tool to aggregate and report on results from various perspectives. For example, we may need to aggregate across different projects, application modules, runs or cycles. Whilst this sounds simple in theory, it’s not always so simple in practice. Much depends on how you structure things within your test management tool and how well that tool copes with aggregating results.

Another area of complexity develops when you want to see the last run status values for a group of tests across different releases or builds of a product. We argue against the aggregated test management results report in this post. However, there are times when this can be useful. Perhaps you just need to get a feel for the status of all results across a series of releases.

This sort of information is never quite as simple to gather as you may expect. Complexities arise around how you determine which records should be included in your report. If you’re not grouping results by a single release, then what are you grouping them by? A cycle, functional area, date, etc.? All of these could be considered valid groupings in certain scenarios. It’s important, though, not to forget that such a grouping can misrepresent the quality of your product at a particular point in time.

The exact quality of an application can only ever be related to a specific build of the product. If you’re aggregating across different builds then, ultimately, you are trying to report on the quality status of a product/build that has never existed.
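To make that point concrete, here is a minimal sketch (the run records and status values are hypothetical, not QAComplete data) showing how a "last run status" summary silently mixes builds:

```python
from collections import defaultdict

# Hypothetical test run records: (testcase, build, status), in run order.
runs = [
    ("login",    "1.0", "Passed"),
    ("checkout", "1.0", "Failed"),
    ("login",    "1.1", "Failed"),
    ("search",   "1.1", "Passed"),
]

# Keep only the last run status for each testcase, regardless of build.
last_run = {}
for testcase, build, status in runs:
    last_run[testcase] = (build, status)

# Aggregate those last run statuses into a single summary.
summary = defaultdict(int)
for build, status in last_run.values():
    summary[status] += 1

# The summary combines results from builds 1.0 and 1.1, so it describes
# a "product" that was never actually shipped as one build.
print(dict(summary))                              # {'Failed': 2, 'Passed': 1}
print({t: b for t, (b, _) in last_run.items()})   # builds 1.0 and 1.1 mixed
```

The failure count here comes partly from build 1.0 and partly from build 1.1; no single build of the product ever had that pass/fail profile.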

Either way, sometimes it is useful to create statistics based on results aggregated by some other category. Within QAComplete we have a number of options out of the box that we can employ.

1. Within the library we can group by a folder structure. When we view testcases in that folder structure we can see the last run status for all records. This then gives us an indication of quality, aggregated by functional area across different releases, based on the last run of each testcase.

2. On the library test management dashboard we can view the last run status of ALL testcases (and perhaps apply a filter based on functional area or some other grouping). Then, for example, when we drill down on this report we’ll see the last run statuses that failed, across all releases. This can be useful for identifying retests for subsequent builds.
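Both options above boil down to two operations on last-run data: group by some category, and filter for failures. This is a rough sketch of that logic (the folder names and status values are made up for illustration, not taken from QAComplete):

```python
from collections import defaultdict

# Hypothetical last-run records, each tagged with the library folder
# ("functional area") the testcase lives under.
testcases = [
    {"name": "TC-01", "folder": "Payments", "last_status": "Passed"},
    {"name": "TC-02", "folder": "Payments", "last_status": "Failed"},
    {"name": "TC-03", "folder": "Search",   "last_status": "Passed"},
    {"name": "TC-04", "folder": "Search",   "last_status": "Failed"},
]

# Option 1: aggregate last run status per folder/functional area.
by_folder = defaultdict(lambda: defaultdict(int))
for tc in testcases:
    by_folder[tc["folder"]][tc["last_status"]] += 1

# Option 2: drill down to last-run failures as retest candidates.
retests = [tc["name"] for tc in testcases if tc["last_status"] == "Failed"]

print(dict(by_folder["Payments"]))  # {'Passed': 1, 'Failed': 1}
print(retests)                      # ['TC-02', 'TC-04']
```

Note that, as discussed above, each folder's counts may still mix results from different releases, since only the most recent run per testcase is kept.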

The limitation with QAComplete is that all of this is reported based around the ‘last run status’. Usually this is enough, but if you need to aggregate by some other category then things become more difficult. For example, defining a cycle which covers more than one release/build is a concept that isn’t supported within QAComplete. In addition, there is no capability to report across projects. Creating cross-project statistics requires custom reports, which may mean getting your hands dirty with Crystal Reports and SQL.
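For anyone heading down the custom-report route, the SQL tends to take roughly the following shape. The table and column names here are invented for illustration (QAComplete's actual schema will differ), and SQLite stands in for the real database:

```python
import sqlite3

# Hypothetical schema standing in for the real test-run tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE test_runs (
        project  TEXT,
        test_id  TEXT,
        run_date TEXT,
        status   TEXT
    );
    INSERT INTO test_runs VALUES
        ('ProjectA', 'TC-01', '2012-06-01', 'Failed'),
        ('ProjectA', 'TC-01', '2012-06-15', 'Passed'),
        ('ProjectB', 'TC-02', '2012-06-10', 'Failed');
""")

# Pick the latest run per (project, testcase), then count statuses
# per project -- the core of a cross-project last-run-status report.
rows = conn.execute("""
    SELECT project, status, COUNT(*) AS testcases
    FROM test_runs AS t
    WHERE run_date = (
        SELECT MAX(run_date)
        FROM test_runs
        WHERE project = t.project AND test_id = t.test_id
    )
    GROUP BY project, status
    ORDER BY project, status
""").fetchall()

print(rows)  # [('ProjectA', 'Passed', 1), ('ProjectB', 'Failed', 1)]
```

The key step is the correlated subquery that isolates each testcase's most recent run; everything after that is ordinary grouping and counting, which is why a reporting tool like Crystal Reports can sit on top of it.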

Ultimately, with test management we’d argue that you have to be careful with aggregated reports, especially where you are grouping results across different builds/iterations of the product. This type of aggregation can give a misleading indication of product status. Having said that, it’s often useful to see which testcases failed on old builds of the application, so that you can create sets of retests for subsequent releases to validate fixes and ensure failed testcases are now passing. The subject of identifying testcases for retesting is, conveniently, the next topic in this series of test management complexity blog posts.