It’s well known that keeping track of results from test cases against builds, configurations, environments and so on can become difficult to manage. That’s one of the reasons we implement test management tools: they let us log results against these different aspects of our testing and then quickly produce traceability reports to show our coverage. Even with the right tools, though, the permutations and combinations can become difficult to track.
If we have just one test case that we need to run against 2 platforms and 2 operating systems, along with 3 configurations relevant to our application, we already have 12 permutations (2 × 2 × 3). That’s 12 permutations for a single test case. And we’re not even considering different versions of the application being checked, which would multiply the permutations further.
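To make the arithmetic concrete, here is a minimal sketch of that enumeration. The platform, OS and configuration names are hypothetical examples, not values from any particular tool:

```python
# Illustrative only: enumerating test-run permutations.
# All names here are made-up examples.
from itertools import product

platforms = ["Intel", "ARM"]
operating_systems = ["Windows XP", "Linux"]
configurations = ["Config A", "Config B", "Config C"]

# One entry per (platform, OS, configuration) combination
permutations = list(product(platforms, operating_systems, configurations))
print(len(permutations))  # prints 12 — for just one test case
```

Add a second test case or a couple of application versions and the list multiplies accordingly, which is exactly why a flat spreadsheet struggles.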
The whole thing can quickly get out of hand, and it certainly will if you’re using Excel and don’t have some sort of test management system in place to help. It also helps if we break this down and focus on the relationship between just three variables:

- set of test cases
- configuration
- version of application
Here the configuration entity might relate to different operating systems, platforms, environments, etc. We define one record for each configuration, where a single configuration could cover Windows XP, Intel, and so on. In this way the traceability matrix becomes a little easier to manage.
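As a sketch of what such a configuration record might look like, here is a minimal data model. The field names and values are assumptions for illustration, not QAComplete’s actual schema:

```python
# Hypothetical sketch of a configuration record.
# Field names are assumptions, not any tool's real schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class Configuration:
    name: str              # a meaningful, unique label for reporting
    operating_system: str
    platform: str
    environment: str

# One record per combination we actually test against
win_xp_intel = Configuration(
    name="WinXP-Intel-Staging",
    operating_system="Windows XP",
    platform="Intel",
    environment="Staging",
)
```

Collapsing OS, platform and environment into one named record is what keeps the matrix two-dimensional: test cases down one axis, configurations across the other.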
We can see this in practice in this video on the test management tool QAComplete, which shows how QAComplete manages the relationship between test sets, configurations and versions of the application.
The benefits of this approach are threefold:
- you can view a configuration and see all past results recorded against it
- you can view a test case and see each configuration it was run against, along with the result
- you can see which tests have been run for a particular version of the application against a particular configuration
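The three views above all fall out of keying each result by (test case, configuration, version). A minimal sketch, with made-up test case IDs, configuration names and versions:

```python
# Hypothetical result store: each result keyed by
# (test case, configuration, version). All names are illustrative.
results = {
    ("TC-1", "WinXP-Intel", "v1.0"): "Pass",
    ("TC-1", "Win7-AMD",    "v1.0"): "Fail",
    ("TC-1", "WinXP-Intel", "v1.1"): "Pass",
}

# 1. All past results against one configuration
by_config = {k: v for k, v in results.items() if k[1] == "WinXP-Intel"}

# 2. Every configuration a test case was run against, with the result
by_test = {k[1:]: v for k, v in results.items() if k[0] == "TC-1"}

# 3. Tests run for a particular version against a particular configuration
by_version = {k[0]: v for k, v in results.items()
              if k[2] == "v1.1" and k[1] == "WinXP-Intel"}
```

Each view is just a filter on a different part of the key, which is what a test management tool is doing for you behind its reports.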
The key here is to define the configurations in a meaningful way. If you get this right (perhaps using custom fields to track different aspects like OS, platform, etc.) then your reporting and traceability will deliver the information you need.
And this is the crux of the topic: a lot depends on the information you need to extract once testing is complete. Different tools track and report this information in different ways, so the reports your chosen test management tool delivers may or may not meet your needs.