The Aggregated Test Management Results Report

January 11, 2012
On a fairly regular basis we get asked for a specific test management report that shows the aggregated results from multiple runs. The idea is that, ultimately, you can show that every testcase was run and that every one passed.

So, for example, on test cycle 1 you may get…

my_test_case_A:  failed
my_test_case_B:  passed
my_test_case_C:  passed

Then on cycle 2 you get…

my_test_case_A:  passed

The aggregated test management information would then show the following:

my_test_case_A:  passed
my_test_case_B:  passed
my_test_case_C:  passed

So we’re taking the latest result for each testcase and showing that result as if all the tests had been run in one go. Very misleading when you consider it from that perspective.
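
To make the mechanics concrete, here’s a minimal sketch (in Python, using the made-up testcase names and results from the example above) of what this “latest result wins” aggregation amounts to:

from typing import Dict, List

# Hypothetical cycle data, mirroring the example above.
cycles: List[Dict[str, str]] = [
    {"my_test_case_A": "failed", "my_test_case_B": "passed", "my_test_case_C": "passed"},  # cycle 1
    {"my_test_case_A": "passed"},                                                          # cycle 2
]

def aggregate_latest(cycles: List[Dict[str, str]]) -> Dict[str, str]:
    # Latest result wins: each later cycle simply overwrites earlier results.
    aggregated: Dict[str, str] = {}
    for cycle in cycles:
        aggregated.update(cycle)
    return aggregated

print(aggregate_latest(cycles))
# {'my_test_case_A': 'passed', 'my_test_case_B': 'passed', 'my_test_case_C': 'passed'}

Notice that nothing in the output records that my_test_case_B and my_test_case_C were never run in cycle 2; the failure from cycle 1 simply disappears.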

There is a certain level of attraction to this test management information. It’s one of those reports that can be filed under “gives us and our customer a nice cosy feeling that everything has passed and the product is ready for release”. From the QA engineer’s point of view it’s nice to show your customer a simple list of results with every case passed. Customers see this as a nice-to-have too. However, if I were the client receiving this, I would be horrified. Reporting results in this way hides key information and misleads on a number of levels.

Firstly, it demonstrates that little or no attention has been paid to regression testing. If you are just re-running failures, then you’re probably not re-running the tests that passed first time round. If you’ve got to do a rerun, that implies you’ve made changes to the application. If you’ve made changes to the application, then you need to be regression testing. As a customer receiving such a management report, all I can say is that I have absolutely no visibility of any regression testing that’s taken place. At this point I’m going to start questioning the worth of your current test management tool (see https://www.testmanagement.com/blog/2011/10/questioning-the-worth-of-your-current-test-management-tool/).
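
This gap is trivial to surface if you keep the per-cycle results. As a rough sketch (Python again, reusing the made-up cycle data from earlier), you can list every test that passed before the change but was never re-run after it:

# Hypothetical per-cycle results, as in the earlier example.
cycle_1 = {"my_test_case_A": "failed", "my_test_case_B": "passed", "my_test_case_C": "passed"}
cycle_2 = {"my_test_case_A": "passed"}

def untested_after_change(previous: dict, rerun: dict) -> set:
    # Tests that passed previously but were not executed in the re-run cycle.
    # The aggregated report silently carries these forward as "passed",
    # even though the application has changed since they last ran.
    return {name for name, result in previous.items()
            if result == "passed" and name not in rerun}

print(sorted(untested_after_change(cycle_1, cycle_2)))
# ['my_test_case_B', 'my_test_case_C'] -- no regression coverage for these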

Secondly, it entices the QA team to aim for a goal of 100% of tests completed with 100% passes. This leads to a kind of “oh how lovely, we have a perfect application to release” syndrome. The team should not be aiming for 100% passes; they should be aiming to find defects. It’s too easy to think testing is complete when you’re inadvertently enticed into producing the perfect report rather than taking the perfect approach to finding bugs.

Thirdly, it implies that little or no attention is being given to version control of the application under test. Such a report presents all testcases as having been run against “the” application, rather than some run against version x of the application and others run against version y. Your product might be under version control, but you’re not recording tests against the version they were run on. Without this test management information you have no traceability and no repeatability within your process.
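
The fix is cheap: record each result against the build it actually ran on. Here’s a minimal sketch (Python once more, with hypothetical version numbers) of version-aware result recording:

from collections import defaultdict

# Hypothetical run log: each result is keyed to the build it ran against.
runs = [
    ("my_test_case_A", "1.0.0", "failed"),
    ("my_test_case_B", "1.0.0", "passed"),
    ("my_test_case_C", "1.0.0", "passed"),
    ("my_test_case_A", "1.0.1", "passed"),
]

results_by_version = defaultdict(dict)
for testcase, version, result in runs:
    results_by_version[version][testcase] = result

for version in sorted(results_by_version):
    print(version, results_by_version[version])
# 1.0.0 {'my_test_case_A': 'failed', 'my_test_case_B': 'passed', 'my_test_case_C': 'passed'}
# 1.0.1 {'my_test_case_A': 'passed'}

Reported this way, it’s immediately obvious that my_test_case_B and my_test_case_C have never been run against 1.0.1, which is exactly the information the aggregated report throws away.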

I suspect that most requests for such a report come from either an inexperienced QA team or the end client. A client that’s looking to see information conveyed in this way ought to be educated by the QA team. A team that’s providing information to the client in this way ought to be improving their test management process as a matter of urgency.
