Understanding the Complexities of Test Management

April 13th, 2012 by Bill Echlin

Over the next seven weeks we're going to take a detailed look at seven complex aspects of test management. We'll look at how to address these complexities with some of the common tools available today, exploring the strengths and weaknesses of popular tools and seeing how best to set them up to overcome these issues. In short, we aim to guide you towards the best way to implement your chosen test management tool so that it better supports your process.

In this series of weekly blog posts we’re going to examine the following…

1. Assignment: Assignment of Test Cases in Test Management Tools

At some point in the process we need to assign a test to a tester. The question is: are you looking to assign it at the step, case or set level? Do you need to allocate different tests within a set to different QA engineers? How do you track the difference between assignment for writing a test case and assignment for executing it? All of these points complicate a seemingly simple concept. When you look at the different ways tools support this capability, you'll find it's not quite as straightforward as you might first think.

2. Version Control: Version Control of Test Cases in Test Management Tools

We version control the code we release. In many cases we need to control versions of the test cases we write and execute too. When working with version control you need to consider whether replicating different versions affects already-executed tests, whether you need to be able to choose between versions at execution time, and finally whether you can compare the different versions easily.
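That last point, comparing versions, is worth making concrete. As a rough sketch (the case ID and step text are invented), a line-level diff of two stored versions of a test case's steps is enough to show a tester what changed between revisions:

```python
import difflib

# Hypothetical: two stored revisions of the same test case's steps.
v1 = ["Open login page", "Enter valid credentials", "Click Login"]
v2 = ["Open login page", "Enter valid credentials",
      "Click Sign In", "Verify dashboard loads"]

# A unified diff shows exactly which steps changed between revisions.
diff = list(difflib.unified_diff(v1, v2,
                                 fromfile="case-42@v1",
                                 tofile="case-42@v2",
                                 lineterm=""))
```

Some tools surface a comparison like this directly; with others you may only get the latest copy, which is precisely the limitation to check for before you commit to one.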

3. Parameterisation: Using Test Case Parameters in Test Management Tools

Writing similar test cases repetitively becomes tedious very quickly. It's open to mistakes and is also a poor use of the QA engineer's time. To help address this, some tools provide the ability to parameterise test cases: you write one test case, then scale up with data that creates many permutations. It's a nice concept, but one that's difficult to execute effectively, a difficulty sometimes compounded by the way in which tools implement this feature.
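The "write once, scale with data" idea can be shown in a few lines. The template wording and parameter values here are invented for illustration; the point is that one written case expands into the full cross product of its parameters:

```python
from itertools import product

# Hypothetical parameterised case: one template, two parameter lists.
template = "Verify login on {browser} as {account}"
browsers = ["Chrome", "Firefox", "Safari"]
accounts = ["standard", "admin"]

# One written case expands to len(browsers) * len(accounts) permutations.
cases = [template.format(browser=b, account=a)
         for b, a in product(browsers, accounts)]
```

Where tools differ is in everything around this expansion: whether each permutation gets its own result record, whether you can exclude invalid combinations, and how the data tables are maintained over time.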

4. Libraries: Test Management Libraries

The concept of having a library of reusable test cases is as old as the discipline of software testing itself. How your test management tool allows you to manage this library makes a big difference to your ability to implement your QA process. Can updates in the library be replicated to test cases already assigned? Can you replicate updates across projects? Is version control on the library linked to the executed test cases? All points worth considering when you decide how best to administer your test library.
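The first of those questions, replicating a library update out to already-assigned copies, can be sketched as follows. The IDs, field names, and `update_library` helper are all hypothetical; real tools vary widely in whether copies keep any link back to their library source at all:

```python
# Hypothetical: copies taken from a shared library keep a back-reference
# ("source"), so a library update can be pushed to already-assigned copies.
library = {"lib-7": {"title": "Password reset", "rev": 1}}

assigned = [
    {"source": "lib-7", "rev": 1, "title": "Password reset", "project": "A"},
    {"source": "lib-7", "rev": 1, "title": "Password reset", "project": "B"},
]

def update_library(case_id: str, new_title: str) -> None:
    """Bump the library revision, then replicate the change to linked copies."""
    entry = library[case_id]
    entry["title"] = new_title
    entry["rev"] += 1
    for copy in assigned:
        if copy["source"] == case_id:
            copy.update(title=entry["title"], rev=entry["rev"])

update_library("lib-7", "Password reset via email")
```

If a tool copies by value with no back-reference, this replication simply isn't possible, and every library change becomes a manual hunt through assigned tests.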

5. Result Aggregation: Aggregating Results with Test Management Tools

The software products we work on are complex, and that complexity needs to be modelled within our test setup to a degree. Core modules of the product we're QA'ing will have a set of core test cases; specific modules, specific test cases. When modelling this setup within your test management tool, there comes a point where reporting on results becomes more complex. Can you aggregate results to deliver the high-level reports you need for your clients?
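Aggregation itself is simple when the results are accessible as flat records; the module names and outcomes below are invented, but the roll-up pattern is the one a reporting feature has to support:

```python
from collections import Counter

# Hypothetical flat result log: (module, test case, outcome).
results = [
    ("core",     "startup",  "pass"),
    ("core",     "shutdown", "fail"),
    ("payments", "refund",   "pass"),
    ("payments", "invoice",  "pass"),
]

# Roll individual outcomes up to per-module and product-level summaries.
per_module: dict[str, Counter] = {}
for module, _case, outcome in results:
    per_module.setdefault(module, Counter())[outcome] += 1

overall = Counter(outcome for _module, _case, outcome in results)
```

The hard part in practice isn't the arithmetic, it's whether your tool lets you slice results along the module structure you modelled in the first place.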

6. Retesting: Identifying Retests with Test Management Tools

Tests fail. Bugs get raised. It's a clear-cut part of the process. What's not quite so clear cut is how we approach retesting the fixed bugs. Do we delve into our defect tracking tool, look for all the bugs that are fixed, and then write test cases for each fixed defect? Or do we search for failed test cases and re-run those? It sounds simple, but pulling that information out of your test management tool isn't always easy.
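Conceptually the query is a join between two systems. As a toy illustration, with invented record shapes standing in for exports from a test management tool and a defect tracker, the retest list is just the failed runs whose linked bug is now fixed:

```python
# Hypothetical exported records from a test tool and a defect tracker.
test_runs = [
    {"case": "checkout", "status": "failed", "bug": "BUG-101"},
    {"case": "login",    "status": "passed", "bug": None},
    {"case": "search",   "status": "failed", "bug": "BUG-102"},
]
fixed_bugs = {"BUG-101"}  # bugs the tracker currently reports as fixed

# Retest candidates: failed cases whose linked bug has been fixed.
retests = [run["case"] for run in test_runs
           if run["status"] == "failed" and run["bug"] in fixed_bugs]
```

The difficulty in real life is that the bug link often lives only in the defect tracker, or only in a free-text comment, so the join above has nothing reliable to join on.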

7. Configurations and Releases: Tracking Configurations and Releases with Test Management Tools

Every test is run against a release of the product and a specific configuration of it. Sounds simple. Then you start looking at daily builds, and at the different platforms and operating systems the tests must run on. Before you know it, these few variables result in thousands of permutations. Keeping track of this is absolutely key to your test process. How easy is it to track? It largely depends on the capabilities of the tools you use.
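The explosion is just a cross product, and it's worth seeing how quickly modest numbers compound. The dimension sizes below are invented for illustration:

```python
from itertools import product

# Hypothetical dimensions; the cross product is what you end up tracking.
builds = [f"1.0.{n}" for n in range(30)]  # a month of daily builds
platforms = ["x86", "x64", "ARM"]
operating_systems = ["Windows", "macOS", "Linux", "Android"]
test_cases = 40

configurations = list(product(builds, platforms, operating_systems))
# 30 builds * 3 platforms * 4 operating systems = 360 configurations,
# and with 40 test cases that's 14,400 potential execution records.
executions = len(configurations) * test_cases
```

Four small variables and a modest test suite already produce five-figure execution counts, which is why configuration tracking has to be a first-class feature of the tool rather than a naming convention.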

Most of these points are core processes that we all follow as a matter of course, yet there's more to most of them than first meets the eye. When we look to support that process with a tool, we can find that the process the tool imposes doesn't quite match our requirements. That's why it's important to examine these aspects before implementing a test management tool, or at least to understand the consequences of bending your process to fit the approach your chosen tool imposes. In this upcoming series we'll examine all of these points in detail and help you find your way through these test management complexities.

