The Test Management Octopus

April 5th, 2011 by Bill Echlin

There are some interesting comparisons between octopuses and test management. For starters, did you know that an octopus doesn’t have eight arms? It turns out it has six arms and two legs (How Many Arms Does an Octopus Have?). Either way, that’s a lot of limbs to manage. The management of the test process presents most of us with a similar challenge.

The concepts behind test management are simple. Think up a test case, define the steps, write down the expected results, execute and log the result. It all boils down to that. The larger the product we’re testing, the more test cases we need to create. You couldn’t wish for a simpler foundation. Simple is great, especially in the software development arena where complexity seems to reign.

Simple, as in an octopus having eight arms, is good too. Only it doesn’t. On closer inspection an octopus effectively has six arms and two legs. Observations have shown that an octopus uses the two rearmost tentacles to get around the seabed, and the other six for propulsion when swimming and for manipulating objects. Research has also shown that when octopuses get in a tangle they use two specific tentacles to help untangle themselves. Simple on the surface. Not so simple when you start to look a little deeper.

This goes for test management too. The principles of writing and managing the documents are simple on the surface. Scale things up and you’ve got eight tentacles that frequently seem to get tangled up.

Managing this in practice becomes very complex very quickly, and the end result never seems to be as clear cut as the simple foundation. Sticking with the octopus analogy, I’d suggest that there are eight test management tentacles that result in this complexity:

1. Multiple testers adding, deleting and modifying documents concurrently.

2. Evolution of documents over a period of time requiring version control.

3. Running different versions of a test case against different versions of the product, which means multiple sets of results.

4. Linking results to the requirements in order to track coverage.

5. Some organisations impose a review process covering the written documents.

6. Managing variations of very similar test cases which are written for product or module variations.

7. Tracking the execution against the same version of a product being run on different configurations/environments.

8. The need to identify which test cases need to be re-run when a defect is fixed and requires re-checking.

The concept of one test case, one person, one requirement and one product version is clearly easy to comprehend. It’s the eight tentacles that start to add layers of complexity. What happens when multiple testers are adding, deleting and modifying the repository of data concurrently? This raises issues similar to those developers face with source code control. Only it’s more complicated for us when you add in the next tentacle: the need to run different versions of a test against different versions of the product we’re checking. Then link that to another tentacle, where you need to tie those cases back to the requirements that they cover (and the requirements themselves are like shifting sands in their own right). You get the picture. The result is eight legs of a test management octopus that are damn difficult to keep track of.
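To make that concrete, here is a minimal sketch (in Python, and not the schema of any particular tool) of how those tentacles might map onto a test management data model. The names and fields are illustrative only.

# A result links a specific version of a test case to a specific product
# version, configuration and requirement - which is where the tangle comes from.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCase:
    case_id: str
    version: int                                      # tentacle 2: documents evolve over time
    steps: List[str]
    expected: str
    covers: List[str] = field(default_factory=list)   # tentacle 4: requirement IDs

@dataclass
class TestResult:
    case_id: str
    case_version: int                 # tentacle 3: which version of the case ran...
    product_version: str              # ...against which version of the product
    configuration: str                # tentacle 7: the environment it ran on
    outcome: str                      # "pass" / "fail"
    executed_by: str                  # tentacle 1: many testers working at once

# One case, run against two different builds on two different environments:
login = TestCase("TC-001", 2, ["Open app", "Enter credentials", "Submit"],
                 "User lands on dashboard", covers=["REQ-12"])
results = [
    TestResult("TC-001", 2, "v3.1", "Windows 7 / IE8", "pass", "alice"),
    TestResult("TC-001", 2, "v3.2", "Windows XP / Firefox", "fail", "bob"),
]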

Octopuses are one of the most intelligent forms of sea life. They can manipulate Rubik’s cubes and learn to open jam jars. Whilst the test management process is hardly as simple as opening a jam jar, I would suggest that we’ve got a creature which is far more complex and fascinating than many seem to give it credit for.

 


Manual and Automated Test Management

April 4th, 2011 by Bill Echlin

In this test management webinar recording we look at how to schedule automated tests to run daily during the QA cycle. Automating the execution of your testing, and tracking those results alongside your manual test efforts, is key to maximising your resources. Integrating this process into an automated build process brings even more benefits, with the automation kicked off by the completion of the build. With effective reporting and dashboards you can push back on development teams that deliver poorly unit-tested builds to your QA team. Taking this approach is a good way to free up your team to concentrate on what is really important: usually testing new features, not undertaking unit testing on behalf of the development team.

To run a full regression set on an automated schedule, organise your various tests by function within TestComplete. Then, using a batch job or scheduled task in Software Planner, you can kick off your automation unaided. With Software Planner and TestComplete integrated, the results of these scheduled runs are automatically recorded in Software Planner. With the capability to automatically send PDF reports via email, you can even send the automation run results directly to the development team. In this way you can reduce some of the more repetitive tasks your team usually gets sucked into and push that work back to the development team.
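As an illustration of the scheduling side, here is a hedged sketch of the kind of script a Windows scheduled task (or a Software Planner batch job) might invoke nightly, or that a build server might call when the build completes. The install path, project suite and command-line switches are assumptions; check them against your own TestComplete version.

# A minimal sketch, assuming a Windows machine with TestComplete installed.
import subprocess
import sys

TESTCOMPLETE = r"C:\Program Files\SmartBear\TestComplete\Bin\TestComplete.exe"  # hypothetical install path
SUITE = r"C:\Tests\RegressionSuite.pjs"                                         # hypothetical project suite

def run_regression() -> int:
    # /run starts the suite, /exit closes TestComplete afterwards,
    # /SilentMode suppresses interactive dialogs on an unattended machine.
    completed = subprocess.run([TESTCOMPLETE, SUITE, "/run", "/exit", "/SilentMode"])
    return completed.returncode

if __name__ == "__main__":
    sys.exit(run_regression())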

Key to improving the quality of your testing is the development of well-written test steps. Make sure you have defined the tests and expected results in Software Planner. After analysing and recording the results, you can automatically generate defects and prioritise them for resolution. You can also send your developer an email notification of these results with a link to the defect, if you so wish. Again, this helps to streamline the test process and reduce the usual administrative tasks your team has to contend with.
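The defect generation and notification described above is handled by Software Planner itself; purely as an illustration of the idea, a home-grown version might look something like this, with the mail server, addresses and defect URL all placeholders.

# A minimal sketch: when a step fails, build a defect notification from the
# written step and expected result and email the developer a link.
import smtplib
from email.message import EmailMessage

def notify_defect(case_id: str, step: str, expected: str, actual: str,
                  defect_url: str, developer: str) -> None:
    msg = EmailMessage()
    msg["Subject"] = f"Defect raised from {case_id}"
    msg["From"] = "qa-team@example.com"          # placeholder address
    msg["To"] = developer
    msg.set_content(
        f"Step: {step}\nExpected: {expected}\nActual: {actual}\n"
        f"Defect record: {defect_url}"
    )
    with smtplib.SMTP("mail.example.com") as smtp:   # placeholder mail server
        smtp.send_message(msg)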

One simple tip for effectively recording failures is to manually capture a video of the on-screen test using Jing (available for free at http://www.jingproject.com). With Jing you can set the part of the screen you wish to record, record the action, upload it to the web, and copy the link into Software Planner. Again, this is another time saver for your team as you reduce the amount of time spent typing up defect records.

From here you are able to view the results of your execution runs in various dashboards. You can view progress trends and failure trends. The ability to compare the results of your manual and automated test cases is also a useful feature of the dashboards.
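If you wanted to reproduce that comparison outside the dashboards, the calculation itself is simple. A rough sketch, with made-up result records:

# Pass rates for manual versus automated runs from a list of result records.
from collections import Counter

results = [
    {"type": "automated", "outcome": "pass"},
    {"type": "automated", "outcome": "fail"},
    {"type": "manual", "outcome": "pass"},
    {"type": "manual", "outcome": "pass"},
]

def pass_rate(records, run_type):
    outcomes = Counter(r["outcome"] for r in records if r["type"] == run_type)
    total = sum(outcomes.values())
    return outcomes["pass"] / total if total else 0.0

for run_type in ("manual", "automated"):
    print(f"{run_type}: {pass_rate(results, run_type):.0%} passed")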

Combining Software Planner and TestComplete gives you the capability to track progress across the whole process. You can complete a wide variety of tests automatically and quickly analyse releases delivered to the team for regression issues. Just a few of these simple steps can make significant improvements in your team’s efficiency and save a valuable amount of time. All in all, small improvements, supported by the right test management tools, can deliver significant gains in the effectiveness of your team.

 


Manual Test Management – Planning

October 1st, 2010 by Bill Echlin

When it’s time to start planning your manual test efforts, being thorough in your preparation is the best way to ensure success. The most efficient way to begin developing your cases is to have your projects organized with test management planning tools such as Software Planner and TestComplete. These tools can help keep your tests organized, are easy to use, and require little in the way of maintenance. Once you’ve got your projects organized into separate automation and manual folders within the software’s library, you can start creating and executing your manual tests.

The key to planning successful manual execution is to develop requirements that are descriptive enough to give testers and programmers the information they need, while leaving out extraneous information and unnecessary detail. When writing your requirements, make sure to explain any processes or details that you refer to. Although you do want to keep the description as short as possible, don’t leave out any important information. Manual test management tools are useful when writing requirements as they provide full traceability between cases and requirements.

When creating your requirements, it’s helpful to organise them by release within your management tool. Include numbered steps for any logic that needs to be followed, so that testers know where to place their focus. It’s helpful to include a screenshot of any prototypes, so that both testers and developers can see the desired result. Finally, don’t forget to include business rules, so that you can identify the needed fields, define how large they should be and whether they are required. This not only gives developers something to code from, but gives testers something to work from.
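As an illustration, business rules of the kind described above can be written down in a form both a developer and a tester can work from. The field names and limits here are made up for the example.

# Field rules captured alongside the requirement, plus a tester's checklist.
BUSINESS_RULES = [
    {"field": "username", "max_length": 50,  "required": True},
    {"field": "email",    "max_length": 254, "required": True},
    {"field": "nickname", "max_length": 30,  "required": False},
]

def violations(record: dict) -> list:
    """Return the rules a submitted record breaks."""
    problems = []
    for rule in BUSINESS_RULES:
        value = record.get(rule["field"], "")
        if rule["required"] and not value:
            problems.append(f"{rule['field']} is required")
        if len(value) > rule["max_length"]:
            problems.append(f"{rule['field']} exceeds {rule['max_length']} characters")
    return problems

print(violations({"username": "", "email": "qa@example.com"}))  # ['username is required']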

Once you’ve got your requirements and business rules for each case that you want to create, it’s time to start running the tests. For manual cases, it’s a good idea to run positive, negative and performance tests. Positive tests cover the application the way it was designed, checking what should happen using valid data. Negative tests try to push the boundaries, covering those conditions as a way of trying to break the code. Performance tests check how quickly the application responds while it’s running. You can review the results of each of these types of case in your test management tool, and compare pass/fail rates and other data within the program’s dashboard.
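To make the three categories concrete, here is a small sketch using a made-up discount calculation as the logic under test; it illustrates the idea rather than how any particular tool expresses it.

import time

def apply_discount(price: float, percent: float) -> float:
    """Stand-in for the application logic being tested."""
    if price < 0 or not 0 <= percent <= 100:
        raise ValueError("invalid input")
    return round(price * (1 - percent / 100), 2)

# Positive test: valid data, the behaviour the design intends.
assert apply_discount(100.0, 10) == 90.0

# Negative test: push the boundaries and try to break it.
try:
    apply_discount(100.0, 150)
    raise AssertionError("expected an out-of-range discount to be rejected")
except ValueError:
    pass

# Performance check: the operation should come back quickly.
start = time.perf_counter()
for _ in range(10_000):
    apply_discount(100.0, 10)
assert time.perf_counter() - start < 1.0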

By following these best practices for creating your manual test cases, you can help reduce the time wasted in having to rework items for the client before proceeding with a project. This will save your team valuable time and money. With properly scoped requirements and a well-defined prototype, your testers and developers can efficiently code and run cases. This all avoids rework and rerunning, which means reduced timelines and less expenditure.

 


Automated Test Management – Planning

September 20th, 2010 by Bill Echlin

When implemented well, automated testing can save a significant amount of time and money. The key is to start your planning with something small, and with tests that offer a high return on investment. A good example of this is the smoke tests run by the team to check that the latest build meets a minimum standard. Assuming these automated smoke tests pass, the release can be accepted by the QA team. Since these types of tests need to be run frequently, they make good candidates for automation and should be factored into your planning efforts.
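A smoke test can be as modest as checking that the application answers at all. A minimal sketch, with the test environment URL a placeholder:

# Quick check that the build meets a bare minimum before QA accepts it.
# A scheduled job can run this against every build.
import urllib.request

BASE_URL = "http://test-server.example.com"   # placeholder environment

def smoke_test_homepage() -> bool:
    """The application responds at all - the most basic acceptance bar."""
    try:
        with urllib.request.urlopen(BASE_URL, timeout=10) as response:
            return response.status == 200
    except OSError:
        return False

if __name__ == "__main__":
    print("Build accepted" if smoke_test_homepage() else "Build rejected")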

Building up a breadth of these smoke tests can then help you develop a very effective regression suite. Planning the development of this regression suite from the start will pay off significantly over the whole test automation process.

Make sure, during the planning phase, that you assess tests based on the frequency with which they are run and how much time they take to set up. Tests that take a long time to set up and require high degrees of accuracy are well worth automating. If your team spends a significant amount of time setting up tests by hand, that isn’t effort well spent: automate the setup and then the execution. Automation tools are great at getting the setup and configuration implemented quickly and reliably, which ultimately helps you avoid the usual human errors.
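A back-of-the-envelope calculation helps with that assessment: compare the one-off cost of automating a test against the manual setup and execution time saved over the runs you expect. The numbers below are illustrative only.

def worth_automating(runs_per_release: int, releases: int,
                     manual_minutes_per_run: float,
                     automation_cost_minutes: float,
                     automated_minutes_per_run: float = 0.0) -> bool:
    # Total manual effort versus the cost of building and running the automation.
    manual_total = runs_per_release * releases * manual_minutes_per_run
    automated_total = automation_cost_minutes + (
        runs_per_release * releases * automated_minutes_per_run)
    return automated_total < manual_total

# A smoke test run on every daily build: 20 runs a release, 6 releases,
# 30 minutes of manual setup and execution each time, 8 hours to automate.
print(worth_automating(20, 6, 30, automation_cost_minutes=480))  # True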

With the tests to automate identified, you’ll need to start structuring your testing. Depending on the size of your team you may want to organise your tests based on the type of test (e.g. smoke test) or the functional areas of the application under test. Either way, it’s important to get the structure right at the start as it makes future test maintenance much easier.
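As an illustration (the names are invented), a structure organised by functional area, with smoke and full regression sets kept apart, might look like this:

RegressionSuite/
    Login/
        SmokeTests
        FullTests
    Checkout/
        SmokeTests
        FullTests
    Reporting/
        SmokeTests
        FullTests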

If you value the effort you put into your test scripts then it’s well worth tying your automation tool into a source code control application. Source code control allows you to step back a version when things go wrong and gives you the ability to see where changes have been made over the lifetime of the scripts. Developers wouldn’t be without a source code control tool, and neither should the QA team.

With any automation tool you’ll want to develop your tests in logical sets. Again, this makes the management of the automated tests easier over the long run. It also helps when it comes to managing the log files and results generated from the tests (as these artifacts will be produced with the same structure). This structure will also be replicated in any run histories and dashboards you set up, so it’s essential to get this right from the start.

The planning of the automated test management process clearly improves efficiency and accuracy. It’s also key to the subsequent dashboards and test reports that you create. Planning the project need not be difficult or time consuming. In fact, the more effort spent planning, the more time saved when it comes to managing the test automation execution.
