How do you know you’ve sufficiently tested the product your company is eagerly waiting for? This is probably one of the most difficult questions for a project manager to answer. Undoubtedly there is some “grayness” to determining when testing is completed, but that doesn’t mean you can’t add some “science” to determine if your product has been tested “enough.”
Success in testing requires that a testing process be created and followed. Without a process there is no way you’ll be able to determine if the product has been sufficiently tested. Typical testing processes include:
- Creating a Testing Strategy document
- Writing test plans for each testing phase (for example, unit, integration, and system testing for IT solutions)
- Developing test cases
- Creating test data
- Executing test cases
- Recording the results
- Evaluating the results
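The execute/record/evaluate steps above can be sketched as a minimal loop. The test-case IDs, inputs, and expected results here are purely illustrative assumptions, not part of any real test suite:

```python
# Minimal sketch of executing test cases, recording results, and evaluating them.
# Test cases, IDs, and expected values are illustrative assumptions.

def run_test_case(case):
    """Execute one test case and return its result record."""
    actual = case["execute"]()           # run the step under test
    passed = actual == case["expected"]  # compare against the test data
    return {"case": case["id"], "passed": passed, "actual": actual}

test_cases = [
    {"id": "TC-01", "execute": lambda: 2 + 2, "expected": 4},
    {"id": "TC-02", "execute": lambda: "ok".upper(), "expected": "OK"},
]

results = [run_test_case(tc) for tc in test_cases]   # recording the results
failures = [r for r in results if not r["passed"]]   # evaluating the results
print(f"{len(results)} executed, {len(failures)} failed")
```

The point is not the tooling; even a spreadsheet works. What matters is that every execution leaves a recorded, evaluable result.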
Without a testing process you’re “flying blind” … it’s similar to trying to cross a 4-lane road in the middle of the day on a heavily traveled street. If you have a process to follow (i.e., go to the designated crosswalk, press the pedestrian cross button, and proceed to cross when the walk indicator has illuminated), your chances of making it to the other side of the street are greatly improved. If you attempt to run across the street randomly, the results probably won’t be favorable! Testing without a defined process can be equally catastrophic.
Testing is one area of project management where tools can be immensely effective; specifically, a traceability matrix. A traceability matrix records each requirement and ensures that test cases are built to test every requirement. The traceability matrix will help you determine:
- If a particular requirement is missing a corresponding test case (oops!)
- If a test case is too complicated due to testing many requirements
- If a test case has been created – but does not have a corresponding requirement because the requirement was removed from scope
- If a requirement is too complicated or over-tested because it’s being evaluated in too many test cases
- Testing status per requirement via color coding when a test has successfully completed or when defects have been identified
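Once the matrix exists, the first four checks above are easy to automate. A minimal sketch, using hypothetical requirement and test-case IDs and an illustrative cutoff of five test cases per requirement:

```python
# Hypothetical traceability matrix: requirement ID -> test case IDs.
matrix = {
    "REQ-01": ["TC-01", "TC-02"],
    "REQ-02": [],                               # missing a test case (oops!)
    "REQ-03": ["TC-03", "TC-04", "TC-05",
               "TC-06", "TC-07", "TC-08"],      # possibly over-tested
}
all_test_cases = {"TC-01", "TC-02", "TC-03", "TC-04",
                  "TC-05", "TC-06", "TC-07", "TC-08", "TC-99"}

# Requirements with no corresponding test case.
untested = [req for req, cases in matrix.items() if not cases]
# Requirements evaluated by too many test cases (illustrative threshold: 5).
over_tested = [req for req, cases in matrix.items() if len(cases) > 5]
# Test cases with no corresponding requirement (e.g., scope was removed).
traced = {tc for cases in matrix.values() for tc in cases}
orphans = sorted(all_test_cases - traced)

print("Untested requirements:", untested)    # ['REQ-02']
print("Over-tested requirements:", over_tested)  # ['REQ-03']
print("Orphan test cases:", orphans)         # ['TC-99']
```

The same dictionary maps naturally onto a spreadsheet with one row per requirement, which is how most teams actually maintain the matrix.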
A traceability matrix can be created using a spreadsheet or any database tool such as MS Access. For a sample traceability matrix you can email me.
Assuming you have a documented (and used!) testing process and you have a traceability matrix in place – you’ve armed yourself with some critical information to determine when you’re done testing. Do all requirements have at least one test case tied to them? If not – you’re probably not testing the solution sufficiently. Conversely, if a given requirement has more than five test cases to evaluate it, that particular requirement is probably being over-tested.
The next characteristic to examine is defects. First, determine the number of defects identified for a given test run (several test cases should be executed during a test run). Next, determine if the number of defects is increasing or decreasing with each test run. The first time the test run is executed, expect to find a high number of defects (because the whole goal of testing is to find defects!). The second run (of the same set of test cases) should have fewer defects. By the time you execute the third run, your test run should be surfacing even fewer errors, and certainly lower severity defects. If the number of defects is going UP per test run, you are obviously not nearing the end of testing. In fact, this may be an indication that correcting defects is introducing additional errors. More than likely the person correcting the original defect did not sufficiently unit test the fix prior to returning it to the test environment.
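The trend check itself is a one-liner once defect counts are recorded per run. The counts below are hypothetical:

```python
# Hypothetical defect counts from three runs of the same set of test cases.
defects_per_run = [42, 17, 5]   # run 1, run 2, run 3

# Testing is converging if every run finds fewer defects than the one before.
trend_improving = all(later < earlier
                      for earlier, later in zip(defects_per_run,
                                                defects_per_run[1:]))

if trend_improving:
    print("Defect count is falling per run -- testing is converging.")
else:
    print("Defect count is rising -- fixes may be introducing new errors.")
```

In practice you would track severity alongside raw counts, since a late run that finds three critical defects is worse than one that finds ten cosmetic ones.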
So, how do you know you’re done testing? Here’s a general rule of thumb. Once each requirement (based on the traceability matrix) has been tested and yields no defects, or only low severity defects that can be corrected later, the product is ready for “prime time.” Could I test longer and find more defects? You bet – but the time it takes to find those additional defects is usually not worth the team’s time and effort.
If you’re receiving pressure from your management team to reduce the duration of the testing effort to meet schedule constraints (and they are not able to provide you additional resources to perform the testing), plan a meeting with your business management representatives so they can review the traceability matrix and indicate which requirements do not need to be tested prior to implementation. This approach moves the decision-making process where it belongs: with the business. The business knows better than you do which requirements are essential for its day-to-day operations. By showing management how the traceability matrix drives the testing schedule, they become better educated on how detailed the testing process can be. Sometimes management will then extend the scheduled implementation date or provide additional testing resources to ensure all test cases are executed as planned. If a decision is made not to test all requirements prior to implementation, a risk strategy should be developed to prepare for the potential impact of numerous post-implementation defects.
Another technique to determine if you’re done testing is to examine the percentage of defects identified based on the number of test cases executed in a given test run. If you executed 100 test cases and more than 10% had high severity defects – you’re not done testing. However, if you executed 100 test cases and you had no high severity defects and only 10% medium to low severity defects identified – you’re probably there! Work with your management team up-front to determine acceptable variances so you’ll know when the product is “stable enough” for implementation purposes. These percentage guidelines will vary greatly based on your industry and the type of product being implemented. The key is having discussions with management early in the project lifecycle (when the test strategy and test plans are being developed) to determine appropriate acceptability or exit criteria for each of the testing phases.
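Those percentage thresholds can be captured as an explicit exit-criteria check. The defaults below mirror the illustrative figures above (no high severity defects, at most 10% medium-to-low); the real values should come from the up-front discussions with management:

```python
# Illustrative exit-criteria check. Thresholds are assumptions to be agreed
# with management when the test strategy and test plans are written.
def meets_exit_criteria(executed, high_sev, med_low_sev,
                        max_high_pct=0.0, max_med_low_pct=10.0):
    """Return True if a test run's defect percentages fall within thresholds."""
    high_pct = 100.0 * high_sev / executed
    med_low_pct = 100.0 * med_low_sev / executed
    return high_pct <= max_high_pct and med_low_pct <= max_med_low_pct

# 100 test cases, no high severity, 10% medium/low -> probably there.
print(meets_exit_criteria(100, high_sev=0, med_low_sev=10))
# 100 test cases, 12 high severity defects -> not done testing.
print(meets_exit_criteria(100, high_sev=12, med_low_sev=30))
```

Making the thresholds function parameters rather than hard-coded values reflects the article’s point: acceptable variances differ by industry, product, and testing phase.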
The “timing” of the defect makes a difference too. I EXPECT to find defects in the initial phases of testing. As you get further into the testing lifecycle and you’re doing final Quality Testing, Operations Testing, or User Acceptance Testing, only minimal defects should be identified during these stages. If you’re finding numerous or high severity errors, this is an indication that you are NOT ready for implementation and need to consider additional testing prior to implementation (or prepare for more problems when the product is implemented and position the company to provide the additional support necessary to handle the “less than perfect” product).
Testing is an art – but adding a little bit of science to this area of project management will help you feel more confident that you’ve tested your solution “enough”. If you establish and follow a testing process, you can determine when the solution is stable and ready for implementation.