If the deployment has to be rescheduled because of testing, usually one of two things has gone wrong:
- Testing was not effective.
- There are unsolved critical observations (defects, change requests, open issues).
Most of the time, testing falls behind schedule because there are defects that prevent it from being completed.
In conclusion: it is important to find the deployment-blocking defects as early as possible, so there's enough time to fix them.
Here are three tips on how to do it.
1) Risk analysis
Before testing starts, it's good to do a risk analysis.
- Find out which functionalities are business critical. These have to work perfectly.
- Find out which open defects would prevent a positive go-live decision. The idea is to identify tangible issues: for example, pricing errors that clients can see.
- Make a plan for how to find these deployment-blocking defects as early as possible.
Communicate the results of the risk analysis to the testers; the testing kick-off is a good place for that. The idea is to help testers understand why their testing is important and why some of their observations are low priority.
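The output of a risk analysis like this can be captured as structured data, so test planning can pick the critical use cases first. Here is a minimal sketch; the `UseCase` class, field names, and example entries are my own illustrative assumptions, not part of the article.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    business_critical: bool  # result of the risk analysis
    open_blockers: int = 0   # open defects that would block a go-live decision

# Hypothetical examples; in practice this list comes from the risk analysis.
use_cases = [
    UseCase("Pricing shown to client", business_critical=True),
    UseCase("Internal reporting", business_critical=False),
]

# Business-critical use cases are the ones that must be tested (and pass) first.
critical = [u for u in use_cases if u.business_critical]
print([u.name for u in critical])  # -> ['Pricing shown to client']
```

Even a list this simple makes the priorities explicit for everyone, which is exactly what the kick-off communication is for.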
2) Exploratory test order
I'll give you a simple example of how the testing order affects the speed of testing.
Let's imagine 80 use cases have been created in the project. According to the risk analysis, 40 of them relate in some way to business-critical functions. Each use case includes 5 test cases, so to make sure the critical functions work, we need 200 test cases to pass.
So the question is: how do we test these 200 test cases so that we find the deployment-blocking defects first?
This is the testing order I suggest:
- From each of the 40 use cases, we choose one test case that describes the basic functionality of the use case.
In the first few days we test these 40 test cases and find out whether the critical processes work at a basic level.
-> In other words, we surface the critical, deployment-blocking observations early.
- In the following days we extend the testing to those use cases that don't have open observations.
I only plan 2-3 days of testing in detail. Based on the test results, I make the next testing plan. For example, if the first round shows that 15 use cases have open defects preventing further testing, we won't be testing them. We will concentrate our efforts on the areas that we can test.
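The ordering above can be sketched in a few lines. This is an illustrative model under the article's assumptions (40 critical use cases, 5 test cases each, the first test case covering the basic flow, 15 use cases blocked after round one); the names and data shapes are mine.

```python
# 40 critical use cases, each with 5 test cases (200 in total).
# By assumption, TC-1 of each use case covers the basic flow.
use_cases = {
    f"UC-{i:02d}": [f"UC-{i:02d}/TC-{j}" for j in range(1, 6)]
    for i in range(1, 41)
}

# Hypothetically, round one reveals 15 use cases with open blocking defects.
blocked = {f"UC-{i:02d}" for i in range(1, 16)}

# Round 1: one basic test case per critical use case.
round_one = [cases[0] for cases in use_cases.values()]

# Round 2: the remaining test cases, skipping blocked use cases.
round_two = [
    tc
    for uc, cases in use_cases.items()
    if uc not in blocked
    for tc in cases[1:]
]

print(len(round_one))  # -> 40
print(len(round_two))  # -> 100 (25 unblocked use cases x 4 remaining cases)
```

The point of the sketch: after roughly 40 test executions you already know whether any critical process is broken at a basic level, instead of discovering it late in a linear run through all 200 cases.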
3) Check-steps
Testers coming from the business side are fairly good at finding defects. Missing and "invisible" things, on the other hand, usually go unnoticed completely. This is why I've started adding check-steps to the test cases. Even when a test case includes the expected results, it's always good to do a comprehensive check.
In the risk analysis we found out what needs to work. When making the go-live decision, the testing manager needs to show that the testing has been reliable. Based on the risk analysis, I add check-steps to the test cases. This gives me clear information about the critical issues to support the decision, and usually more defects are found while going through the checklists.
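To make the idea concrete, here is one way a test case with explicit check-steps might look. The test case, its steps, and the checks are invented for illustration; the article does not prescribe a format.

```python
# A test case as plain data: the scripted steps verify the happy path,
# while the check-steps target things a tester might otherwise never look at.
test_case = {
    "name": "Create order with discounted price",
    "steps": ["Log in", "Add product", "Apply discount", "Confirm order"],
    "expected": "Order is created with the discounted price",
    "check_steps": [
        "Check that the price shown to the client matches the price list",
        "Check that VAT is itemised on the order confirmation",
        "Check that the confirmation email was actually sent",
    ],
}

for check in test_case["check_steps"]:
    print("- " + check)
```

Each check-step traces back to a risk identified in the risk analysis, which is what lets the testing manager argue that the critical areas were verified, not just executed.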
Observation or defect?
I write about both observations and defects because not all observations made by a tester are defects. They can be change requests, open issues, development ideas, and so on: for example, something left out of the specification that is critical to the business. These other observations can prove more difficult to resolve than obvious defects.