Software Testing for CIOs
In November's issue of CIO Magazine, there were not one but two articles on software testing:
- Testing, 1, 2, 3... by Meridith Levinson
- Inside the Software Testing Quagmire by Paul Garbaczeski
I liked Meridith Levinson's "Testing, 1, 2, 3...," which I thought was fairly direct, provided context, and had good principles for the target audience:
1. Respect your testers.
2. Colocate your testers and developers.
3. Set up an independent reporting structure.
4. Dedicate testers to specific systems.
5. Give them business training.
6. Allow business users to test too.
7. Involve network operations.
8. Build a lab that replicates your business environment.
9. Develop tests during the requirements phase.
10. Test the old with the new.
11. Apply equivalence class partitioning.
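That last principle is the most concrete of the bunch, so here's a minimal sketch of equivalence class partitioning. The shipping-fee function, the classes, and the boundary values are hypothetical examples of mine, not Levinson's:

```python
# Hypothetical fee calculator used only to illustrate the partitions:
#   invalid class:  total < 0          -> raises ValueError
#   class A:        0 <= total < 50    -> $5 flat shipping
#   class B:        total >= 50        -> free shipping
def shipping_fee(total):
    if total < 0:
        raise ValueError("order total cannot be negative")
    return 0 if total >= 50 else 5

# One representative value per class, plus the boundaries between classes,
# instead of trying to test every possible order total.
import pytest

@pytest.mark.parametrize("total, expected", [
    (0, 5),       # boundary into class A
    (25, 5),      # representative of class A
    (49.99, 5),   # just below the class A / class B boundary
    (50, 0),      # boundary into class B
    (500, 0),     # representative of class B
])
def test_shipping_fee(total, expected):
    assert shipping_fee(total) == expected

def test_negative_total_is_rejected():
    with pytest.raises(ValueError):
        shipping_fee(-1)  # representative of the invalid class
```

The point is simply that one representative per class, plus the boundaries between classes, buys most of what exhaustively testing every possible input would.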
I was not as big a fan of "Inside the Software Testing Quagmire" by Paul Garbaczeski. Perhaps it was just the way it was written, but some parts of it rubbed me the wrong way. I think there were good points, but it seemed to propagate software-testing myths. When talking to executive-level management, instead of repeating misconceptions about software development that they can read in any vendor advertisement, I would rather offer them better-thought-out questions.
For example, Question 2, "Is development complete?", seems to assume that your testers and developers are not working together. That's a problem for me. I'm big on collaboration. Jonathan Kohl has example after example after example of how testers and developers working together can be more powerful than waiting for "controlled" releases.
Another example is Question 3:
Are test cases comprehensive and repeatable; are they executed in a controlled environment?
You're really asking: Is testing ad hoc or disciplined?
You're trying to determine: If testing is effective.
Interpreting the response: There should be a set of repeatable test cases and a controlled test environment where the state of the software being tested and the test data are always known. Absent these, it will be difficult to discern true software defects from false alarms caused by flawed test practices.
A related symptom to check: If temporary testers are conscripted from other parts of the organization to "hammer" the software without using formal test cases, it means the organization is reacting to poor testing by adding resources to collapse the test time, rather than addressing the problem's root causes.
sigh
Where to start...
Let's start with comprehensive. What do you mean by comprehensive? Are you talking about testing that covers all the application's functionality? Do you mean stated functionality or actual functionality? Are you talking about comprehensive in terms of classifications of software faults? Are you talking about comprehensive in terms of some IEEE (or some other "official" body) standard for "comprehensive" testing? Perhaps you should define comprehensive, but before you do, check out the measurement problem and the impossibility of complete testing by Kaner and Bach.
Next, repeatable. Why does a test need to be repeatable for it to be effective? How are those two things related? I thought the power of a test was related to its ability to reveal a problem if one existed. Don't get me wrong, I think repeatability can be an important attribute for some tests, but not because it makes those tests more effective. It just makes them easier to run again if I thought they were effective in the first place. Repeatability says nothing about the effectiveness of a test. This time, look at regression testing by Kaner and Bach, or if you're in a hurry, just read Bach's To Repeat Tests or Not to Repeat and Nine Reasons to Repeat Tests.
Yesterday I ran a test that uncovered three defects that occur when navigating between pages in a web application. I will most likely never run that test again. Was it effective? I think so. I had a conjecture that a certain type of error might exist in the application, based on a conversation with a developer (see Question 2 above). I asked the developer how long it would take him to manually review the code to see if it was a problem; he said a couple of hours. I ran a test in about thirty minutes and confirmed my conjecture. The developer made some fixes, and neither of us thinks this will ever be a problem again. We will most likely never re-run that test, and there is no "set of repeatable test cases" it will ever be a part of. Should I not have executed it?
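For what it's worth, a throwaway check like that doesn't need much ceremony. Here's a rough sketch of the kind of thing I mean, in Python with the requests library; the URLs, the pages, and the failure being hunted are hypothetical stand-ins, not the actual application:

```python
# Rough sketch of a one-off navigation check (application details are
# hypothetical). Walk a few page-to-page transitions and flag anything
# that doesn't come back cleanly -- enough to confirm or kill a
# conjecture without turning it into a permanent, repeatable test case.
import requests

BASE = "http://localhost:8080"  # stand-in for the application under test

transitions = [
    ("/orders", "/orders/new"),
    ("/orders/new", "/orders"),
    ("/reports", "/orders"),
]

with requests.Session() as session:
    for start, destination in transitions:
        session.get(BASE + start)                  # set up navigation state
        response = session.get(BASE + destination)
        if response.status_code != 200 or "error" in response.text.lower():
            print(f"possible defect navigating {start} -> {destination}: "
                  f"HTTP {response.status_code}")
```

Thirty minutes of that kind of scripting answered the question; keeping the script afterward was beside the point.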
Continuing with Question 3, "Is testing ad hoc or disciplined?" Is this implying that a tester who does not document all test cases is not a "disciplined" tester? Perhaps reading Bach's Fighting Bad Test Documentation might offer an alternative view.
Garbaczeski goes on to say that "If temporary testers are conscripted from other parts of the organization to 'hammer' the software without using formal test cases, it means the organization is reacting to poor testing by adding resources to collapse the test time, rather than addressing the problem's root causes." Of course, it would be impossible for someone to test without using formal test cases. I just wish I knew where I could find some documentation on how one might be able to do that...
Oh wait! Look here:
- HICCUPP
- What is Exploratory Testing And How it Differs from Scripted Testing
- Exploratory Testing Explained
- Exploring Exploratory Testing
- Exploratory Testing
- Five Dimensions of Exploratory Testing
- Touring a new application
- Good books on software testing
I actually liked Question 4, and trust me, at this point I was looking for places to disagree with the author. No free rides here!
I also think Question 5 is a fair question; I would just caution the reader to understand the full implications of measuring software testers. For a short read, check out my post on The classic problems with scripted testing, or for a longer read, I recommend Kaner's Measuring the effectiveness of software testers and Measurement of the Extent of Testing, and Marick's Classic Testing Mistakes.