Grouping bugfixes together
Someone was silly enough to let me test on Friday. I know, don't let kids run with scissors, don't play in traffic, and don't let test managers actually test. I get it.
In an effort to put together a full 45-minute test session, I decided to group some similar bugfixes. It was around five tickets, all in the same component. While testing, I did something by accident that paid off. On reflection, I should have done it on purpose, but I didn't.
The first bugfix I tested worked. I tried a couple of scenarios around the original bug, and those worked too. Happy day. I checked off that ticket and moved on to the next one. I tested the second ticket. It worked too. I tried a couple of scenarios around that ticket, and those passed.
After testing the final scenario for the second ticket, it occurred to me that I was in a wonderful position to exercise the function from the first ticket. So, I clicked the button, and was rewarded with an uncaught Java exception. The scenario for my second bugfix was a perfect stress test for the first feature I tested.
The new bug was different from the original one, so that fix still worked. The issue wasn't that I missed something I should have caught while testing the first fix. What struck me was that I got lucky: the order I happened to pick for retesting the fixes left the system in a state where I could get some simple extra tests in on the previous feature.
Had I been thinking, I would have ordered the features that way on purpose. That's the piece I missed. It's easy to think that testing bugfixes is less intensive work than setting out to test something that's previously untested. But that's not true. There's just as much opportunity for test design and thinking about the problem.
It was a good reminder for me.