Posts in Software Testing
Grouping bugfixes together
Someone was silly enough to let me test on Friday. I know, don't let kids run with scissors, don't play in traffic, and don't let test managers actually test. I get it.

In an effort to get a full 45-minute test session together, I decided to group some similar bugfixes together. It was around five tickets, all in the same component. I noticed something I did by accident while testing that paid off. On reflection, I should have done it on purpose, but I didn't.

The first bugfix I tested worked. I tried a couple of scenarios around the original bug, and those worked too. Happy day. I checked off that ticket and moved on to the next one. I tested the second ticket. It worked too. I tried a couple of scenarios around that ticket, and those passed.

After testing the final scenario for the second ticket, it occurred to me that I was in a wonderful position to exercise the function from the first ticket. So, I clicked the button, and was rewarded with an uncaught Java exception. The scenario for my second bugfix was a perfect stress test for the first feature I tested.

The new bug was different from the original one, so that fix still worked. The issue wasn't that I had missed something I should have caught while testing the first fix. What struck me was how lucky I had been: the order I happened to pick for retesting the fixes left the system in just the right state to get some simple extra tests in on the previous feature.

Had I been thinking, I would have ordered the features that way on purpose. That's the piece I missed. It's easy to think that testing bugfixes is less intensive work than setting out to test something that's previously untested. But that's not true. There's just as much opportunity for test design and thinking about the problem.

It was a good reminder for me.
Planning for reporting
Working on an article related to test analysis and reporting, I got to thinking about some of the questions I try to answer when I report test status. Here are some of the questions I might ask myself during a project:

  • How much testing did we plan to do and how much of that have we done?

  • What potential tests are remaining and in which areas?

  • What is our current test execution velocity and what has it been for the duration of the project? How does that break out across testing area or test case priority?

  • What is our current test creation velocity and what has it been for the duration of the project? How does that break out across testing area or test case priority? (See the sketch after this list for one way to compute these velocities.)

  • How much of our testing has been focused on basic functionality (basic requirements, major functions, simple data) vs. common cases (users, scenarios, basic data, state, and error coverage) vs. stress testing (strong data, state, and error coverage, load, and constrained environments)?

  • How much of our testing has been focused on capability versus other types of quality criteria (performance, security, scalability, testability, maintainability, etc.)?

  • How many of the requirements have we covered and how much of the code have we covered?

  • If applicable, what platforms and configurations have we covered?

  • How many issues have we found, what severity are they, where have we found them, and how did we find them?
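To make the velocity questions concrete, here is a minimal sketch of one way to compute them. It assumes Python and a made-up TestRun record; in practice the dates, areas, and priorities would come from whatever test management tool you use. It counts tests executed per ISO week and breaks the totals out by testing area or test case priority; creation velocity works the same way if you feed it creation dates instead of execution dates.

    from collections import defaultdict
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class TestRun:
        """One executed test: when it ran, which area it covers, its priority."""
        executed_on: date
        area: str
        priority: str

    def weekly_velocity(runs: list[TestRun]) -> dict[tuple[int, int], int]:
        """Count tests executed per ISO week, keyed by (year, week number)."""
        counts: dict[tuple[int, int], int] = defaultdict(int)
        for run in runs:
            iso = run.executed_on.isocalendar()
            counts[(iso[0], iso[1])] += 1
        return dict(counts)

    def velocity_by(runs: list[TestRun], key: str) -> dict[str, int]:
        """Break the executed-test count out by 'area' or 'priority'."""
        counts: dict[str, int] = defaultdict(int)
        for run in runs:
            counts[getattr(run, key)] += 1
        return dict(counts)

    if __name__ == "__main__":
        # Hypothetical data; a real report would pull this from your test tool.
        runs = [
            TestRun(date(2011, 3, 7), "checkout", "high"),
            TestRun(date(2011, 3, 8), "checkout", "low"),
            TestRun(date(2011, 3, 15), "reporting", "high"),
        ]
        print(weekly_velocity(runs))          # tests executed per week
        print(velocity_by(runs, "area"))      # ...broken out by testing area
        print(velocity_by(runs, "priority"))  # ...broken out by priority

None of this is a reporting framework, just a reminder that the raw counts behind these questions are cheap to pull together once the execution records carry a date, an area, and a priority.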



What did I miss? What do you ask?