Posts in Software Testing
A simple heuristic for when you get stuck
James Bach gave me a sample application to test. It was a simple application with what turned out to be a fairly straightforward bug. After a few minutes of testing, I started having a really hard time coming up with new test cases to try. I was stuck.

James offered the following heuristic:

  1. Look over the test cases you have already executed.

  2. Find a pattern.

  3. In the next test case, violate that pattern.


It worked. I found the bug on the very next test case.
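As a minimal sketch of the mechanics (the inputs and pattern checks here are hypothetical, just to show the three steps in Python):

    # Step 1: look over the test cases you have already executed.
    executed = ["alpha", "apple", "arrow"]

    # Step 2: find a pattern, i.e. a property every executed input shares.
    patterns = {
        "starts with 'a'": lambda s: s.startswith("a"),
        "all lowercase": lambda s: s.islower(),
        "exactly 5 characters": lambda s: len(s) == 5,
    }
    shared = [name for name, check in patterns.items()
              if all(check(s) for s in executed)]
    print(shared)  # all three hold; each one is a rut

    # Step 3: make the next test case violate one of them, e.g. "OX".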

-----------------------

Previous Comments

-----------------------

>>>>>Creativity techniques for heuristics

I've found this page to be useful:
http://www.mycoted.com/creativity/techniques/

It's got links to lots of idea-generating techniques.

>>>>>Too much boundary testing

The application was an IP address syntax checker. Almost all of my test cases targeted the numeric boundaries (0 and 255), the reserved private ranges (10.0.0.0 through 10.255.255.255; 172.16.0.0 through 172.31.255.255; and 192.168.0.0 through 192.168.255.255), or real IP addresses (like the address of the computer I was testing on). When I looked over the tests I had run, I realized I hadn't really been using all the digits available to me. I tried a random test case with values I had not used yet and stumbled across the bug.
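A quick way to spot that kind of gap is to check which characters your executed tests have actually exercised. A minimal sketch, with hypothetical stand-ins for my real inputs:

    # Test inputs already executed (hypothetical examples of the kind I ran).
    executed = ["0.0.0.0", "255.255.255.255", "10.0.0.0",
                "172.16.0.0", "192.168.255.255"]

    # The pattern: which digits have the tests actually used?
    used = set("".join(executed)) - {"."}
    unused = sorted(set("0123456789") - used)
    print("digits never tried:", unused)  # prints ['3', '4']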

>>>>>What was the pattern?

Could you maybe explain the pattern that you violated?


Assessing Test Coverage
A couple of weekends ago (I'm a little behind) we held the July session of the Indianapolis Workshop on Software Testing. The attendees were:


  • Andrew Andrada

  • Mike Goempel

  • Michael Kelly

  • Jeffrey Mexin

  • Susan Morgan

  • Dana Spears

  • Kim Titus

The topic we focused on for the five-hour workshop was assessing test coverage.

The workshop started with an experience report from Mike Goempel about managing coverage. Mike's experience focused on requirements-based coverage with additional exploratory testing based on risk. He shared a spreadsheet he uses to manage the testing process and to help identify holes in coverage. He uses a completeness-versus-quality technique: completeness is measured in terms of his dashboard and a requirements traceability matrix, while quality is measured by test case reviews, defect counts, and tester interviews. If you have questions about the dashboard, Mike would be glad to answer them via email.
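I can't reproduce Mike's spreadsheet here, but the completeness half of his technique rests on a requirements traceability matrix, which reduces to something like this sketch (the requirement and test case IDs are made up):

    # Hypothetical traceability matrix: requirement -> covering test cases.
    traceability = {
        "REQ-1": ["TC-1", "TC-2"],
        "REQ-2": ["TC-3"],
        "REQ-3": [],  # a hole in coverage
    }
    covered = [req for req, tests in traceability.items() if tests]
    print(f"completeness: {len(covered) / len(traceability):.0%}")
    print("uncovered:", [req for req, tests in traceability.items() if not tests])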

Next, Dana Spears (with some help from Susan Morgan) presented a wonderful experience she had using Mary Decker's risk assessment template. She talked about how she customized the template to look at a project in terms of risk and to organize her testing around those risks. Coverage in her example was based not on code or requirements, but on the areas of the application under test where the risks were highest. She gave several rich examples along with some lessons learned.
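The template itself isn't reproduced here, but the core move of risk-based coverage can be sketched as scoring each area of the application and testing the highest scores first. A minimal sketch using a common likelihood-times-impact scheme (the areas and numbers are invented):

    # Hypothetical application areas scored on 1-to-5 scales.
    areas = {
        "payment processing": (4, 5),  # (likelihood, impact)
        "user login": (3, 4),
        "report printing": (2, 2),
    }
    # Test the riskiest areas first.
    for area in sorted(areas, key=lambda a: areas[a][0] * areas[a][1],
                       reverse=True):
        likelihood, impact = areas[area]
        print(f"{area}: risk score {likelihood * impact}")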

Dana and Susan also briefly addressed some derivative work they did with the risk assessment template, which they had presented at the Indianapolis Quality Assurance Association: they customized the same template to help determine tester-to-developer ratios.

Finally, I used Brian Marick's agile testing directions chart to talk about how I envision test coverage throughout the project. I like it, and he says it better than I can, so I stole it (with proper credit, of course). Using a whiteboard and capturing some of the things we had already discussed, I cataloged the coverage techniques we talked about (and some we hadn't) in terms of business-facing or technology-facing coverage and in terms of support for programming or critique of the product.
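For anyone who hasn't seen Marick's chart, the two axes form a grid roughly like the one below. The example techniques in each cell are my own placements for illustration, not a transcription of our whiteboard:

                         support programming          critique of product
    business facing      story/requirements tests     exploratory testing
    technology facing    unit tests                   performance and security tests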

It has just occurred to me that I need to start taking a digital camera with me to the meetings.

Next month, maintaining testing skills. Email me if you would like to attend.