Posts in Software Testing
Techniques for Exploratory Testing
Earlier this month we held the July session of the Indianapolis Workshop on Software Testing. The attendees were:


  • Andrew Andrada
  • Charlie Audritsh
  • Howard Clark
  • Michael Goempel
  • Jason Horn
  • Karen Johnson
  • Michael Kelly
  • Steve Lannon
  • Kelvin Lim
  • John McConda
  • Scott McDevitt
  • John Montano
  • Vishal Pujary
  • David Warren
  • Gary Warren

The topic we focused on for the five-hour workshop was 'techniques for exploratory testing'.

The workshop started with John McConda providing an overview of how Mobius Test Labs runs their exploratory test sessions. Early in the talk, John covered the basics of Session Based Test Management (SBTM). He then gave us an in-depth look at an SBTM tool that Mobius has been working on to help them manage their testing. The tool let multiple users manage their sessions and their results, and it auto-generated results summaries and metrics.
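
To give a rough sense of what a tool like that manages, here is a minimal sketch of an SBTM session record and a simple metrics roll-up. This is purely illustrative and assumes nothing about Mobius's actual tool; the field names are my own, loosely based on the elements of a typical SBTM session sheet (charter, tester, duration, bugs, notes).

```python
# Illustrative only -- not Mobius's tool. A hypothetical session record and
# a roll-up function of the kind an SBTM tool might auto-generate.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Session:
    charter: str                  # the mission for this test session
    tester: str
    duration_minutes: int
    bugs: List[str] = field(default_factory=list)
    notes: str = ""

def summarize(sessions: List[Session]) -> Dict[str, object]:
    """Roll individual sessions up into simple summary metrics."""
    return {
        "sessions": len(sessions),
        "total_minutes": sum(s.duration_minutes for s in sessions),
        "total_bugs": sum(len(s.bugs) for s in sessions),
        "testers": sorted({s.tester for s in sessions}),
    }

if __name__ == "__main__":
    sessions = [
        Session("Explore question entry", "tester-a", 90,
                bugs=["crash on empty answer"]),
        Session("Explore scoring", "tester-b", 60),
    ]
    print(summarize(sessions))
```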

After John, David Warren presented an experience report about doing exploratory testing on an agile project, working closely with the developers and customers. His talk was interesting for both its exploratory testing content and its agile content. David shared how he worked with the customer to help them structure their testing. He also spoke about the difficulty of implementing automation when he knew he would be leaving the project and the customer would have to take over maintenance of the scripts.

After David, I presented some of the heuristics I use when I test. I had a handout from a past talk and I just recycled it for the workshop. I probably could have updated it, but I didn't. I also referenced the Satisfice Heuristic Test Strategy Model, which I use a lot in my testing. For each heuristic, I tried to give a small example of how I've used it in the past. Hopefully it was helpful. I had fun talking about it.

After my experience report, we decided to skip Jason Horn's experience report in favor of an activity. Hopefully Jason can attend the round-table next month to present his experience there. The activity involved paired exploratory testing of an application no one in the room (other than me) had seen. We tested the Open Certification question server (under development by Tim Coulter). Tim was kind enough to give us access to his development stream and to tell us what types of bugs he would be most interested in hearing about from us.

Armed with a mission given to us by the project coordinator, we paired off and got some testing done. There were seven groups. Each group shared a laptop, and some of the groups turned on BB TestAssistant to record their testing. We captured a few videos, which you can watch here, here, here, and here. I find them easy to watch at 4x speed.

After about 25 to 30 minutes of testing, we stopped and debriefed as a group. We attempted to capture what techniques and tools were used during testing. We came up with the following list (there are some duplicates and incomplete ideas in there). At next month's round-table, I want to review this list with the group to see what patterns we can draw out of it.

Finally, I want to again remind workshop attendees to please log their bugs. I know some people already have. Tim would appreciate the feedback.

This month's IWST was hands down the most fun I've had at a peer workshop. I think we had an incredible energy level, we ran out of time (we skipped an entire experience report and stopped the testing exercise 30 minutes early), and we had a full house. Thanks to everyone who attended. I hope to see you all again at next month's round-table, where we can pick it up again. If you would like to attend the round-table, details can be found here. Round-tables are completely open to everyone. Just show up...
What is a boundary (part 2)?
In a previous post, I gave a working definition of a boundary. That definition was "a boundary is any criterion by which I factor my model of what I'm testing."

James Bach challenged me to come up with three specific examples and to tell him how they are boundaries. With that, I pulled out my Moleskine and drew three different models: a UCML model, a system diagram, and the model I used to test the time-clock application.

I quickly came up with a list of 16 factors based on those three models. It became apparent to me that only 5 of those 16 factors were boundaries. So much for that definition.

As I looked over the list, I tried to figure out what was unique about the actual boundaries I had identified. Then I thought about something Julian Harty once said to me about boundary testing. He asked, "Do all boundaries have a quantifiable component?" (Or at least that's close to what I remember. If I've misquoted him, he'll let me know and I'll update the post.)

When he asked that, I immediately said no. He then asked me for an example, and I struggled to find one. At WHET #4, Rob Sabourin gave a beautiful example of boundary bugs from an experience testing Arabic-to-Latin text conversion. So I still think the answer is no, and I now have an example. It's still an excellent question, however, and remembering it gave me an insight into my working definition.

I have a new working definition:

"A boundary is any manipulatable criteria used to factor the model I'm using for the product I'm testing."

In the definition above, manipulatable means both that I can change it and that I can measure or monitor it. Using that definition, when I went back to the factors I identified from my three models, I was able to include all the ones I felt were boundaries.
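
To make "manipulatable" concrete, here is a small, hypothetical sketch (the 255-character title limit and the validate_title function are inventions for illustration, not anything from the products discussed here). Title length is a boundary under this definition because I can change it (vary the input) and measure or monitor it (observe whether the product accepts the input).

```python
# Hypothetical example: probing a length boundary by manipulating the
# criterion (input length) and monitoring the product's response.

MAX_TITLE_LENGTH = 255  # assumed limit, used only for illustration

def validate_title(title: str) -> bool:
    """Stand-in for the product behavior under test."""
    return 0 < len(title) <= MAX_TITLE_LENGTH

def probe_boundary() -> None:
    # Step across the boundary and observe what happens on each side of it.
    for length in (MAX_TITLE_LENGTH - 1, MAX_TITLE_LENGTH, MAX_TITLE_LENGTH + 1):
        accepted = validate_title("x" * length)
        print(f"length={length:>3}  accepted={accepted}")

if __name__ == "__main__":
    probe_boundary()
```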

(At some point, if I think of it, I'll scan and post the Moleskine pages.)