Interviews at peer-workshops
Last month I had the pleasure of facilitating the Workshop on Regulated Software Testing. At the workshop I saw something I'd not seen before at a workshop: Geordie Keitt, the head of the AST eVoting SIG, interviewed Jim Nilius, former Senior Director of the Voting System Test Lab at SysTest Labs Incorporated. It was a full-on Barbara Walters style interview on the topic of testing voting systems. And it was awesome.

For peer workshops of that type, you typically have two or three types of presentations. The most common is an experience report. That's where someone in attendance gets up and tells a story about a real project they worked on or a real problem they solved. You can tell it's an experience report because they use words like "I" and "we" a lot. The idea is that through sharing of actual experiences, followed by open and honest questioning, everyone can learn more about what works and what doesn't, and why.

Other types of presentations can include problem-solving opportunities where an attendee relates an issue they are struggling with right now and attendees try to help generate ideas for what to try next. Workshops can also include research reports (describing original research that you conducted or significantly participated in) or position papers (which might be on a topic you feel strongly about, but may not have a specific experience to share).

What made the interview so great was that Geordie controlled how we learned about the topic. Through his series of planned and ad-hoc questions, he drew out Jim's stories and experience. It was also entertaining (since both of them have a healthy sense of humor). After the interview, Jim remained in front of the workshop for a session of open-season questioning, where any attendee could ask a question they thought Geordie had missed.

I suspect I'll be adding the interview format to the IWST website. I encourage other workshop organizers to do the same. I'm not sure everyone would be as successful as Geordie and Jim, but it worked really well, and it was a nice change of format. I'm thinking it might also be a great way for someone who isn't comfortable enough to give an experience report to still share their experience.

I believe Geordie and Jim will have some follow-up work on the topic to publish at some point. I'll link to it when it comes out.
Grouping bugfixes together
Someone was silly enough to let me test on Friday. I know, don't let kids run with scissors, don't play in traffic, and don't let test managers actually test. I get it.

In an effort to put a full 45-minute test session together, I decided to group some similar bugfixes. It was around five tickets, all in the same component. While testing, I noticed something I did by accident that paid off. On reflection, I should have done it on purpose, but I didn't.

The first bugfix I tested worked. I tried a couple of scenarios around the original bug, and those worked too. Happy day. I checked off that ticket and moved on to the next one. I tested the second ticket. It worked too. I tried a couple of scenarios around that ticket, and those passed.

After testing the final scenario for the second ticket, it occurred to me that I was in a wonderful position to exercise the function from the first ticket. So, I clicked the button, and was rewarded with an uncaught Java exception. The scenario for my second bugfix was a perfect stress test for the first feature I tested.

The new bug was different from the original one, so that fix still worked. The issue wasn't that I missed something I should have caught while testing the first fix. What struck me was that I got lucky: the random order I'd picked for retesting the fixes happened to leave the system in a state where I could get some simple extra tests in on the previous feature.

Had I been thinking, I would have ordered the features that way on purpose. That's the piece I missed. It's easy to think that testing bugfixes is less intensive work than setting out to test something that's previously untested. But that's not true. There's just as much opportunity for test design and thinking about the problem.

It was a good reminder for me.