Posts in IWST
Automation and performance logging
This past weekend we held the September 2009 IWST workshop on the topic of “Automation and performance logging.” The workshop participants were:

  • Sameer Adibhatla

  • Randy Fisher

  • Mike Goempel

  • James Hill

  • Jason Horn

  • Gabriel Issah

  • Michael Kelly

  • Natalie Mego

  • Chad Molenda

  • Cathy Nguyen

  • Charles Penn

  • Debby Polley

  • Kumar Ramalingam

  • Tam Thai

  • Brad Tollefson

  • Chris Wingate


The first experience report for the day came from Jason Horn. Jason is a longtime veteran of intimidating the speakers who come after him, and he lived up to his reputation. Jason's talk was about a logging framework he and his team developed within VSTS to help work around some issues they were seeing in the response times the tool reported while doing load testing. As a team, they had been creating reusable performance test modules. Jason would develop them while doing functional testing, then pass them along to the performance testers who would run them under load. An unintended side effect of this framework-based approach was that their response times got inflated while running under load, because the measurements included the time it took to execute framework code.

Not to be defeated, the team did what any self-respecting team of coders would do. They... ummm... wrote their own logging framework within VSTS. And I'm not talking about some wimpy line of text out to a log file somewhere. I mean they created a database, hooked into the VSTS object model, and siphoned off information about test execution opportunistically. Jason's talk was detail-filled; he walked us through sequence diagrams, object models, the database schema, and was even kind enough to share the code.
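
To make the idea concrete: the core pattern is to time only the application transaction itself and write that measurement out to a results database, so framework overhead never ends up in the reported numbers. Here's a minimal sketch of that pattern - in Python rather than their VSTS/C# implementation, with a made-up schema - just to show the shape of it:

    import sqlite3
    import time
    from contextlib import contextmanager

    # Illustrative only: a results store and a timing helper. Framework work
    # (locating controls, reading test data, writing debug logs) happens
    # outside the timed block, so it never inflates the measurement.
    conn = sqlite3.connect("perf_results.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS transactions
                    (test_name TEXT, transaction_name TEXT,
                     started_at REAL, elapsed_ms REAL)""")

    @contextmanager
    def timed_transaction(test_name, transaction_name):
        start = time.time()
        try:
            yield
        finally:
            elapsed_ms = (time.time() - start) * 1000
            conn.execute("INSERT INTO transactions VALUES (?, ?, ?, ?)",
                         (test_name, transaction_name, start, elapsed_ms))
            conn.commit()

    # Usage: only the code inside the 'with' block is measured.
    # with timed_transaction("CreateOrder", "SubmitOrder"):
    #     submit_order(order)   # the actual application call under test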

At the end of the day, the team was able to get everything they needed. They had test code reusability (Jason would develop the initial code for automation -> it would get leveraged for load testing -> it wasn't throw-away code like you'd see with most recorded performance test scripts), they had homegrown debug logging for script errors and suspicious execution times, and they had performance test results they could trust.

The second talk came from a few guys from a local insurance company: Randy Fisher, Tam Thai, and Kumar Ramalingam. They shared some proof-of-concept code they were working on for a centralized transaction logging database. Randy recently centralized test automation and performance testing at his company. They primarily use HP Quality Center, and they've hit some issues around reporting on test execution across the enterprise. It's a big company (think hundreds of testers), with all the budgeting joys of big companies, so metrics on who's running what tests are important to them.

The team had pulled together a simple QTP function to log transactions to a centralized database. They would then go back through existing scripts adding this function call, and use it in all new scripts they developed. It would allow for basic metrics on "we ran these tests on these dates." We talked about some other metrics they might include over time to better leverage the data. We also gave some feedback on the code, providing some advice on better connections to SQL, alerting, etc. It was a good problem for starting some meta conversations around what to log and report on, and I think they got some good tips for a cleaner implementation.
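
Their function is written in QTP's VBScript and points at a shared SQL database, but the shape of it is easy to sketch. Something along these lines - shown here in Python, with invented table and column names - called once at the end of each script:

    import sqlite3          # stand-in for their central SQL database
    import socket
    import getpass
    from datetime import datetime

    # Illustrative only: record that a script ran - what, when, where, and
    # the result - so the central database can answer "who ran what, when."
    def log_test_run(db_path, script_name, status):
        conn = sqlite3.connect(db_path)
        conn.execute("""CREATE TABLE IF NOT EXISTS test_runs
                        (script_name TEXT, status TEXT, run_at TEXT,
                         machine TEXT, tester TEXT)""")
        conn.execute("INSERT INTO test_runs VALUES (?, ?, ?, ?, ?)",
                     (script_name, status, datetime.now().isoformat(),
                      socket.gethostname(), getpass.getuser()))
        conn.commit()
        conn.close()

    # e.g. log_test_run("central_results.db", "PolicyRenewal_Smoke", "Passed")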

After Randy, Tam, and Kumar finished their experience report, Brad Tollefson shared his experience using WatiN. Brad first gave a general overview of how his company does testing - you can find that in his short handout. He then went into the specifics of using WatiN to test their web interface. I think one of the coolest things I saw was that Brad had written his own Windows client to manage WatiN test cases. Someone in the WatiN community should ping him about that and see if he can share it. (You can see a screenshot in the handout linked above.)

We spent some time talking about what Brad logs and how he uses that information. His automated tests appear to serve as an early performance warning. I like that. We also discussed how Brad might speed up his tests by using some different methods and settings in WatiN. Chad Molenda and Chris Wingate offered a couple of good tips around changing the typing speed, populating fields by value and then triggering events, and running with the browser hidden.
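
WatiN is a .NET library, so I won't try to reproduce the C# here, but the "set the value directly, fire the events yourself, and keep the browser hidden" idea carries over to other browser-automation tools. Here's roughly the same trick with Selenium in Python (the URL and field id are placeholders):

    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options
    from selenium.webdriver.common.by import By

    options = Options()
    options.add_argument("--headless")        # run with the browser hidden
    driver = webdriver.Chrome(options=options)
    driver.get("https://example.test/login")  # placeholder page

    field = driver.find_element(By.ID, "username")

    # Slow path: simulated typing, one keystroke at a time.
    # field.send_keys("bob@example.test")

    # Faster path: set the value in one shot, then fire the change event
    # that real typing would have triggered.
    driver.execute_script(
        "arguments[0].value = arguments[1];"
        "arguments[0].dispatchEvent(new Event('change', {bubbles: true}));",
        field, "bob@example.test")

    driver.quit()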

Finally, James Hill presented an overview of his work on CUTS. He actually pulled together what I think is one of the better presentations I've seen and used it as a backdrop for the overview portion of his talk. After the slides were done, he jumped into the gory details of how CUTS does logging. Not only did he go into how CUTS manages the problem of large-scale distribution of clients, but he also went into some of the theory of why they chose to use execution traces, why they build the logs in reverse, and he walked us through a good bit of the logging code. I'm not sure I followed all of it, but I think I kept pace with most of it. I found a lot of parallels between the CUTS architecture and other performance test tools.

Next month we have a panel of speakers talking about time management. I hope to announce details soon (this week maybe). It should be fun. It's also open to the public. I'm hoping to get around 40-50 people in attendance. It's a topic several people have asked for, and it should be useful for anyone working in software development - not just testers.

Thanks to all who attended the workshop.
Interviews at peer-workshops
Last month I had the pleasure of facilitating the Workshop on Regulated Software Testing. At the workshop I got to see something I'd not previously seen at a peer workshop: Geordie Keitt, the head of the AST eVoting SIG, interviewed Jim Nilius, former Senior Director of the Voting System Test Lab at SysTest Labs Incorporated. It was a full-on Barbara Walters-style interview on the topic of testing voting systems. And it was awesome.

For peer workshops of that type, you typically have a handful of presentation types. The most common is an experience report. That's where someone in attendance gets up and tells a story about a real project they worked on or a real problem they solved. You can tell it's an experience report because they use words like "I" and "we" a lot. The idea is that through the sharing of actual experiences, followed by open and honest questioning, everyone can learn more about what works and what doesn't, and why.

Other types of presentations can include problem-solving opportunities, where an attendee relates an issue they're struggling with right now and the group helps generate ideas for what to try next. Workshops can also include research reports (describing original research that you conducted or significantly participated in) or position papers (which might be on a topic you feel strongly about but for which you don't have a specific experience to share).

What made the interview so great was that Geordie controlled how we learned about the topic. Through his series of planned and ad-hoc questions, he drew out Jim's stories and experience. It was also entertaining (since both of them have a healthy sense of humor). After the interview, Jim remained in front of the workshop for a session of open-season questioning, where any attendee could ask a question they thought Geordie had missed.

I suspect I'll be adding the interview format to the IWST website. I encourage other workshop organizers to do the same. I'm not sure everyone would be as successful as Geordie and Jim, but it worked really well - and it was a nice change of format. I'm thinking it might also be a great way for someone who isn't comfortable enough to give an experience report to still share their experience.

I believe Geordie and Jim will have some follow-up work on the topic to publish at some point. I'll link to it when it comes out.