Posts in IWST
Performance data analysis and interpretation
Last weekend we held the February session of the Indianapolis Workshops on Software Testing. The attendees were:

  • Jason Halla

  • Marc Labranche

  • Denise Autry

  • Charlie Audritsh

  • John McConda

  • Andrew Andrada

  • Dana Spears

  • Michael Kelly

  • Kurt McKain


The topic we focused on for the five hour workshop was performance data analysis and interpretation. The following is a summary of presentations and ideas shared.

First, Charlie Audritsh shared a presentation centered on one of his performance testing experiences. While the presentation itself was fairly generic in terms of content, his actual story was quite interesting and generated a lot of discussion.

Charlie shared a recent project he struggled with and how he overcame that problem. He needed to develop a test or series of tests that would prove the system could handle 44,000 transactions per month during normal business hours. Doing the math and working backwards, he concluded that this was equivalent to a load of 100 users generating a total of 10 transactions a minute. We kept him honest and made him work the math for us... it seemed to make sense. The number of users was simply a function of how many users would need to be on the system to actually generate the 10 transactions a minute; the number 100 was not significant for any other reason.
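
A rough sketch of that kind of back-of-the-envelope calculation follows. The monthly total comes from Charlie's story, but the business-day and peaking figures are assumptions I picked for illustration rather than Charlie's exact numbers:

    # Back-of-the-envelope load calculation. The 44,000/month figure is from
    # Charlie's story; the business-day, business-hour, and peak numbers are
    # illustrative assumptions of mine.
    transactions_per_month = 44_000
    business_days_per_month = 22      # assumed
    business_hours_per_day = 8        # assumed
    peak_factor = 2.5                 # assumed: load is not spread evenly over the day

    hours_per_month = business_days_per_month * business_hours_per_day
    average_per_minute = transactions_per_month / (hours_per_month * 60)
    peak_per_minute = average_per_minute * peak_factor

    print(f"average: {average_per_minute:.1f} transactions/minute")  # ~4.2
    print(f"peak:    {peak_per_minute:.1f} transactions/minute")     # ~10.4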

Using this logic, Charlie determined that if his test ran for one hour and completed a total of 600 transactions, it passed. Well, it failed. The rest of the talk (and questions) focused on some of the techniques he used to debug some of the problems found and how he was able to isolate some of the bottlenecks. We beat this horse to death, and Charlie was a good sport about it.

Following that, I presented some slides on scatter charts. The first half of the presentation was lifted, with permission, from Scott Barber's paper "Interpreting Scatter Charts." I mixed in some simple examples of my own and talked about how I apply some of the patterns Scott identifies. The second half of the presentation was a combination of examples from one of my past projects and a host of examples shared with me by Scott and Richard Leeke. I tried to make these as interactive as possible, and overall I think most people found them valuable.
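
For anyone who has not seen one, the raw material for this kind of analysis is simply a plot of each measured response time against when it occurred in the test run. A minimal sketch is below; the results file and column names are placeholders rather than the output of any particular tool:

    import csv
    import matplotlib.pyplot as plt

    # Plot each measured response time against when it occurred in the run.
    # "results.csv" and its column names are placeholders, not the output
    # format of any specific load testing tool.
    elapsed, response = [], []
    with open("results.csv") as f:
        for row in csv.DictReader(f):
            elapsed.append(float(row["elapsed_seconds"]))
            response.append(float(row["response_seconds"]))

    plt.scatter(elapsed, response, s=4)
    plt.xlabel("Elapsed test time (s)")
    plt.ylabel("Response time (s)")
    plt.title("Response time scatter")
    plt.show()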

We spent the last two hours of the workshop on a problem solving opportunity presented by Marc Labranche titled Measuring performance in a partially indeterminate real-time system. Marc is an embedded systems programmer and he needed some help in determining how to best performance test his system. Marc was easily the star of the show, and it was pretty much unanimous that we were all jealous that he had such a cool problem to work on. If you review his problem statement, you'll know what I mean. I started a wiki to capture some of the ideas the group came up with, and over the next few weeks we should be fleshing those ideas out.

I thought this workshop went better than the first one. We had a great mix of beginners and experienced performance testers, and we had a developer, an architect, and an IT person to help round out the group of testers. I think the content was more challenging than the January workshop and I think the facilitation was a little smoother (I'm still working out a style).

My only complaint was that we had six people cancel at the last minute or just not show up. I don't really know how to prevent this with a free, local, five hour workshop (any ideas are welcome). But I will have to think of something. It was a little de-motivating for me, and I think the discussion could have been richer with a room full of people. Even so, I still think the workshop was a success.

Next month's topic is on unit testing. I'm intimidated by the topic and by the effort it will take to bring the developers and testers in the community together. I think that if we can bring the right group of people together, we might be able to do some really great sharing between the two roles.
Database Testing
Last weekend we held the first Indianapolis Workshops on Software Testing. The attendees were:

  • Mike Kelly

  • Jason Horn

  • Steve Lannon

  • Denise Autry

  • Linda Ellison

  • Mike Goempel

  • Rick Wellinghoff


Note that all attendees were from the Indianapolis area.

The topic we focused on for the five hour workshop was database testing. The following is a summary of presentations and ideas shared.

First, Mike Goempel shared a checklist and charter he uses when he performs exploratory testing on databases (and database-related items). His story covered his experience on past projects using both formal and informal testing methods, and it focused mostly on techniques he uses to gather information on the database and on where he might be able to find errors. Specifically, Mike shared some great ideas and experiences around interviewing DBAs, architects, and developers to uncover possible trouble points.

Jason Horn then presented a homegrown tool he developed to help both developers and testers work with and test the database. His tool, "SQL Test Harness," should be moving to an open source community soon (Jason still has some changes he wants to make first). The main function of the tool is to provide information on the database: there are functions for searching the structure of the database for all instances of a specified object, getting the definitions of objects, and getting meta-information on objects. Finally, the tool allows you to execute some database commands through its interface (which is great if you are not comfortable with SQL).

There are some limitations Jason hopes to work out when he has more time (or with contributors), but I found it immediately useful. The tool also has some neat learning features that Jason hopes to cover in the soon-to-be-added online help; for now, it simply shows you the SQL it generates and executes, so you can modify and run it yourself if desired.
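
To give a feel for what a tool like this wraps, the sketch below shows the kind of metadata query involved. It is not Jason's code; it is my own illustration against SQLite's catalog so the example stays self-contained (the same idea applies to the information_schema views on most server databases):

    import sqlite3

    # Not the SQL Test Harness -- just the kind of metadata query such a tool
    # wraps, shown against SQLite's built-in catalog. The database file and
    # search term are placeholders.
    conn = sqlite3.connect("sample.db")
    search_term = "customer"  # placeholder object name

    rows = conn.execute(
        "SELECT name, type, sql FROM sqlite_master WHERE name LIKE ?",
        (f"%{search_term}%",),
    )
    for name, obj_type, definition in rows:
        print(f"{obj_type}: {name}")
        print(definition)  # the object's CREATE statement, i.e. its definition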

After that, we did a group walkthrough of using an enterprise automation tool for database testing. Using IBM Rational Robot and a sample Access database, we created a couple of scripts that queried the database at runtime and compared the values in the database with the values shown in the sample application's GUI. We did not spend a lot of time looking at script implementations, but there was a general interest in doing something in a scripting language.
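
The pattern itself is simple enough to sketch outside of Robot. Here it is in plain Python against SQLite so it stays self-contained (this is not Robot's scripting language, and the table, column, and captured GUI value are placeholders):

    import sqlite3

    # Query the database at runtime and compare the stored value with what the
    # application's GUI displayed. The database file, table, and GUI value are
    # placeholders for whatever the application under test uses.
    def check_order_total(db_path: str, order_id: int, gui_value: str) -> bool:
        conn = sqlite3.connect(db_path)
        row = conn.execute(
            "SELECT total FROM orders WHERE id = ?", (order_id,)
        ).fetchone()
        if row is None:
            print(f"order {order_id} not found in the database")
            return False
        db_value = f"{row[0]:.2f}"
        if db_value != gui_value:
            print(f"mismatch: database has {db_value}, GUI showed {gui_value}")
            return False
        return True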

Overall, I thought the workshop went well. There was general agreement that the content was challenging and very context-specific, and that the format worked well (it is loosely based on the LAWST style of workshops). However, some time was spent at the end of the workshop discussing why attendance was not as high as the organizers had hoped.

It was the impression of the group that database testing is a challenging and intimidating type of testing. Most testers (in the Indianapolis market) are not very comfortable with their technical skills, and the level of technical competency involved in meaningful database testing is typically enough to discourage most people. This sparked a list of testing skills we thought someone interested in database testing might want to practice in order to gain more confidence:

  • Boundary testing (see Goempel's checklist for more detail)

  • Smoke testing (a minimal sketch appears after this list)

    • Database unit testing

    • Confirming database structure

    • A simple test to ensure database connectivity



  • Testing the persistence of data

  • Database corruption and security testing (they tend to go hand in hand)

  • Database performance testing
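
To make the smoke-testing items above concrete, here is a minimal sketch of what such a test might look like. SQLite stands in for whatever database the application actually uses, and the expected table names are placeholders:

    import sqlite3

    # A bare-bones database smoke test: can we connect, and does the structure
    # we expect actually exist? The table names are placeholders.
    EXPECTED_TABLES = {"customers", "orders", "order_items"}

    def smoke_test(db_path: str) -> bool:
        try:
            conn = sqlite3.connect(db_path)
            conn.execute("SELECT 1")          # connectivity check
        except sqlite3.Error as err:
            print(f"cannot connect: {err}")
            return False

        found = {name for (name,) in conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table'")}
        missing = EXPECTED_TABLES - found     # structure check
        if missing:
            print(f"missing tables: {sorted(missing)}")
            return False
        return True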


Next month's topic is on performance testing. Hopefully we will have a better turnout and learn just as much.