Last weekend we held the February session of the Indianapolis Workshops on Software Testing. The attendees were:
- Jason Halla
- Marc Labranche
- Denise Autry
- Charlie Audritsh
- John McConda
- Andrew Andrada
- Dana Spears
- Michael Kelly
- Kurt McKain
The topic we focused on for the five-hour workshop was performance data analysis and interpretation. The following is a summary of the presentations and ideas shared.
First, Charlie Audritsh shared a presentation centered on an experience he had while performance testing. While Charlie's slides were fairly generic in content, his actual story was quite interesting and generated a lot of discussion.
Charlie shared a recent project he struggled with and how he overcame the problem. He needed to develop a test, or series of tests, that would prove the system could handle 44,000 transactions per month during normal business hours. Doing the math and working backwards, he concluded that this was equivalent to a load of 100 users generating a total of 10 transactions a minute. We kept him honest and made him work the math for us... it seemed to make sense. The number of users was simply a function of how many would need to be on the system to actually generate those 10 transactions a minute; the number 100 was not significant for any other reason.
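To make the working-backwards arithmetic concrete, here is a rough sketch in Python. The 44,000-per-month requirement is from Charlie's talk; the business-day count, hours per day, and peak factor are my own assumptions, since I don't remember his exact figures:

    # Rough "work backwards" arithmetic. The 44,000 figure is the
    # requirement; business days, hours, and peak factor are assumptions.
    MONTHLY_TRANSACTIONS = 44000
    BUSINESS_DAYS = 22        # assumed business days per month
    HOURS_PER_DAY = 8         # assumed business hours per day

    minutes = BUSINESS_DAYS * HOURS_PER_DAY * 60
    average_rate = MONTHLY_TRANSACTIONS / minutes   # about 4.2 per minute

    # A steady monthly average understates busy periods, so pad it with
    # a peak factor (assumed) to land on a target like 10 per minute.
    PEAK_FACTOR = 2.4
    target_rate = average_rate * PEAK_FACTOR
    print("average %.1f/min, target %.1f/min" % (average_rate, target_rate))

However you slice the assumptions, the point of the exercise is the same: the user count falls out of the target transaction rate, not the other way around.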
Using this logic, Charlie determined that if his test ran for one hour and completed a total of 600 transactions, it passed. Well, it failed. The rest of the talk (and the questions) focused on the techniques he used to debug the problems he found and how he isolated some of the bottlenecks. We beat this horse to death, and Charlie was a good sport about it.
Following that, I presented some slides on scatter charts. The first half of the presentation was lifted, with permission, from Scott Barber's paper Interpreting Scatter Charts. I mixed in some simple examples of my own and talked about how I apply some of the patterns Scott identifies. The second half was a combination of examples from one of my past projects and a host of examples shared with me by Scott and Richard Leeke. I tried to make these as interactive as possible, and I think most attendees found it valuable.
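For anyone who hasn't worked with them, a scatter chart in this context plots each individual transaction's response time against the time it completed, so patterns like banding, stacking, or a slow upward drift jump out visually. Here is a minimal sketch (my own, not from the workshop slides) using matplotlib with made-up data:

    import matplotlib.pyplot as plt

    # Hypothetical per-transaction results: (seconds into the run,
    # response time in seconds). A real test run would have thousands.
    results = [(5, 0.8), (12, 0.9), (31, 1.1), (47, 3.2), (63, 0.9),
               (78, 1.0), (92, 3.4), (110, 1.2), (125, 3.3), (140, 1.1)]

    elapsed, response = zip(*results)
    plt.scatter(elapsed, response, s=10)
    plt.xlabel("Seconds into test run")
    plt.ylabel("Response time (seconds)")
    plt.title("Response time scatter chart")
    plt.show()

Even in this made-up data you can see a band of responses around one second and a second band above three seconds, which is exactly the kind of pattern the paper teaches you to chase down.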
We spent the last two hours of the workshop on a problem-solving opportunity presented by Marc Labranche, titled "Measuring performance in a partially indeterminate real-time system." Marc is an embedded systems programmer, and he needed some help determining how best to performance test his system. Marc was easily the star of the show, and we were more or less unanimously jealous that he had such a cool problem to work on. If you review his problem statement, you'll know what I mean. I started a wiki to capture the ideas the group came up with, and over the next few weeks we should be fleshing them out.
I thought this workshop went better than the first one. We had a great mix of beginners and experienced performance testers, and we had a developer, an architect, and an IT person to help round out the group of testers. I think the content was more challenging than the January workshop's, and I think the facilitation was a little smoother (I'm still working out a style).
My only complaint was that six people canceled at the last minute or simply didn't show up. I don't really know how to prevent this with a free, local, five-hour workshop (any ideas are welcome), but I will have to think of something. It was a little demotivating for me, and I think the discussion could have been richer with a full room. Even so, I still think the workshop was a success.
Next month's topic is unit testing. I'm intimidated by the topic and by the effort it will take to bring the developers and testers in the community together. I think that if we can bring the right group of people together, we might be able to do some really great sharing between the two roles.