Posts in Performance Testing
Late nights, early mornings, too many projects, and Hyperion IR Web Client
I'm no stranger to work. I'm what some might call a workaholic. However, recently I've been working too much. Between my day job, my writing, and AST, I'm a bit frazzled. One of the nice things I've found about our community is that when the going gets tough, people are there to help.

In AST, Scott Barber and Cem Kaner have picked up some of my slack, and I'm very appreciative. In my writing, my now long-time co-author has been patient and encouraging as we work through (yet another) dry spell. And at work I've recently had the pleasure of interacting with Alex Podelko as I've struggled through too many projects (often at odd hours).

I've been doing a lot of multi-project multi-tasking (read more about that here). It's kinda like putting your brain in a blender. A couple of weeks ago, one of my team members had to stop a conversation with me because I couldn't focus long enough to process the information they were giving me. Not good...

Recently, these behaviors became manifest while I was working with one of our performance testers to create some performance scenarios for Hyperion IR Web Client. Once the Web Client was loaded, we couldn't see any traffic in our recordings for the scenarios. Thinking we had an incorrect setting, we did what all performance testers do and turned to Google for the answer. You can't throw a rock without hitting Alex's name if you Google performance testing and Hyperion. I've read plenty of Alex's writings, so I figured this was as good a time as any to introduce myself.

Alex was incredibly helpful in working through my problem. He asked questions, sent me code, and looked at my screenshots. At one point Alex very gracefully took a step back and asked me what exactly I was trying to record. I think he figured out that I clearly wasn't thinking, and he very nicely pointed that out. Web Clients run on the client side (hence the nifty naming convention); after you load a document, nothing is communicated to the server until you request data. Data conversion, building graphs, filtering, and so on all happen on the client side.

I was trying to record client-side activity with a performance test tool. I would like to think that on a good day, I would know better. As I continue in my transition from technical leader to manager, I wonder if this is one of the common ways in which managers lose their ability to contribute productively to technical solutions. It's not that I don't still possess the technical skills; I'm sure I do. It's that I can't focus on anything long enough to get the clarity of thought one needs when solving difficult technical problems. I'm sure Rothman's written about this phenomenon extensively (and I'm sure I've read it – I just can't remember right now).
Scripting for Testers and Test Automation with Open Source Tools

This weekend we held the final IWST workshop for 2007. The topic of the five-hour workshop was 'Scripting for Testers and Test Automation with Open Source Tools.' What a great workshop! We packed in seven experience reports (with open-season questioning) and talked a bit about IWST for 2008. Attendees for the workshop included:


  • Andrew Andrada
  • Charlie Audritsh
  • Michael Goempel
  • Michael Kelly
  • John McConda
  • Chad Molenda
  • Cathy Nguyen
  • Charles Penn
  • Mike Slattery
  • Gary Warren
  • Chris Wingate
  • Jeff Woodard


We opened the workshop with an experience report from Charles Penn on learning Watir. Charles covered some of the resources he used to learn the tool and shared the details of his experience getting popups to work. Charles currently uses Watir for some of his regression testing; he has a suite that he runs daily over lunch. The code Charles used for his popups can be found here: Pop Up Clicker and a Test Logging Example
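
Charles's actual code is in the links above. Purely as a sketch of the general pattern Watir testers used at the time (and not Charles's code), here's a minimal popup clicker. It assumes Windows, Ruby 1.8's Win32API, and a dialog that can be dismissed by pressing Enter; the polling interval and timeout are illustrative. The idea is to run it as a second process just before triggering the popup, because the Watir call that raises the dialog blocks until the dialog is dismissed:

    # Minimal popup-clicker sketch -- illustrative only, not Charles's code.
    require 'win32api'

    find_window = Win32API.new('user32', 'FindWindow', ['P', 'P'], 'L')
    set_fg      = Win32API.new('user32', 'SetForegroundWindow', ['L'], 'I')
    keybd_event = Win32API.new('user32', 'keybd_event', ['I', 'I', 'L', 'L'], 'V')

    VK_RETURN       = 0x0D
    KEYEVENTF_KEYUP = 0x02

    30.times do
      hwnd = find_window.call('#32770', nil)  # '#32770' is the standard Windows dialog class
      if hwnd > 0
        set_fg.call(hwnd)                                   # bring the dialog to the front...
        keybd_event.call(VK_RETURN, 0, 0, 0)                # ...press Enter...
        keybd_event.call(VK_RETURN, 0, KEYEVENTF_KEYUP, 0)  # ...and release it
        break
      end
      sleep 1  # poll once a second, give up after 30 seconds
    end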


Following Charles, Jeff Woodard shared an experience using AutoIt to automate the import and export of test cases with IBM Rational ManualTest. Jeff worked at a client that required the use of Rational ManualTest, but he felt the tool restricted his productivity. So he created some simple scripts that let him work in his tool of choice for test case design and documentation; then all he had to do was run the scripts to "check in" the tests. With about two days of scripting, he estimates he shaved more than a week of administrative testing tasks from his workload. He managed over 700 manual test cases using AutoIt. For those who might be interested in his solution, the AutoIt code can be found here: export and import


Next, Cathy Nguyen shared how she currently uses Selenium for her test automation. The application Cathy tests is very Ajax-intensive, and Selenium was one of the tools she could easily get working with the application. She uses the Firefox recorder, then takes those tests and converts them to C# for integration and management in Visual Studio Team System. With a bit of help, she was able to create an 'MS Unit Test' converter, allowing her to integrate her Selenium tests with the developer tests already in use. Results can be tracked in the team SharePoint site, and the scripts can be stored in source control. One of her next steps is to get the scripts running as part of the continuous integration process. The code that Cathy used to convert the tests to MS unit tests can be found here: tbd


After Cathy, Chris Wingate and Chad Molenda shared a joint experience using Watir for smoke testing multiple projects in multiple environments, with several builds a day. The load on the smoke test scripts required them to migrate the smoke tests from IBM Rational Robot to Watir (faster execution, parallel execution, and easier code maintenance). They ran into the same popup issues Charles described, along with threading problems and the challenge of managing multiple concurrent browsers on the same machine. They are working on a new method that manages dialogs based on the parent window rather than going through user32.dll directly. I believe they plan on sharing that code back with the Watir community when it's complete, so look for it in a future Watir release.


John McConda then presented an experience using RadView WebLOAD on a client project. WebLOAD follows an open source model: the open source version offers a core feature set, with a commercial version for clients who need more features or more users. John said he was able to get it working with some Ajax and did a quick demo against the Mobius Labs website.


Following John, Charlie Audritsh presented some scripting tips for performance testers. First he showed a trick he uses to script usage model paths: a case statement driven by a random number generator. This simple technique allowed him to reduce the number of scripts required to implement his workload model, and also let him implement fractional percentages for user groups/activities within the model (a sketch of the idea appears below). After that, Charlie showed a stored procedure he wrote to help with performance test logging. He uses it for debugging and for detailed logging when he needs it (and turns it off when he doesn't). An example of both techniques Charlie shared can be found here: dec_iwst_audritsh.wri
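
To make the first trick concrete, here's a minimal, tool-neutral sketch in Ruby. The action names and percentages are made up for illustration; in a real load test each branch would execute one of the scripted user paths. Drawing a random number from 0-999 rather than 0-99 is what makes fractional percentages like 9.5% possible:

    # Sketch of the case-statement/random-number workload trick (illustrative).
    # Placeholder actions stand in for real scripted user paths.
    def browse_catalog;     puts 'browse'; end
    def search_products;    puts 'search'; end
    def place_order;        puts 'order';  end
    def check_order_status; puts 'status'; end

    roll = rand(1000)  # 0..999; one draw per virtual-user iteration

    case roll
    when 0...650   then browse_catalog      # 65.0% of iterations
    when 650...900 then search_products     # 25.0%
    when 900...995 then place_order         #  9.5%
    else                 check_order_status #  0.5%
    end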


Finally, Mike Slattery shared a number of tools he has tried in his testing, including Jacobie, webunitproj, jwebunit, htmlunit, httpunit, and Selenium. He found that some common problems cropped up regardless of the tool: popups, difficulty having the same tests do double duty as functional and performance tests, difficulty testing stylistic aspects of the application, and general test code fragility. Mike also mentioned the ability to build your own recorders in Selenium, something I wasn't aware of. I know I need to check that out.


On the topic of future IWST workshops, I'm sure we will do something for 2008. I don't think they will be every month, but we'll see what we can do. I plan on starting some discussions on the IWST mailing list, and we'll see what the community has energy for. I will also get all this info (code and such, when I get it) up on the IWST site at some point. I think the site needs an upgrade and a bit of a redesign in the coming months.


Thanks to all who participated in the 2007 Indianapolis Workshops on Software Testing. The full list of attendees for 2007 includes:


  • Andrew Andrada
  • Charlie Audritsh
  • Dave Christiansen
  • Howard Clark
  • Lisa Etherington
  • Michael Goempel
  • Jason Horn
  • Karen Johnson
  • Michael Kelly
  • Steve Lannon
  • Kelvin Lim
  • Baher Malek
  • John McConda
  • Scott McDevitt
  • Rick Moffatt
  • Chad Molenda
  • John Montano
  • Cathy Nguyen
  • Charles Penn
  • Kenn Petty
  • Vishal Pujary
  • Mike Slattery
  • Michael Vance
  • David Warren
  • Gary Warren
  • Chris Wingate
  • Jeff Woodard