Scripting for Testers and Test Automation with Open Source Tools

This weekend we held the final IWST workshop for 2007. The topic of the five hour workshop was 'Scripting for Testers and Test Automation with Open Source Tools.' What a great workshop! We packed in seven experience reports (with open season questioning) and we talked a bit about IWST for 2008. Attendees for the workshop included:




  • Andrew Andrada

  • Charlie Audritsh

  • Michael Goempel

  • Michael Kelly

  • John McConda

  • Chad Molenda

  • Cathy Nguyen

  • Charles Penn

  • Mike Slattery

  • Gary Warren

  • Chris Wingate

  • Jeff Woodard


We opened the workshop with an experience report from Charles Penn on learning Watir. Charles covered some of the resources he used to learn the tool and shared the details of his struggle to get popups working. Charles currently uses Watir for some of his regression testing; he has a suite that he runs daily over lunch. The code Charles used for his popups can be found here: Pop Up Clicker and a Test Logging Example


Following Charles, Jeff Woodard shared an experience where he used AutoIT to automate the import and export of test cases with IBM Rational ManualTest. Jeff worked for a client that required the use of Rational ManualTest, and he felt the tool restricted his productivity. So he created some simple scripts that let him work in his tool of choice for test case design and documentation; then all he had to do was run the scripts to "check in" the tests. With about two days of scripting, he estimates he shaved over a week of administrative testing tasks from his workload. He managed over 700 manual test cases using AutoIT. For those who might be interested in his solution, the AutoIT code can be found here: export and import


Next, Cathy Nguyen shared how she currently uses Selenium for her test automation. The application Cathy tests is very Ajax intensive, and Selenium was one of the tools she could easily get working with the application. She uses the Firefox recorder, then takes those tests and converts them to C# for integration and management in Visual Studio Team System. With a bit of help she was able to create an 'MS Unit Test' converter, allowing her to integrate her Selenium tests with the developer tests already in use. Results can be tracked in the team SharePoint site and the scripts can be stored in source control. One of her next steps is to get the scripts running as part of the continuous integration process. The code that Cathy used to convert the tests to MS unit tests can be found here: tbd
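Cathy's actual converter code isn't available yet, but the general idea behind that kind of converter - mapping each recorded Selenium step (a command, target, value triple) onto a test-framework statement - can be sketched. This toy Python version, with a command mapping and names I've invented for illustration, emits a C#-style test method as text:

```python
# Each recorded Selenium IDE step is a (command, target, value) triple.
# A real converter would cover many more commands and handle locators;
# this just shows the mapping idea.
TEMPLATE = """[TestMethod]
public void {name}()
{{
{body}
}}"""

COMMAND_MAP = {
    "open":       'selenium.Open("{target}");',
    "type":       'selenium.Type("{target}", "{value}");',
    "click":      'selenium.Click("{target}");',
    "assertText": 'Assert.AreEqual("{value}", selenium.GetText("{target}"));',
}

def convert(name, steps):
    """Turn recorded steps into the text of one unit test method."""
    lines = []
    for command, target, value in steps:
        stmt = COMMAND_MAP[command].format(target=target, value=value)
        lines.append("    " + stmt)
    return TEMPLATE.format(name=name, body="\n".join(lines))

# A made-up recording of a login smoke test:
recorded = [
    ("open", "/login", ""),
    ("type", "id=user", "cathy"),
    ("click", "id=submit", ""),
    ("assertText", "id=banner", "Welcome"),
]
print(convert("LoginSmokeTest", recorded))
```

Once tests exist as ordinary unit test code like this, they pick up everything the developer tests already get for free: source control, result tracking, and (eventually) continuous integration.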


After Cathy, Chris Wingate and Chad Molenda shared a joint experience of using Watir for smoke testing multiple projects in multiple environments, with several builds a day. The load on the smoke test scripts required them to migrate the smoke tests from IBM Rational Robot to Watir (faster execution, parallel execution, and easier code maintenance). They experienced similar popup issues, and also ran into threading problems and difficulties managing multiple concurrent browsers on the same machine. They are working on a new method that manages dialogs based on the parent window instead of going through user32.dll. I believe they plan to share that code back with the Watir community when it's complete, so look for it in a future Watir release.


John McConda then presented an experience of using RadView WebLOAD on a client project. WebLOAD follows an open source model: some features are available in the free open source version, with a commercial version for clients who need more features or users. John said he was able to get it to work with some Ajax and did a quick demo against the Mobius Labs website.


Following John, Charlie Audritsh presented some scripting tips for performance testers. First he showed a trick he uses to script usage model paths using a case statement with a random number generator. This simple technique allowed him to reduce the number of scripts required to implement his workload model, and also allowed him to implement fractional percentages for user groups/activities within the model. After that, Charlie showed a stored procedure he wrote to help with performance test logging. He uses it for debugging and for detailed logging when he needs it (and turns it off when he doesn't). An example of both techniques Charlie shared can be found here: dec_iwst_audritsh.wri
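Charlie's case-statement trick translates to almost any scripting language. Here's a rough Python sketch of the idea (the action names and percentages are invented for illustration - Charlie's real workload model and tool weren't captured in my notes). Drawing a float instead of an integer is what makes fractional percentages like 2.5% possible:

```python
import random

# Workload model: each virtual-user iteration picks one business
# process according to the weights below. Weights are percentages
# and should sum to 100.
WORKLOAD = [
    ("browse_catalog", 60.0),
    ("search", 25.0),
    ("checkout", 12.5),
    ("admin_report", 2.5),
]

def pick_action(rng=random):
    """Return one action name, chosen per the workload percentages."""
    roll = rng.uniform(0, 100)
    cumulative = 0.0
    for action, pct in WORKLOAD:
        cumulative += pct
        if roll < cumulative:
            return action
    return WORKLOAD[-1][0]  # guard against float rounding at 100.0

# One script can now drive every path in the model:
counts = {name: 0 for name, _ in WORKLOAD}
for _ in range(10000):
    counts[pick_action()] += 1
```

The payoff is the one Charlie described: a single script implements the entire workload model, and adjusting the mix is just editing the weight table.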


Finally, Mike Slattery shared a number of tools he has tried for his testing, including Jacobie, webunitproj, jwebunit, htmlunit, httpunit, and Selenium. He found some common problems he had to work through regardless of the tool: popups, difficulty in having the same tests do double duty as functional and performance tests, difficulty in testing stylistic aspects of the application, and general test code fragility. Mike did say something about the ability to build your own recorders in Selenium, something I wasn't aware of. I know I need to check that out.


On the topic of future IWST workshops, I'm sure we will do something for 2008. I don't think they will be every month, but we'll see what we can do. I plan on starting some discussions on the IWST mailing list and we'll see what the community has energy for. I will also get all this info (code and such - when I get it) up on the IWST site at some point. I think I need to upgrade and redesign a bit in the coming months.


Thanks to all who participated in the 2007 Indianapolis Workshops on Software Testing. The full list of attendees for 2007 include:




  • Andrew Andrada

  • Charlie Audritsh

  • Dave Christiansen

  • Howard Clark

  • Lisa Etherington

  • Michael Goempel

  • Jason Horn

  • Karen Johnson

  • Michael Kelly

  • Steve Lannon

  • Kelvin Lim

  • Baher Malek

  • John McConda

  • Scott McDevitt

  • Rick Moffatt

  • Chad Molenda

  • John Montano

  • Cathy Nguyen

  • Charles Penn

  • Kenn Petty

  • Vishal Pujary

  • Mike Slattery

  • Michael Vance

  • David Warren

  • Gary Warren

  • Chris Wingate

  • Jeff Woodard

Windows command line tools
From Lesson 2 - Basic commands in Linux and Windows, I learned about three cool Windows tools I didn't already know.

tracert host
Show the route that packets follow to reach the machine "host". The command tracert is an abbreviation of trace route; it shows the route a packet follows from the origin (your machine) to the destination machine, and can also tell you the time each hop takes. At most, 30 hops will be listed. It is sometimes interesting to observe the names of the machines the packets travel through.

route print
Display the routing table. The command route is used to define static routes, to delete routes, or simply to see the current state of the routes.

netstat
Displays information on the status of the network and established connections with remote machines.



When I tried "tracert www.michaeldkelly.com" I got over 30 hops. PerfTestPlus.com took 21 hops and Satisfice.com only 14 - fewer than half of mine (darn hosting company). Hop count isn't the same thing as speed, but fewer hops generally means less distance for each packet to travel.

My next step is to figure out how to actually use this information. All with time. This at least gives me a place to start.