Test Automation
Last weekend we held the November session of the Indianapolis Workshop on Software Testing. The attendees were:

  • Andrew Andrada

  • Charlie Audritsh

  • Michael Kelly

  • Mike Slattery

  • Dana Spears

  • Gary Warren

  • Chris Wingate


The topic we focused on for the five-hour workshop was automated testing.

We started with an experience report by Mike Slattery. Mike gave a presentation on the same experience at the Indianapolis Java User Group, and that presentation is available here: http://www.searchsoft.net/ijug/ftest.ppt.

In his experience report, Mike did an excellent job painting a picture of his project (developers, TDD, XP, etc.) where they tried to use tools that maximized their test coverage while minimizing their effort. He shared both successes and failures, which he described as:
Our mistakes with respect to TDD (lots of overlap in these items):

  • No test due to lack of specification (our biggest issue!)

  • Tests running too slow

  • Lack of mocks

  • Writing code w/o an already existing test

  • Trying to satisfy more than one test at a time


Our success list is basically the inverse of the above list.
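
To make the "lack of mocks" and "tests running too slow" items concrete, here is the kind of thing Mike was getting at: hand-rolling a stand-in for a slow dependency so a unit test never touches the real system. The RateLookup and TaxCalculator names below are made up for illustration; they are not from Mike's project.

    import junit.framework.TestCase;

    // Hypothetical interface standing in for a slow external dependency
    // (a database, a remote service, etc.).
    interface RateLookup {
        double rateFor(String state);
    }

    public class TaxCalculatorTest extends TestCase {

        public void testUsesLookedUpRate() {
            // Hand-rolled mock: returns a canned rate instantly, so the test
            // stays fast and only exercises the calculator's own logic.
            RateLookup fakeLookup = new RateLookup() {
                public double rateFor(String state) {
                    return 0.07;
                }
            };
            TaxCalculator calc = new TaxCalculator(fakeLookup);
            assertEquals(107.0, calc.totalWithTax(100.0, "IN"), 0.001);
        }
    }

    // The hypothetical production class under test.
    class TaxCalculator {
        private final RateLookup lookup;

        TaxCalculator(RateLookup lookup) {
            this.lookup = lookup;
        }

        double totalWithTax(double amount, String state) {
            return amount * (1 + lookup.rateFor(state));
        }
    }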

He also shared an excellent list of resources:

Strangling legacy apps:
http://www.testing.com/cgi-bin/blog/2005/05/11
http://www.samoht.com/wiki/wiki.pl?Strangling_Legacy_Code

Use Case templates:
http://alistair.cockburn.us/usecases/uctempla.htm
http://alistair.cockburn.us/crystal/books/weuc/weuc0002extract.pdf

Tools mentioned:
http://jwebunit.sf.net
http://sf.net/projects/webunitproj
http://www.graphviz.org
http://selenium.thoughtworks.com

During his talk, I pulled out three principles he seemed to use when deciding to automate:

  1. The team makes decisions on automation - not one person.

  2. Take use cases (stories) and directly convert them into tests.

  3. Work incrementally. Write a story (use case), write a test, code. Repeat.
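
As a made-up illustration of principles 2 and 3 (this story and code are not from Mike's project), a story like "A registered user can log in with a valid password" converts almost directly into a test, and the production code is then written just to satisfy it:

    import junit.framework.TestCase;

    // Hypothetical example of converting a story straight into a test.
    // Story: "A registered user can log in with a valid password."
    public class LoginStoryTest extends TestCase {

        public void testRegisteredUserCanLogInWithValidPassword() {
            AuthService auth = new AuthService();
            auth.register("dana", "s3cret");
            assertTrue(auth.login("dana", "s3cret"));
        }

        public void testLoginFailsWithWrongPassword() {
            AuthService auth = new AuthService();
            auth.register("dana", "s3cret");
            assertFalse(auth.login("dana", "wrong"));
        }
    }

    // Minimal production code written to satisfy the tests above --
    // just enough to show the story-to-test-to-code rhythm.
    class AuthService {
        private final java.util.Map users = new java.util.HashMap();

        void register(String user, String password) {
            users.put(user, password);
        }

        boolean login(String user, String password) {
            return password != null && password.equals(users.get(user));
        }
    }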


That was followed by me sharing a recent experience testing web services. A project team I was on just wrapped up a web services project and is ramping up on another one. We used several tools, with varying degrees of success and failure.

The most effective system test tool we found was JUnit with XMLUnit. Our test cases were request XML files with corresponding expected-result XML files. Using JUnit, we called the web service. JUnit then passed the response and the expected-result XML to XMLUnit. XMLUnit processed them both through an XSLT (removing time-date stamps, values we didn't care about, etc.) and performed a diff. Results, including any errors, went to a results file.
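
For the curious, here is a rough sketch of that setup. The endpoint URL, file names, and normalize.xsl stylesheet are placeholders I've made up for illustration; the real project had its own SOAP plumbing, file layout, and stylesheet.

    import java.io.*;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    import junit.framework.TestCase;
    import org.custommonkey.xmlunit.Diff;
    import org.custommonkey.xmlunit.XMLUnit;

    public class WebServiceDiffTest extends TestCase {

        // Hypothetical endpoint and stylesheet names -- ours were project-specific.
        private static final String ENDPOINT = "http://localhost:8080/service";
        private static final File NORMALIZE_XSL = new File("normalize.xsl");

        public void testCase001() throws Exception {
            runXmlPair(new File("tc001_request.xml"), new File("tc001_expected.xml"));
        }

        // Send the request XML, normalize both documents through the XSLT
        // (stripping time-date stamps and other don't-care values), then diff.
        private void runXmlPair(File requestFile, File expectedFile) throws Exception {
            String actual = normalize(post(readFile(requestFile)));
            String expected = normalize(readFile(expectedFile));

            XMLUnit.setIgnoreWhitespace(true);
            Diff diff = new Diff(expected, actual);
            assertTrue("Differences found: " + diff.toString(), diff.similar());
        }

        // Apply the normalizing XSLT to an XML string.
        private String normalize(String xml) throws Exception {
            Transformer t = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(NORMALIZE_XSL));
            StringWriter out = new StringWriter();
            t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
            return out.toString();
        }

        // Plain HTTP POST of the request XML; a real SOAP stack would work too.
        private String post(String requestXml) throws IOException {
            HttpURLConnection conn = (HttpURLConnection) new URL(ENDPOINT).openConnection();
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "text/xml; charset=UTF-8");
            Writer w = new OutputStreamWriter(conn.getOutputStream(), "UTF-8");
            w.write(requestXml);
            w.close();
            return readAll(new InputStreamReader(conn.getInputStream(), "UTF-8"));
        }

        private String readFile(File file) throws IOException {
            return readAll(new InputStreamReader(new FileInputStream(file), "UTF-8"));
        }

        private String readAll(Reader reader) throws IOException {
            StringBuilder text = new StringBuilder();
            char[] buf = new char[4096];
            int n;
            while ((n = reader.read(buf)) != -1) {
                text.append(buf, 0, n);
            }
            reader.close();
            return text.toString();
        }
    }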

We also used a couple of performance test tools (Mercury and Rational) for some system tests. They worked okay, but required more maintenance. SOAPScope was *invaluable*, as was XMLSpy. Toward the end, a developer and I started writing some Ruby code to do some smoke testing. That worked very well, and we are using it on the next project.

We tried creating a couple of test JSPs to facilitate the testing - to give it more of a test-tool GUI feel. That didn't work so well. The code was buggy, and maintaining the test cases was harder. It just gave us more problems than it solved.

It also took us a while to figure out that we should put the test cases (XML) under source control. Don't ask me why it took so long - it just did. So now we have that figured out.

Note: other than myself, none of the 10-15 testers using JUnit knew Java (or any programming language, really). They never actually saw any code. In fact, I think all of them learned how to read XML on that project. They called JUnit using an executable that they could pass the files into. For the most part, they didn't know they were using JUnit. Don't let the J in JUnit scare you away from using it.
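
In spirit, that wrapper was just a thin main() around the same XML-diff logic, something like the hypothetical, stripped-down sketch below. A batch file passed in the file paths and the tester read the verdict from the results file; the real wrapper also made the web service call and ran the normalizing XSLT first.

    import java.io.*;

    import org.custommonkey.xmlunit.Diff;
    import org.custommonkey.xmlunit.XMLUnit;

    // Hypothetical, stripped-down flavor of the wrapper the testers ran.
    // They launched it from a batch file and never saw the Java underneath.
    public class RunXmlTest {
        public static void main(String[] args) throws Exception {
            if (args.length != 2) {
                System.err.println("usage: RunXmlTest <response.xml> <expected.xml>");
                System.exit(2);
            }
            XMLUnit.setIgnoreWhitespace(true);
            Diff diff = new Diff(readFile(args[1]), readFile(args[0]));

            // Append a one-line verdict to the shared results file.
            PrintWriter results = new PrintWriter(new FileWriter("results.txt", true));
            results.println(args[0] + (diff.similar() ? "  PASS" : "  FAIL: " + diff.toString()));
            results.close();
        }

        private static String readFile(String path) throws IOException {
            BufferedReader in = new BufferedReader(new FileReader(path));
            StringBuilder text = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) {
                text.append(line).append('\n');
            }
            in.close();
            return text.toString();
        }
    }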

For the most part, we did everything we would normally do. Our test cases were documented as XML pairs (request and expected response). We gave them a naming convention that reflected test case numbers and referred to those test case numbers in TestManager and ReqPro (for traceability and such).

Test case execution worked just like normal. In any given cycle, we tried to get through specific test cases targeting specific functionality. Reporting was the same. We took the same test case metrics, same defect metrics, etc. It's worth noting that our defect counts were lower. I don't know if that's because of the technology change, the developers doing something different, or something else, but my guess is that there's not as much low-hanging defect-fruit in web services. No GUI, no abundance of bugs. There were still enough to keep everyone busy. :)

After that we had some general discussion of test script maintenance led by Charlie Audritsh. We talked a lot about how to deal with changes made in test code, changes made to application code, environment changes, and tool issues. It was a good way to wrap things up.

Next month: security testing. Let me know if you would like to be there.
Maintaining your testing skills
A couple of weekends ago we held the August session of the Indianapolis Workshop on Software Testing. The attendees were:


  • Taher Attari

  • Charlie Audritsh

  • Laura DeVilbiss

  • Mike Goempel

  • Michael Kelly

  • Dana Spears



The topic we focused on for the five-hour workshop was maintaining your testing skills.

Our workshop started with a brainstorm of how we all maintain our testing skills. The results of that (most of them anyway) were captured and are available for download on the IWST website. I will summarize our findings. We all pretty much use five types of resources (not counting mentors) to maintain our skills: websites, books, tools, groups of people, and magazines. The following are our "top five" for each group. Top five does not imply some type of ranking or rigorous method for selection. They are simply the resources that most of us use most often.

Websites:

  • www.Stickyminds.com

  • www.Kaner.com

  • www.Testingreflections.com

  • www.jrothman.com

  • www.PerfTestPlus.com



Books:

  • Lessons Learned in Software Testing by Kaner, Bach, and Pettichord

  • Testing Computer Software by Kaner, Falk, and Nguyen

  • Quality Software Management: Systems Thinking by Weinberg

  • How to Break Software by Whittaker

  • Conjectures and Refutations: The Growth of Scientific Knowledge by Popper



Tools:

  • The IBM Rational tools and RUP

  • Watir and Ruby

  • WebGoat

  • Logic Puzzles

  • FireFox Web Developer



Groups:


Magazines:

  • Better Software

  • Software Test and Performance

  • CIO

  • Fast Company

  • Wired



Following the brainstorm, I related some of the stuff James Bach and I talked about when I went out there last month. We ended up working through some of the testing problems he gave me, including one of James Lyndsay's "machines": http://www.workroom-productions.com/black_box_machines.html. These things are great!

Also worth noting, Laura DeVilbiss started a blog following the workshop. Check it out and leave her a comment. She also has some great stuff on the question, "Can a bad user interface be the gateway to horrible user behavior?"

Next month - test patterns. Let me know if you are interested in attending.