Last weekend we held the November session of the
Indianapolis Workshop on Software Testing. The attendees were:
- Andrew Andrada
- Charlie Audritsh
- Michael Kelly
- Mike Slattery
- Dana Spears
- Gary Warren
- Chris Wingate
The topic we focused on for the five-hour workshop was automated testing.
We started with an experience report by Mike Slattery. Mike gave a presentation on the same experience at the Indianapolis Java User Group, and that presentation is available here:
http://www.searchsoft.net/ijug/ftest.ppt.
In his experience report, Mike did an excellent job painting a picture of his project (developers, TDD, XP, etc...) where they tried to use tools that maximized their test coverage while minimizing their effort. He shared both successes and failures, which he describes as:
Our mistakes with respect to TDD (lots of overlap in these items):
- No test due to lack of specification (our biggest issue!)
- Tests running too slow
- Lack of mocks
- Writing code w/o an already existing test
- Trying to satisfy more than one test at a time
Our success list is basically the inverse of the above list.
He also shared an excellent list of resources:
Strangling legacy apps:
- http://www.testing.com/cgi-bin/blog/2005/05/11
- http://www.samoht.com/wiki/wiki.pl?Strangling_Legacy_Code
Use Case templates:
- http://alistair.cockburn.us/usecases/uctempla.htm
- http://alistair.cockburn.us/crystal/books/weuc/weuc0002extract.pdf
Tools mentioned:
- http://jwebunit.sf.net
- http://sf.net/projects/webunitproj
- http://www.graphviz.org
- http://selenium.thoughtworks.com
During his talk, I pulled out three principles he seemed to use when deciding to automate:
- The team makes decisions on automation - not one person.
- Take use cases (stories) and directly convert them into tests.
- Work incrementally. Write a story (use case), write a test, code. Repeat. (See the sketch after this list.)
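To make the second and third principles concrete, here is a minimal, entirely hypothetical sketch of turning one story step into a JUnit test before any production code exists. The story, class names, and domain below are invented stand-ins, not anything from Mike's project; the point is just that the step becomes a test first, and then only enough code is written to satisfy it.

    import junit.framework.TestCase;

    public class SubmitOrderStoryTest extends TestCase {

        // Story step (hypothetical): "Customer submits an order; the system
        // confirms it and assigns an order number."  Written as a test first,
        // then just enough code is written to make it pass.
        public void testSubmittedOrderIsConfirmedAndNumbered() {
            OrderService service = new OrderService();
            OrderConfirmation confirmation = service.submit("widget", 3);

            assertTrue(confirmation.isConfirmed());
            assertNotNull(confirmation.getOrderNumber());
        }

        // Minimal stand-ins so the sketch compiles; a real project would have
        // its own domain code here.
        static class OrderConfirmation {
            private final String orderNumber;
            OrderConfirmation(String orderNumber) { this.orderNumber = orderNumber; }
            boolean isConfirmed() { return orderNumber != null; }
            String getOrderNumber() { return orderNumber; }
        }

        static class OrderService {
            OrderConfirmation submit(String item, int quantity) {
                return new OrderConfirmation("ORD-0001"); // simplest thing that passes the test
            }
        }
    }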
That was followed by me sharing a recent experience testing web services. A project team I was on just wrapped up a web services project and is ramping up on another one. We used several tools, with varying degrees of success and failure.
The most effective system test tool we found was JUnit with XMLUnit. Our test cases were request XML files with corresponding expected-result XML files. Using JUnit, we called the web service. JUnit then passed the response and the expected result XML to XMLUnit. XMLUnit processed them both through an XSLT (removing time-date stamps, values we didn't care about, etc...) and performed a diff. Results, along with any errors, went to a results file.
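For the curious, here is a rough sketch of what a test in that shape might look like. The file names, test case number, stylesheet name, and the callService() stub are all placeholders rather than our actual code; the shape is the same, though: normalize both documents through an XSLT, then let XMLUnit do the diff.

    import java.io.StringReader;
    import java.io.StringWriter;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    import org.custommonkey.xmlunit.Diff;
    import org.custommonkey.xmlunit.XMLUnit;

    import junit.framework.TestCase;

    public class WebServiceResponseTest extends TestCase {

        // Run an XML document through an XSLT that strips values we don't care
        // about (time-date stamps and the like) before comparing.
        private String normalize(String xml) throws Exception {
            Transformer t = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource("strip-volatile-values.xslt")); // placeholder stylesheet
            StringWriter out = new StringWriter();
            t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
            return out.toString();
        }

        private String readFile(String path) throws Exception {
            return new String(Files.readAllBytes(Paths.get(path)));
        }

        // Placeholder for however the web service actually gets invoked
        // (SOAP client stub, HTTP post, etc.).
        private String callService(String requestXml) {
            throw new UnsupportedOperationException("wire this to the service under test");
        }

        public void testTc042ResponseMatchesExpected() throws Exception {
            XMLUnit.setIgnoreWhitespace(true);

            String actual = normalize(callService(readFile("requests/tc042_request.xml")));
            String expected = normalize(readFile("expected/tc042_expected.xml"));

            Diff diff = new Diff(expected, actual);
            assertTrue("Response differs from expected: " + diff, diff.similar());
        }
    }

One nice property of this shape is that the "ignore this value" rules live in the XSLT rather than in Java code.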
We also used a couple of performance test tools (Mercury and Rational) for some system tests. They worked OK, but required more maintenance. SOAPScope was *invaluable*, as was XMLSpy. Toward the end, a developer and I started writing some Ruby code to do some smoke testing. That worked very well, and we are using it on the next project.
We tried creating a couple of test JSPs to facilitate the testing - to give it more of a test-tool GUI feel. That didn't work so well. The code was buggy, maintaining the test cases was harder, and it gave us more problems than it solved.
It also took us a while to figure out that we should put the test cases (XML) under source control. Don't ask me why it took so long - it just did. So now we have that figured out.
Note that other than myself, none of the 10-15 testers using JUnit knew Java (or any language, really). They never actually saw any code. In fact, I think all of them learned how to read XML on that project. They called JUnit using an executable that they could pass the files into. For the most part, they didn't know they were using JUnit. Don't let the J in JUnit scare you away from using it.
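To give a flavor of what that looked like from the testers' side, here is a purely illustrative wrapper, not our actual executable: pass in two files, get a PASS or FAIL line back, and never see Java or JUnit. The real executable drove the full JUnit run described above (web service call, XSLT normalization, results file); this sketch only shows the file-in, verdict-out idea with a bare XMLUnit comparison.

    import java.nio.file.Files;
    import java.nio.file.Paths;

    import org.custommonkey.xmlunit.Diff;
    import org.custommonkey.xmlunit.XMLUnit;

    // Illustrative only: compare an actual response file against an expected
    // file and report PASS/FAIL, so a tester never has to touch Java directly.
    public class CompareXml {
        public static void main(String[] args) throws Exception {
            if (args.length != 2) {
                System.err.println("usage: comparexml <actual.xml> <expected.xml>");
                System.exit(2);
            }
            XMLUnit.setIgnoreWhitespace(true);

            String actual = new String(Files.readAllBytes(Paths.get(args[0])));
            String expected = new String(Files.readAllBytes(Paths.get(args[1])));

            Diff diff = new Diff(expected, actual);
            boolean passed = diff.similar();
            System.out.println(args[0] + (passed ? ": PASS" : ": FAIL - " + diff));
            System.exit(passed ? 0 : 1);
        }
    }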
For the most part, we did everything we would normally do. Our test cases were documented in the XML pairing (request and response). We gave them a naming convention to reflect test case numbers and referred to those test case numbers in TestManager and ReqPro (traceability and such).
Test case execution worked just like normal. In any given cycle, we tried to get through specific test cases targeting specific functionality. Reporting was the same. We took the same test case metrics, same defect metrics, etc.... It's worth noting that our defect counts were lower. I don't know if that's because of the technology change, the developers doing something different, or something else, but my guess is that there's not as much low-hanging defect-fruit in web services. No GUI, no abundance of bugs. There were still enough to keep everyone busy. :)
After that we had some general discussion of test script maintenance led by Charlie Audritsh. We talked a lot about how to deal with changes made in test code, changes made to application code, environment changes, and tool issues. It was a good way to wrap things up.
Next month: security testing. Let me know if you would like to be there.