Posts in Test Automation
When you have data - graph it
I recently visited a friend who presented me with the following problem from Mensa Logic Puzzles:
There is logic behind the distribution of numbers in the grid. Work out what it is and then fill in the missing numbers.

mensa_fig1.JPG



When he first presented me with the problem, I spent about half an hour staring at the grid. I attempted all sorts of arithmetic operations on the data to try to find the pattern - but with no luck. After half an hour I gave up and told him I would sleep on it and solve it the next day.

The next day, when he gave me the problem again, I thought of something James Bach told me the last time we got together, "If you have data then graph it." With that in mind, I flipped open the laptop, fired up Excel, and came up with the following:

Solution One
mensa_fig2.JPG
The first thing I attempted was to sum the rows and columns. I found a repeating pattern for the rows: 35, 27, x, 35, x, 35. I quickly found that I could make that 35, 27, 35, 35, 27, 35. But that solution had two problems. First, the columns didn't add up to any sort of pattern. Second, the missing numbers in row five could be several combinations that sum to six (2 and 4, 3 and 3, 4 and 2, 1 and 5, etc...).

Solution Two
mensa_fig3.JPG
Next I tried looking at evens and odds. I was looking for some sort of pattern to the way they were distributed. I quickly gave up on that...

Solution Three
mensa_fig4.JPG
Finally, I color coded the data, giving each number its own color. It was then that I noticed that each color seemed to appear about as many times as the number it represented. When I looked at the difference between each number and the number of times its color appeared, I had a total difference of four. So I figured I had my numbers; now how would I put them in?

I then noticed that no color was touching itself. When I started plugging in the numbers I thought would solve the problem, there was only one way they would fit that kept them from touching their own color.

I had solved the problem. I gave him my answer, and he was shocked to learn that I had solved it correctly.

When trying to notice patterns in your testing, if you have data - graph it.
Test Automation
Last weekend we held the November session of the Indianapolis Workshop on Software Testing. The attendees were:

  • Andrew Andrada

  • Charlie Audritsh

  • Michael Kelly

  • Mike Slattery

  • Dana Spears

  • Gary Warren

  • Chris Wingate


The topic we focused on for the five-hour workshop was automated testing.

We started with an experience report by Mike Slattery. Mike gave a presentation on the same experience at the Indianapolis Java User Group, and that presentation is available here: http://www.searchsoft.net/ijug/ftest.ppt.

In his experience report, Mike did an excellent job painting a picture of his project (developers, TDD, XP, etc...) where they tried to use tools that maximized their test coverage while minimizing their effort. He shared both successes and failures, which he describes as:
Our mistakes made with respect to TDD (lots of overlap in these items):

  • No test due to lack of specification (our biggest issue!)

  • Tests running too slow

  • Lack of mocks

  • Writing code w/o an already existing test

  • Trying to satisfy more than one test at a time


Our success list is basically the inverse of the above list.

He also shared an excellent list of resources:

Strangling legacy apps:
http://www.testing.com/cgi-bin/blog/2005/05/11
http://www.samoht.com/wiki/wiki.pl?Strangling_Legacy_Code

Use Case templates:
http://alistair.cockburn.us/usecases/uctempla.htm
http://alistair.cockburn.us/crystal/books/weuc/weuc0002extract.pdf

Tools mentioned:
http://jwebunit.sf.net
http://sf.net/projects/webunitproj
http://www.graphviz.org
http://selenium.thoughtworks.com

During his talk, I pulled out three principles he seemed to use when deciding to automate:

  1. The team makes decisions on automation - not one person.

  2. Take use cases (stories) and directly convert them into tests.

  3. Work incrementally. Write a story (use case), write a test, code. Repeat.


That was followed by me sharing a recent experience testing web services. A project team I was on had just wrapped up a web services project and is ramping up on another one. We used several tools, with varying degrees of success and failure.

The most effective system test tool we found was JUnit with XMLUnit. Our test cases were request XML files with corresponding expected-result XML files. Using JUnit we called the web service. JUnit then passed the response and the expected-result XML to XMLUnit. XMLUnit processed them both through an XSLT (removing time-date stamps, values we didn't care about, etc...) and performed a diff. Results went to a results file along with any errors.
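A minimal sketch of that flow, assuming XMLUnit 1.x (org.custommonkey.xmlunit) and a JUnit 3-style test. The file names, the normalize.xslt stylesheet, and the callWebService helper are illustrative stand-ins for the project-specific pieces, not our actual code:

import java.io.File;
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import junit.framework.TestCase;
import org.custommonkey.xmlunit.Diff;
import org.custommonkey.xmlunit.XMLUnit;

public class WebServiceResponseTest extends TestCase {

    public void testCase042() throws Exception {
        XMLUnit.setIgnoreWhitespace(true);

        // Send the request XML to the service and load the expected result.
        String response = callWebService(readFile("tc042_request.xml"));
        String expected = readFile("tc042_expected.xml");

        // Run both documents through the same XSLT to strip time-date stamps
        // and other values we don't care about, then diff what is left.
        Diff diff = new Diff(normalize(expected), normalize(response));
        assertTrue("tc042 differences: " + diff.toString(), diff.similar());
    }

    private String normalize(String xml) throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new File("normalize.xslt")));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
        return out.toString();
    }

    private String readFile(String path) throws Exception {
        // Plain file read; a real suite would keep helpers like this in a utility class.
        return new String(java.nio.file.Files.readAllBytes(java.nio.file.Paths.get(path)), "UTF-8");
    }

    private String callWebService(String requestXml) {
        // Placeholder for the project-specific SOAP/HTTP call.
        throw new UnsupportedOperationException("call the web service here");
    }
}

The nice part of this arrangement is that everything interesting lives in the XML files and the stylesheet, so adding a test case means adding a request/expected pair rather than writing more Java.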

We also used a couple of performance test tools (Mercury and Rational) for some system tests. They worked OK, but required more maintenance. SOAPScope was *invaluable*, as was XMLSpy. Toward the end, a developer and I started writing some Ruby code to do some smoke testing. That worked very well and we are using it on the next project.

We tried creating a couple of test JSPs to facilitate the testing - give it more of a test-tool GUI feel. That didn't work so well. The code was buggy, maintaining the test cases was harder, and it just gave us more problems than it solved.

It also took us a while to figure out that we should put the test cases (XML) under source control. Don't ask me why it took so long - it just did. But now we have that figured out.

Note, other than myself, none of the 10-15 testers using JUnit knew Java (or any language, really). They never actually saw any code. In fact, I think all of them learned how to read XML on that project. They called JUnit through an executable that they could pass the files into. For the most part, they didn't even know they were using JUnit. Don't let the J in JUnit scare you away from using it.
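That executable was essentially a thin command-line wrapper around the same comparison. Something along these lines would do it - this sketch is illustrative rather than our actual wrapper, and the real one also handled sending the request to the service before comparing:

import java.io.FileReader;
import org.custommonkey.xmlunit.Diff;
import org.custommonkey.xmlunit.XMLUnit;

// Illustrative wrapper: a tester passes two XML file paths on the command line
// and gets a pass/fail result without ever seeing Java or JUnit.
//
//   java CompareXml tc042_expected.xml tc042_response.xml
public class CompareXml {
    public static void main(String[] args) throws Exception {
        if (args.length != 2) {
            System.err.println("usage: CompareXml <expected.xml> <actual.xml>");
            System.exit(2);
        }
        XMLUnit.setIgnoreWhitespace(true);
        Diff diff = new Diff(new FileReader(args[0]), new FileReader(args[1]));
        if (diff.similar()) {
            System.out.println("PASS");
        } else {
            System.out.println("FAIL: " + diff.toString());
            System.exit(1);
        }
    }
}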

For the most part, we did everything we would normally do. Our test cases were documented as the XML pairs (request and expected response). We gave the files a naming convention that reflected test case numbers, and referred to those test case numbers in TestManager and ReqPro (traceability and such).

Test case execution worked just like normal. In any given cycle, we tried to get through specific test cases targeting specific functionality. Reporting was the same. We took the same test case metrics, same defect metrics, etc.... It's worth noting that our defect counts were lower. I don't know if that's because of the technology change, the developers doing something different, or something else, but my guess is that there's not as much low-hanging defect fruit in web services. No GUI, no abundance of bugs. There were still enough to keep everyone busy. :)

After that we had some general discussion of test script maintenance led by Charlie Audritsh. We talked a lot about how to deal with changes made in test code, changes made to application code, environment changes, and tool issues. It was a good way to wrap things up.

Next month: security testing. Let me know if you would like to be there.