SOA Testing
In April we held the first IWST workshop for 2008. The topic of the all-day workshop was ‘SOA Testing.’ We were able to get through five experience reports (with open season questioning) and we looked at some specific examples in iTKO LISA. Thanks to Mobius Labs for hosting us and providing breakfast. Attendees of the workshop were:
- Ken Ahrens
- Michael Goempel
- Ingrid Grant
- Michael Kelly
- John McConda
- Matt Osgatharp
- Eddie Robinson
- Jeff White
- Jeffrey Woodard
- Christina Zaza
The first ER came from Ken Ahrens, who related an experience participating in an SOA evaluation project for a government agency. Ken gave a great overview of the technologies involved, and was kind enough to set the stage for the beginners in the room by defining all the acronyms (SOAP, JMS, MQ, ESB, etc.). He also talked a bit about some of the different models for SOA (pub/sub, sync/async, etc.). In the project, his team compared different SOA technologies and products to determine which was the best fit for the client. For example, they tested different ESBs and ESB configurations to determine which would perform best in the client's context.
During the presentation, Ken spent some time talking about the current state of SOA and some of the common challenges he's seen testing in SOA environments. I found his discussion of the common themes and the "Three C's" particularly interesting. Dave Christiansen and I gave a webinar on some of the challenges of testing SOA a few months ago, and if I could go back, I would frame those challenges using the "Three C's" instead of the way we did it. It's a useful model for talking about how SOA testing differs from more traditional manual testing contexts.
After Ken's talk, Tina Zaza presented her experience doing component-level testing for a financial services application. Tina and her team did manual testing of several services during her first project as a tester. She used Excel to manage test coverage and traceability. Tina struggled with some of the short-term thinking of the contract staff she worked with, which led to rework and lost time. She found that inter-team communication was initially a big challenge (getting the BAs, Devs, and Testers to all talk). They have since tried to address that by having the developers review the test case XML before the initial test cycle, asking "Is this the right XML?" They also struggled to manage the volume of XML, running into problems with naming, tracking versions, showing relationships, and backup/recovery.
Tina's talk got praise from almost everyone during checkout. It resonated with me because many of the challenges Tina faced on the project are universal challenges, not specific to SOA. It highlighted which common testing problems SOA fixes, which it creates, and which it doesn't change at all. Tina is also very candid, which made it easy to ask questions.
After Tina's talk, Eddie Robinson presented an experience report on testing supplier-management software for a big-three automaker. Eddie presented a tool he built in Excel to convert a largely manual process into an automated execution and test management framework. In his project, there was a nightly FTP from the client. A VBS program pulled those files, and another program persisted the changes into a staging database. Eddie's spreadsheet then executed a large number of queries (each query representing a test case) and checked the actual results against expected results. A summary tab in the spreadsheet rolled up the results and presented a summary of execution to date. It also had some columns to help track issues logged in UAT as well as in functional testing.
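For anyone who hasn't seen this pattern, here's a rough sketch of the query-per-test-case idea in Ruby. Eddie's actual tool lived in Excel and VBS; the sample test cases, the run_query stub, and every name below are my own illustrative assumptions, not his implementation.

```ruby
# Each test case pairs a query against the staging database with the
# result we expect to see after the nightly load. (Illustrative data only.)
TEST_CASES = [
  { name: "supplier count",   sql: "SELECT COUNT(*) FROM suppliers",                       expected: "1500" },
  { name: "no orphan orders", sql: "SELECT COUNT(*) FROM orders WHERE supplier_id IS NULL", expected: "0" }
]

# Placeholder for whatever actually talks to the staging database
# (ODBC calls from the spreadsheet, in Eddie's case).
def run_query(sql)
  # ... execute against the staging database and return the scalar result ...
  "0"
end

results = TEST_CASES.map do |tc|
  actual = run_query(tc[:sql])
  { name: tc[:name], passed: actual == tc[:expected], actual: actual, expected: tc[:expected] }
end

# Roll-up, playing the role of the spreadsheet's summary tab.
passed = results.count { |r| r[:passed] }
puts "#{passed}/#{results.size} passed"
results.reject { |r| r[:passed] }.each do |r|
  puts "FAIL #{r[:name]}: expected #{r[:expected]}, got #{r[:actual]}"
end
```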
After Eddie's experience report I talked about a project where Jason Horn and I did what we affectionately call "bulk testing" with some simple Ruby scripts. The concept is relatively straightforward. If you are creating a series of web services that need to support a known data set (say, a database full of existing production data) and you want to test a significant portion of that dataset (but not necessarily under load), then you're bulk testing.
We created a process that would:
- take production data
- scrub it to remove any protected information
- generate a request XML for each record
- submit those requests to a service
- check the response for exceptions
- persist the response
- check the persistence status for exceptions
- log any issues
- summarize results
It's a fairly simple process - only a couple hundred lines of code all told. We created tens of thousands of test cases (if you view each request as a test). We found a lot of defects with this testing, and we also ended up giving the development team a prioritized list (based on the number of records failing per defect) to work from. It was one of the best examples of one-time automation I've ever done: low cost to create the automation, high value returned, and mothball the scripting assets when complete.
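To make the shape of that loop concrete, here's a minimal sketch in Ruby. The endpoint, the CSV of scrubbed records, and the request XML are invented for illustration, and I've left out the persistence checks; this is not the actual script Jason and I wrote.

```ruby
require 'csv'
require 'net/http'
require 'uri'

SERVICE_URI = URI("http://example.com/customerService")  # hypothetical endpoint
failures = []
count = 0

# Assume the production data has already been scrubbed into a CSV.
CSV.foreach("scrubbed_records.csv", headers: true) do |record|
  count += 1

  # Generate a request XML for this record (real requests were far richer).
  request_xml = <<~XML
    <customerRequest>
      <id>#{record['id']}</id>
      <name>#{record['name']}</name>
    </customerRequest>
  XML

  # Submit the request and check the response for exceptions.
  response = Net::HTTP.post(SERVICE_URI, request_xml, "Content-Type" => "text/xml")
  if response.code != "200" || response.body.include?("Exception")
    failures << { id: record['id'], detail: response.body[0, 200] }
  end
end

# Summarize results; grouping failures by error text is what gave us the
# prioritized defect list for the developers.
puts "#{count} requests sent, #{failures.size} failures"
failures.group_by { |f| f[:detail] }.each do |detail, recs|
  puts "#{recs.size} records: #{detail}"
end
```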
After my talk, Ken gave a second experience report on service-oriented virtualization. Ken's customer was unable to run performance tests on a regular basis before they implemented virtualization. That is, testing was an "event" where tens of teams had to coordinate to make a load test happen, and the high cost meant it only happened a few times a year. After they implemented virtualization, performance tests ran daily. Ken figures the client got a 100x increase in the number of tests they could run.
Virtualization at the message level gave them a greater ability to experiment with questions like "What if this key service slows down significantly?" or to try different data scenarios, like "What if the lookup returns 900 records instead of 10?" These are things they couldn't do in their typical load test environment.
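To illustrate the kind of knobs this gives you, here's a toy message-level stub in Ruby with configurable latency and record count. iTKO LISA handles this without hand-rolled code; the endpoint, delay, and record count below are assumptions made up for the example.

```ruby
require 'webrick'

DELAY_SECONDS = 2     # simulate the key service slowing down significantly
RECORD_COUNT  = 900   # simulate the lookup returning 900 records instead of 10

server = WEBrick::HTTPServer.new(Port: 8080)

# Stand-in for the real lookup service, answering at a fixed path.
server.mount_proc '/lookupService' do |_req, res|
  sleep DELAY_SECONDS   # inject latency into every response

  records = (1..RECORD_COUNT).map { |i| "<record id=\"#{i}\"/>" }.join
  res['Content-Type'] = 'text/xml'
  res.body = "<lookupResponse>#{records}</lookupResponse>"
end

trap('INT') { server.shutdown }
server.start
```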
They were also able to mock new services quickly, even before the code had been implemented. This let performance testing start earlier in the SDLC. It also surfaced misunderstandings between teams earlier in the project, because testing was exercising the mocked assumptions. Given the amount of performance testing I've been doing on services, I thought this was the coolest ER. It gave me a lot to think about.
The next workshop is Friday, September 26 on the topic of "Testers who write code." If you have an ER or would just like to attend, drop me a line.