Effective corrective action for test projects in trouble
This weekend we held the January session of the Indianapolis Workshop on Software Testing. The attendees were:
- Andrew Andrada
- Charlie Audritsh
- Lisa Etherington
- Michael Goempel
- Michael Kelly
- Baher Malek
- John McConda
- Kenn Petty
- Vishal Pujary
The topic we focused on for the five-hour workshop was effective corrective action for test projects in trouble.
The workshop started with me sharing an experience from a recent project where I both created a troubled test project and then worked my way out of it. I talked about some of the reasons we became troubled, and focused on the two things that really helped us get clarity on what we were doing and relieved some of the pressure:
- We added the right testers at the right time.
- We added visibility to the progress of testing by switching to daily milestones.
Adding the right testers at the right time means we didn't add testers when the project manager wanted to; we waited until we (the test team) felt they would be most effective. We wanted our testing to be far enough along that the testers we added could work with stable software in a stable environment. In addition, the testers we added were not new to the software: we were able to borrow testers from other project teams for short periods of time.
When we fell behind schedule, we added visibility to the progress of testing by switching to daily milestones. We talked in terms of features tested, not numbers of tests executed, so it was clear what the status meant and what we were focused on. The daily milestones were welcomed by both project management and the business customer who was involved. In addition, the test team was able to self-organize around the clear deliverables for each day.
After my experience report, Baher Malek related three short experiences around increasing feedback. His three main techniques (though his report covered mostly the first two) were:
- Maintaining the project into existence
- Joint elaboration
- Using code coverage
Baher's opening focused on what he called maintaining the project into existence. This included allowing for technology-only iterations early in the project (infrastructure build-out, software and hardware upgrades, etc.) as well as merging more new development iterations with the regular production support releases.
His second technique, joint elaboration, made me think a lot about what Brian Marick called example-driven development. Joint elaboration is a process where the testers, developers, and business (specifically, the business person doing the customer acceptance testing) sit down together and develop the specific tests that will be used to drive acceptance. This allows the developers and testers to focus their development and testing on the specific scenarios the user has helped specify. This topic also brought up some interesting discussion on anchoring bias and how it might affect the functional and user testing efforts.
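To make that a little more concrete, here's a minimal sketch of what a jointly elaborated example might look like once it's turned into an executable acceptance test. The invoicing rule, the numbers, and the apply_late_fee() function are hypothetical illustrations on my part, not something from Baher's report; I'm using plain Python and pytest here, though teams often use FIT-style tables or other tools for this.

```python
# A hypothetical, jointly elaborated acceptance example written as a
# plain pytest test. The late-fee rule, the numbers, and the
# apply_late_fee() function are illustrative inventions, not from the
# actual experience report.

def apply_late_fee(balance, days_overdue):
    """Toy domain rule: a 5% late fee applies once an invoice is more
    than 30 days overdue."""
    if days_overdue > 30:
        return round(balance * 1.05, 2)
    return balance


def test_late_fee_applied_after_30_days():
    # Concrete example the tester, developer, and business person agreed
    # on together: a $200.00 invoice that is 45 days overdue becomes $210.00.
    assert apply_late_fee(200.00, 45) == 210.00


def test_no_late_fee_at_exactly_30_days():
    # Boundary case the business person supplied during elaboration.
    assert apply_late_fee(200.00, 30) == 200.00
```

The point is less the tooling and more that the concrete examples come out of the joint conversation, so developers, testers, and the business are all building and checking against the same scenarios.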
Finally, Baher related a short experience using code coverage to drive some early feedback based on unit and functional test results. That was not nearly as interesting as the other two topics, so we didn't talk about it as much. I'm encouraging him to write up both of his ideas on maintaining projects into existence and joint elaboration. Watch the IWST site for updates. If he writes something up, we'll link to it there.
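For what it's worth, here's a rough sketch of the kind of coverage-driven feedback loop that idea suggests to me, using coverage.py and pytest. The package name "myapp" and the "tests/unit" layout are placeholders I've made up, and this isn't necessarily the tooling Baher's team used.

```python
# Rough sketch: run the unit tests under coverage.py and report which
# parts of the code they actually exercise. Assumes coverage.py and
# pytest are installed; "myapp" and "tests/unit" are placeholder names.
import coverage
import pytest

cov = coverage.Coverage(source=["myapp"])
cov.start()

# Run the unit suite in-process; even a failing run still tells us
# which code the tests touched.
exit_code = pytest.main(["tests/unit", "-q"])
print(f"pytest exit code: {exit_code}")

cov.stop()
cov.save()

# show_missing=True lists the uncovered lines -- that gap report is the
# early feedback the team can use to steer the next round of testing.
cov.report(show_missing=True)
```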
After that, Mike Goempel related an experience where he came on to a large and out-of-control test project and had to rein things back in. He certainly made it seem like a science (which I think it was, given his context), and he and his team were very systematic in rolling out changes and control across the test program. The process he outlined went roughly like this:
- What's the project environment?
- What's our status?
- What are our capabilities?
- Where is the test leadership and how is it coordinated?
- Develop a (new) strategy and plans.
- Execute on those plans, rolling out large changes slowly.
Mike's experience report sparked a brainstorm on how we actually recognize when a test project is in trouble. The results of that can be found here.
Next month we try something new: an open roundtable to discuss our findings (summarize the ERs, talk about the brainstorm, etc.) and to talk about whether and how we've implemented anything we've learned. The roundtable is open to anyone. More information can be found here.
The next workshop is in March on the topic of Security Testing Web Applications. Let me know if you might be interested in attending and/or sharing an experience.