Posts in Test Planning
Finding the magic in your magic numbers
Today's tip comes from a post I read by Scott Berkun:
The simplest, sanest step in the world, a step few people do, is when a project ends go back and review your estimates and compare against the reality. Put it up on a big whiteboard, sit down with a handful of your team leaders, and discuss two things: what factors explain the difference, and what smarter things you might have done to have a better schedule.

That's an excerpt from his recent post on Magic Numbers of Project Management. Read the post for full details on the tip, but follow the advice. I suspect you're asked to estimate your work on a regular basis, and understanding where the magic comes from is an important part of that task.
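If you want to try this, the mechanics can be as simple as a spreadsheet or a few lines of script. Here's a minimal sketch of the estimate-versus-actual comparison you'd bring to that whiteboard discussion; the task names and numbers are entirely made up for illustration:

```python
# Hypothetical estimate-vs-actual data for a finished project.
# Each entry: (task, estimated days, actual days).
tasks = [
    ("test environment setup", 3, 7),
    ("functional test pass", 10, 12),
    ("performance testing", 5, 11),
    ("regression pass", 4, 4),
]

for name, estimated, actual in tasks:
    error_pct = (actual - estimated) / estimated * 100
    print(f"{name:25} est={estimated:2}d actual={actual:2}d error={error_pct:+.0f}%")

total_est = sum(est for _, est, _ in tasks)
total_act = sum(act for _, _, act in tasks)
print(f"\noverall: estimated {total_est}d, actual {total_act}d "
      f"({(total_act - total_est) / total_est:+.0%})")
```

The output won't explain the differences for you -- that's what the discussion with your team leaders is for -- but it makes the gaps impossible to ignore.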
Understand how you're going to approach your testing
I do a lot of exploratory testing. So when I'm doing test planning, you'd think the "Approach" section of my test plan would be the shortest, right? Wrong...

I see a lot of value in thinking about, describing, and writing out how I'll approach my testing. So much so that when I'm getting ready to execute what I think will be a particularly challenging charter, I'll take a few minutes to outline how I'm going to approach my testing. For some tasks, I might even write a short procedure so I don't mess something up if there's a factor or variable I want to control for while I'm testing.

My rule is that I should always be able to articulate how I'm approaching the problem. If I can't do that, I've got no business getting started with my testing. It means I've got some additional research to do before I'm ready. If I can outline what I'm going to do, then I'm ready.
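For illustration, here's what one of those outlines might look like if you captured it as structured data. This is a hypothetical sketch -- the fields and values are mine, not a prescribed format -- but it shows the pieces I want to be able to articulate before I start: the mission, the variables I'm controlling for, and the rough procedure.

```python
# A hypothetical exploratory-testing charter, sketched as plain data.
# Field names and contents are illustrative, not a standard format.
charter = {
    "mission": "Explore the import feature with malformed CSV files "
               "to discover how parsing errors are reported",
    "controlled_variables": {
        "browser": "fixed to one browser so rendering isn't a factor",
        "dataset": "same base file, one malformation introduced per test",
    },
    "procedure": [
        "start from a known-good CSV that imports cleanly",
        "introduce exactly one defect (bad delimiter, missing column, etc.)",
        "import, and record the user-facing message and the server log entry",
        "revert to the known-good file before the next variation",
    ],
}

print(charter["mission"], "\n")
for step in charter["procedure"]:
    print("-", step)
```

If I can't fill in those fields, that's my signal that I have more research to do.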
Digging into a project
I recently started (another) new project. I find that whenever I start a new project, there are a few places I begin digging in:

  • finding whatever documentation I can (requirements, diagrams, contracts, project plans, emails, meeting notes, etc...) - it's a mad dash to get anything that's been written down about the project

  • finding (or making) a list of who's doing what - many times new projects come with new faces and names, and keeping everyone straight can take some effort early on

  • starting a new notebook or electronic notes file - I write down my open questions, my assumptions, and my ideas around testing/design/timelines (see the sketch after this list)


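On that last point, the notes file can start as an empty skeleton so there's no friction when the first question comes up. Here's a minimal sketch, assuming you keep notes in Markdown files; the file name and section headings are just one possible layout:

```python
# A minimal sketch of scaffolding a new-project notes file.
# The file name and section headings are one possible layout.
from pathlib import Path

SECTIONS = ["Open questions", "Assumptions", "Testing ideas",
            "Design notes", "Timeline notes"]

def start_notes(project: str) -> Path:
    """Create a skeleton Markdown notes file for a new project."""
    path = Path(f"{project}-notes.md")
    if not path.exists():
        body = f"# {project}\n\n" + "".join(f"## {s}\n\n" for s in SECTIONS)
        path.write_text(body, encoding="utf-8")
    return path

print(start_notes("new-project"))
```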
When a new project gets dumped in your lap, where do you start first?
Is your testing process this clear?
In my opinion, one of the biggest success factors for a centralized testing organization is a clear engagement model: one that starts with new project intake and ends with project closeout, follow-up, and portfolio management. I've tried to tackle this problem at two different organizations, and at each one I struggled to put a clear, easy-to-follow definition around what we did.

Take a look at how HUGE laid out their process. I understand that it's marketing material, so it's obviously going to be pretty, but look past that. They lay out seven clear phases for engagement. Each phase has a summary of the steps that will be undertaken in that phase.

If your centralized group offers products and services, imagine that your products and services are listed next to those phases. Those are the various products and services teams could expect during each phase. I find it to be a very simple visualization of what your organization might offer to project teams.
Don't forget that some tests are open ended
A lot of testing literature talks about inputs and expected results. That's all well and good, but you can't forget that there are multiple reasons to test and multiple ways to test, and that for some of them you can't accurately predict expected results. I find that a lot of the early tests in a project are focused on expected results: "We built the system to do X, does it do it?" Over time, however, the nature of the testing changes. It becomes more "what if" focused. Examples include: "How many users can we support on our current production configuration?" or "What happens to our users when that batch process runs?" or "What kind of errors might we see in the logs on a typical day in production?"
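For questions like the first one, the "test" is really a probe you run while watching what happens. Here's a minimal sketch in that spirit, assuming a hypothetical local endpoint; real capacity work needs production-like data, pacing, and monitoring, but the shape is the same -- step up the load and observe, with no single pass/fail assertion:

```python
# A minimal sketch of an open-ended "how many users?" probe.
# The URL and user counts are hypothetical placeholders.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/health"  # assumed endpoint for illustration

def one_request() -> float:
    """Time a single request to the endpoint."""
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

# Step the simulated user count up and watch what the response
# times do -- there is no predicted "expected result" to assert.
for users in (1, 5, 10, 25, 50):
    with ThreadPoolExecutor(max_workers=users) as pool:
        times = list(pool.map(lambda _: one_request(), range(users)))
    print(f"{users:3} concurrent requests: "
          f"slowest={max(times):.2f}s avg={sum(times) / len(times):.2f}s")
```

The interesting part isn't a green checkmark; it's the conversation about where the numbers start to bend.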