Posts in Test Planning
Draw it five times before writing it down
Any time I'm planning a testing approach for a project, I try to come up with a way to model it visually (flow charts, system diagrams, sequence diagrams, Venn diagrams, and so on) so I can quickly explain what I think we're testing and how we're testing it. I usually end up drawing my picture of our testing on whiteboards, flip charts, and the backs of napkins and scrap paper hundreds of times over a project. Over time, my diagram changes as I add more detail, and my story of our testing gets richer.

The more I draw the picture of our testing, the more feedback I get. It's for this reason I make sure I draw it a minimum of five times, for at least five different audiences, before I commit my picture to any formal medium (like Visio or Word). History tells me I don't know anything until I'm at least at version 0.5 of my picture. And even then, things are still changing regularly. But after five drawings, I normally have the outline.

As I draw, I'm telling the story of what our testing will look like. I always tell the entire story, even if I think the person I'm telling it to may already know what we're going to do. Early on, I hear "you can't do that" or "that's not how it works" at least once every five minutes. If I don't hear someone say I can't do what I'm thinking, or express some other concern, I start to wonder if they're really listening to me. Because for any project of even medium complexity, testing is messy (environments, domain knowledge, resources, order-of-magnitude estimates, etc.).

On a side note, my iPhone makes a great repository of photos of draft versions of the picture so I can see how it evolves over time. After the first week of a project, I almost have a flip-book for how our test approach unfolded. Kinda neat.
Order of magnitude estimates
I'm not a big fan of throwing around numbers when talking about testing. I understand that saying things like "We have 100 test cases" doesn't really communicate anything other than a number. However, I have found that using order of magnitude numbers can help communicate size and scope to other project/test managers when planning the project.

I find that as we're developing resource plans and coarse-grained timelines, orders of magnitude can help imply level of effort. If we're talking about scripted or automated tests, I'll talk in tens or hundreds of tests. If we're talking about sessions or charters, I'll talk in powers of two (2, 4, 8, 16, or 32).
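
For what it's worth, the bucketing itself is just rounding on a log scale. Here's a minimal sketch (the function names are mine, not part of any tool):

    import math

    def order_of_magnitude(count):
        """Round a raw test-count estimate down to its order of magnitude (1, 10, 100, ...)."""
        return 10 ** math.floor(math.log10(count)) if count > 0 else 0

    def session_bucket(sessions):
        """Round a session/charter estimate to the nearest power of two (2, 4, 8, 16, 32)."""
        return 2 ** max(1, round(math.log2(sessions))) if sessions > 0 else 0

    print(order_of_magnitude(340))  # 100 -- "hundreds of tests"
    print(session_bucket(11))       # 8 -- "eight sessions or so"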

Sometimes, when we think everyone's on the same page, we can quickly gauge each other's understanding of how much testing might be involved. If we're talking about testing something and I'm not sure we have the same understanding of the amount of work involved, I might ask "How many tests do you think that is?" If you say tens of tests, and I'm thinking hundreds, that's a good indicator that you and I need to get on the same page. We've identified an opportunity to talk about what our "tests" entail and what level of coverage we really want.
Matrices to understand large projects
When working on large test projects, I have a couple of matrices that I try to develop early on to better understand who's doing what, where they're doing it, and when. These include matrices that show (the first is sketched below):

  • type or phase of testing by owner (including specific scope of coverage by team)

  • type or phase of testing by environment (including data needs/integration dependencies)

  • type or phase of testing by iteration (including dates, resources, and summaries of coverage for each iteration)


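As a rough illustration of the first of these (the teams, types, and scope items below are all invented), a few lines of Python can pivot a flat list of assignments into a type-of-testing-by-owner view:

    from collections import defaultdict

    # Hypothetical assignments: (type of testing, owning team, scope of coverage)
    assignments = [
        ("functional", "Core QA", "order entry, billing"),
        ("integration", "Platform team", "payment gateway"),
        ("performance", "Perf lab", "checkout under load"),
        ("functional", "Core QA", "account management"),
    ]

    # Pivot into a matrix: testing type -> owner -> list of scope items
    matrix = defaultdict(lambda: defaultdict(list))
    for test_type, owner, scope in assignments:
        matrix[test_type][owner].append(scope)

    for test_type, owners in matrix.items():
        for owner, scopes in owners.items():
            print(f"{test_type:<12} {owner:<14} {'; '.join(scopes)}")
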
For me, these three views of the project can often paint a summary picture of everything that's happening. This allows me to quickly communicate to other project managers involved in the project or program. It also allows me to communicate updates in a format that those same PMs will take the time to look at. (I find very few people even bother to look at a 30+ page document that's been updated.)
Capture the testing workflows
When I start test planning for a large testing project (large to me is defined a few ways: length, number of people, or budget), I'll often start like everyone else does by creating a test strategy. For me, early on, ideas are fuzzy. I'm learning about the project goals. I'm often learning about the company and its business. Or I'm learning their processes, tools, regulations, and people. There's a lot of ambiguity about what we're going to do and how we're going to do it.

One thing I've found that really helps at this stage of the project is to create testing workflows for each type/phase/stage of testing that we'll be doing on the project. When I say workflow, I mean a simple one-page flowchart of the activities involved in testing some "thing": where the requirements come from, how we're capturing tests or test ideas, and how we're executing them and storing results. I'll often put the key decision points in there and show how those affect process flow.
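
If it helps to see what I mean by decision points affecting flow, here's a minimal sketch that models one such workflow as a mapping of steps (the steps and the branch are invented for illustration):

    # Hypothetical workflow for one type of testing, modeled as step -> next step.
    # A tuple marks a decision point: (question, step if yes, step if no).
    workflow = {
        "gather requirements": "draft test ideas",
        "draft test ideas": "execute session",
        "execute session": ("defects found?", "log defects", "store results"),
        "log defects": "store results",
        "store results": None,  # end of the flow
    }

    def walk(workflow, step, defects_found):
        """Trace the path through the workflow for a given outcome."""
        while step is not None:
            nxt = workflow[step]
            if isinstance(nxt, tuple):
                question, yes_step, no_step = nxt
                print(f"{step} -> [{question}]")
                step = yes_step if defects_found else no_step
            else:
                print(step)
                step = nxt

    walk(workflow, "gather requirements", defects_found=True)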

I'll then circulate these workflows among the project team to solicit feedback. You'd be amazed how helpful this is for everyone - not just me. Often, other people on the project don't know exactly what the testing team is doing. They work from assumptions they have about what you need, when you need it, and what you'll do with it. Detailed workflows like this dispel those assumptions.

Another quick tip: for all the testing project managers out there, workflows like this serve another purpose. Once you have them done, you basically have a work breakdown structure for each type of testing you'll be doing on the project. And it's in a much more accessible format than an .mpp file.
Prioritizing your exploratory tests for a given session
Today's tip comes from Christina Zaza. Tina gave an experience report recently at IWST, and in that talk she outlined some of the different ways she prioritizes her tests when she sits down to do exploratory testing (a small sketch combining them follows the list):

  1. run the faster tests (or quick tests) first to get them out of the way

  2. run the higher-risk tests (or tests more important to the business) first, since they will likely yield the most actionable results

  3. group features together to reduce context switching while testing
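
Here's that sketch - the charters, field names, and the precedence among the three heuristics are my own assumptions, not Tina's:

    # Hypothetical session charters with a risk rank, a rough duration, and a feature area.
    charters = [
        {"name": "refund edge cases", "risk": 3, "minutes": 45, "feature": "billing"},
        {"name": "login smoke",       "risk": 1, "minutes": 10, "feature": "auth"},
        {"name": "invoice totals",    "risk": 3, "minutes": 30, "feature": "billing"},
        {"name": "password reset",    "risk": 2, "minutes": 15, "feature": "auth"},
    ]

    # Highest risk first (tip 2); within a risk level, keep same-feature charters
    # together to limit context switching (tip 3); within a feature, run the
    # quicker charters first (tip 1).
    ordered = sorted(charters, key=lambda c: (-c["risk"], c["feature"], c["minutes"]))

    for c in ordered:
        print(f'{c["risk"]} {c["minutes"]:>3}m {c["feature"]:<8} {c["name"]}')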