Posts in Test Planning
Clarifying your charter
When I'm teaching exploratory testing, I find that one of the most difficult skills to learn is chartering. If you're not practiced at moving from the abstract to the specific, it can be hard to write a useful charter, and harder still to figure out what will actually fit in your session time box.

Here are some tips for clarifying the purpose of your testing:

  • Don't feel like you need to develop all your charters in one sitting or all of them upfront. Be comfortable with charters emerging slowly over time. While you'll need some charters defined upfront so you can get started, you'll often find that your charter base fills in as you go.

  • Take the time to state the mission (or purpose) of the charter as clearly as possible. Don't say "Test the portal for reporting accuracy" when you can instead say "Test reports X, Y, and Z for errors related to start and end time selection criteria, summing/totaling, and rounding." In my experience, the more specific you are, the better your testing will be. If you need to take an extra 30 to 120 seconds to get more specific, take them.

  • Similar to the last tip, if you can't tell one mission from another, you haven't defined them well enough. If you have charters to "Test feature X," "Stress test feature X," and "Performance test feature X," can you tell me what the differences between them are? Couldn't some stress tests be a subset of simple feature testing? Couldn't some performance tests be a subset of stress testing? If you can't compare two missions side by side and have clear and distinct test cases come to mind, then you might benefit from spending a little more time refining your missions.

  • Finally, while you're testing, go back and make sure your mission is still correct. There are two goals here. First, you want to make sure you're on mission. If you need to test for rounding errors in reporting, but you find you just can't stop testing filters and sorting, then create a charter for testing filters and sorting and execute that charter instead. You can always go back to the charter for rounding errors. Second, if you find as you test that you can better clarify your original mission, add that clarity as you go. It will help you when you go back to write new charters. The clearer you can make it, the easier it will be to recall what you actually tested three days later, when you're looking back and trying to remember what work is still in front of you. (One lightweight way to record all this is sketched after this list.)
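To make that concrete, here's a minimal sketch of what a charter record might capture, in Python. The fields, the timebox, and the example missions and notes are illustrative assumptions, not a prescribed format:

```python
from dataclasses import dataclass, field

# A minimal charter record; every field and value here is an
# illustrative assumption, not a required structure.
@dataclass
class Charter:
    mission: str                 # the specific purpose of the session
    timebox_minutes: int = 90    # what has to fit in one session
    notes: list[str] = field(default_factory=list)  # clarity added as you test

charters = [
    Charter("Test reports X, Y, and Z for errors related to start and end "
            "time selection criteria, summing/totaling, and rounding."),
]

# If testing pulls you off mission, add a new charter rather than drifting...
charters.append(Charter("Test filters and sorting on reports X, Y, and Z."))
# ...and when testing clarifies the original mission, record that clarity.
charters[0].notes.append("Rounding only shows up in the summed totals columns.")
```

A plain text file works just as well; the point is that each mission is stated specifically and refined as you go.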

Test in parallel with debugging and fixing
While it's inspired by something Ross Collard once said to me, this tip came home for me this week while talking with a fellow test manager who was struggling to find enough time to test. Don't forget that while developers are fixing bugs for the next build, you can likely keep testing. Major blocking bugs aside, if you've still got the code, there's likely something you can keep moving forward on. Just because you've turned around enough issues for them to work on a new build doesn't mean you have to stop.

Even if your process states you shouldn't be submitting bugs against a known bad build, you can continue to:

  • learn about the product;

  • develop test ideas;

  • execute tests for areas you believe to be stable;

  • develop test assets (like automation, or performance tests);

  • etc.

All of those require some version of the product, even if it's not the final version.
Understanding where your testing fits
One of the things that can be difficult when testing in sprints is knowing where your testing fits in the bigger picture. If you're testing a feature for the first time, one that was just developed in the current sprint, it can be hard to know when it will be released, what it will be released with, and what other testing might take place. This creates a stronger need for coordinated test planning than I've seen in more traditional methodologies.

When testing as part of a sprint, the focus is on the specific features being developed. While that might include some regression testing of areas around the change, it's likely going to be shallow. Someone on the team (a test manager, a lead, or someone in the role of test planning for a release) needs to keep the big picture in view. While each tester is focused on their upcoming features, someone else needs to look across the software and identify what risks might be introduced from a wider perspective.

This doesn't mean you need reams of documentation for each release, with hundreds of test cases. (I don't know what software you're testing, so for you it might mean exactly that.) For me, it means you simply need a light test plan for each release: one where each tester can see where their individual features fit, where someone can look for interdependent risks, and where quality criteria other than basic functionality can be evaluated more easily and clearly.

Is there a way to capture that information in one or two pages or charts? That would be ideal. We'll see if anything occurs to me. If you think of something, or already have something, please share.
Outline a delivery calendar for your sprint
When working in sprints, it can be helpful to create small delivery calendars for the team. If you're the tester on the team, this gives you an idea of when stories will be available for you to test. Once you've created the calendar, if you see all the work landing towards the end of the sprint, you can ask the team to deliver some stories earlier so you can get your hands on software sooner.

I find it useful to think of tasking out the sprint as a two-pass algorithm. The first pass over the calendar starts at the beginning of the sprint and runs to the end: here the developers are making sure they can deliver all their stories. The second pass starts at the end of the sprint and runs back to the beginning: here the testers are making sure they'll have enough time to test each story once they get it. If a large story gets delivered on the last day of the sprint, there likely won't be enough time to test it (much less process any issues that get uncovered).
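To make the two passes concrete, here's a minimal sketch in Python. The ten-day sprint, the story names, and the simplification that a single tester works through stories in delivery order are all illustrative assumptions, not a real team's plan:

```python
from dataclasses import dataclass

@dataclass
class Story:
    name: str
    delivery_day: int  # sprint day the team plans to hand it to test
    test_days: int     # estimated days needed to test it once delivered

def check_calendar(stories: list[Story], sprint_days: int = 10) -> None:
    # Pass 1 (start to end): the developers' view. Does every planned
    # delivery land inside the sprint at all?
    for story in sorted(stories, key=lambda s: s.delivery_day):
        if story.delivery_day > sprint_days:
            print(f"{story.name}: planned delivery slips past the sprint")

    # Pass 2 (end to start): the testers' view. Reserve test time from
    # the last day backwards and flag stories that arrive too late.
    latest_delivery = sprint_days
    for story in sorted(stories, key=lambda s: s.delivery_day, reverse=True):
        latest_delivery -= story.test_days
        if story.delivery_day > latest_delivery:
            print(f"{story.name}: ask for delivery by day {latest_delivery}")

check_calendar([
    Story("report export", delivery_day=5, test_days=1),
    Story("login rework", delivery_day=9, test_days=2),
])
```

Run as-is, this flags the login rework story: delivered on day 9 of a ten-day sprint, it leaves one day against an estimated two days of testing, so the ask is to deliver it by day 8.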
Using factors of 10 to change what we do
Sometimes we need to reframe our problems to devise better solutions, for instance when we're overloaded with work and need to take on new interests or responsibilities. One useful technique I've found is to use a 'power of 10' to reassess my current approach. Here are some examples of how it works for me:

I imagine I'm now responsible for ten times the amount of work, so my current work is limited to one tenth of my commitments, which allows me a maximum of half a day per week for it. What are the most useful and important things I can do with that time to help and support my current project? From then on, I try to do that work first each week. The remaining work is inherently less important, so I pick the next most important tasks and do them next, and so on. Eventually, say after two days per week, I decide the rest of my old work is no longer useful, so I don't do it. Now I have three days to take on new work, learn new skills, fix problems, etc.

Another use of powers of 10 is to imagine I have 10x the number of testers, machines, etc. How would I use them to improve my testing? Generally I end up with a list of possible ideas, with some sense of how much each would help. Then I pick a few of these and implement them (possibly in the time I've freed up from the first example :)

Note: try using powers of 5 (to match one per day of the working week), 100 (e.g. test machines), etc. to help break through your current mindset and challenges.
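The arithmetic behind these reframings is trivial, but seeing the budgets side by side is what jolts the priorities. A tiny sketch, assuming only a five-day working week:

```python
# Days per week left for "current work" after imagining your
# responsibilities scaled by a factor; assumes a five-day working week.
def weekly_budget(factor: int, week_days: float = 5.0) -> float:
    return week_days / factor

for factor in (5, 10, 100):
    print(f"x{factor}: {weekly_budget(factor):.2f} days/week for current work")
```

At 10x that's the half day per week mentioned above; at 100x almost nothing survives, which is the point of the exercise.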