Look for overlap in the schedules
Whenever I'm managing a large testing project, I try my best to break the testing into distinct, often overlapping iterations. I do this by looking at different types of testing (functional, performance, user acceptance, etc.) as well as different ways of slicing the testing scope (functionality by delivery date, by project priority, by component/module grouping, etc.). These iterations are usually keyed to a code or environment delivery date, but they can also be keyed to resource availability. The idea isn't unique at all: take a big, bloated project and break it up into smaller, more manageable units of work. A tale as old as waterfall time...
However I set it up, I always go back and double-check that I can actually manage all the overlapping iterations. I've gotten sideways before: by trying to collapse the testing schedule, I've overbooked either testers or environments, or I've inadvertently over-committed another group (like development or the business) that needs to support the testing effort in some way.
For each activity/iteration, ask yourself:
- Who will be doing this testing and where will they be doing it?
- What will they really need to get started (code, data, etc.)?
- What support will they likely need and how much of it will they need?
- What holidays or other project/company events fall inside those iterations that you're not thinking of because they are months away?
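The double-booking problem above can also be caught mechanically once the iterations are written down. Here's a minimal sketch, with entirely hypothetical iteration names, dates, and resources, that flags any tester or environment booked into two iterations whose date ranges overlap:

```python
from datetime import date
from itertools import combinations

# Hypothetical iteration schedule: (name, start, end, resources booked).
iterations = [
    ("Functional pass 1", date(2024, 3, 4), date(2024, 3, 15), {"alice", "perf-env"}),
    ("Performance pass", date(2024, 3, 11), date(2024, 3, 22), {"bob", "perf-env"}),
    ("UAT", date(2024, 3, 18), date(2024, 3, 29), {"alice", "uat-env"}),
]

def overlaps(a_start, a_end, b_start, b_end):
    """Two date ranges overlap when each one starts before the other ends."""
    return a_start <= b_end and b_start <= a_end

def double_bookings(iterations):
    """Yield (resource, iteration_a, iteration_b) for every resource that is
    booked into two iterations whose date ranges overlap."""
    for (n1, s1, e1, r1), (n2, s2, e2, r2) in combinations(iterations, 2):
        if overlaps(s1, e1, s2, e2):
            for resource in sorted(r1 & r2):
                yield resource, n1, n2

for resource, a, b in double_bookings(iterations):
    print(f"{resource} is double-booked across '{a}' and '{b}'")
```

Nothing fancy, but running something like this against the draft schedule turns "I think we're fine" into a list of specific conflicts to resolve before the iterations start.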