Planning for reporting

Working on an article related to test analysis and reporting, I got to thinking about some of the questions I try to answer when I report test status. Here are some of the questions I might ask myself during a project:

  • How much testing did we plan to do and how much of that have we done?

  • What potential tests remain, and in which areas?

  • What is our current test execution velocity, and what has it been for the duration of the project? How does that break out across testing area or test case priority? (A sketch of this calculation follows the list.)

  • What is our current test creation velocity and what has it been for the duration of the project? How does that break out across testing area or test case priority?

  • How much of our testing has been focused on basic functionality (basic requirements, major functions, simple data) vs. common cases (users, scenarios, basic data, state, and error coverage) vs. stress testing (strong data, state, and error coverage, load, and constrained environments)?

  • How much of our testing has been focused on capability versus other types of quality criteria (performance, security, scalability, testability, maintainability, etc.)?

  • How many of the requirements have we covered and how much of the code have we covered?

  • If applicable, what platforms and configurations have we covered?

  • How many issues have we found, how severe are they, where have we found them, and how did we find them?
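
Several of these questions, the velocity ones and their breakdowns by area or priority in particular, reduce to simple counting once you have the raw test-run records. Here is a minimal sketch in Python, assuming a hypothetical record layout of (run date, area, priority) with made-up sample data; substitute whatever your test management tool actually exports:

    from collections import Counter
    from datetime import date

    # Each record is one executed test: (run date, testing area, priority).
    # These rows are illustrative sample data, not real results.
    runs = [
        (date(2008, 6, 2), "install", "P1"),
        (date(2008, 6, 3), "install", "P2"),
        (date(2008, 6, 9), "upgrade", "P1"),
        (date(2008, 6, 10), "upgrade", "P3"),
    ]

    # Execution velocity: tests executed per ISO week.
    per_week = Counter(d.isocalendar()[1] for d, _, _ in runs)

    # The same counts broken out by testing area and by priority.
    per_area = Counter(area for _, area, _ in runs)
    per_priority = Counter(pri for _, _, pri in runs)

    print("tests/week: ", dict(per_week))
    print("by area:    ", dict(per_area))
    print("by priority:", dict(per_priority))

The same tallying pattern answers the distribution questions too (basic vs. common vs. stress, capability vs. other quality criteria): tag each test with the dimension you care about and count.
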
What did I miss? What do you ask?