Posts in Heuristics
Let your bugs have social networks
One of the things I really like about JIRA is how much linking it allows. (Other tools do this too, but I wanted to namedrop the tool because they do it particularly well.) From a story, I can link related stories, defects, CM tickets, deployment tickets, and so on: basically whatever ticket type I want. This is great, because over time I've developed some risk heuristics based on the number of links a ticket has (there's a small scripting sketch after the list):

  • If it has a lot of links to other stories, I likely need to test more around business functionality concerns.

  • If it has a lot of links to other bugs, I likely need to test more around technical functionality concerns.

  • If it has a lot of links to CM tickets, I likely need to test more around deployment and configuration.


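If you script against JIRA's REST API, those link counts are easy to pull. Here's a minimal sketch in Python using the requests library; the server URL, credentials, and ticket key are placeholders, and it assumes the standard /rest/api/2/issue endpoint with the issuelinks field:

```python
from collections import Counter

import requests

JIRA = "https://jira.example.com"  # placeholder server
AUTH = ("reader", "secret")        # placeholder credentials

def link_counts(issue_key):
    """Return a count of the issue's links, grouped by link type."""
    resp = requests.get(
        f"{JIRA}/rest/api/2/issue/{issue_key}",
        params={"fields": "issuelinks"},
        auth=AUTH,
    )
    resp.raise_for_status()
    links = resp.json()["fields"]["issuelinks"]
    # Each link carries its type name, e.g. "Relates", "Blocks", "Duplicate".
    return Counter(link["type"]["name"] for link in links)

print(link_counts("PROJ-123"))  # hypothetical ticket key
```
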
I've also developed similar link-based heuristics for estimating how long work will take, how much documentation there will be to review, and so on.

JIRA also shows you how many people have participated in a ticket. That is, it tracks who's "touched" it. I have similar heuristics around that: the more people involved, the longer the work will take, the more likely there was code integration, and so on.
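
The "who touched it" number can be pulled the same way by expanding the issue's changelog. Another sketch under the same assumptions (placeholder server, credentials, and key; it assumes the standard expand=changelog parameter, and note that very long histories may be paginated):

```python
import requests

JIRA = "https://jira.example.com"  # placeholder server
AUTH = ("reader", "secret")        # placeholder credentials

def participants(issue_key):
    """Return the distinct people who show up in the issue's change history."""
    resp = requests.get(
        f"{JIRA}/rest/api/2/issue/{issue_key}",
        params={"expand": "changelog"},
        auth=AUTH,
    )
    resp.raise_for_status()
    histories = resp.json()["changelog"]["histories"]
    # Only the first page of history is read here.
    return {h["author"]["displayName"] for h in histories}

names = participants("PROJ-123")  # hypothetical ticket key
print(len(names), "people:", sorted(names))
```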

What does the social network of your tickets tell you about your testing?
What's in a smoke test?
When figuring out what to smoke test for a release/build/environment, I run down the following list in my head:

  • What's changed in this build/release?

  • What features/functions are most frequently used?

  • What features/functions are most important or business critical to the client?

  • Is there technical risk related to the deploy, environment, or a third-party service?

  • Are there any automated tests I can leverage for the test ideas that came out of the previous questions?


Based on my answers to those questions, I come up with my set of tests for the release. If it's a big release (one with more features, or more visible ones), I'll do more smoke tests. In my experience, different releases often require different levels of smoke testing.
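
None of this is mechanical, but the weighing can be made concrete. Here's a toy sketch, entirely illustrative (the candidate tests, flags, and weights are all made up), that ranks smoke test candidates by how many of the questions above they answer "yes" to:

```python
# Toy scoring sketch; nothing here is pulled from a real project.
WEIGHTS = {"changed": 3, "critical": 3, "frequent": 2, "tech_risk": 2, "automated": 1}

candidates = [
    {"name": "checkout",      "changed": True,  "critical": True,  "frequent": True,  "tech_risk": True,  "automated": False},
    {"name": "login",         "changed": False, "critical": True,  "frequent": True,  "tech_risk": False, "automated": True},
    {"name": "report export", "changed": True,  "critical": False, "frequent": False, "tech_risk": False, "automated": False},
]

def score(test):
    # Sum the weight of every question this test answers "yes" to.
    return sum(weight for key, weight in WEIGHTS.items() if test[key])

# Bigger, more visible releases get a deeper cut of the ranked list.
for test in sorted(candidates, key=score, reverse=True):
    print(f"{score(test):2d}  {test['name']}")
```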
Before you consider yourself "done" testing, go back and double check
Many teams track release status by tracking the status of the tickets in the release. A ticket might be a story, a feature, or some other unit used to tie work to a release number. That is, if a release has 10 tickets in it (regardless of whether those tickets are bugs, features, or something else), the release isn't done until those 10 tickets are done. For most teams, testing is part of the ticket workflow.

When you're working in this type of environment, you often do your test planning for the release up front. You might charter testing for certain tickets together, or split the work by assigning each tester specific tickets. Either way, you run the risk of someone adding another ticket to the mix before you've finished your testing. While most teams (hopefully) have some way of communicating this type of change, sometimes tickets sneak in.

Whenever I'm testing a release like this, the last thing I do before I call my testing "done" is go back and make sure no new tickets were added. Most tools make this very easy to do. All I'm trying to do is reconcile my testing effort with the release before I hand it off to the next stage. Occasionally I catch something that slipped in and that my testing missed.
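
In JIRA, that reconciliation can be a single JQL query compared against the plan. A hedged sketch along the same lines as the ones above (placeholder server, credentials, version, and ticket keys; it assumes the standard /rest/api/2/search endpoint and the fixVersion JQL field):

```python
import requests

JIRA = "https://jira.example.com"  # placeholder server
AUTH = ("reader", "secret")        # placeholder credentials

planned = {"PROJ-101", "PROJ-102", "PROJ-103"}  # tickets my testing covered

resp = requests.get(
    f"{JIRA}/rest/api/2/search",
    params={"jql": 'fixVersion = "1.4.0"', "fields": "key", "maxResults": 500},
    auth=AUTH,
)
resp.raise_for_status()
in_release = {issue["key"] for issue in resp.json()["issues"]}

print("Added after I planned:", sorted(in_release - planned))
print("No longer in the release:", sorted(planned - in_release))
```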
Don't make me think
In the interview linked below, usability expert Tim Altom provides a heuristic for usability testing. When testing, he invokes the simple philosophy he picked up from Steve Krug's bestselling book on usability, Don't Make Me Think.

"If anything in the interface will make a user pause and have to think, it should be flagged for possible change," Tim said. "Contrary to popular belief, nobody really wants to think while working with a tool. Thinking is hard work in itself, and should reserved for the job, not for the tools."


More in this interview/article on usability testing.
Finding the magic in your magic numbers
Today's tip comes from a post I read by Scott Berkun:
The simplest, sanest step in the world, a step few people do, is, when a project ends, to go back and review your estimates and compare them against the reality. Put it up on a big whiteboard, sit down with a handful of your team leaders, and discuss two things: what factors explain the difference, and what smarter things you might have done to have a better schedule.

That's an excerpt from his recent post on Magic Numbers of Project Management. Read the post for the full details on the tip, but follow the advice: I suspect you're asked to estimate your work on a regular basis, and understanding where the magic comes from is an important part of that task.
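
If you want a concrete starting point for that review, the comparison itself is simple arithmetic. A toy sketch with invented tickets and numbers:

```python
# Toy sketch with invented data: line up estimates against actuals.
tickets = [
    ("PROJ-101", 8, 13),  # key, estimated days, actual days
    ("PROJ-102", 3, 3),
    ("PROJ-103", 5, 11),
]

for key, estimated, actual in tickets:
    print(f"{key}: estimated {estimated}d, took {actual}d ({actual / estimated:.1f}x)")

total_estimated = sum(e for _, e, _ in tickets)
total_actual = sum(a for _, _, a in tickets)
print(f"Overall: {total_actual / total_estimated:.2f}x the estimate")
```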