Posts in Heuristics
Don't get trapped by reuse
I'm a big fan of reusing previous testing artifacts. If you have a test bed ready and available, I'm interested in taking a look at it if it can help me shave time off my project. However, don't let past testing artifacts trap you or bias you. Not only can they have the wrong focus, but they also run the risk of slowing you down instead of speeding you up. If you have a lot of rework to do, or if you need to spend a lot of time disambiguating coverage, you might spend more time reviewing than you would have spent just starting from scratch.

Here are some things I look at when trying to determine if an existing test bed is a time sink:

  • There's no clear summary of what's included in the test bed.

  • No one on the team was around when the test bed was originally used. Its coverage is hearsay.

  • A preliminary review of the test bed shows that several different types of updates are required to make it useful (data elements, screenshots, endpoints, etc.).

  • 100% of the original test bed development and execution was done by an external provider and the artifacts were "turned over" at the end of the project.
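As a rough illustration, the warning signs above can be rolled into a quick scoring sketch. The flag names and the threshold here are my own invention for the example, not part of any formal process:

```python
# Illustrative sketch only: score an existing test bed against the
# warning signs listed above. Flag names are hypothetical.
WARNING_SIGNS = {
    "no_summary": "No clear summary of what's included in the test bed.",
    "coverage_is_hearsay": "No one from the original effort is still on the team.",
    "needs_many_update_types": "Several types of updates needed (data, screenshots, endpoints).",
    "fully_external": "Built and executed entirely by an external provider.",
}

def is_likely_time_sink(flags, threshold=2):
    """Return True when enough warning signs are present that starting
    from scratch is probably cheaper than reviewing and reworking."""
    hits = [name for name, present in flags.items() if present]
    return len(hits) >= threshold

flags = {
    "no_summary": True,
    "coverage_is_hearsay": True,
    "needs_many_update_types": False,
    "fully_external": False,
}
print(is_likely_time_sink(flags))  # two signs present, so True
```

The threshold of two is arbitrary; the point is simply that once a couple of these signs show up together, reviewing the old artifacts starts to look more expensive than starting fresh.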

Quick tests for landing pages
Mary Flaherty wrote an article recently titled Are Your Landing Pages Driving Clients Away? In the article, she provides some great tips that I think also make great test ideas. Here's a quick summary:

  • Does the landing page have a clear call to action? In her words, "Where's the doorbell?" Test ideas around this might include asking users if they know what they're supposed to do next once they hit the landing page. Or watching where they go and seeing if there's convergence or not.

  • Does the landing page present all the options in a manageable way? That is, can you quickly see what the options are, and find what you might need? In her words, "Which door is which?" Test ideas around this might include asking users to find specific information and seeing how long it takes them to find it from the landing page - again looking for convergence. I might also ask users to write down what they think the site contains without navigating past the landing page, and time how long it takes them to do it.

  • If you collect visitor information, only ask for what's absolutely necessary. Test ideas include looking at the business process being served by collecting the information and evaluating how it would change if something wasn't collected. Is that ok? Less might be more at this early stage...

  • How is information laid out and what's the quality of the copy editing? She identifies this as the risk of "TMI." Test ideas include making sure the site has copy guidelines (and they are followed), ensuring the writing is appropriate for the audience (jargon, concepts, structure, etc.), and making sure the layout is visually appealing. Mary also talks about dumping unneeded animations, choosing relevant images, and keeping information up to date and relevant. All of these are possible tests and/or heuristics for possible problems with the site.


Check out the full article for more ideas.
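The "convergence" checks above are easy to quantify once you've watched a handful of users. Here's a toy metric of my own (not from Mary's article) that measures how strongly users' first clicks converge on a single destination:

```python
from collections import Counter

def convergence(destinations):
    """Fraction of users whose first click went to the most common
    destination. Values near 1.0 suggest a clear call to action;
    low values suggest users can't tell what to do next."""
    if not destinations:
        return 0.0
    _, top_count = Counter(destinations).most_common(1)[0]
    return top_count / len(destinations)

# First-click destinations from a hypothetical session of six users
clicks = ["signup", "signup", "pricing", "signup", "blog", "signup"]
print(convergence(clicks))  # 4 of 6 users clicked "signup"
```

You'd want a lot more than six sessions before drawing conclusions, but even a crude number like this makes "is there convergence or not?" a question you can track over time.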
Manual calibration
Many times when performance testing, I'll manually calibrate my performance test script by loading the page manually and comparing those load times with the ones generated by my test tool. Doing this with a load of one user allows me to get an idea of how much my tool is skewing the numbers. Often, I find the tool is just a tad worse in terms of response times. If it's too far off, that normally tells me something is wrong.
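A minimal sketch of that calibration check, assuming you've recorded a few manual page loads with a stopwatch and pulled the matching single-user samples from your tool's report (the numbers below are made up for illustration):

```python
def tool_skew(manual_times, tool_times):
    """Ratio of the tool's average response time to the manual average.
    A ratio near 1.0 means the tool tracks reality; a large ratio
    suggests the script or tool configuration is skewing the numbers."""
    manual_avg = sum(manual_times) / len(manual_times)
    tool_avg = sum(tool_times) / len(tool_times)
    return tool_avg / manual_avg

# Page-load samples in seconds, one user, same page
manual = [1.8, 2.1, 1.9]   # stopwatch, loading the page by hand
tool = [2.0, 2.3, 2.2]     # the test tool's reported response times

print(f"{tool_skew(manual, tool):.2f}x")  # → 1.12x, a tad worse
```

Here the tool reports times about 12% worse than the manual baseline, which is in the "just a tad worse" range described above; a ratio of, say, 2x or more would be the cue to go looking for a script or configuration problem.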