Sometimes, as testers, it can be easier to explain issues away rather than address them. One bias that often comes into play here is the fundamental attribution error: attributing events or behaviors to external factors ("the task, other people, fate") or internal factors ("ability, mood, effort") in a way that lets us dismiss something as irrelevant. Found a bug, but in the "wrong" environment? Suspect a security issue, but don't think you're "qualified" to look for security problems? An extension of this is attributing our own behavior to situational factors while attributing others' behavior to personal factors. For example: "If I miss that bug, it's because the system is so complex that no one could ever find it except by luck; but if Jimmy misses that bug, it's because he's incompetent."
In Negotiation, Lewicki, Saunders, and Barry look at the "law of small numbers" bias, which I think is an important one for testers to be aware of. Decision theory tells us that people tend to draw conclusions from small sample sizes; the book Freakonomics touches on this several times. "This fallacy results in a tendency to believe that a small sequence of events is representative, while ignoring base rate data from a larger universe of events."
As testers, we often interact with an application in many ways that the larger population won't. We find possible issues, and then we're asked to draw conclusions. It can be easy to believe that our experiences while hunting for problems are representative of the bigger picture. Often they aren't. That doesn't mean they're never representative, but we need to remember to account for the big picture when we provide our feedback.
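If it helps to see the "law of small numbers" in actual numbers, here's a minimal Python sketch. The 2% failure rate and the session counts are invented purely for illustration: a handful of ten-session samples swing wildly around the true rate, while a large sample settles close to it.

```python
import random

random.seed(1)

# Invented for illustration: suppose 2% of real user sessions
# hit a particular failure (the "base rate").
TRUE_FAILURE_RATE = 0.02

def observed_rate(n_sessions):
    """Simulate n_sessions sessions and return the observed failure rate."""
    failures = sum(random.random() < TRUE_FAILURE_RATE for _ in range(n_sessions))
    return failures / n_sessions

# A tester's handful of exploratory sessions vs. the larger universe of use.
for n in (10, 10, 10, 10_000):
    print(f"{n:>6} sessions -> observed failure rate {observed_rate(n):.3f}")
```

Run it a few times: the ten-session samples will often show 0% or 10% failures, neither of which tells you much about the real 2%. Our bug-hunting sessions are those small samples.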
I've always viewed the "sword in a field" scene from the movie The Messenger as a great illustration of confirmation bias. That's the bias where we seek out information that supports the choice we've already committed to. You can watch the scene here:
Sword in a Field
"You didn't see what was... you saw what you wanted to see."
How often as testers do we do that? We do it at the micro level (when we try to explain specific bugs) and at the macro level (when we think about "best practices"). For those who don't like YouTube as a source of learning, you might take a look at "Automation Bias in Intelligent Time Critical Decision Support Systems" by M.L. Cummings - but I promise you, it's a good movie clip.