Posts in Software Testing
What is a boundary?
Today was the second day of WHET #4 (the Workshop on Heuristic and Exploratory Testing). At the workshop, the question "What is a boundary?" came up. Here's my answer...



"A boundary is any criteria by which I factor my model of what I'm testing."

Here's how I think that definition helps me think about boundary testing:

1) It acknowledges that my ability to identify and work with boundaries is limited by my ability to model the problem. That is, even if there are other boundaries I'm not testing, I wouldn't know it, because they are outside of my model. As my model is incomplete or wrong, so shall my testing be incomplete and wrong.

2) It acknowledges that no boundary exists in isolation. Every boundary I identify was made distinct from the whole. That means any boundary in the system can affect any other boundary in the system. That's not always the case, but it's always possible. Most importantly, this helps me recognize those relationships.

3) It allows for any criterion: technology-based, user-expectation-based, requirements-based, and so on. It doesn't anchor my thinking to any one specific type of criterion. In this case, it's no different than James Bach's use of "data" in his definition or Doug Hoffman's use of "influencers" in his.

4) It's useful in the way I think about my testing. If someone asks me to identify the boundaries of a system, I start with a model. When I think I'm done identifying the boundaries, I can go back to my model to double-check my work. I would find that much harder if I didn't use the definition this way. If I identify a boundary without first thinking /explicitly/ about my model, I often do a weak analysis. I've seen myself do this. Thinking of it this way prompts me not to settle for a cursory analysis of the problem. (A small sketch of what this looks like in practice follows below.)
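To make that concrete, here's a minimal sketch in Python. The function accept_order_quantity() and the 1-to-100 quantity limit are hypothetical, not taken from any real system; the point is only that the boundary values and the expected results are both derived from a model that is stated explicitly first, so the test cases can be checked back against it.

# Hypothetical sketch: the function and the 1..100 limit are assumptions.
# The model is written down first so both the boundary values and the
# expected results can be traced back to it.

MIN_QTY, MAX_QTY = 1, 100   # requirements-based criterion: quantity must be 1..100

def accept_order_quantity(qty):
    """Stand-in for the system under test."""
    return MIN_QTY <= qty <= MAX_QTY

def boundary_values(low, high):
    """Values at, just inside, and just outside each modeled boundary."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

for qty in boundary_values(MIN_QTY, MAX_QTY):
    expected = MIN_QTY <= qty <= MAX_QTY   # the oracle comes from the same model
    actual = accept_order_quantity(qty)
    print(qty, expected, actual, "ok" if expected == actual else "CHECK")

Here the stand-in happens to match the model, so every case passes; against a real implementation, the comparisons at the edges are where this kind of analysis earns its keep. The useful part is that when I think I'm done, the list of values above can be double-checked against the stated model rather than against my memory.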
Automation Bias
Yesterday at WHET #4, James Bach mentioned automation bias. A bit of research turned up Automation Bias in Intelligent Time Critical Decision Support Systems by M.L. Cummings.


While humans are typically effective in naturalistic decision making scenarios in which they leverage experience to solve real world ill-structured problems under stress, they are prone to fallible heuristics and various decision biases that are heavily influenced by experience, framing of cues, and presentation of information. For example, confirmation bias takes place when people seek out information to confirm a prior belief and discount information that does not support this belief. Another decision bias, assimilation bias, occurs when a person who is presented with new information that contradicts a preexisting mental model, assimilates the new information to fit into that mental model. Of particular concern in the design of intelligent decision support systems is the human tendency toward automation bias, which occurs when a human decision maker disregards or does not search for contradictory information in light of a computer-generated solution which is accepted as correct. Operators are likely to turn over decision processes to automation as much as possible due to a cognitive conservation phenomenon, and teams of people, as well as individuals, are susceptible to automation bias. Human errors that result from automation bias can be further decomposed into errors of commission and omission. Automation bias errors of omission occur when humans fail to notice problems because the automation does not alert them, while errors of commission occur when humans erroneously follow automated directives or recommendations.


Lots of good stuff in there:


  • confirmation bias takes place when people seek out information to confirm a prior belief and discount information that does not support this belief

  • assimilation bias occurs when a person who is presented with new information that contradicts a preexisting mental model, assimilates the new information to fit into that mental model

  • automation bias occurs when a human decision maker disregards or does not search for contradictory information in light of a computer-generated solution which is accepted as correct