Today was the second day of the WHET #4 (Workshop on Heuristic and Exploratory Testing). At the workshop, the question of "What is a boundary?" came up. Here's my answer...
"A boundary is any criterion by which I factor my model of what I'm testing."
Here's how I think that definition helps me think about boundary testing:
1) It acknowledges that my ability to identify and work with boundaries is limited by my ability to model the problem. That is, even if there are other boundaries I'm not testing, I wouldn't know, because they fall outside my model. To the extent my model is incomplete or wrong, my testing will be incomplete and wrong.
2) It acknowledges that no boundary exists in isolation. Every boundary I identify is factored out of the same whole, which means any boundary in the system can potentially affect any other. That's not always the case, but it's always possible, and most importantly, the definition helps me recognize those relationships.
3) It allows for any criterion: technology-based, user-expectation-based, requirements-based, and so on. It doesn't anchor my thinking to any one specific type of criterion. In this respect, it's no different from James Bach's use of "data" in his definition or Doug Hoffman's use of "influencers" in his.
4) It's useful in the way I think about my testing. If someone asks me to identify the boundaries of a system, I start with a model. When I think I'm done identifying boundaries, I can go back to my model to double-check my work. That would be harder if I didn't use the definition this way. If I identify a boundary without first thinking /explicitly/ about my model, I often do a weak analysis; I've seen myself do it. Thinking of boundaries this way prompts me not to settle for a cursory analysis of the problem.
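To make the idea concrete, here's a minimal sketch of how a model-derived criterion factors an input space into boundary cases. The validator, its 0-120 range, and the cases are all my own invented example, not something from the workshop: once my model says "valid ages are integers in [0, 120]", that single criterion is what makes the interesting test values fall out.

```python
def is_valid_age(age):
    """Hypothetical validator: accepts integer ages from 0 to 120 inclusive."""
    return isinstance(age, int) and 0 <= age <= 120

# Boundary cases derived from the model "ages are integers in [0, 120]":
# values just outside, on, and just inside each edge of the valid range.
boundary_cases = {
    -1: False,   # just below the lower boundary
    0: True,     # on the lower boundary
    1: True,     # just inside the lower boundary
    119: True,   # just inside the upper boundary
    120: True,   # on the upper boundary
    121: False,  # just above the upper boundary
}

for age, expected in boundary_cases.items():
    assert is_valid_age(age) == expected, f"unexpected result for {age}"
```

Note that the cases above come entirely from the model. If the model is wrong (say, the real system also accepts ages as strings), those boundaries are invisible to this analysis, which is exactly the limitation point 1 describes.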