Focalism
In his post titled Be aware of Bounded Awareness, Albert Gareev brings up the bias of focalism. In the post, Albert quotes Dolly Chugh and Max Bazerman:
"'Focalism' is the common tendency to focus too much on a particular event (the 'focal event') and too little on other events that are likely to occur concurrently. Timothy Wilson and Daniel Gilbert of the University of Virginia found that individuals overestimate the degree to which their future thoughts will be occupied by the focal event, as well as the duration of their emotional response to the event."
I find that many executives in large IT organizations have a specific focal event in mind when they think about software testing. That is, they always remember the "one big issue" that was missed: a performance issue that brought down the system for half a day, or an accounting package bug that forced them to restate some part of their quarterly earnings, or some other equally traumatic event. These focal events seem to forever bias them to focus on that specific type of issue going forward - to the detriment of other possible issues. Ironically, the other issues are probably more likely to occur now that so much focus is on the problem from the focal event.
I also find that start-ups and software companies are less likely to suffer from this. Either they see problems potentially hiding under every rock, or they see none. It's almost like they're paranoid or oblivious. Perhaps oblivious is the wrong word - let's try overly optimistic. Because there is rarely a well-established status quo in these companies, I think disruptive events leave less of a scar on those who need to decide where to allocate limited resources.
How does this affect your testing?
It can sometimes be hard to know what other parts of the organization are doing to take corrective action after the focal event. For example, if there was a security breach, does that mean you need to do more security testing? Or has the infrastructure team stepped up and instituted a bunch of new hardware builds, alerts, and processes? It depends on what the root cause of the event was and what other corrective actions are being taken. You may need to do more testing, or you may not.
If you're in an organization where focalism has biased the overall testing process, you need to take steps to make sure there is balance in the testing approach. This isn't always easy, especially if you have layers and layers of management to work through or if you have teams that don't talk to one another. I think that's another reason why large organizations tend to suffer from this bias more than smaller organizations. In larger organizations communication is more difficult and distortion is high.
If you need to help people understand where the focal event fits in, try using Scott Barber's FIBLOTS mnemonic. Each letter of the mnemonic can help you think about a different aspect of risk:
- Frequent: What features are most frequently used (e.g., features the user interacts with, background processes, etc.)?
- Intensive: What features are the most intensive (searches, features operating with large sets of data, features with intensive GUI interactions)?
- Business-critical: What features support processes that need to work (month-end processing, creation of new accounts)?
- Legal: What features support processes that are required to work by contract?
- Obvious: What features support processes that will earn us bad press if they don't work?
- Technically risky: What features are supported by or interact with technically risky aspects of the system (new or old technologies, places where we've seen failures before, etc.)?
- Stakeholder-mandated: What have we been asked/told to make sure we test?
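One way to keep the discussion balanced is to rate each feature against every FIBLOTS dimension and rank by the total, so the focal event's dimension can't silently crowd out the others. The sketch below is a hypothetical illustration - the feature names, ratings, and 0-3 scale are all invented, not part of Scott Barber's mnemonic itself:

```python
# Hypothetical sketch: score features across all FIBLOTS dimensions so a
# single focal dimension (e.g., a post-event stakeholder mandate) doesn't
# dominate test planning. Feature names and ratings are invented examples.

FIBLOTS = [
    "frequent", "intensive", "business_critical", "legal",
    "obvious", "technically_risky", "stakeholder_mandated",
]

# Each feature gets a 0-3 rating per dimension; omitted dimensions count as 0.
features = {
    "search":          {"frequent": 3, "intensive": 3, "technically_risky": 1},
    "month_end_close": {"business_critical": 3, "legal": 2},
    "login":           {"frequent": 3, "obvious": 3, "stakeholder_mandated": 3},
}

def risk_score(ratings):
    """Sum ratings across every FIBLOTS dimension, not just the focal one."""
    return sum(ratings.get(dim, 0) for dim in FIBLOTS)

# Rank features by total risk rather than by the focal dimension alone.
ranked = sorted(features, key=lambda f: risk_score(features[f]), reverse=True)
print(ranked)  # highest total risk first
```

Even a rough table like this makes the conversation concrete: the mandated item still scores high, but the other dimensions stay visible alongside it.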
Often, the focal event will fit under business-critical, and it certainly becomes a stakeholder mandate (after the event). However, make sure the organization isn't losing focus on the other items in the list. Using something like this to frame the discussion can be helpful, because it doesn't attempt to downplay the criticality of the event. Instead, it says "Yeah... we know that was a big miss and a painful event for the company. We don't want it to happen again. But we also don't want one of these other items to impact us in the same way."