Anchoring Bias
I suffer from an anchoring bias.

We all do to some extent, but I really suffer from it. We "anchor" our thinking relative to the limited information we have available or to our experience (1). I had a recent experience that nicely illustrates my frustration with my bias.

I was working on a test strategy for the second iteration of a project. I was struggling (for maybe a day or so) to wrap my mind around the problem and how we were going to test the new challenges this iteration was offering us. I "knew" what we needed to do, but I just couldn't seem to find a way to articulate that in terms of a strategy.

My problem was that I was thinking in terms of the strategy I had developed for the first iteration. I had that strategy open on my desktop as I worked and I referred back to it for ideas. When day three came around, a project stakeholder called an early morning meeting to talk about test planning. I still had nothing...

As I walked up the stairs I tried to think what I would report. What was I going to say? Give me more time? I thought I had spent too much time on it already. There had been too many false starts.

That's when the strategy hit me. In the stairwell somewhere between the first and second floor I had the solution. After I got to the meeting room, I pulled out a sheet of paper, drew a quick chart and jotted some notes. Once the conversation began I referred to the chart every now and then, but for the most part just talked about my ideas. I was finally able to articulate my strategy.

What happened was that I was finally away from the stupid first-iteration strategy. I didn't have it in the front of my mind. All I had in the stairwell was pressure and future embarrassment. I stopped thinking about how to tell people how we would solve the problem and just thought about the problem itself. A solution came.

I think I need a good way to identify when I'm suffering from an anchoring bias. I know that for the next few weeks I will be actively thinking about it, but give it a month and I will have mostly forgotten the experience. What heuristics can I use to identify a bias of this type? How do I know when I'm relying too heavily on a specific past experience, a past artifact, or even a template?

Any ideas?



1. Chapman, Gretchen B., and Eric J. Johnson. "Anchoring, Confirmatory Search, and the Construction of Values." Organizational Behavior and Human Decision Processes, Vol. 79, No. 2 (August 1999), pp. 115-153.
Why don't we practice in software development like we practice other skills?
Recently, I have been meeting on a regular basis with some coworkers to learn Ruby and Watir. We are working through a book on Ruby (about two chapters a week), and we regularly (or we try, anyway) apply that information to writing Watir scripts. One of the guys even started making an update to the Watir source code, only to find that the change had just been included in the latest release.
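For the curious, here is a minimal sketch of the kind of Watir script we practice writing. The page URL and field names below are made up for illustration, and the exact locator syntax varies between Watir versions, so treat this as a sketch rather than a recipe.

    require 'watir'

    # Drive a real browser against a (hypothetical) login page.
    # Assumes a browser driver (e.g. chromedriver) is on the PATH.
    browser = Watir::Browser.new :chrome
    browser.goto 'https://example.com/login'

    # Fill in the form. The field names are invented for this example.
    browser.text_field(name: 'username').set 'tester'
    browser.text_field(name: 'password').set 'secret'
    browser.button(type: 'submit').click

    # A crude check: did the page we landed on say "Welcome"?
    puts browser.text.include?('Welcome') ? 'PASS' : 'FAIL'

    browser.close

Even a script this small makes a good practice exercise: you can rework the locators, extract the check into a reusable method, or point it at a different page, which is exactly the kind of focused repetition the rest of this post is about.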

It reminds me of the team I worked with when I was managing a group of test automators at Lincoln. We were all new to the tools, all new to the programming language, and all new to the company. It was all new! It was great. We learned as a team. We took time on a regular basis to experiment and share those findings with the team. We would download shareware and trial versions of other tools and see if we could use them to do something we were currently doing better, or to do something we couldn't currently do because of a tool limitation.

On Fridays, I encouraged the team to take a couple of hours and try to learn something new: either new in our "chosen" toolset, or new in other tools, or just new in general. We passed around articles on test automation. We took time to ask each other questions and review each other's work. In general, we took the time to practice. That's what we did then and that seems to be what we are doing now. In Practicing the art of testing I listed some other examples of practicing software testing.

I'm thinking about this because a friend of mine (Julian Harty) is working on a very cool article in which he draws a parallel between performance testing and music. I just read the first draft of the article, and one thing stands out in my mind: professional musicians spend more time practicing than they do playing. To become masters of their craft, they practice every day. To become the best musical group (as opposed to the best musician), they practice as a group (orchestra, band, etc.).

According to The New Brain, when practicing the goal isn't just repeating the same thing again and again. A musician does not play a scale over and over just for the sake of playing the scale. When I repeat a scale while playing the guitar, it's not so I learn the scale. I know the scale. It's so I can get my fingers to know the scale. I want them to move faster and with more confidence. I am attempting to achieve a higher level of control over my performance. If I can better develop my fingering technique on that scale, I can better control my fingering in other aspects of my playing.

Each time I practice, I'm interested in doing some specific thing better. In martial arts, when we would practice we wouldn't focus on the very large topic of "fighting", we would focus on a specific kick for that hour or a specific self-defense technique. We would then follow that focused time with 15 minutes of general sparring. By improving one specific technique at a time, you gradually improve your overall ability over time. In music, I might focus on a specific song for an hour, or a specific type of music (jazz, rock, ska, etc...). I don't focus on "playing better." Not only is it not practical, it would be less effective.


"In order to achieve superior performance in a chosen field, the expert must counteract the natural impulse to gain an automated performance as soon as possible."


The more I think about this the clearer it becomes. When I first started learning guitar, I wanted to play songs. Specific songs. And I wanted to play them well. So I would practice a song here, or a song there. Always the same set of songs and always in the same style (mostly punk). Over time I played those songs rather well. Had I continued down that path, my guess is I could have mastered those songs - automated performance.

One day I wanted to play something else. I tried and I failed. I could not bring my mind and fingers to play a different type of music, or a different song. To do so, I would have had to start over from the beginning and repeat the whole process for the new song. I was always attempting to automate performance. What I needed was to learn technique, not automation, so that over time it would take me minutes, not days, to learn a song. Good guitar players can hear a song once and immediately do a reasonable job playing it (assuming it's not B.B. King or some other insanely good guitarist). I could not. I had focused too much on automation and not enough on superior performance.

So what does all this have to do with testing and software development? On most projects I work on, it's a heads-down race to the end of the project: too little time, too few people, too much work. After one iteration or project ends, it's a heads-down dash to the next one. No one ever makes a conscious decision to practice. Practice could be as simple as a training class, a couple of hours to work with a new tool or learn something new about a familiar one, writing some code in a new language, etc.

Let's compare a project team to an orchestra (to steal more from Julian). Our project team plays a concert every day. Each month, some people leave the project team and some new people join. We don't let the new people practice with us before the concert; we sit them down and tell them to play. If our conductor notices a problem (say, the testers are consistently behind their estimates), there is no time set aside for the testers to practice on that specific problem. Over time, while playing the concert, they can adjust and try new things, but you hear those changes within the context of the orchestra, and the effect is often something that sounds kinda funky.

I've asked several people today why we don't practice. All of them have thought about it, but no one has really provided an answer yet. Certainly I know that we are not an orchestra. Knowledge workers are not athletes. No analogy is perfect, but it does raise some interesting issues in my mind. Individually, we can all practice. But who does, or who should, get the project team to practice? How would they practice? What would that look like? Is that what we call continuous process improvement?

I would love to hear any thoughts or experiences anyone may have as to their own practice sessions or any practicing that you have done in groups.