Pair testing
Pair testing can be challenging for several reasons, not the least of which is just finding the time for it. But I still try to do it when I can. It's more difficult now that I'm a manager. I haven't really tested in months, much less pair tested. I need to work on that... but I have other things I need to work on too. Always too much to do.

When I'm pair testing with someone for the first time, I'm looking for some specific things:

  • How does the person I'm pairing with like to work? Do they take notes in real time? Do they use multiple screens and laptops, multitasking their work? Do they use tools? If so, which ones?

  • How does the person I'm pairing with like to communicate? Do they talk out loud? Are they good at stream-of-consciousness speaking? Do they IM a lot? Do they write stuff down, and if so, what?

  • How does the person I'm pairing with like to test? Do they use scripts or charters? Do they have data handy? In what format, a spreadsheet, text files, database, other? Can I see them using specific test patterns, heuristics, or techniques? As their testing unfolds, is it always the same or does it change? How does it change?

  • What does the person I'm pairing with value? What's a bug to them? How do they assess priority? Who do they view as their stakeholder(s)? What information is credible? What oracles are credible? What parts of the system do they focus on?

  • How does our dynamic come into play while we are pairing? If they are a programmer (testing their own code or not), what words do they use, or what concerns do they exhibit, that would reveal that through their testing? If they are a tester, same questions. If I manage them, does that affect their behavior in a noticeable way? (And how would I notice?)

  • How do I behave while we're testing? Do I take control? Not speak up? How do I feel as we test (about the product, our testing, my interaction with the other person)?

  • What do we do after we test? Do we combine and clean up our notes? Does one of us write a script (manual or automated) for anything we tested? What defects get written up, which ones don't, and what information goes in them? What follow up charters do we think of? Do we both do a debrief?


I suppose that somewhere in there I also find time to think about the testing that I'm supposed to be doing. But I can't be bothered by those details. :)
Testing labels can get in the way of talking about testing
Through answering a lot of questions on SearchSoftwareQuality.com, I've found that terms like integration testing, unit testing, system testing, and acceptance testing often get in the way of talking about testing. There are many of these terms, and they often mean different things to different people.

I suspect we use those labels in an attempt to do one (or more) of the following:

  • imply chronological position within a phased life-cycle

  • imply a set of common testing practices done within that phase (automation of unit tests, traceability back to requirements, client interviews and surveys, etc...)

  • imply who is doing the testing (developers, testers, customers, etc...)

  • imply some concept of what risks we might be looking for (a technical concern, how two parts interact, if we've implemented what the customer wants, etc...)

  • imply some concepts of what areas of the application we are covering (code, interfaces, data, features, subsystems, etc...)

  • imply some concept of what oracles we might use in our testing (asserts, specifications, competitor or past products, etc...)


I'm sure they imply other things as well. (Let me know if you think of some I missed, and I'll add them to the list with attribution.)

The reason I think this is important is that while these labels can help, they can also hinder. They can confuse the issue we are talking about, since each term comes with its own baggage. Often, I find it easier to just talk about testing (oracles, coverage, risk, technique, etc...) as applied to specific problems than to talk in the abstract labels we often apply to them.

I wonder if you've encountered the same issue?