Monday, December 27, 2010

Software Testing Lightning Talks from IWST

Last month we held the November session of the Indianapolis Workshops on Software Testing (IWST). At the workshop we tried something a bit new. Instead of working with the same topic for five hours, we had each attendee present at least one short five-minute talk on a topic of their choice. Some attendees presented more than once. The participants in the workshop were the following:

  • John Galligan

  • Mike Goempel

  • Jason Horn

  • Michael Kelly

  • Ananth Krishnamoorthy

  • Panos Linos

  • Patrick Milligan

  • Jayadev Reddy

  • Chris Scales


In addition to the format switch-up, we also introduced some podcast equipment into the room to capture the talks and the follow-up discussion. Below, you'll find links to the audio for each talk along with some notes the IWST organizers put together to help people follow up on some of the topics discussed.

MCOASTER by Mike Kelly

Experiences Testing In Scrum by Patrick Milligan

More Experiences Testing In Scrum by Patrick Milligan

  • The basics of Scrum on Wikipedia.

  • Burn-down Charts on Wikipedia.

  • Check out the TWiki website for more information on that tool.

  • Check out the Bugzilla website for more information on that tool.

  • The MoSCoW acronym on Wikipedia.

  • Check out the Subversion website for more information on that tool.

  • Check out the Watir website for more information on that tool.


Screen Recording APIs by Jason Horn

Testing Mobile by Jayadev Reddy

  • The topic of using emulators for testing came up during the discussion. This post contains a nice listing of some emulators out there today with links for where to find them.


Challenges With Hiring Testers by Mike Goempel

Book Review: How To Break Web Software by Chris Scales

One Trick Ponies by Mike Kelly

  • Check out the Stella website for more information on that tool.

  • Check out the Website Grader website for more information on that tool.

  • Check out the BrowserMob website for more information on that tool.

  • Check out the spoon website for more information on that tool.

  • Check out the WebPagetest website for more information on that tool.

  • Check out the Browsershots website for more information on that tool.

  • Check out the Web Developer extension website for more information on that tool.

  • During the talk Mike mentioned the concept of blink testing.

  • Check out the QuickTestingTips.com blog for short write-ups on these (and similar) tools.


Testing In The Cloud by Jason Horn

  • Jason works for BlueLock, a company that offers hosting/services for the cloud. You can learn more about their products on their website.

  • Jason mentioned VMware and Microsoft Hyper-V as examples of private clouds.

  • Jason talked a bit more about Sauce Labs. Check out their website for more on their hosted Selenium testing services. (There is a free plan available; a minimal sketch of driving a remote Selenium session follows this list.)

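For those curious what "hosted Selenium" looks like in practice, here is a minimal sketch using the current Selenium Python bindings. The grid URL and credentials below are placeholders I've invented; a real provider such as Sauce Labs documents its own endpoint and authentication scheme.

```python
# Minimal sketch of driving a hosted (remote) Selenium session.
# The endpoint URL and credentials are placeholders, not a real provider's values.
from selenium import webdriver

options = webdriver.ChromeOptions()

driver = webdriver.Remote(
    command_executor="https://USERNAME:ACCESS_KEY@hosted-grid.example.com/wd/hub",
    options=options,
)
try:
    driver.get("https://www.example.com")
    print(driver.title)  # a trivial check that the remote session is working
finally:
    driver.quit()
```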

ROI of Test Automation by Ananth Krishnamoorthy

Heuristics For Creating Automated Regression Tests by Mike Kelly

  • Mike mentioned that this content was co-developed with David Christiansen.

  • For more on all-pairs testing (and other combinatorics techniques), check out the Pairwise Testing website. (A rough sketch of the all-pairs idea follows this list.)

  • During the questions and answers, Pat Milligan mentioned the tool Hudson. Learn more about Hudson and continuous integration on the Hudson website.

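To make the all-pairs idea concrete, here is a naive greedy sketch in Python. The parameters are invented for illustration, and the algorithm is intentionally simple; the dedicated tools linked from the Pairwise Testing site produce much smaller test sets.

```python
# Naive greedy pairwise sketch (parameters invented for illustration).
from itertools import combinations, product

params = {
    "browser": ["IE", "Firefox", "Chrome"],
    "os": ["Windows", "Mac", "Linux"],
    "account": ["admin", "standard"],
}

names = list(params)

# Every pair of values (across two different parameters) that must show up
# in at least one test.
uncovered = {
    ((a, va), (b, vb))
    for a, b in combinations(names, 2)
    for va in params[a]
    for vb in params[b]
}

tests = []
for values in product(*params.values()):
    row = dict(zip(names, values))
    pairs = {((a, row[a]), (b, row[b])) for a, b in combinations(names, 2)}
    if pairs & uncovered:  # keep the row only if it covers a new pair
        tests.append(row)
        uncovered -= pairs

exhaustive = 1
for v in params.values():
    exhaustive *= len(v)
print(f"{len(tests)} tests cover all pairs vs. {exhaustive} exhaustive combinations")
```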

The Omega Tester by Jason Horn

If you like the format, you're in luck... We think we'll be doing something like this a couple of times in 2011. Enjoy the audio, and if you have any questions or suggestions, let me know. You can also help us plan the 2011 workshop topics on the IWST Meetup site, under Ideas.
Monday, December 27, 2010

Testing Podcast

I've started adding selected IWST talks from the 2010 workshops to the Testing Podcast. You'll start to see them come out once a week for the next several weeks. Going forward, when podcast material becomes available, I'll be posting that content on both the IWST website and Testing Podcast.
Thursday, October 28, 2010

Focalism 

In his post titled Be aware of Bounded Awareness, Albert Gareev brings up the bias of focalism. In the post, Albert quotes Dolly Chugh and Max Bazerman:
"'Focalism' is the common tendency to focus too much on a particular event (the 'focal event') and too little on other events that are likely to occur concurrently. Timothy Wilson and Daniel Gilbert of the University of Virginia found that individuals overestimate the degree to which their future thoughts will be occupied by the focal event, as well as the duration of their emotional response to the event."

I find that many executives in large IT organizations have a specific focal event in mind when they think about software testing. That is, they always remember the "one big issue" that was missed: a performance issue that brought down the system for half a day, or an accounting package bug that forced them to restate some part of their quarterly earnings, or some other equally traumatic event. These focal events seem to forever bias them to focus on that specific type of issue going forward - to the detriment of other possible issues. Ironically, the other issues are probably more likely to occur now that so much focus is on the problem from the focal event.

I also find that start-ups and software companies are less likely to suffer from this. Either they see problems potentially hiding under every rock, or they see none. It's almost like they're paranoid or oblivious. Perhaps oblivious is the wrong word - let's try overly optimistic. Because there is rarely a well-established status quo in these companies, I think disruptive events leave less scarring on those who need to decide where to allocate limited resources.

How does this affect your testing?

It can sometimes be hard to know what other parts of the organization are doing to take corrective action after the focal event. For example, if there was a security breach, does that mean you need to do more security testing? Or has the infrastructure team stepped up and instituted a bunch of new hardware builds, alerts, and processes? It depends on what the root cause of the event was and what other corrective actions are being taken. You may need to do more testing, or you may not.

If you're in an organization where focalism has biased the overall testing process, you need to take steps to make sure there is balance in the testing approach. This isn't always easy, especially if you have layers and layers of management to work through or if you have teams that don't talk to one another. I think that's another reason why large organizations tend to suffer from this bias more than smaller organizations. In larger organizations communication is more difficult and distortion is high.

If you need to help people understand where the focal event fits in, try using Scott Barber's FIBLOTS mnemonic. Each letter of the mnemonic can help you think about a different aspect of risk (a small scoring sketch follows the list):

  • Frequent: What features are most frequently used (e.g., features the user interacts with, background processes, etc.)?

  • Intensive: What features are the most intensive (searches, features operating with large sets of data, features with intensive GUI interactions)?

  • Business-critical: What features support processes that need to work (month-end processing, creation of new accounts)?

  • Legal: What features support processes that are required to work by contract?

  • Obvious: What features support processes that will earn us bad press if they don't work?

  • Technically risky: What features are supported by or interact with technically risky aspects of the system (new or old technologies, places where we've seen failures before, etc.)?

  • Stakeholder-mandated: What have we been asked/told to make sure we test?

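To make the mnemonic a bit more concrete, here is a rough sketch of how a team might score features against the FIBLOTS dimensions. The features, the 0-3 scores, and the idea of summing them are my own illustration, not anything Scott Barber prescribes.

```python
# Rough FIBLOTS scoring sketch. Features and scores are invented for
# illustration; the mnemonic itself is Scott Barber's.
features = {
    "month-end processing": dict(frequent=1, intensive=3, business_critical=3,
                                 legal=2, obvious=2, technically_risky=1,
                                 stakeholder_mandated=3),
    "account search":       dict(frequent=3, intensive=2, business_critical=2,
                                 legal=0, obvious=2, technically_risky=1,
                                 stakeholder_mandated=0),
    "new login flow":       dict(frequent=3, intensive=1, business_critical=2,
                                 legal=0, obvious=3, technically_risky=3,
                                 stakeholder_mandated=1),
}

# Rank the features so the conversation covers more than just the focal event.
for name, scores in sorted(features.items(), key=lambda kv: -sum(kv[1].values())):
    print(f"{name:22} FIBLOTS total: {sum(scores.values())}")
```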

Often, the focal event will fit under business-critical, and certainly becomes a stakeholder mandate (after the event). However, make sure the organization isn't losing focus on the other items in the list. Using something like this to frame the discussion can be helpful, because it doesn't attempt to downplay the criticality of the event. Instead, it says "Yeah... we know that was a big miss and a painful event for the company. We don't want it to happen again. But we also don't want one of these other items to impact us in the same way."
Monday, September 27, 2010

Software Testing Centers of Excellence (CoE)

This weekend we held the September session of the Indianapolis Workshops on Software Testing (IWST) at Butler University. The topic of the five-hour workshop was software testing Centers of Excellence (CoE). The participants in the workshop were the following:

  • Andrew Andrada

  • Patrick Beeson

  • Howard Clark

  • Matt Dilts

  • Randy Fisher

  • Mike Goempel

  • Rick Grey

  • James H. Hill

  • Michael Kelly

  • Panos Linos

  • Natalie Mego

  • Hal Metz

  • Patrick Milligan

  • Charles Penn

  • Brad Tollefson

  • Bobby Washington

  • Tina Zaza


We started the workshop by going around the room and asking each person to comment on what they thought a Center of Excellence was, and what their experience was with the topic. In general, there were only a handful of people who had either worked in a formal Center of Excellence or had experience building one out. The overwhelming feeling in the room was one of, "I'm here to learn more about what people mean when they use that term." and "It all sounds like marketing rubbish to me." Okay, perhaps that rubbish part was me embellishing, but I think others besides me thought it - even if they didn't phrase it that way.

The first experience report came from me. I briefly presented Dean Meyer's five essential systems for organizations and shared some experiences of how I've used that model to help a couple of clients build out or fix their testing Centers of Excellence. I use the ISCMM mnemonic to remember the five systems:

  • Internal Economy: how money moves through the organization

  • Structure: the org chart

  • Culture: how people interact with one another, and what they value

  • Methods and Tools: how people do their work

  • Metrics and Rewards: how people are measured and rewarded


If you're not familiar with Meyer's work, I recommend his website or any of his short but effective books on the topic.

I didn't really provide any new insights into how to use the five systems. If you read the books, or review the material on the website, you'll see that Meyer uses these systems to help diagnose and fix problems within organizations. That's how I use them as well - I just focus them on problems in testing organizations. I then provided some examples of each from past clients.

Meyer also spends a good deal of time talking about "products." A product is what your organization offers to the rest of the organization. In a testing CoE, that might be products around general testing services, performance testing, security testing, usability testing, or test automation. Or it might be risk assessments, compliance audits, or other areas that sometimes tie in closely with the test organization. I personally use this idea of products as a quick test for identifying a CoE.

Meyer defines products as "things the customer owns or consumes." In his article on developing a service catalog, he points out that:
"...an effective catalog describes deliverables -- end results, not the tasks involved in producing them. Deliverables are generally described in nouns, not verbs. For example, IT sells solutions, not programming."

I believe that if your organization does not offer clear testing products, then it's not a CoE. It's just an organization that offers staff augmentation in the area of software testing. There is no technical excellence (in the form of culture, methods and tools, or metrics and rewards) that it brings to bear in order to deliver. To me, the term Center of Excellence implies that the "center" - that is, the organization which has branded itself as excellent in some way - has some secret formula that it bakes into its products. It then delivers that excellence to the rest of the organization by delivering those products.

After my experience report, Randy Fisher offered up his experiences on vendor selection criteria. Randy's company (a large insurance company) is going through the process of deciding if they should build a CoE themselves, or if they should engage a vendor to help them build out the initial CoE. For Randy and his team, the business case for moving towards a CoE is to allow them to leverage the use of strategic assets (people, process, and technology) to achieve operational efficiencies, reduce cost, improve software quality, and address business needs more effectively across all lines of business.

Randy and his team started with an evaluation pool of several vendors and, using the following weighted criteria, narrowed that list down to two key vendors (a minimal scorecard sketch follows the list):

  • Understanding of company’s objectives

  • Test Process Improvement (TPI) Strategy

  • Assessment phase duration

  • Output from Assessment phase

  • Metrics/Benchmarking

  • Experience in co-location

  • Risk Based Testing Approach

  • Standards, Frameworks, Templates

  • Consulting Cost

  • Expected ROI

  • Expected Cost Reduction

  • Special Service Offerings/ Observations

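For anyone running a similar evaluation, the mechanics of a weighted scorecard are straightforward. Here is a minimal sketch; the criteria subset, weights, and scores are invented for illustration and are not Randy's actual numbers.

```python
# Minimal weighted-scorecard sketch. Weights and scores are invented for
# illustration; they are not the actual numbers from Randy's evaluation.
weights = {
    "understanding_of_objectives": 0.20,
    "tpi_strategy": 0.15,
    "metrics_benchmarking": 0.15,
    "risk_based_testing": 0.15,
    "consulting_cost": 0.20,
    "expected_roi": 0.15,
}

# Each vendor scored 1 (poor) to 5 (excellent) on each criterion.
vendors = {
    "Vendor A": {"understanding_of_objectives": 4, "tpi_strategy": 3,
                 "metrics_benchmarking": 4, "risk_based_testing": 5,
                 "consulting_cost": 2, "expected_roi": 3},
    "Vendor B": {"understanding_of_objectives": 3, "tpi_strategy": 4,
                 "metrics_benchmarking": 3, "risk_based_testing": 3,
                 "consulting_cost": 4, "expected_roi": 4},
}

for name, scores in vendors.items():
    total = sum(weights[c] * scores[c] for c in weights)
    print(f"{name}: weighted score {total:.2f} out of 5")
```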

After this initial evaluation, Randy offers the following advice for those who are looking to undergo a similar exercise:

  1. Have specific objectives in mind based on your organization when you meet with the vendors (this list contains a sampling of what I used…)

    • Create a benchmark (internally across systems and with peers in the industry) to facilitate ongoing measurement of organizational test maturity

    • Develop a roadmap for testing capability and maturity improvement

    • Leverage experience and test assets including: standards, frameworks, templates, tools etc.

    • Assess the use of tools, and perform a gap analysis to determine the need for additional tooling

    • Define touch points and handoffs between various groups (upstream/downstream) as they relate to testing

    • Assess test environments and create the appropriate standards and tools to address preparation, setup & maintenance

    • Utilize the knowledge of the vendor (both functional and insurance) to facilitate the creation of an enterprise test bed and test data management process

    • Assist with the improvement of capacity planning for test teams

    • Document the test strategy and process differences between groups



  2. Choose your selection criteria based on the factors that are important to you – nobody knows you like you do...

  3. Talk to as many vendors as you can.

  4. Don’t be afraid to negotiate cost and participation level for the engagement.


During the discussion that followed Randy's experience report, there were some interesting questions asked about his goals. That is, what pain are they trying to solve by moving to a CoE? Randy indicated that predictability (times/dates, quality, etc...) was a big factor from a project perspective. He also indicated that he wanted his testers to have better tools for knowledge sharing. At the end of the day, he hopes a CoE makes it easier for them to do their jobs. Hal Metz had an interesting insight that for him, the goal should be to create an organization that enables the testers to increase their reputation (either through technical expertise or ability to deliver).

After Randy's experience report, Howard Clark shared an actual example of a slide deck he helped a client prepare to sell a test automation CoE internally. The slide deck walked through, step by step, what the executive would need to address and how building out the CoE would add value in their environment. I'd LOVE to share the slides, but can't. Howard has committed to distill those slides down into either a series of posts on his blog or a doctored set of slides. Once I get more info, I'll post an update here.

Either way, I think Howard's talk did a great job of moving the conversation from the abstract to the specific. This was a real business case for why they should build one, what it should look like, and what the challenges would be. I liked it because it used the client's language and addressed their specific concerns. That's one reason why I'm sort of glad he can't share the slides. It's so specific that it would be a tragedy for someone to pull down those slides and try to use them in their own context.

That idea - that CoEs are always specific to a particular company's context - was something Howard tried to drive home throughout the day in his questions and comments. I think it's a critical point. No matter what you think a CoE is, it's likely different from company to company. And that's good. But it creates a fair bit of confusion when we talk about CoEs.

Finally, when we were all done presenting, Charles Penn got up and presented a summary of some of the trends he noticed across the various talks and discussion. In no particular order (and in my words, not his):

  • The building out of a CoE almost necessitates the role (formal or informal) of a librarian. Someone who owns tagging and organizing all the documents, templates, and various other information. It's not enough just to define it and collect it - someone has to manage it. (Some organizations call them knowledge managers.)

  • CoE seems largely to just be a marketing term. It means whatever you want it to mean.

  • There seems to be a desire to keep ownership of CoEs internal to the company.

  • There are assorted long term effects of moving toward a CoE model, and those need to be taken into account when the decision is made. It's not a 6 month decision, it's a multi-year decision.

  • There seem to be A LOT of "scattered" testers. That is, testers who are geographically dispersed within the various companies discussed. A large focus of the CoE model seems to be finding ways to deal with that problem.


There were more, but I either didn't capture them or couldn't find a way to effectively share them without a lot of context.

All said and done, it was a great workshop. We had excellent attendance and Butler was great. I hope they have us back for future workshops. We now need to start the planning for 2011. Our current thoughts are for around four workshops. We already have one topic selected given the amount of energy for the topic (teaching software testing - I'll need to let the WTST people know we are doing a session on that), but that leaves three workshops currently up in the air. I'd like to try to do one on testing in Rails, but given how the one earlier this year fell flat, perhaps that's not a good topic.

If you'd like to know more about IWST, check out the website: www.IndianapolisWorkshops.com

If you'd like to participate next year or have ideas for a topic, drop me a line: mike@michaeldkelly.com
Wednesday, November 25, 2009

Managing focus when doing exploratory testing

Last weekend we held the final Indianapolis Workshops on Software Testing for 2009. The topic for the workshop was 'managing focus when doing exploratory testing.' The attendees for the workshop included:

  • Andrew Andrada

  • James Hill

  • Sreekala Kalidindi

  • Michael Kelly

  • Brett Leonard

  • Brad Tollefson

  • Christina Zaza


We opened the workshop with a presentation from me. I shared some tips for using different alternating polarities to help manage focus. The polarities discussed primarily come from Bach, Bach, and Bolton and their work captured in Exploratory Testing Dynamics. In the talk, I gave some examples of using polarities. Those included:

  • explicitly defining polarities in your mission

  • using polarities to help generate more test ideas

  • using polarities as headings or sub-headings when taking session notes

  • using polarities when pairing with other testers

  • focusing on specific polarities using time-boxes


Not much of it is that groundbreaking, but when I put it all together, it seemed like an interesting angle on how one might use the polarities explicitly in their testing. If you end up trying any of them, I'd be interested in hearing how it works. I've found those exercises to be helpful with my testing.

After that talk, Brett Leonard shared his thoughts on software testing in the conceptual age. His presentation was heavily influenced by Daniel Pink's "A Whole New Mind." The foundation of Brett's talk was that exploratory testing is both high concept and high touch. Both are concepts Pink talks about in his book:

High Concept:
"Ability to create artistic beauty, to detect patterns and opportunities, to craft a satisfying narrative and to combine seemingly unrelated ideas into a novel invention"

High Touch
"Ability to empathize, to understand subtleties of human interaction, to find joy in one’s self and to elicit it in others, and to stretch beyond the everyday in pursuit of purpose and meaning"

In his slides, Brett summarized the points to say that exploratory testing is the "ability to create, detect, craft, and combine [...] to empathize, understand, to find and elicit, and stretch." The idea of exploratory testing as high concept and high touch resonated with me - those ideas accurately reflect my view of what exploratory testing is.

Brett went on to tie this into the testing he does every day. He shared how his stories about users drive his focus when he tests. He uses his stories to create a testing culture that focuses on value to the user. For him, managing focus is about setting up so that when you test, you're implicitly focusing on value to the user.

As a side note, in his presentation Brett brought to my attention a new polarity! He described using logic vs. empathy when doing testing. I'll see if I can lobby Bach to get it added to the list.

After Brett, Christina (Tina) Zaza shared a wonderful experience report about how she does her exploratory testing when she's working on projects. She talked about setting up for testing, doing testing, and how she avoids interruptions while testing. For me, the highlights of Tina's advice were the following:

  • block off time to test (1 to 2 hours), close out email and IM, and schedule a conference room if needed so people know you're unavailable

  • before you start, open up everything you think you'll need to do your testing effectively: applications, databases, spreadsheets, tools, etc... -- this way you won't get distracted or slowed down mid testing

  • prioritize your tests - she listed three methods she uses:

    • faster tests (or quick tests) first

    • higher-risk tests (or tests more important to the business) first

    • group features together to reduce context switching while testing



  • once you've started, don't stop your testing to write up defects or to ask developers/analysts questions -- note them while you test, and then do that after your session is completed


Finally, we finished the workshop with another experience report from me. This time I shared tips for writing more effective charters. I find that often when I can't focus while I'm testing, it's because my charter is unclear. The two most well-received tips from that talk were the use of a template to help clarify test ideas or test missions, and the use of group thumb voting on priorities to force discussion and clarification of test missions.

While it was a small workshop - the smallest this year - I enjoyed it a lot. It's my favorite topic, so that's not surprising. We captured some feedback for next year's workshops, and we'll be working on a schedule during the holiday season. Based on the feedback, next year we'll have at least one or two hands-on, laptop-required workshops where we can try some of this stuff.