Bad books on software testing
In Chapter Two, page 29, the book states: "It is impossible to test a software system without having some definition of its intended functionality."
What?!?!
So here is the problem. Out of three authors (and probably several people who reviewed the book), you would think one of them would know how to test software in the real world, where everything is not clearly defined. Give me (or you, or any other not-brain-dead tester) any piece of software, for any market or intended purpose, and I can test it (with admittedly varying degrees of success) for all of the following; a sketch of one such spec-free check follows the list:
- usability
- performance
- fault injection
- dataflow errors
- data corruption
- memory leaks
- concurrency issues
- configuration problems
- installation and operating platform problems
- common problems in the technology used to develop it
- reliability
- testability
- efficiency
- security
- scalability
- look and feel
- industry standards
- regulations (if applicable - or even if not)
- internationalization
- etc...
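To pick one item off that list: "fault injection" style robustness checking needs no spec at all. Here is a minimal sketch in Python; the `./app-under-test` binary and the trial count are hypothetical stand-ins for whatever you were handed, not anything from a real project:

```python
import random
import string
import subprocess

APP = "./app-under-test"  # hypothetical stand-in for the binary under test

def junk_input(length=200):
    """Random printable garbage -- no spec needed to know a crash is a bug."""
    return "".join(random.choice(string.printable) for _ in range(length))

for trial in range(100):
    data = junk_input()
    result = subprocess.run(
        [APP], input=data, capture_output=True, text=True, timeout=10
    )
    # On POSIX a negative return code means the process died on a signal
    # (segfault, abort, ...). No requirements document has to tell me that
    # crashing on unexpected input is a defect.
    assert result.returncode >= 0, (
        f"trial {trial}: crashed with {result.returncode} on {data!r}"
    )
```

A hang past the ten-second timeout would surface as an exception here, which is just as much a finding as a crash.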
The only thing I can't test is functionality as described by a document.
Not to mention that if I'm a thinking human being, I might be able to find some other product on the market that "looks like" the one I'm testing and I might be able to do a parallel test. Or I might be able to use my superpowers of common sense to do some research to see if I can discover what the software should do and what users of such software need.
My main problem with the book (and similar books I've read lately) is that silly statements like this set the tone for the entire book. They are platitudes that have nothing to do with working in a real test environment. When I read this I hear the author saying: "If your project doesn't conform to these standards that I have arbitrarily set, then you can't possibly be engaged in real software testing."
What makes it even worse is that it's not really that bad of a book. There are good guidelines and suggestions in the book that make it worth reading. It's just unfortunate that now, when I read anything in the book, I have to remember some of the really stupid statements, and that sours the entire experience.
Oh well... I guess it's easy to criticize when I haven't written a book of my own.
--------------------------
Past Comments
--------------------------
>>>>>My initial reaction...
Submitted by sbarber on Mon, 13/12/2004 - 22:45.
is simply --
Wow!
Fascinatingly enough, I am sitting in the lobby of a hotel at STPcon (Software Testing and Performance Magazine Conference), where Cem Kaner gave a keynote last night that, among other things, blasted the concept of “it ain’t testing if there ain’t requirements before you start.” Even more interesting was the fact that he completely validated many of the points I had made earlier in the day at my presentation… and we didn’t even coordinate topics, let alone content! And guess what? Attendees smiled with heads nodding, and the attending authors who publish rubbish like that blushed and slunk from the room before the end of the presentation. Hmmm, I wonder why?
I certainly don’t have the hours it would take to write the response I’d like to right now… maybe I’ll just use this as a basis for a whole new article, but in the meantime…
>>i.e. in your list is the parameter of performance. How do you test an app w/o knowing what the acceptable performance parameters are?
Well, it is through testing that I determine capacity, contention, concurrency and configuration issues. It is through testing that I determine current performance under various volumes and workloads. It is through testing that I am able to quantify what Beta testers are currently referring to as “slow”. It is through testing that I am able to determine inconsistencies in performance across the application/system.
I need no requirements to do that, no documentation, no pass/fail criteria, yet all of this testing yields quality-related information that is used to make quality-related decisions about the application under test.
Does none of this count as testing to you?
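For the skeptical, here is the shape of that kind of requirements-free performance test as a minimal Python sketch. The URL and load levels are placeholders, not from any real engagement: step up the load, measure, and report. There is no pass/fail line anywhere, yet every number is quality-related information.

```python
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/"  # placeholder for the application under test

def timed_request(_):
    """Time one round trip; the raw numbers are the deliverable."""
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=30) as resp:
        resp.read()
    return time.perf_counter() - start

# Step up the load and report what we observe -- information, not pass/fail.
for users in (1, 5, 10, 25):
    with ThreadPoolExecutor(max_workers=users) as pool:
        samples = list(pool.map(timed_request, range(users * 20)))
    print(
        f"{users:>3} concurrent users: "
        f"median={statistics.median(samples):.3f}s  "
        f"p95={statistics.quantiles(samples, n=20)[18]:.3f}s  "
        f"max={max(samples):.3f}s"
    )
```

Run it today, run it after the next build, and you can now say something precise about “slow” without anyone having handed you an acceptable-response-time requirement.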
>>It is not a process that is repeatable.
Sure it is – the thought process is repeated EVERY TIME WE TEST. Are the exact steps repeatable? Who cares? Exactly repeatable tests are valuable for “change detection” testing, “multi-OS compatibility” testing and certain types of “regression” testing. Explain to me how one can know the exact steps for testing new functionality that has never been used. Take a simple case: an on-line order form. Do you need exact steps, down to the keystroke, to perform the “cat on the keyboard” test? And if you do, what stops the developer from handling those exact keystrokes rather than some other random keystrokes? On top of that, how many more keystroke combinations can you test in the time it would take you to document those steps?
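As a sketch of the point (the `validate_quantity` handler is invented for illustration; in a real session you would be driving the live form): mash the keyboard thousands of times and watch for an unhandled exception. Nobody scripted these keystrokes in advance, and nobody could have.

```python
import random
import string

def validate_quantity(raw):
    """Hypothetical handler for the order form's quantity field."""
    try:
        value = int(raw)
    except ValueError:
        return None  # polite rejection
    return value if 0 < value <= 1000 else None

KEYBOARD = string.printable  # roughly what a cat can reach

for _ in range(10_000):
    mash = "".join(
        random.choice(KEYBOARD) for _ in range(random.randint(1, 40))
    )
    # The only oracle: junk must be rejected gracefully, never with a
    # traceback. Any unhandled exception here is a bug found without a
    # single documented test step.
    validate_quantity(mash)
```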
>>There is a lot to be said for Exploratory Testing and there is just as much to be said for formal methods of Testing.
Are you implying that ET is not a formal method of testing?!? I submit that we are either not referring to the same thing or that you are uneducated in Exploratory Testing. Note the capital letters – (e)xploratory (t)esting is a description, much like “boring testing”. (E)xploratory (T)esting is a proper noun that describes a very specific, formal, researched, documented, presented, debated, revised, trained, requested and widely accepted testing method. It has techniques and methods; it can be done well or poorly; it is so advanced, in fact, that to the untrained eye it may APPEAR to be informal.
It is not my intent to flame the poster whose comments I have commented on. My intent is to shed some light on some extremely commonly held opinions that many industry leaders have repeatedly shown to have fundamental flaws.
Please do not hesitate to post counter arguments either here or to mail them to me personally. If I am off base, I’ll be happy to refine or retract my statements.
Scott
--
Scott Barber
PerfTestPlus
Software Performance Specialist,
Consultant, Author, Educator
sbarber@perftestplus.com
www.perftestplus.com
>>>>>I'm with you now...
Submitted by Mike Kelly on Thu, 09/12/2004 - 11:35.
From your last set of comments, I think we are on the same page.
Like I said, there are many good points in the book on streamlining process and identifying shortcomings. I just don't like universals that say "we should always have this" or "we should always be looking for that." Just as when I read them in requirements, when I read them in books it makes me immediately suspicious and I am certainly aware of exceptions to the rule the authors stated above.
Thanks for the comments. I enjoyed both the feedback and the discussion. Perhaps I am too hasty in my criticisms.
>>>>>sorry this statement, "The ul
Submitted by jayrod84 on Thu, 09/12/2004 - 11:26.
Sorry, this statement, "The ultimate goal for a software process and products is to create a repeatable process." should be changed to
"The ultimate goal for a software process and products is to create a successful repeatable process." Because being profitable over a long period of time depends not on doing something good once, but on doing it over and over.
>I logged over 50 bugs my first two weeks and I didn't look at one requirement.
From your first post I gathered that your bugs, even though not based on actual project requirements, are nonetheless based on implied requirements.
I think we might be in two different schools of thought. There is a lot to be said for Exploratory Testing and there is just as much to be said for formal methods of Testing.
There are many, many things that are good in theory but fall short in practice. Ideally, software shouldn't be tested until you know what you're looking for. Your list, however, encompasses things that go overlooked in requirements because of their commonality. I believe these things, however common they are, still fall on the shoulders of the designer to specify.
In a perfect world, testers would only need to look at the requirements to know how to test a product. And in an even more perfect world bugs wouldn't happen, but then where would the testers be? :P
Good blog, I enjoy reading it.
>>>>>Welcome to Context-Driven Testing
Submitted by Mike Kelly on Thu, 09/12/2004 - 10:49.
The title of your first comment "I can understand from a non real world tester viewpoint" makes me ask the questions: Why the heck do we care about testing that does not exist in the real world? What do we gain from setting up examples that will never exist and what do we really learn from these examples?
You said it yourself. As testers we can use shortcuts and prior knowledge. No project is the same and certainly not all projects should use the same process. As a professional tester I have tools available to me to help me find problems without requirements. Are you suggesting that if I inject faults in the system and it crashes and corrupts data that this is not a bug? A document needs to tell me the application should not corrupt data? Or that if I can bypass the login mechanism for a website by manipulating a cookie, that this is not a bug? A document needs to tell me that no one should be able to bypass security?
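That cookie check is trivial to express. Here is a sketch, where the site, the cookie names, and the /account path are all hypothetical, using the third-party requests library:

```python
import requests  # third-party HTTP client, assumed installed

BASE = "http://localhost:8080"  # hypothetical site under test

session = requests.Session()
# Skip the login flow entirely and forge the session cookies by hand.
session.cookies.set("logged_in", "true")
session.cookies.set("user_id", "1")

resp = session.get(f"{BASE}/account", allow_redirects=False)
# No document has to tell me this: if a hand-rolled cookie yields the
# account page instead of a redirect to login or a 401/403, that's a bug.
assert resp.status_code in (301, 302, 401, 403), (
    f"possible auth bypass: GET /account returned {resp.status_code}"
)
```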
In your second post you say "The ultimate goal for a software process and products is to create a repeatable process." What? I thought the goal was to provide a solution to a need and to try to make a profit. Who am I to question a successful project that has no requirements, no process, and a great product? They exist (though I will grant that they are probably few and far between).
I'm not saying we shouldn't look to find better processes for our context, what I am saying is that everything we do is context specific. As a professional, if I'm in an environment that has no requirements, I should still be able to function.
I just joined a new company. I have been here for about four weeks. The second day I was working, I was testing. I logged over 50 bugs my first two weeks and I didn't look at one requirement. All of the bugs were reviewed and, with the exception of one, they were fixed. Are you saying that I should not have been able to find those bugs without a document? That's just silly.
>>>>>also... >>I’m testing and
Submitted by jayrod84 on Thu, 09/12/2004 - 10:35.
also...
>>I’m testing and I might be able to do a parallel test. Or I might be able to use my superpowers of common sense to do some research to see if I can discover what the software should do and what users of such software need.
While this is very true and things can easily be assumed, it is not a repeatable process. You cannot, from one project to another, make the same assumptions. Nor can you expect that everyone else in the same situation would make the same assumptions. The ultimate goal for a software process and products is to create a repeatable process.
Anyone is able to deliver working, bug-free code once. But can you guarantee the same success time and time again by making assumptions? I'd think not. It's safer and more steadfast to say "Here are the requirements, this product 'should' work like this". Any of the above that you mention should therefore be on the list of things to address during testing; woe to the project that lacks this foresight.
>>>>>I can understand from a non real world tester viewpoint
Submitted by jayrod84 on Thu, 09/12/2004 - 10:28.
Not that you can't test software out of the box, but when I read “It is impossible to test a software system without having some definition of its intended functionality.” I thought to myself... Where is the problem?
I believe this may be from a mathematical view or something. i.e. in your list is the parameter of performance. How do you test an app w/o knowing what the acceptable performance parameters are? This is drastically different for a web app versus a real-time system. It is true that you can assume that waiting 3 minutes for a page to load is unacceptable, but is it wise to make this assumption?
I think that the author's view is that it is unacceptable to begin testing a system w/o knowing what to test for. True, there are shortcuts and the use of prior knowledge, but when it boils down to it, all of the things that you test for come from someone else's definition of its intended functionality.