Monday, October 10, 2011

On The Many Meanings of Testing

I have long been frustrated by the many different meanings that people attach to the word "testing". The past few days have added several more. These differences in understanding add to the adamance of our positions and the occasional rancor of our discussions. So, for discussions on this blog at least, I wanted to set down the definition of testing that I use.

One of the definitions of testing that I learned recently was that (and I'm paraphrasing here because I don't remember the exact wording) testing includes any inquiry that gives us information about the product. For clarification, I asked, "Does that include code reviews?" "Yes" was the answer. "OK, how about attending a staff meeting?" "If it tells you something about the product." While I appreciate the attention to larger quality issues, I think there is value in distinguishing between inquiring through the execution of the software and other forms of inquiry. The power of techniques like exploratory testing comes from running the software, looking at the results, and being affected by them.

If testing involves executing software, does that mean that any time you are executing software (before release, at least) you are testing? In one sense, certainly. But, again, I think this serves only to muddy the issue. Some characteristics of a system simply cannot be engineered without executing the system. All the modeling in the world won't identify all the bottlenecks in your system. Usability, too, can only be achieved through a process of trial and improvement. In many organizations these kinds of efforts are done by specialist engineers, not by those in the role of tester.

The testing that we increasingly do to prevent errors falls into this category as well. Test Driven Design and Acceptance Test Driven Design are great methods for engineering software (in the small and in the large) that does what it is supposed to do. But when you introduce these topics to testers, you meet a great deal of resistance. That is certainly not because the methods don't improve quality; we can all agree that they do. It seems to me that the more likely reason is that these methods simply don't accomplish the ends that testers believe they are responsible for.
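
To make the contrast concrete, here is a minimal, hypothetical sketch of the test-first idea (the parse_price function, its name, and its behavior are all invented for illustration): the test is written before the code it exercises, and its job is to drive the design rather than to hunt for bugs in a finished system.

    # A minimal, hypothetical TDD sketch: the test below is written first and
    # fails; parse_price is then given just enough logic to make it pass.
    import unittest

    def parse_price(text):
        # Written after the test, with only enough behavior to satisfy it.
        return round(float(text.strip().lstrip("$")), 2)

    class ParsePriceTest(unittest.TestCase):
        def test_strips_currency_symbol_and_whitespace(self):
            self.assertEqual(parse_price(" $12.50 "), 12.50)

    if __name__ == "__main__":
        unittest.main()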

Enough already. The definition of testing I use is this: testing is the act of executing software in order to find bugs. This is essentially the definition that Glenford Myers gave us many years ago, and it is the definition that I believe would find the greatest degree of acceptance among test practitioners. We test to find bugs, and those bugs are a big part of our value.

That's not to say that bugs are our only value. In a previous post, I discussed the notion of contrarianism: every team needs a skeptic to puncture the generally optimistic groupthink that infects teams. We also provide additional perspective on the value of the functionality being implemented and the usability of that implementation. We make teams think about what they are doing before they rush headlong into implementation. We do all these things, but they are not the act of testing that defines us.

I hope this helps to make sense of my ramblings and establishes a foundation for future conversations.

2 comments:

  1. I think it's valuable to work with several definitions. I doubt every reader of your blog will adopt or remember your definition.

    It's useful to have words to describe the different activities of testing, as well as the activities that surround testing effort (such as gathering information and developer relations).

    Regarding the purposes of testing: read http://www.basilv.com/psd/blog/2011/when-is-testing-done and follow the link from there to a PDF by Cem Kaner, "What Is a Good Test Case?" (http://www.kaner.com/pdfs/GoodTest.pdf), which explores other (less common) purposes of testing.

    Do you (or rather, would it be good to) provide information beyond bug reports? For example, see the Low-Tech Testing Dashboard by James Bach, where part of the point is also to provide information about areas of risk and the level of testing in different product areas.

    If you define testing in the second way, rather than as "inquiry to find information about the product," may I suggest that you also keep the broader goals of Risk Investigation and Information Scouting (or some variation of those terms) in mind as part of a tester's job?

    If that is your definition of testing, I would also suggest that a more fitting name for the role of the person doing it would be "Information Scout" or "Risk and Quality Information Investigator" or something similar. Maybe you can offer a better idea?

  2. I don't expect everyone to accept my definition of testing. But I do not want to have to repeat this definition in each of my posts either. So I wrote this to have something I can point people to when I get to the more controversial posts to come.

    I agree that most testers contribute more to team success than just testing as I've defined it. But it is also my experience that managers value those contributions far less than testers think they should, and far less than testing to find bugs and perhaps to assess the software. Just because we agree among ourselves about how great we are doesn't mean that management (at least in most companies) also agrees.

    If testers no longer test, then I wouldn't be at all surprised if managers search for other, cheaper ways to realize that value. If testing (as I've defined it) has as limited a value as Dr. Whittaker suggests, then those of us who are testers need to understand that we must do a much better job of educating management about that value, or we may find ourselves on the outside looking in.
