Boxes in Technicolor
From the author of the famous book 'The Black Box', on black and white boxes.
This post helped me understand the practical application of the theory from SP a little better.
Maybe it will be of use to someone else too...
Cheers,
Vlada
________________________________________________________________
I have not yet seen the posting to which this note refers, or
perhaps I missed it. However, I believe that the question and
responses may be interesting to the group.
Mark Taylor, <mtay...@dbintellect.com>, asked in a private email:
>Even though I have posted this to the newsgroup, I am e-mailing this
>to you specifically because I would really like your input. Any
>thoughts you could pass along would be appreciated.
>I look forward to hearing your thoughts.
- Mark
=> mtay...@dbintellect.com
>++++++++++
>I need your input. I am writing a paper and would like
>to get some input from my fellow Comp.Software.Testing
>professionals.
>I would like to start a thread, and get your input, on the
>following topics:
Your questions are wrong, but that doesn't mean that they shouldn't
be asked. The very wrongness thereof is important and reflects some
common misconceptions about the role of "black-box" versus "white
box" testing.
>[1] When it comes to unit-level testing, how much of this testing
>should be black-box 'type' tests (controlled input - expected output)
>vs. white-box 'type' tests (path coverage, loop testing, etc.)?
1. Black-box versus white-box (the preferred terms are,
respectively, behavioral testing and structural testing) do not
really exist except in the test designer's mind. The terms refer to
the models used by the test designer to create test cases. A test
specification consists of:
a. Initial condition (state)
b. Input specification
c. Expected outcome
d. Verification criteria (how to compare expected to actual outcome)
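The four-part specification above can be sketched as a small data
structure. This is my own illustration, not anything from the text;
the names `TestSpec` and `run_spec` are hypothetical:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class TestSpec:
    initial_state: Any                      # a. initial condition (state)
    inputs: Any                             # b. input specification
    expected: Any                           # c. expected outcome
    verify: Callable[[Any, Any], bool] = (  # d. verification criteria
        lambda expected, actual: expected == actual)

def run_spec(spec: TestSpec, unit: Callable) -> bool:
    """Execute the unit against the spec and apply the verification
    criteria. Nothing in the spec itself records whether it was
    designed behaviorally or structurally."""
    actual = unit(spec.initial_state, spec.inputs)
    return spec.verify(spec.expected, actual)
```

Note that the finished spec carries no trace of the model that
produced it, which is exactly the point made next.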
Once a test has been designed, there is no way to tell whether it was
a "black box" or a "white box" test. It is just a test. Tests, as
executed, are of course "white box," which is to say structural,
because they are executed on real code.
The modern point of view is to concentrate on behavioral test
designs at all levels and to use various structural test completion
criteria (e.g., statement, branch, predicate, all-define-use data
flow, etc.). If the component's specification is very good (it
rarely is) and the behavioral test design excellent, then properly
designed behavioral tests should achieve all of the above-mentioned
coverage criteria. We use the coverage tool to tell us what we
missed. If, for example, we did not hit a statement, we inquire
why -- and what was missing in our understanding of the
requirements that led us to miss this case -- or what superfluous
code was introduced into the implementation of that specification.
The question of path coverage (rarely needed or possible) and the
various loop coverage criteria apply also. This is the
contemporary view of leading thinkers on the subject. I will call
this view "Polka Dot" testing in my repetition of this note to the
group when I see the posting.
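A sketch of that workflow, under assumptions of my own: the `clamp`
unit stands in for a real component, and the standard-library `trace`
module stands in for a real coverage tool. Tests are designed from
the specification alone; the line-hit counts then show what, if
anything, the behavioral design missed.

```python
import trace

# Hypothetical unit under test: limit x to the range [lo, hi].
def clamp(x, lo, hi):
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x

# Behavioral cases derived only from the spec "return x limited to
# [lo, hi]" -- no peeking at the code.
cases = [((5, 0, 10), 5),    # in range
         ((-1, 0, 10), 0),   # below lower bound
         ((99, 0, 10), 10)]  # above upper bound

tracer = trace.Trace(count=True, trace=False)
for args, expected in cases:
    assert tracer.runfunc(clamp, *args) == expected

# Line-hit counts: any line of clamp absent from this mapping was
# missed by the behavioral tests -- the cue to ask what the reading
# of the requirements overlooked.
hits = tracer.results().counts
```

Here the three cases exercise every branch, so no line goes unhit; a
missing boundary case would show up as a line with no count.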
>[2] What amount of automation should be employed for unit-level
>testing? And, does it make sense for this automation to be applied
>to both black- and white-box 'type' tests?
There are two different automation issues and the question shows
that the distinction is not clear in the questioner's and perhaps
in other people's minds. There is the question of test design
automation and the separate issue of test execution automation.
2.1. The use of test design automation depends very much on the
mental and technical tools available to the test designer, the
application, and the specific details of the unit. For many small
units that do not easily fall into one of the models on which test
design automation tools are based, automation is pointless. There
is no simple answer to this question. It depends on whether enough
of the routine's behavior can be captured by a suitable model --
because all test design automation tools are based on models. For
many units, test design automation is not appropriate -- for many
it is.
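A toy illustration of what "based on a model" means here (my own
sketch, not any particular tool): if the model is "a numeric input
with valid range [lo, hi]", boundary-value analysis mechanically
yields the interesting test inputs.

```python
def boundary_inputs(lo, hi):
    """Generate test inputs from a numeric-range model: just outside,
    on, and just inside each boundary -- the classic boundary-value
    model that many test design tools automate."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]
```

A unit whose behavior is not captured by such a model gains nothing
from this generator, which is the text's point about when design
automation is appropriate.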
2.2. "Black Box", "White Box" and test execution automation are
independent issues. The question betrays a misconception about the
reason for test execution automation. The purpose of test
execution automation is in support of maintenance and regression
testing. It never pays to automate something that will be run only
once. However, the typical test suite for a unit is (or rather
should be) rerun over 50 times in its life. Proper regression
testing after maintenance will only actually be done if execution
is automated. Test execution automation has nothing to do with
"black box" or "white box."
The possibility, difficulty, and cost-effectiveness of
implementing test execution automation, be it in unit testing or
system testing or anything between depends very much on the
availability of suitable test drivers. For many source languages
and environments, there are many available automated test drivers
to be used in unit and low-level component testing. For batch
programs, for example, the ordinary batch controls (e.g., JCL), if
augmented by suitable smart test result comparators, serve as
effective drivers. In system testing, off-the-shelf
capture-playback systems can serve the purpose. Test drivers can
be anything from a few lines of code to complicated systems in
their own right. No simple answers here either.
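A driver at the "few lines of code" end of that spectrum might look
like this (a sketch with hypothetical names, not any real tool):

```python
def run_suite(unit, cases):
    """Minimal test driver: execute each case against the unit and
    collect mismatches, so the whole suite can be rerun unattended
    after every maintenance change."""
    failures = []
    for name, args, expected in cases:
        actual = unit(*args)
        if actual != expected:
            failures.append((name, expected, actual))
    return failures
```

An empty return means the regression suite still passes; anything
else names the cases that broke.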
Boris
-------------------------------------
Boris Beizer Ph.D. Seminars and Consulting
1232 Glenbrook Road on Software Testing and
Huntingdon Valley, PA 19006 and Quality Assurance
TEL: 215-572-5580
FAX: 215-886-0144
Email direct: bbei...@sprintmail.com
Email (Forwarded): bbei...@acm.org, bbei...@bigfoot.com
--------------------------