From: Martin Gregorie on
On Sat, 31 Jul 2010 20:36:54 -0500, Alan Gutierrez wrote:

> I'm not sure how you'd go about testing without source code and coverage
> tools. I can imagine that you could investigate an implementation, using
> a test framework, but I wouldn't have too much confidence in tests that
> I wrote solely against an interface.
>
IME testability is down to the designer, since that is presumably who
defined and documented the interface and the purpose of the unit being
tested. If all they did was publish an interface with about one sentence
saying what it's for, then you're right - it's untestable.

Put another way, if you have to read the source code to write tests, then
you've been trapped into testing what the coder wrote and not what you
should be testing, which is whether the code matches the requirement it
was written to fulfil. In this case the designer is at fault, since he
failed to provide a clear, unambiguous description of what the unit is
required to do, which must include its handling of bad inputs.

I've run into this problem with designers who think they're done when
they've produced a set of use cases that a non-technical user can
understand. They haven't. A one-page use case with no exception-handling
detail is plain inadequate. It doesn't help either if requests for
clarification only result in the designer changing the use case without
providing the missing detail or, worse, contradicting some of his
previous requirements. A bit over two years ago I was involved in
writing automated tests for a complex package that had exactly this
level of airy-fairy documentation. The test package wasn't complete when
my contract ended, and I've just heard that it's apparently no further on
right now and said complex package is still far from being completely
tested.

Designers should at least document to the level of a fully comprehensive
Javadoc - in fact I'd go further and say that it is not unreasonable for
the designers to deliver module specifications at the package, interface
and externally visible class level, either as a set of outline Java files
containing comments and skeletons acceptable to the javadoc utility, or
at least as text that can be rapidly reformatted into something that
javadoc can process. And, before you ask, that's exactly the type of
documentation I aim to deliver regardless of whether the coder is myself
or somebody else.
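
For example, a skeleton at roughly the level I mean - the interface and
its behaviour here are invented purely for illustration - would be:

    /**
     * Parses ISO-8601 calendar dates (yyyy-MM-dd).
     */
    public interface DateParser {
        /**
         * Parses the given text as an ISO-8601 calendar date.
         *
         * @param text the date to parse; must not be null
         * @return the number of days since 1970-01-01 (may be negative)
         * @throws NullPointerException if text is null
         * @throws IllegalArgumentException if text is not well-formed
         *         yyyy-MM-dd or names an impossible date such as
         *         2010-02-30
         */
        int parse(String text);
    }

A tester can write a complete unit test from that alone, including the
bad-input cases, without ever seeing the implementation.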


--
martin@ | Martin Gregorie
gregorie. | Essex, UK
org |
From: Eric Sosman on
On 7/31/2010 12:51 PM, Stefan Ram wrote:
> [...]
> All these assertions can be tested indeed whenever the
> quantification covers some small set (such as bool).
> When all possibilities are tested, this is a proof.
> So you can prove the operator `&' for booleans by testing
> false&false, false&true, true&false, and true&true.

What about `false & true | true', and so on? If the
implementation incorrectly treats `&' as having lower precedence
than `|', is that not an error? One could quibble about whether
the error is in `&' or in `|', but either way it is clear that
there's something wrong that your four cases plus four similar
cases for `|' would not catch.
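
To make that concrete, here is a minimal sketch - class and value
choices are mine, purely illustrative - of the four exhaustive cases
plus the one a precedence bug would fail:

    public class AndTest {
        public static void main(String[] args) {
            boolean[] values = { false, true };
            // The four exhaustive cases for `&' alone.
            for (boolean a : values)
                for (boolean b : values)
                    assert (a & b) == (a && b) : a + " & " + b;
            // `&' binds tighter than `|', so this is (false & true) | true,
            // i.e. true; an implementation that parsed it as
            // false & (true | true) would yield false and fail here.
            assert false & true | true;
            System.out.println("all cases pass (run with -ea)");
        }
    }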

Also, your four cases should be expanded to cover boolean
values of different provenance: Literal constants, static final
constants, local variables, static variables, instance variables,
method values, ... It is entirely possible that `&' could work
just fine for most of these but fail for some quirky combination
(e.g., through an incorrect optimization). Then there's JIT ...
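
Again purely as a sketch with invented names, a provenance matrix for
`&' might look like:

    public class Provenance {
        static final boolean STATIC_FINAL = true;  // compile-time constant
        static boolean staticVar = true;           // static variable
        boolean instanceVar = true;                // instance variable
        static boolean method() { return true; }   // method value

        public static void main(String[] args) {
            boolean local = true;                  // local variable
            Provenance p = new Provenance();
            assert true & STATIC_FINAL;            // literal & constant
            assert local & staticVar;              // local & static
            assert p.instanceVar & method();       // instance & method
            System.out.println("ok (run with -ea)");
        }
    }

Each line gives the compiler or JIT a different opportunity to
constant-fold or mis-optimize the same operator.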

At some point, I think a practical test must eventually rely
on "inside information" about the implementation, or on a notion
of "close enough for jazz." Without that -- well, you might be
able to demonstrate the correctness of many many results today,
but does that prove they will also be correct tomorrow?

--
Eric Sosman
esosman(a)ieee-dot-org.invalid
From: Tom Anderson on
On Sun, 1 Aug 2010, Eric Sosman wrote:

> On 7/31/2010 12:51 PM, Stefan Ram wrote:
>> [...]
>> All these assertions can be tested indeed whenever the
>> quantification covers some small set (such as bool).
>> When all possibilities are tested, this is a proof.
>> So you can prove the operator `&' for booleans by testing
>> false&false, false&true, true&false, and true&true.
>
> What about `false & true | true', and so on? If the
> implementation incorrectly treats `&' as having lower precedence
> than `|', is that not an error? One could quibble about whether
> the error is in `&' or in `|',

No, the error is clearly in ' '.

> but either way it is clear that there's something wrong that your four
> cases plus four similar cases for `|' would not catch.

Ah, but that's an integration test!

tom

--
Tech - No Babble
From: Tom Anderson on
On Sun, 1 Aug 2010, Martin Gregorie wrote:

> On Sat, 31 Jul 2010 20:36:54 -0500, Alan Gutierrez wrote:
>
>> I'm not sure how you'd go about testing without source code and coverage
>> tools. I can imagine that you could investigate an implementation, using
>> a test framework, but I wouldn't have too much confidence in tests that
>> I wrote solely against an interface.
>
> IME testability is down to the designer, since that is presumably who
> defined and documented the interface and the purpose of the unit being
> tested. If all they did was publish an interface with about one sentence
> saying what it's for, then you're right - it's untestable.
>
> Put another way, if you have to read the source code to write tests, then
> you've been trapped into testing what the coder wrote and not what you
> should be testing, which is whether the code matches the requirement it
> was written to fulfil. In this case the designer is at fault, since he
> failed to provide a clear, unambiguous description of what the unit is
> required to do, which must include its handling of bad inputs.
>
> I've run into this problem with designers who think they're done when
> they've produced a set of use cases that a non-technical user can
> understand. They haven't. A one-page use case with no exception-handling
> detail is plain inadequate. It doesn't help either if requests for
> clarification only result in the designer changing the use case without
> providing the missing detail or, worse, contradicting some of his
> previous requirements. A bit over two years ago I was involved in
> writing automated tests for a complex package that had exactly this
> level of airy-fairy documentation. The test package wasn't complete when
> my contract ended, and I've just heard that it's apparently no further on
> right now and said complex package is still far from being completely
> tested.
>
> Designers should at least document to the level of a fully comprehensive
> Javadoc - in fact I'd go further and say that it is not unreasonable for
> the designers to deliver module specifications at the package, interface
> and externally visible class level, either as a set of outline Java files
> containing comments and skeletons acceptable to the javadoc utility, or
> at least as text that can be rapidly reformatted into something that
> javadoc can process. And, before you ask, that's exactly the type of
> documentation I aim to deliver regardless of whether the coder is myself
> or somebody else.

This all sounds a bit mental to me. If this alleged designer is going to
design to this level of detail, why don't they just write the code?

tom

--
Tech - No Babble
From: Joshua Cranmer on
On 07/31/2010 09:36 PM, Alan Gutierrez wrote:
> I'm not sure how you'd go about testing without source code and coverage
> tools. I can imagine that you could investigate an implementation, using
> a test framework, but I wouldn't have too much confidence in tests that
> I wrote solely against an interface.

The most common cases I can think of where this is done are writing a
test suite for some specification (e.g., CSS) and writing an autograder
for homework submissions. In both cases,
you'll probably have a corpus of buggy implementations you can use to
test for specific bug classes. Even if you don't, a good programmer
should be able to guess where the most problems are likely to occur
(e.g., a cross of two features generally not intended to mix) and write
test cases for those specific situations.

You won't get perfect coverage, but it should satisfy most people's
expectations.

>> That was when I discovered that most people
>> don't know how to thoroughly test their own code to find problems such
>> as a really common one brought about by implicit type conversion.
>
> Which would that be? Curious. I'm probably one of the people that
> doesn't know how to test for it.

The assignment was to write a simulator for a simple 16-bit processor
architecture. There is an instruction which basically says "load the
memory at the address of this register plus this (signed) immediate value".

The types:

    u16 memory[65536];
    s16 (or u16) reg;
    s16 immed;

The problematic line:

    reg = memory[reg + immed];

The intended result was for reg + immed to wrap around to 0xB716. The
implicit type conversion instead turned the computed index into either a
signed 16-bit value or a 32-bit value, producing respectively a negative
index or an index of 0x1B716, both of which are out of bounds.
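
In Java terms - with values I've chosen so the numbers match the
description above - the same trap looks like this:

    public class SimBug {
        public static void main(String[] args) {
            char[] memory = new char[65536];  // char as an unsigned 16-bit cell
            short reg = (short) 0xFFFF;       // -1 as a signed 16-bit value
            short immed = (short) 0xB717;     // -18665 as a signed immediate
            // short + short is promoted to int, so the sum is -18666
            // rather than the intended 16-bit wraparound to 0xB716;
            // widening both operands as unsigned would give 0x1B716.
            int badIndex = reg + immed;              // -18666: out of bounds
            int goodIndex = (reg + immed) & 0xFFFF;  // 0xB716: wraps as intended
            reg = (short) memory[goodIndex];         // the corrected load
            System.out.println(badIndex + " -> " + goodIndex);
        }
    }

Masking the computed index back to 16 bits restores the intended
wraparound.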

--
Beware of bugs in the above code; I have only proved it correct, not
tried it. -- Donald E. Knuth