From: Martin Gregorie on
On Sun, 01 Aug 2010 23:08:05 +0100, Tom Anderson wrote:

>
> This all sounds a bit mental to me. If this alleged designer is going to
> design to this level of detail, why don't they just write the code?
>
The level I'm advocating is pretty much what you see in a good Javadoc or
a C standard library manpage. There's a good argument to be made that if
you don't document to that level before coding starts, then it's pretty
much hit or miss whether the resulting code does what it's intended to do
if the coding is done by somebody else.

OK, it might stand a chance of doing its job if the designer wrote it,
but what are the chances of somebody else using it correctly without a
fair amount of coaching and hand-holding from the designer?
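
To make the level concrete, this is roughly all I'm asking for - the
names here are invented purely for the sake of the example:

public interface OrderStore {
    /**
     * Returns the order with the given key, fetching it from backing
     * storage if it is not already loaded.
     *
     * @param orderId the primary key of the order; must not be null
     * @return the matching Order; never null
     * @throws UnknownOrderException if no order has that key
     * @throws IllegalArgumentException if orderId is null
     */
    Order lookup(String orderId) throws UnknownOrderException;
}

Nothing exotic: the preconditions, the return value, and what happens
when the caller misuses it. That's enough for somebody else to call it -
and to test it - without ringing me up.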

The project I mentioned with the use-case system docs was like that: for
some packages running Javadoc showed absolutely nothing but the class
names and method interfaces - and the method names weren't particularly
meaningful. The result was that the Javadocs were useless for
determining how and when to use the classes - we had to prise example
code and argument descriptions out of the author before we could even
think about using them. You may think that's acceptable but I don't.

Read the source? We didn't have it and couldn't get it. Besides, I expect
to use library objects without needing to see implementation detail.
Isn't that the whole point of OO?


--
martin@ | Martin Gregorie
gregorie. | Essex, UK
org |
From: bugbear on
Martin Gregorie wrote:
> On Sat, 31 Jul 2010 20:36:54 -0500, Alan Gutierrez wrote:
>
>> I'm not sure how you'd go about testing without source code and coverage
>> tools. I can imagine that you could investigate an implementation, using
>> a test framework, but I wouldn't have too much confidence in tests that
>> I wrote solely against an interface.
>>
> IME testability is down to the designer, since that is presumably who
> defined and documented the interface and the purpose of the unit being
> tested. If all they did was publish an interface with about one sentence
> saying what it's for then you're right - it's untestable.
>
> Put another way, if you have to read the source code to write tests, then
> you've been trapped into testing what the coder wrote and not what you
> should be testing, which is whether the code matches the requirement it
> was written to fulfil.

But if the domain of the API is large (or floating point), it's clearly impossible
to "fully test".

A particularly common case is when a function utilises a cache. Clearly
the caching should be tested, and yet how (without some kind of
knowledge of the cache size and replacement algorithm) could one know
that the cache has been tested?

I see no realistic way of getting trustworthy testing in this
case without some knowledge of the implementation.
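
For example, the only eviction test I can see writing looks something
like this (the Cache interface and the capacity are made up; the point
is that the assertions are only valid if the spec tells me the capacity
and that the policy is LRU):

import static org.junit.Assert.*;
import org.junit.Test;

public class CacheEvictionTest {
    @Test
    public void leastRecentlyUsedEntryGoesFirst() {
        // hypothetical cache with a documented capacity of 3 and LRU discard
        Cache<String, String> cache = new SomeCache<String, String>(3);

        cache.put("a", "1");
        cache.put("b", "2");
        cache.put("c", "3");
        cache.get("a");        // touch "a", so "b" is now least recently used
        cache.put("d", "4");   // one over capacity - something must be discarded

        assertTrue(cache.containsKey("a"));
        assertFalse(cache.containsKey("b"));  // meaningless unless the policy really is LRU
    }
}

Without knowing the capacity I can't even force an eviction, let alone
assert which entry went.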

BugBear
From: bugbear on
Tom Anderson wrote:

>
> This all sounds a bit mental to me. If this alleged designer is going to
> design to this level of detail, why don't they just write the code?

I can specify a sorting routine, and write tests for
it, a lot easier than I can implement it.
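
e.g. two properties pin a sort down completely - the output is ordered
and is a permutation of the input - and neither test needs a line of the
implementation (JUnit 4; the Sorter interface is hypothetical):

import static org.junit.Assert.*;
import java.util.Arrays;
import org.junit.Test;

public class SorterTest {
    private final Sorter sorter = new WhateverSorter(); // implementation under test

    @Test
    public void outputIsAscending() {
        int[] result = sorter.sort(new int[] { 5, 3, 9, 1, 3 });
        for (int i = 1; i < result.length; i++) {
            assertTrue(result[i - 1] <= result[i]);
        }
    }

    @Test
    public void outputIsAPermutationOfTheInput() {
        int[] data = { 5, 3, 9, 1, 3 };
        int[] expected = data.clone();
        Arrays.sort(expected);   // trusted reference ordering for the comparison
        assertArrayEquals(expected, sorter.sort(data));
    }
}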

BugBear
From: Martin Gregorie on
On Mon, 02 Aug 2010 12:04:24 +0100, bugbear wrote:

> Martin Gregorie wrote:
>> On Sat, 31 Jul 2010 20:36:54 -0500, Alan Gutierrez wrote:
>>
>>> I'm not sure how you'd go about testing without source code and
>>> coverage tools. I can imagine that you could investigate an
>>> implementation, using a test framework, but I wouldn't have too much
>>> confidence in tests that I wrote solely against an interface.
>>>
>> IME testability is down to the designer, since that is presumably who
>> defined and documented the interface and the purpose of the unit being
>> tested. If all they did was publish an interface with about one
>> sentence saying what it's for then you're right - it's untestable.
>>
>> Put another way, if you have to read the source code to write tests,
>> then you've been trapped into testing what the coder wrote and not what
>> you should be testing, which is whether the code matches the
>> requirement it was written to fulfil.
>
> But if the domain of the API is large (or floating point), it's clearly
> impossible to "fully test".
>
> A particularly common case is when a function utilises a cache. Clearly
> the caching should be tested, and yet how (without some kind of
> knowledge of cache size and replacement algorithms) could one know that
> the cache has been tested?
>
> I see no realistic way of getting trustworthy testing in this case
> without some knowledge of the implementation.
>
I think I'd expect the cache capacity to be documented, assuming there
isn't a method for setting it.

Similarly, I'd want the cache's discard algorithm to be described. There
are two reasons for doing this:

(1) to give the user some idea of what to expect. An LRU discard/overwrite
algorithm will give very different performance in some
circumstances from FIFO. There are cases when either may be
appropriate [*] so the user needs to know which was used.

(2) you need to tell the coder which algorithm to implement, so why not
simply write it into the spec and deal with both points at once.

Besides, if more than a few milliseconds are spent writing the spec, you
may come to realise that, in this example, a selectable set of discard
algorithms should be provided, along with a method of specifying which is
to be used in each instance - something like the sketch below.

[*] LRU is appropriate when you want to retain the most frequently
used items and item usage is heavily skewed, but FIFO may be more
appropriate if there is also a TTL requirement or item usage is
fairly evenly spread.
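
For what it's worth, a sketch of what I mean by a selectable discard
algorithm - the class and names are invented, and LinkedHashMap happens
to make both policies almost free in Java:

import java.util.LinkedHashMap;
import java.util.Map;

public class BoundedCache<K, V> extends LinkedHashMap<K, V> {
    public enum Policy { LRU, FIFO }

    private final int capacity;

    public BoundedCache(int capacity, Policy policy) {
        // accessOrder = true gives LRU ordering; false gives insertion (FIFO) order
        super(16, 0.75f, policy == Policy.LRU);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;   // discard the eldest entry once capacity is exceeded
    }
}

Two constructor arguments, one sentence each in the spec, and the tester
knows exactly what to exercise.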

--
martin@ | Martin Gregorie
gregorie. | Essex, UK
org |
From: Alan Gutierrez on
Martin Gregorie wrote:
> On Sun, 01 Aug 2010 23:08:05 +0100, Tom Anderson wrote:
>
>> This all sounds a bit mental to me. If this alleged designer is going to
>> design to this level of detail, why don't they just write the code?

> Read the source? We didn't have it and couldn't get it. Besides, I expect
> to use library objects without needing to see implementation detail.
> Isn't that the whole point of OO?

There are other points to OO. I want to be able to read the source of
the software I use for development. I use open source software, though
not 100% Pure Open Source - I don't take a zealot's approach to it.

That should clarify my intent, which is to publish a program as
open source, using the tests as a way to argue for the quality of the
implementation, and to give people a way to structure the impromptu code
reviews that occur when a potential adopter investigates the implementation.

In any case, hiding implementation detail is the point of encapsulation,
but it is not the "whole point of OO."

Sometimes the devil is in the details. If you're testing, you need to
see those devils. Thus, I agree with Tom, and would say that the
generalized problem of specification conformance testing is not what I'm
trying to address, but rather establishing the validity of an
implementation.

--
Alan Gutierrez - alan(a)blogometer.com - http://twitter.com/bigeasy