From: Patricia Shanahan on
Martin Gregorie wrote:
....
> I've usually found debuggers to be the slower option in terms of
> programmer time. Tracing statements that show data as well as location
> are a better way of tracking down the problem - especially when the
> causative code is some distance from the apparent trouble spot.

It depends on how much is already known about the problem, and how long
it takes to run to a detectably erroneous state. If it takes a few
minutes, or I have a specific hypothesis I want to test, I like tracing
statements.
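
By tracing statements I mean something along these lines (a minimal
sketch using java.util.logging; the class and parameter names are
invented for illustration):

import java.util.logging.Level;
import java.util.logging.Logger;

public class DiscountCalculator {
    private static final Logger LOG =
            Logger.getLogger(DiscountCalculator.class.getName());

    public double applyDiscount(long orderId, double total, double rate) {
        // Record the location *and* the data, so the log line is still
        // useful when the failure surfaces far from this call site.
        if (LOG.isLoggable(Level.FINE)) {
            LOG.fine("applyDiscount: orderId=" + orderId
                    + " total=" + total + " rate=" + rate);
        }
        return total * (1.0 - rate);
    }
}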

If I have no idea what is going on, and it takes four hours
to reproduce the bug, I want to squeeze as much data as possible out of
each failure. With a debugger, I can often find the answer to a question
I did not think of until after I saw the answers to other questions.

Patricia

From: Tom Anderson on
On Sun, 14 Mar 2010, Patricia Shanahan wrote:

> BGB / cr88192 wrote:
> ...
>> often, if there is no good way to test or use a piece of code, well then
>> this is a bit of an issue...
>
> The conditions that trigger a piece of code in operation may be
> difficult to cause in system test,

If it's difficult to cause with a system test, then it's possible, and
that's something that can and should be tested - corner cases are
something you should pay particular attention to, because they usually
won't be caught by informal testing.

If it's impossible to cause with a system test, then why is the code
there?

At work, for various bad reasons, we write only system tests. We do
test-first development, and have almost all the system's code covered by
tests. The only bits that aren't covered are where deficiencies in our
test tools stop us getting at them (eg something on a web interface that
involves very complex javascript, out of our control, which our ancient
version of HtmlUnit can't handle). I've yet to write anything
that could not in principle be tested by a system test.

Hence, i do wonder if the orthodoxy that unit tests should be the
foundation of developer testing is right. Their advantage is that when
they fail, they give you a much more precise indication of what's gone
wrong. Their disadvantage is that they don't test interactions between
components (or if they do, they become less unitish and lose some of the
aforementioned precision), and don't provide the lovely whole-system
reassurance that system tests do, the kind of reassurance that means you
can point your product owner at the Hudson chart and say "look, it all
works!".

I appreciate that TDD orthodoxy says that you should also have functional
system-level tests, to provide that kind of verification. But if you have
those at a comprehensive level of density, why do you also need unit
tests?

> but the code itself can be unit tested.
>
> I find it helps to write unit tests and detailed specification, usually
> Javadoc comments, in parallel. Sometimes a small change in an interface
> that does not harm code I'm going to write using it makes the
> implementing classes much easier to test.

I think you're saying you write the tests and the spec at the same time
rather than writing the spec first. I come from the opposite direction -
we only write javadoc on interfaces we're exposing to some third party (eg
in a library we're releasing), and then we only write it right before
releasing it. It's very much a case of the spec being a description of
what we've built, rather than a true specification. We adhere to Extreme
orthodoxy in considering the tests the true spec. Er, even though we write
system rather than unit tests.

Actually, in the case i'm thinking of, the library has no direct tests.
Rather, we have a simple application built on top of the library,
essentially the thinnest possible wrapper round its functionality, and
then we have system tests for that application. So the application is sort
of simultaneously a test framework, a reference application and a living
specification. This may seem like a wildly marginal way to do testing and
documentation, and it probably is, but it seems to work okay. It really
forces us to test the library in practical, functional terms - it makes us
expose functionality in a way that actually makes sense to an application
developer.

Anyway, what i am working my way round to is an observation that for us,
writing documentation earlier, as you do, would probably be a good thing.
Back when i was a scientist, i would toil away in the stygian darkness of
the microscope room for months on end, doing what i thought was solid
work, and then invariably find that when i sat down to prepare a
presentation, poster, paper, thesis chapter, etc, my data was shot
full of holes - controls i didn't do, possibilities i didn't investigate,
variables i didn't measure, related work i hadn't taken into account, and
so on. The act of bringing the data together in writing forced out all the
flaws, and gave me a stack of (mostly small) things to go back to the
bench and do.

Recently, as i've been javadocking some interfaces, i've found something
similar - we have something that works, and that makes sense to us
(including when writing the reference app), but when you have to explain,
say, what that parameter does in an @param line, you suddenly realise that
it's really being used to mean two quite different things, and should
really be two parameters, or that it should be a different type, on
another method, a field, or whatever. The act of explaining it to someone
- even the imaginary audience of the javadoc - makes you think about it in
a deeper way, or a different way at least. A way where you have to explain
what it does without being able to talk about how it does it. Interface
not implementation, with a bit of the Cardboard Programmer effect thrown
in.
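
To make that concrete, here's a made-up sketch of the kind of thing i
mean (the names are invented, not from our actual code):

// Before: one parameter quietly doing two jobs. Writing the @param
// line is what makes the overload obvious.
interface Notifier {
    /**
     * Sends a message.
     *
     * @param target  the recipient's email address, or, if it starts
     *                with "#", the name of a chat channel to post to
     */
    void send(String target, String message);
}

// After: the two meanings become two methods, and each @param line
// now has exactly one thing to say.
interface ClearerNotifier {
    /**
     * Sends a message to a single recipient.
     *
     * @param emailAddress  the recipient's email address
     */
    void sendToAddress(String emailAddress, String message);

    /**
     * Posts a message to a chat channel.
     *
     * @param channelName  the channel's name, without the leading "#"
     */
    void postToChannel(String channelName, String message);
}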

So, er, yeah. There you have it. Rock on Dr S.

tom

--
Death to all vowels! The Ministry of Truth says vowels are plus
undoublethink. Vowels are a Eurasian plot! Big Brother, leading us proles
to victory!
From: BGB / cr88192 on

"Martin Gregorie" <martin(a)address-in-sig.invalid> wrote in message
news:hnj4to$edu$1(a)localhost.localdomain...
> On Sun, 14 Mar 2010 09:00:47 -0700, BGB / cr88192 wrote:
>
>> "Martin Gregorie" <martin(a)address-in-sig.invalid> wrote in message
>>> IMO, an essential part of the design is making it testable. That should
>>> have equal priority with usability, adequate error and incident
>>> reporting and a way to measure performance. All are essential to
>>> developing a good application. I seldom use a debugger, preferring to
>>> use the trace log approach to debugging & testing. I make sure the
>>> tracing and its detail levels are easily controlled and invariably
>>> leave this code in the product. The performance hit is almost
>>> undetectable when it's turned off, and the payoff from being able to turn
>>> it on during a production run is huge.
>>>
>>>
>> I use a debugger some amount, usually to identify where a problem has
>> occurred and the situation at the place it has occurred.
>>
> I've usually found debuggers to be the slower option in terms of
> programmer time. Tracing statements that show data as well as location
> are a better way of tracking down the problem - especially when the
> causative code is some distance from the apparent trouble spot.
>

the program will often blow up somewhere under the debugger, and one can
then look at the state at the point where it blew up and try to figure out
what has gone wrong.

this is often much better than trying to figure things out by, say, dumping
logging information to a file and trying to make sense of it afterwards
(although logfiles are a fairly helpful means of detecting all sorts of
issues as well, such as writing out a log message whenever something looks
amiss, ...).
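
for example, a "looks amiss" check might be something like this (just a
sketch, made-up names):

import java.util.logging.Logger;

public class MeshLoader {
    private static final Logger LOG =
        Logger.getLogger(MeshLoader.class.getName());

    // don't crash, but leave a trail in the logfile when a value is
    // outside the range we would normally expect to see.
    void checkVertexCount(int vertexCount) {
        if (vertexCount < 0 || vertexCount > 1000000) {
            LOG.warning("suspicious vertex count: " + vertexCount);
        }
    }
}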


>
> --
> martin@ | Martin Gregorie
> gregorie. | Essex, UK
> org |


From: Martin Gregorie on
On Sun, 14 Mar 2010 10:27:43 -0700, Patricia Shanahan wrote:

> Martin Gregorie wrote:
> ...
>> I've usually found debuggers to be the slower option in terms of
>> programmer time. Tracing statements that show data as well as location
>> are a better way of tracking down the problem - especially when the
>> causative code is some distance from the apparent trouble spot.
>
> It depends on how much is already known about the problem, and how long
> it takes to run to a detectably erroneous state. If it takes a few
> minutes, or I have a specific hypothesis I want to test, I like tracing
> statements.
>
The other benefit comes from leaving all the tracing statements in
production code and making the trace controls accessible. This was a real
get-out-of-jail-free card in a broadcast music planning system I worked on a
while back. The data and how it was used mandated a complex, menu driven
UI and the users were a very bright lot, so the system was designed to
provide interactive access to trace controls. If the users rang with a
query, we'd just tell them to turn tracing on, do it again and turn
tracing off. This found all manner of stuff from finger trouble and odd
corner cases to actual bugs with the benefit of being able to talk the
user through it with the trace report and work out the solution there and
then.
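
The mechanism doesn't need to be anything fancy. Something along these
lines would do it (a rough Java sketch of the shape, not what we actually
ran; in the real system the level control was reachable from the
interactive menus):

import java.util.concurrent.atomic.AtomicInteger;

public final class Trace {
    public static final int OFF = 0, BASIC = 1, DETAIL = 2;

    private static final AtomicInteger level = new AtomicInteger(OFF);

    // Wired to the UI, so a user can be talked through turning
    // tracing on and off while the program is running.
    public static void setLevel(int newLevel) {
        level.set(newLevel);
    }

    public static void log(int msgLevel, String message) {
        // With tracing off this is a single integer comparison, which
        // is why leaving the calls in production costs almost nothing.
        if (msgLevel <= level.get()) {
            System.err.println("TRACE: " + message);
        }
    }
}

Calls like Trace.log(Trace.DETAIL, "track " + trackId + " rejected: " +
reason) then stay in the code permanently.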

The other thing we did (which I've never done since) was that we could
set up the database to stamp each changed record with the program ID and
timestamp. Again, that found an obscure bug on one of the overnight batch
processes that would have been hell on wheels to find without this
annotation. Needless to say, the culprit was not the program that crashed!
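
In application-code terms the stamping amounts to something like this
(a hypothetical JDBC sketch with invented table and program names; in
our case the database layer did it for us rather than each program
doing it by hand):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;

public class TrackDao {
    private static final String PROGRAM_ID = "NIGHTLY_BATCH_07"; // invented

    public void updateDuration(Connection conn, long trackId, int seconds)
            throws SQLException {
        // Every update also records which program touched the row and
        // when, which is what lets you trace a bad change to its source.
        String sql = "UPDATE track SET duration_secs = ?, "
                   + "last_changed_by = ?, last_changed_at = ? WHERE id = ?";
        PreparedStatement ps = conn.prepareStatement(sql);
        try {
            ps.setInt(1, seconds);
            ps.setString(2, PROGRAM_ID);
            ps.setTimestamp(3, new Timestamp(System.currentTimeMillis()));
            ps.setLong(4, trackId);
            ps.executeUpdate();
        } finally {
            ps.close();
        }
    }
}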

> If I have no idea what is going on, and it takes four hours
> to reproduce the bug, I want to squeeze as much data as possible out of
> each failure. With a debugger, I can often find the answer to a question
> I did not think of until after I saw the answers to other questions.

On that mainframe (ICL 2900 running VME/B) you didn't need an interactive
debugger. Coredumps were great - all data formatted correctly and
labelled with variable names, the failed statement and path to it shown
with source file names and line numbers - all without reference to
the source files. I wish current systems offered something like that, or
at least a dump analysis program that could do the same job.


--
martin@ | Martin Gregorie
gregorie. | Essex, UK
org |
From: Arne Vajhøj on
On 13-03-2010 01:23, BGB / cr88192 wrote:
> "Arne Vajh�j"<arne(a)vajhoej.dk> wrote in message
>> But developers that like to understand what they are doing and
>> why will be thinking math every hour at work whether they
>> realize it or not.
>>
>
> the aspects of math used in computers, however, are very different from what
> one typically runs into in math classes...
>
> "subtle" connections are not usually of much particular relevance, as
> usually it is the overt details which matter in this case. in many respects,
> comp-sci and math differ in terms of many of these overt details.

Software development, computer science and math are not identical.

But they are strongly related and build on each other.

>> Picking the correct collection based on big O characteristics and
>> using relational databases are extremely common. Both build
>> on a strong mathematical foundation.
>
> and neither are particularly math, FWIW...

They require math to understand.
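
A trivial illustration of the collection point (invented numbers, but
the shape is what matters: contains() on an ArrayList is O(n), on a
HashSet it is O(1) on average, so the first loop below is O(n^2)
overall and the second roughly O(n)):

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class CollectionChoice {
    public static void main(String[] args) {
        int n = 20000;

        List<Integer> list = new ArrayList<Integer>();
        Set<Integer> set = new HashSet<Integer>();
        for (int i = 0; i < n; i++) {
            list.add(i);
            set.add(i);
        }

        long t1 = System.nanoTime();
        int hits = 0;
        for (int i = 0; i < n; i++) {
            if (list.contains(i)) hits++;   // linear search each time
        }
        long t2 = System.nanoTime();
        for (int i = 0; i < n; i++) {
            if (set.contains(i)) hits++;    // hash lookup each time
        }
        long t3 = System.nanoTime();

        System.out.println("hits=" + hits
                + "  list: " + (t2 - t1) / 1000000 + " ms"
                + "  set: " + (t3 - t2) / 1000000 + " ms");
    }
}

Both loops compute the same thing; knowing which one scales is the math.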

> this is about like arguing that someone has to understand convergence and
> divergence to make use of things like the taylor series.
>
> the taylor series works regardless of whether or not one understands
> convergence...

That is where I disagree.

Learning that X solves Y without understanding why will with
almost certainty lead to using X to solve Z where it is a bad
solution.

> it is also far more useful to note that, for example, compound interest can
> be implemented with a for loop, than to note that it can be modeled with an
> exponential function.

I am a lot more confident in a financial calculation program
developed by someone who understands the formulas than by
someone with elementary-school math and long experience with
for loops.
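
To illustrate with made-up numbers: the for loop and the closed-form
formula give the same answer, but only the person who knows the formula
can reason about the result without running anything.

public class CompoundInterest {
    public static void main(String[] args) {
        double principal = 1000.0;
        double rate = 0.05;   // 5% per period
        int periods = 30;

        // "a for loop"
        double byLoop = principal;
        for (int i = 0; i < periods; i++) {
            byLoop *= (1.0 + rate);
        }

        // "an exponential function": P * (1 + r)^n
        double byFormula = principal * Math.pow(1.0 + rate, periods);

        System.out.println("loop:    " + byLoop);
        System.out.println("formula: " + byFormula);
    }
}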

Arne