From: Arne Vajhøj on
On 14-03-2010 19:54, Arved Sandstrom wrote:
> Arne Vajhøj wrote:
>> On 14-03-2010 13:03, Martin Gregorie wrote:
>>> On Sun, 14 Mar 2010 09:00:47 -0700, BGB / cr88192 wrote:
>>>> "Martin Gregorie"<martin(a)address-in-sig.invalid> wrote in message
>>>>> IMO, an essential part of the design is making it testable. That
>>>>> should
>>>>> have equal priority with usability, adequate error and incident
>>>>> reporting and a way to measure performance. All are essential to
>>>>> developing a good application. I seldom use a debugger, preferring to
>>>>> use the trace log approach to debugging & testing. I make sure the
>>>>> tracing and its detail levels are easily controlled and invariably
>>>>> leave this code in the product. The performance hit is almost
>>>>> undetectable when it's turned off, and the payoff from being able
>>>>> to turn it on during a production run is huge.
>>>> I use a debugger some amount, usually to identify where a problem has
>>>> occurred and the situation at the place it has occurred.
>>>>
>>> I've usually found debuggers to be the slower option in terms of
>>> programmer time. Tracing statements that show data as well as location
>>> are a better way of tracking down the problem - especially when the
>>> causative code is some distance from the apparent trouble spot.
>>
>> Not to mention that quite often the problem does not happen
>> when debugging. Concurrency problems are frequent problems.
>> Often debugging requires optimization turned off and then
>> bad code may work. Etc..
>>
>> In server programming debuggers are mostly a tool to
>> impress pointy haired bosses with.
>
> I beg to differ. Not two weeks ago I saved a lot of time by debugging
> down into a library used by a client's J2EE application; the problem was
> almost certainly caused by *our* code, but it was manifesting in the 3rd
> party library. Any other approach other than using a debugger would have
> been considerably more time-consuming.
>
> I've found that in J2EE programming, unless you're actually the guy
> writing the guts of the app server, a very small percentage of all
> defects have anything to do with concurrency issues, provided that the
> app programmers have half a clue. And debugging can often be the easiest
> way to trace one's way through what can be rather convoluted paths of
> execution.

My experience is different.

The bad problems usually seem to show up only when running under
high load.

Completely impossible to use a debugger with.

So a 50 MB log4j log file from each cluster member.

:-(
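
For what it's worth, the kind of level-controlled tracing Martin describes
is at least cheap to leave in. In log4j it looks roughly like this; the
class and messages are invented for illustration:

import org.apache.log4j.Logger;

public class OrderProcessor {
    // One logger per class; the effective level is set in the log4j
    // configuration, so trace detail can be changed per cluster member
    // without touching the code.
    private static final Logger log = Logger.getLogger(OrderProcessor.class);

    public void process(String orderId) {
        // The isDebugEnabled() guard skips the string concatenation
        // entirely when DEBUG is off, which is why the overhead is
        // nearly undetectable in production.
        if (log.isDebugEnabled()) {
            log.debug("processing order " + orderId);
        }

        // ... actual work ...

        log.info("order " + orderId + " processed");
    }
}

The 50 MB files are then a question of which levels get switched on, not
of the tracing being there at all.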

Arne

From: jebblue on
On Mon, 15 Mar 2010 08:48:13 +0000, Arved Sandstrom wrote:

> jebblue wrote:
>> On Fri, 12 Mar 2010 10:26:57 +0000, Arved Sandstrom wrote:
>>> Both terms actually have clear English meanings - "equality" means (or
>>> should mean) that two things *are* the same, and "equivalence" means
>>> (or should mean) that two things can be substituted, that they behave
>>> the same way.
>>
>> A man and woman are equal and yet very different.
>>
> Then clearly how they are equal needs to be qualified, does it not?
>
> In fact when the word "equal" is applied in the sense of gender
> equality, or equality before the law, or "all men are created equal",
> such qualifications are made.
>
> We make exactly the same qualifications in a programming language like
> Java when we write equals() methods.
>
> AHS

:-) yep hard to argue with that, totally agree, was just trying to inject
a small bit of humor.

Oh wait... you said: "'equality' means (or should mean) that two things
*are* the same, and 'equivalence' means (or should mean) that two things
can be substituted, that they behave the same way."

So my comment stands unchallenged: men and women are equal but by no
means equivalent.

Use Case: Chick Flicks
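
More seriously, the qualification Arved is talking about is exactly what
a Java equals() method writes down. A minimal sketch, with an invented
Person class, just to make that concrete:

public final class Person {
    private final String name;
    private final int age;

    public Person(String name, int age) {
        this.name = name;
        this.age = age;
    }

    // "Equal" is qualified here to mean: same name and same age.
    // Anything else about the two objects is deliberately ignored.
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Person)) return false;
        Person other = (Person) o;
        return age == other.age
            && (name == null ? other.name == null : name.equals(other.name));
    }

    @Override
    public int hashCode() {
        return 31 * (name == null ? 0 : name.hashCode()) + age;
    }
}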

--
// This is my opinion.
From: jebblue on
On Mon, 15 Mar 2010 08:57:47 +0000, Arved Sandstrom wrote:

> Patricia Shanahan wrote:
>> I take the view that any multi-processor or multi-thread timing case
>> that cannot be proved impossible will happen sooner or later, even if
>> there is no known system test that can be guaranteed to produce it.
>> That means the code to handle it should be there, and should be tested.
>>
>> Patricia
>
> It seems to me that if you are sufficiently skilled in concurrency
> programming to pinpoint a situation that you cannot test but can't
> prove impossible, then rather than spend time writing code to handle
> the "possibly impossible" case, and then testing that handler code,
> you might be better off simplifying your original code in the first
> place.
>
> AHS

There is no simple code in concurrent programming. Just have a good movie
ready when you get home.

--
// This is my opinion.
From: BGB / cr88192 on

"Arne Vajhøj" <arne(a)vajhoej.dk> wrote in message
news:4ba03097$0$285$14726298(a)news.sunsite.dk...
> On 14-03-2010 19:54, Arved Sandstrom wrote:
>> [...]
>
> My experience is different.
>
> The bad problems usually seems to only show up when running with
> high load.
>
> Completely impossible to use debugger with.
>
> So a 50 MB log4j log file from each cluster member.
>
> :-(
>

A lot may depend on when, where, and how one uses threading...

Admittedly, most of my multi-threaded experience has been in C, so I am
not sure how well this compares with Java...


Simply creating bunches of threads that work on the same code and data,
using only basic mutex-based synchronization, doesn't really turn out well.

Instead, I tend to split the threads along with the data, so each thread
usually works on its own data, mostly independent of the others.
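
In Java terms this per-thread split might look something like the
following; it is only a rough sketch, and the names are made up:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PartitionedSum {
    // Each task owns its own slice of the array; no two tasks ever touch
    // the same elements, so no locking is needed while summing.
    public static long sum(final long[] data, int nThreads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(nThreads);
        List<Future<Long>> results = new ArrayList<Future<Long>>();
        int chunk = (data.length + nThreads - 1) / nThreads;

        for (int i = 0; i < data.length; i += chunk) {
            final int from = i;
            final int to = Math.min(i + chunk, data.length);
            results.add(pool.submit(new Callable<Long>() {
                public Long call() {
                    long s = 0;
                    for (int j = from; j < to; j++) {
                        s += data[j];
                    }
                    return s;
                }
            }));
        }

        long total = 0;
        for (Future<Long> f : results) {
            total += f.get();   // combining results is the only shared step
        }
        pool.shutdown();
        return total;
    }
}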

For shared components, I tend to split threads along the component borders
as well, so rather than having two or more threads operating "within" a
component, nearly all requests are routed through a "mailbox", which serves
to serialize activity (it is actually rather like event-driven code, where
a loop usually acts as the message dispatcher, or sleeps when idle).

This can be combined with the prior strategy, in which case one
"communicates with" a piece of data rather than having its internals
beaten on by several threads at once.

Note that the mailbox may be hidden behind the API, so the client code
doesn't directly see that requests are being marshalled through it (and,
usually, locking/mutexes synchronize access to the mailbox itself, such as
when adding or removing messages, ...).
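
In Java the mailbox itself could be little more than a BlockingQueue
drained by a single dispatcher thread. A rough sketch, with invented
names, of the sort of thing I mean:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class Mailbox {
    // Requests from any number of client threads land in this queue;
    // the queue does the locking, and the component's internals are only
    // ever touched by the single dispatcher thread.
    private final BlockingQueue<Runnable> queue =
            new LinkedBlockingQueue<Runnable>();

    private final Thread dispatcher = new Thread(new Runnable() {
        public void run() {
            try {
                while (true) {
                    queue.take().run();   // take() sleeps while the mailbox is empty
                }
            } catch (InterruptedException e) {
                // shutdown requested; fall out of the loop
            }
        }
    });

    public void start() {
        dispatcher.start();
    }

    // Client-facing API: callers never see the queue directly.
    public void post(Runnable request) {
        queue.add(request);
    }

    public void shutdown() {
        dispatcher.interrupt();
    }
}

Whether that counts as a mailbox or as just a single-threaded executor by
another name is mostly a matter of taste.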


Admittedly, my approach to threading was originally influenced somewhat by
Erlang, and I have not done much in the way of large-scale / highly-parallel
code, so I am not really sure about the hows and whys of thread use in
commercial software...

A lot of my stuff tends to be largely sequential code with threads off
doing side tasks, the threads mostly being there to deal with the
possibility of threads, or to better utilize multi-core processors. As
such, a lot of the code is not "thread safe" per se...


or such...


> Arne
>


From: Arved Sandstrom on
jebblue wrote:
> On Mon, 15 Mar 2010 08:57:47 +0000, Arved Sandstrom wrote:
>
>> Patricia Shanahan wrote:
>>> I take the view that any multi-processor or multi-thread timing case
>>> that cannot be proved impossible will happen sooner or later, even if
>>> there is no known system test that can be guaranteed to produce it.
>>> That means the code to handle it should be there, and should be tested.
>>>
>>> Patricia
>> It seems to me that if you are sufficiently skilled in concurrency
>> programming to pinpoint a situation that you cannot test but can't
>> prove impossible, then rather than spend time writing code to handle
>> the "possibly impossible" case, and then testing that handler code,
>> you might be better off simplifying your original code in the first
>> place.
>>
>> AHS
>
> There is no simple code in concurrent programming. Just have a good movie
> ready when you get home.
>
Unless you're dealing with shared state, most code that gets executed by
multiple threads is as simple (or not) as it would be if executed by a
single thread. Usually that means most code, period.

A competent developer should be able, with some professional development
on the topic, to avoid shared state much of the time. Where that is not
possible, the techniques for eliminating concurrency problems are not
exactly difficult for a competent programmer to grasp. And I used the
word "eliminate" on purpose - not "reduce", but "eliminate".

I don't doubt that 80-90 percent of the people who currently work as
programmers couldn't competently write reliable concurrent code, but
then OTOH they can't write reliable code period, so it's really got
nothing to do with concurrency. A software developer who can write
high-quality code can write high-quality concurrent code, and not have
to agonize over it either.
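
One of the simpler techniques I have in mind is making shared objects
immutable, so there is no state to protect in the first place. A minimal
sketch, with an invented class:

// An immutable value object: all fields are final, there are no setters,
// and the class cannot be subclassed. Instances can be handed to any
// number of threads without synchronization, because there is no state
// anyone can change.
public final class PriceQuote {
    private final String symbol;
    private final long priceInCents;

    public PriceQuote(String symbol, long priceInCents) {
        this.symbol = symbol;
        this.priceInCents = priceInCents;
    }

    public String getSymbol() {
        return symbol;
    }

    public long getPriceInCents() {
        return priceInCents;
    }

    // "Changing" a quote produces a new object instead of mutating this
    // one, so readers in other threads are never affected.
    public PriceQuote withPrice(long newPriceInCents) {
        return new PriceQuote(symbol, newPriceInCents);
    }
}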

AHS