From: Nick Keighley on
On 28 Oct, 23:02, mike <m....(a)irl.cri.replacethiswithnz> wrote:

> I think that most of the problems inherent in any large-scale
> programming project result from the inherent 'fragility' of all
> programming languages.
>
> If you compare a large computing project with a large engineering
> project

large software projects aren't large engineering projects?


> there are clear similarities, but one very significant
> difference is that almost any undetected error in code has the potential
> to result, somewhere down the line, in catastrophic outcomes; whereas if
> a nail is not quite hammered in as far as specified or if a light
> fitting (or even a window) is put in the wrong place then the building
> usually remains safe, functional, and fit for purpose.

even civil engineers have their bad days
http://en.wikipedia.org/wiki/Hyatt_Regency_walkway_collapse

O-rings

<snip>
From: robertwessel2 on
On Oct 29, 5:02 am, Mok-Kong Shen <mok-kong.s...(a)t-online.de> wrote:
> In engineering works there are "factors of safety" to take account
> of the variability of materials, unpredictability of actual loadings,
> inaccuracies in construction, etc. In computing, something analogous
> has been done. To control critical (in particular, potentially
> dangerous) processes, one employs duplicated or triplicated hardware
> to take care of the possibility of hardware malfunction. As far as I
> know, in order to better detect programmer errors, one similarly
> employs different and independent teams of programmers to do the
> same job, and then uses test cases to compare the results of the
> resulting programs. However, if I am not mistaken, this comparison
> is only done in the design phase of the software, and later only the
> work of one of the teams is selected for practical application. A
> safer way, I think, would be to have these presumably equivalent
> programs (produced by different teams, preferably using different
> programming languages and environments) always run in parallel on
> multiple hardware (possibly of different types) in actual production
> runs, so as to further reduce the risk of errors, since testing in
> the design phase might not be thorough enough to uncover all the
> errors that may be present. Of course, errors can never be
> "absolutely" eradicated, in accordance with Murphy's Law.


This is sometimes done. For example, the Space Shuttle has five
(identical) computers in its flight control system. Four run the full
mission software in a parallel/lockstepped and redundant
configuration, while the fifth runs a basic version of the flight
control software developed independently from the main software
group. The fifth system has enough function to fly the Shuttle, but
nothing else.

Note that they did not choose to implement different hardware for the
fifth system (it's very difficult to use heterogeneous hardware within
a lockstepped group, so that's not really an option for the first
four). While in theory different hardware for the fifth system might
improve reliability some, the systems used are extremely well tested
(and actually quite old - part of why they're so well tested), and
since they're running different software, a latent hardware bug (as
opposed to a fault, which would only take out a single computer) is
unlikely to hit both the primary group of four and the fifth system
simultaneously. It is, as always, a tradeoff.

Obviously the cost for this sort of approach is very high.

Also, this sort of parallel/independent development has been
demonstrated to *not* be as independent as one would like. While many
of the low level details of the independent implementations do show
fairly little correlation, several studies (including one by NASA,
IIRC) have found that there is a significant tendency for independent
development groups to make similar errors interpreting specifications
and making similar higher-level implementation decisions. In practice
this shows up as the supposedly "independent" systems sharing a
surprising number of common bugs.
From: Dmitry A. Kazakov on
On Thu, 29 Oct 2009 05:10:37 -0700 (PDT), Nick Keighley wrote:

> On 27 Oct, 08:55, "Dmitry A. Kazakov" <mail...(a)dmitry-kazakov.de>
> wrote:
>
>> * Unmaintainable code. Look at large data flow programs (e.g. in DIAdem,
>> Simulink, LabView). It is impossible to use reasonable software processes
>> on them. How do you compare two diagrams?
>
> convert it to a textual representation, then run diff on it. I'm not
> saying it's trivial, but I don't think it's intractable either.

Are pixel positions and sizes of the blocks relevant to the comparison?
(:-))

The very argument that a text form is somehow better raises the
suspicion: why not use it from the start?

>> When are they semantically equivalent?
>
> when they are the same.

Yes, two programs are equivalent when they are...

> Code gives you exactly the same problem.

Sure, but it is easier for a programmer to decide when he deals with
text. "Find 10 differences" is a game played with pictures, not with
texts.

(I am not an expert in psychology, but it seems that abstract visual
information cannot be digested at a finely detailed level. Maybe it is
a fundamental problem, maybe not. But so far text works best. Look at
the younger generation, who read virtually nothing; they only watch.
They got completely new gadgets (like mobile phones) that my
generation didn't have, i.e. it was a fresh start. And what do we see?
SMS - naked text reborn!)

>> How do I validate a diagram?
>
> there were tools that could do this in the 80s

There was no LabView at that time, and there are no such tools now...
(:-)) AFAIK, engineers designing models in Simulink, LabView, etc.
validate the generated code, not the diagrams.

>> How to search for anything in a diagram?
>
> solved in the 80s

Ah, maybe that is because there were only alphanumeric display
devices back then? (:-))

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Richard Harter on
On Mon, 26 Oct 2009 17:04:36 -0700 (PDT), user923005
<dcorbit(a)connx.com> wrote:

This really is tangential, but perhaps we can get it out of the
way.

>I've never seen a convincing argument to show that this is wrong.
>
>We can use a 26 letter alphabet to make little words.
>We can use a 26 letter alphabet to make bigger words.
>We can use a 26 letter alphabet to make little paragraphs.
>We can use a 26 letter alphabet to make bigger paragraphs.
>We can use a 26 letter alphabet to make little books.
>We can use a 26 letter alphabet to make bigger books.
>We can use a 26 letter alphabet to make entire libraries.

I shall begin with what appears to be a quibble but really isn't.
Of these seven statements, all but the first two are false.
Paragraphs are made of sentences. In English text, sentences end
with a period and begin with a capital letter. To get the full range
of normal English text you need upper and lower case letters, digits,
and a suite of punctuation characters. To use the jargon of
programming, you need the full English language character set.

That is not all. You also need the rules for putting together
valid English prose, a dictionary of English words, and means to
add new words to that dictionary.

Is that all? No. A proper character set, composition rules, and a
dictionary will enable you to produce paragraphs. They do not
suffice for creating books. The text in books is on a
two-dimensional surface that is partitioned into pages. The material
in the books does not go into that surface randomly. There are
rules for layout. What is more, these rules vary from book to
book. Finally books can contain graphical elements of various
sorts and there are rules for doing that.

What it comes down to is that you need a lot more than your 26
letters to make your paragraphs, books, and libraries. What is more,
the further you go up the scale, the more extra stuff you need. I am
confident you agree with the need, though you might feel that these
are just unimportant details. They're not.

So much for that. Let's get on to the juxtaposition fallacy. I
don't recall the correct name of the fallacy offhand, but the idea
is to suggest a relationship by putting elements next to each other
without ever establishing the relationship. It is a rhetorical
device for appearing to make an argument without actually making it.

In your seven sentences there is an implicit suggestion that
there is a ladder of building blocks, i.e., we build words out of
letters, paragraphs out of words, books out of paragraphs, and
libraries out of books. There is a real problem with this idea.
Books can have a lot of different things in them that are not
paragraphs, indeed aren't even necessarily words. Examples
include indices, bibliographies, tables, footnotes, chapter
titles, illustrations, shaded text boxes, and graphs. Oh yes,
don't forget recipes and poems.

>
>Why isn't the same thing true of programming languages?
>
>Now, I admit that if our tiny building blocks are poorly made, the
>bigger building blocks become more and more fragile.
>But that is true no matter which programming language we use to build
>the smallest blocks. And it is always a tragic mistake to try to make
>one big giant block that does it all (Forth metaphor is not an
>exception because the mega word comes from its babies).
>
>I also don't think it matters which direction we build the turtles, as
>long as they make it from top to bottom.

And here is where the fallacy has been used. The argument uses
the implied analogy to a fallacious model of natural language
texts. There is a double fault here. The first is the smuggling
in of a faulty model of natural language texts. The second is
the drawing of an analogy without establishing whether it is
appropriate.

Don't think I didn't see the connection you were suggesting. What
I am saying is that your argument wasn't even close to being
legitimate.

BTW, all of that said, none of this has anything to do with
whether data flow languages are a good idea.


Richard Harter, cri(a)tiac.net
http://home.tiac.net/~cri, http://www.varinoma.com
Kafka wasn't an author;
Kafka was a prophet!
From: Mok-Kong Shen on
Ray wrote:
> robertwessel2(a)yahoo.com wrote:
>
>> Also, this sort of parallel/independent development has been
>> demonstrated to *not* be as independent as one would like. While many
>> of the low level details of the independent implementations do show
>> fairly little correlation, several studies (including one by NASA,
>> IIRC) have found that there is a significant tendency for independent
>> development groups to make similar errors interpreting specifications
>> and making similar higher level implementation decisions.
>
> That seems to me to be likely the fault of those who wrote the
> specification rather than those who were interpreting it.
> Seriously, if people are reaching the *same* misinterpretation,
> the flaw is in the document they're reading.

To have correct and good specifications is indeed of paramount
importance. A long time ago I was told that there was a firm that
apparently quite successfully offered a so-called collaborative
model-driven development solution for the systems design of real-time
embedded applications. It seemed to be a system in which the computer
tightly controlled/assisted the human designer in the entire process,
from specification/documentation to code generation. I just surfed
for it and found that the firm has been acquired by IBM. (See
http://en.wikipedia.org/wiki/I-Logix.) If anyone has practical
experience with similar software design systems, it would be nice if
he would share his good or bad experiences with them.

M. K. Shen