From: Mok-Kong Shen on
mike wrote:
> mok-kong.shen wrote:
[snip]
>> .......... A safer way, I think, would
>> be to have these presumably equivalent programs (resulting from
>> different teams, employing preferably different programming languages
>> and environments) in actual production runs always working in parallel
>> on multiple hardware (of possibly different types), so as to further
>> reduce the risk of errors, since testing of software in the design
>> phase might not be thorough enough to uncover all errors that may be
>> present. Of course, errors could not be "absolutely" eradicated,
>> in accordance with Murphy's Law.
>>
> Yes - it is true that paralleling the hardware and software with result
> comparison might help reduce software errors, but I think it may be a
> while before I can load up a [Windows/Linux/Chrome] operating system on
> my new [AMD/Intel/Via(?)] chipset...

It depends of course on how critical the consequences of a possible
error would be in one's application. In everyday work, the tolerance is
relatively high. That's why almost all commonly used software repeatedly
receives updates, which not only introduce new features but also often
correct errors, and yet people keep using it. One takes that risk
consciously, just as anyone taking a flight knows that there is
a very small but certainly non-zero probability that his plane will
have trouble en route and he might not arrive at his destination.
He must judge whether it is wise for him to take the risk. That is,
unfortunately, life.
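
The parallel-redundancy scheme quoted above (often called N-version
programming) can be sketched as a majority vote over independent
implementations. The three "versions" below are made-up stand-ins for
illustration; in practice each would come from an independent team:

```python
# Sketch of N-version voting: run redundant implementations and
# accept a result only when a majority of them agree.
from collections import Counter

def version_a(x): return x * x
def version_b(x): return x ** 2
def version_c(x): return sum(x for _ in range(x))  # buggy for x < 0

def voted(x):
    results = [version_a(x), version_b(x), version_c(x)]
    value, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority; versions disagree")
    return value

print(voted(5))   # all three agree: 25
print(voted(-3))  # majority (9) outvotes the buggy version's 0
```

Note that voting only reduces the risk of independent errors; it cannot
catch a mistake shared by all versions, e.g. one inherited from a common
specification.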

M. K. Shen
From: Marco van de Voort on
On 2009-10-28, Pascal J. Bourguignon <pjb(a)informatimago.com> wrote:
>> On 2009-10-28, Pascal J. Bourguignon <pjb(a)informatimago.com> wrote:
>>> Now if you need to write a million source lines, then just don't
>>> do it. Use metaprogramming to generate this million source lines
>>> from a smaller source. And so on, you can add layers of
>>> metaprogramming as you need to compact your sources and always have
>>> something of manageable size.
>>
>> So, you're saying that *every* programming problem can be solved in at
>> most a few tens of thousands of lines of code?
>
> Can you not specify every programming problem in less than a few
> thousand lines of specification?

Yes, but that is usually a top-down specification, and while implementing
it, lots of little decisions still have to be made.

So usually, such specifications are not complete.

> Well, you can always write more detailed specifications, but I can
> assure you that sales peoples will always be able to put the whole
> specifications of your software on a 2-page booklet.

Yes, but strictly speaking that is a summary. Not a full specification.

IOW you reduce complexity by removing details, and describe only
first-order operation, not by introducing an abstraction that reduces data.

>> Certainly some problems can, but most can't. Metaprogramming is just
>> a form of compression, and there is no compression system that can
>> reduce every source below a given size.

I like that analogy.

>> Some problems really are irreducibly complex, and demand complex
>> solutions.
>
> Yes indeed. However, assuming a big ontology (eg. take wikipedia, or
> even the whole web), wouldn't it be possible to express the needs for
> any software in less than ten thousand lines, and let the
> sufficiently smart system develop it, filling in the blanks in the
> specifications with all the knowledge it can extract from the web?

I don't think so. Trying to do that, you just move a lot of assumptions and
decisions into the architecture of the interpreter of those thousand lines,
which will make that interpreter harder to reuse.
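
Pascal's generate-the-million-lines idea can be sketched in a few lines:
a compact "specification" expands into much longer generated source. The
class and field names here are invented for illustration, not from the
thread:

```python
# Minimal metaprogramming sketch: a short spec is expanded into
# ordinary source code, which is then executed like hand-written code.
spec = {"Point": ["x", "y"], "Rect": ["left", "top", "width", "height"]}

lines = []
for cls, fields in spec.items():
    lines.append(f"class {cls}:")
    lines.append(f"    def __init__(self, {', '.join(fields)}):")
    lines += [f"        self.{f} = {f}" for f in fields]

source = "\n".join(lines)
namespace = {}
exec(source, namespace)        # the generated code is plain Python
p = namespace["Point"](3, 4)
print(p.x, p.y)                # 3 4
```

Marco's objection shows up even here: every decision the generated code
embodies (field order, constructor shape) now lives in the generator,
which any reuser of the spec must understand.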

From: Tim Little on
On 2009-10-28, Pascal J. Bourguignon <pjb(a)informatimago.com> wrote:
> Yes indeed. However, assuming a big ontology (eg. take wikipedia,
> or even the whole web), wouldn't it be possible to express the needs
> for any software in less than ten thousand lines, and let the
> sufficiently smart system develop it, filling in the blanks in the
> specifications with all the knowledge it can extract from the web?

You can do that: but now your program's correctness depends upon the
correctness of every detail in Wikipedia, or even the whole web, as
well as that of both the "sufficiently smart" system that extracts
information from it, and the ten thousand lines of specification.

And yet, there will still be programs that cannot be specified in less
than ten thousand lines (unless the lines are of unbounded length).


> Or take the problem actually in the other direction. Would you
> trust any implementation of a system that has orders of magnitude
> more than ten thousand lines of specifications?

Not with the fate of the human species, no. As an acceptable element
of risk to my own life, yes. You probably have done so too: air
traffic control systems do have much more than ten thousand lines of
specification. So do the systems that actually allow the pilots to
fly the jets. So do hospital patient record systems, including
records of medications to be administered.


> How can you ensure these specifications are consistent? How can you
> ensure that they're effectively implemented?

I cannot. I could not even read most of the specifications if I
wanted to, nor the source code of virtually any system upon which my
life may depend. Even if I could compare them, most would require
domain-specific knowledge that could take a decade to acquire before
I could adequately check their correctness.


> Wouldn't you be more able to understand and check the specifications
> if they were shorter, that is indeed, given the ultimate limits to
> compression, if what they specified was less complex or of a more
> limited scope?

If that were possible. At some point reduction in length can only
come with less comprehensibility, and beyond that there is a point
where no reduction in length is possible at all.


> If you accept that big systems must be decomposed into small
> programs,

I do not. Sometimes it is appropriate to decompose a big system into
small programs, sometimes it does not help much. Sometimes it makes
the problem worse by greatly increasing the complexity of interactions
between programs.


> Then the degree of automatization in the process of translating the
> specifications into executable code is only a matter of advancement
> of the techniques, while the size of the executable code is only
> (roughly) a function of the number of metaprogramming levels used.

Resource requirements for a given task can scale exponentially with
the number of metaprogramming levels used, and frequently do. Also,
the concept of "leaky abstraction" usually applies, becoming worse
with every level added.

Note that most large real-world systems have enormous numbers of
details that are both highly specific and necessary to correct
function. They will not be present in the pre-existing programming
environment because they are specific to the particular problem. They
cannot be ignored because they are necessary. Hence they must all be
represented in both the specification and resulting system source
code, which irreducibly blows both their lengths well past ten
thousand lines.


- Tim
From: Paul E. Black on
On Wednesday 04 November 2009 21:15, Tim Little wrote:
> On 2009-10-28, Pascal J. Bourguignon <pjb(a)informatimago.com> wrote:
>> ... However, assuming a big ontology (eg. take wikipedia,
>> or even the whole web), wouldn't it be possible to express the needs
>> for any software in less than ten thousand lines, and let the
>> sufficiently smart system develop it, filling in the blanks in the
>> specifications with all the knowledge it can extract from the web?
>
> You can do that: but now your program's correctness depends upon the
> correctness of every detail in Wikipedia, or even the whole web, as
> well as that of both the "sufficiently smart" system that extracts
> information from it, and the ten thousand lines of specification.
>
> And yet, there will still be programs that cannot be specified in less
> than ten thousand lines (unless the lines are of unbounded length).

Let me take another tack. Suppose I want to create a specification
language that is so powerful that ANY program can be specified in less
than 10 000 lines.

But wait, there are an infinite number of computer programs! If I
limit my specification language to about 75 characters (upper and
lower case letters plus numbers and punctuation) and lines of 80
characters, I can only write about 75^(10 000 * 80) ≈ 10^1500049
distinct specifications. That's a lot of programs, but not infinite.
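
The exponent can be checked mechanically: the number of decimal digits
in 75^(10 000 * 80) is (10 000 * 80) * log10(75).

```python
# Verify the bound: 75^(10,000 lines * 80 chars) in decimal digits.
import math

digits = 10_000 * 80 * math.log10(75)
print(round(digits))  # 1500049, matching 10^1500049 above
```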

Ok, but I really don't care about specifying bizarre "programs" in my
language. I'll settle for only specifying "reasonable" ones.

Hold on, how can I decide beforehand which programs are reasonable
(that is, those I should be able to specify) and which aren't?

I hope it is clear that specifications are no magic bullet.


Yes, I much prefer a short specification (in an "easily understood"
language) to a long, possibly complex-for-decent-performance program.
But experience has shown that many perfectly reasonable tasks have
neither succinct specifications nor succinct programs.

-paul-
--
Paul E. Black (p.black(a)acm.org)