From: user923005 on 28 Oct 2009 15:34

On Oct 28, 10:54 am, c...(a)tiac.net (Richard Harter) wrote:
> On Wed, 28 Oct 2009 17:29:04 +0100, "Dmitry A. Kazakov"
> <mail...(a)dmitry-kazakov.de> wrote:
>
> >On Wed, 28 Oct 2009 16:08:06 GMT, Richard Harter wrote:
>
> >> On Mon, 26 Oct 2009 17:04:36 -0700 (PDT), user923005
> >> <dcor...(a)connx.com> wrote:
>
> >>>On Oct 26, 3:22 pm, c...(a)tiac.net (Richard Harter) wrote:
> >>>> SHOULD IT BE TURTLES ALL THE WAY UP?
>
> >>>> In the famous anecdote, the little old lady replies to the noted
> >>>> philosopher, "It's turtles all the way down."  When it comes to
> >>>> writing software many writers on software design and many
> >>>> programming language creators seem to believe that it is turtles
> >>>> all the way up.
>
> >>>> What do I mean by "turtles all the way up"?  By this I mean the
> >>>> thesis that the techniques and programming language concepts that
> >>>> are used in the small can be extended indefinitely to programming
> >>>> in the large.  In other words if we use language X for 100 line
> >>>> programs and 1000 line programs we can also use it for 1000000
> >>>> line programs.  We may have to add some extensions and new
> >>>> features along the way, but we can increase the size of the
> >>>> programs we write indefinitely using the same ideas.
>
> >>>I've never seen a convincing argument to show that this is wrong.
>
> >>>We can use a 26 letter alphabet to make little words.
> >>>We can use a 26 letter alphabet to make bigger words.
> >>>We can use a 26 letter alphabet to make little paragraphs.
> >>>We can use a 26 letter alphabet to make bigger paragraphs.
> >>>We can use a 26 letter alphabet to make little books.
> >>>We can use a 26 letter alphabet to make bigger books.
> >>>We can use a 26 letter alphabet to make entire libraries.
>
> >>>Why isn't the same thing true of programming languages?
>
> >> It is.  We can use 1's and 0's to build software.
>
> >Yes, but that is the machine language. When communicating ideas about
> >software to other people we are using natural languages.
>
> >> Similarly human brains are made out of atoms.
>
> >So is the ink used to print books. This is an invalid comparison. Atoms
> >are the "language" of the Universe, considering the latter as a machine.
> >The human brain does not operate on atoms; its language is different.
>
> True enough, which was the point.
>
> >> Nothing wrong with the argument except that it is not relevant.
> >> As the Jedi Master said, these are not the turtles you want.
>
> >The argument is that it is not clear what a hierarchy of programming
> >languages adds.
>
> You are missing the point.  He made an observation and called it
> an argument.  I made an analogy and never called it an argument.
> It wasn't.  He did go on to question whether a
> hierarchy of programming languages had value.

I went on to say that we do the same thing with programming languages.
For instance, in C, I will write simple functions to build libraries.
I will use libraries to build filter programs. I will pipe filter
programs in a chain to accomplish complicated tasks. This is a
metaphor that works pretty well.

A lot of computer science seems to me to be largely mythology. What I
mean by that is (for instance) the notion that OO programming will
allow less expensive complicated systems. It is possible to write
complicated systems in OO languages for a reasonable cost. But
empirical studies have shown that the cost is not lower than writing
the same systems in a simple language like C [1]. So now people are
trying other paradigms like Aspect Oriented programming. I am not
saying that alternative paradigms are bad. In fact, I program almost
exclusively in C++ and love the OO metaphor. I am just saying that a
change in paradigm doesn't usually change much. For instance, in C++,
by creating objects we do NOT reduce complexity. We only hide it.

> However his
> subsequent paragraph had no significant connection to his
> observation.

Perhaps the connection was only clear to me.

[snip rhetoric]

[1] "Productivity Analysis of Object-Oriented Software Developed in a
Commercial Environment", Thomas E. Potok, see:
http://portal.acm.org/citation.cfm?id=326103
From: robertwessel2 on 28 Oct 2009 16:52

On Oct 28, 10:38 am, p...(a)informatimago.com (Pascal J. Bourguignon) wrote:
> Happily, the verification software may be simpler than the verified
> software. Therefore it should be easier to verify. Possibly by a
> second-order verification software that is itself even simpler to
> verify. And so on, until you can have it verified by hand by as many
> humans as you need to reach the required degree of confidence.

The main problem isn't the verifier (although verifying it *is* an
issue), but the specification that you're going to verify against.
That won't be all that much simpler than the program to be verified,
and somehow needs to be verified itself.

Not to say there aren't occasional ways around that. An interesting
effort I read a paper on about a dozen years ago (and then never saw a
follow-up) was someone trying an alternative approach to verifying the
correctness of a compiler. Instead of attempting to prove that the
compiler was correct, and would always produce correct code, they
attempted (and appeared to be making progress) to prove that any
particular output from the compiler met the "specification" of the
input (the source code being a sort of formal specification, of
course). IOW, their scheme was to run the verifier every time the
compiler ran, and complain if the output was flawed.
From: mike on 28 Oct 2009 19:02

In article <d44aa7fe-fab3-4f04-a379-bba9d37c5ca9@r5g2000yqb.googlegroups.com>,
robertwessel2(a)yahoo.com says...
> On Oct 26, 5:22 pm, c...(a)tiac.net (Richard Harter) wrote:
> > What do I mean by "turtles all the way up"?  By this I mean the
> > thesis that the techniques and programming language concepts that
> > are used in the small can be extended indefinitely to programming
> > in the large.  In other words if we use language X for 100 line
> > programs and 1000 line programs we can also use it for 1000000
> > line programs.  We may have to add some extensions and new
> > features along the way, but we can increase the size of the
> > programs we write indefinitely using the same ideas.
> >
> > The antithesis is that it isn't turtles all the way up, or at
> > least it shouldn't be turtles all the way up.  That is, the kinds
> > of languages and programming technologies that we use in
> > large scale programming should be quite different from the kind
> > of languages used for programming in the small.
>
> While this is perhaps true, it's completely unclear what we need to
> successfully manage large projects.  Sure, a few things do help (like
> strong typing, memory safety, good support for interfaces), but those
> also help on all but the smallest projects as well.

I think that most of the problems inherent in any large-scale
programming project result from the inherent 'fragility' of all
programming languages. If you compare a large computing project with a
large engineering project there are clear similarities, but one very
significant difference is that almost any undetected error in code has
the potential to result, somewhere down the line, in catastrophic
outcomes; whereas if a nail is not hammered in quite as far as
specified, or if a light fitting (or even a window) is put in the
wrong place, the building usually remains safe, functional, and fit
for purpose.

If someone has some ideas about a programming language/paradigm that
is fault-resistant, not only in the sense that it reduces the number
of actual bugs and/or errors produced, but also in the sense that many
bugs have no significant effect on function and behaviour, then
large-scale projects may become a lot easier to manage. Whether this
will ever be a possibility remains, in my opinion, unknown to date.

Mike
From: Richard Heathfield on 28 Oct 2009 23:36

In <08799c89-fa49-4efb-9a9e-b5384fe96c55(a)13g2000prl.googlegroups.com>,
user923005 wrote:
> On Oct 28, 10:54 am, c...(a)tiac.net (Richard Harter) wrote:

<snip>

>> However his
>> subsequent paragraph had no significant connection to his
>> observation.
>
> Perhaps the connection was only clear to me.

Not so. It is perfectly obvious what your point was, and it could be
missed only by someone determined to miss it.

The whole "turtles" analogy is flawed. The alphabet analogy is much
better. It's /letters/ all the way down.

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Sig line vacant - apply within
From: Dmitry A. Kazakov on 29 Oct 2009 04:52
On Thu, 29 Oct 2009 12:02:13 +1300, mike wrote:

> If someone has some ideas about a programming language/paradigm that is
> fault-resistant, not only in the sense that it reduces the number of
> actual bugs and/or errors produced, but also in the sense that many bugs
> have no significant effect on function and behaviour then large-scale
> projects may be a lot easier to manage.

That is because the "physics" of the code is different. In ordinary
space, small distances mean small energies, and also fewer things you
can do (right or wrong). A tiny piece of code, by contrast, can do a
lot, and this power translates into fragility.

> Whether this will ever be a possibility remains, in my opinion, unknown
> to date.

The evolution of programming languages slowly goes in the direction of
weakening the constructs deployed to accomplish a given task (e.g.
getting rid of gotos, constraining the operations available per data
type, etc.). A "least-energy principle"...

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de