From: Neville Dempsey on
On Apr 24, 2:36 pm, "Zaphod " <a3904...(a)owlpic.com> wrote:
> Walter Roberson <rober...(a)hushmail.com> wrote in message <hqthjq$96...(a)canopus.cc.umanitoba.ca>...
>
> You are saying Algol 68 is not archaic? The 68 stands for 1968.
> I looked for lists of most used programming languages and couldn't even find one listing Algol.
http://www.devtopics.com/most-popular-programming-languages/
http://langpop.com/

I checked langpop, and the site does not specifically check for
Algol68, so it is probably a red herring. Indeed, if you do the Google
search, then Algol68 outranks several of the languages that are
listed.

Algol 68 appears on rosettacode, but that is - possibly - only because
it has a dedicated fan there... namely "yours truly!"
http://rosettacode.org/wiki/Sort_most_popular_programming_languages

Re: "Python has no problem resolving ambiguity between parens for
tuples, order of evaluation, and function calls, and handles them in a
perfectly consistent way. "

- - - - SNIP: 8>< - - - - - - - - - - - - - - -
$ cat lists.py
lists=(1,2,3),[4,5,6],"789",xrange(10,13)
for this_list in lists:
    try:
        print tuple(this_list),
        print "%r,%r,%r"%this_list
    except TypeError, type_error:
        print type_error,"%r"%this_list

- - - - snip: 8>< - - - - - - - - - - - - - - -
$ python lists.py
(1, 2, 3) 1,2,3
(4, 5, 6) not enough arguments for format string [4, 5, 6]
('7', '8', '9') not enough arguments for format string '789'
(10, 11, 12) not enough arguments for format string xrange(10, 13)

- - - - SNIP: 8>< - - - - - - - - - - - - - - -
Algol68 has some syntactic inconsistencies, e.g.:
$ cat case_ambig.a68
CASE 1 IN print("OK") ESAC;
( 1 | print("Ambig") )

- - - - snip: 8>< - - - - - - - - - - - - - - -
$ a68g case_ambig.a68
2 ( 1 | print("Ambig") )
a68g: error: INT cannot be coerced to BOOL in a meek-enquiry-clause
(detected in conditional-clause starting at "(" in this line).

And - I am sure - a few others. Algol 68 - as defined in the standard
- does have some restrictions of its own.

My pet python syntax error is:
- - - - SNIP: 8>< - - - - - - - - - - - - - - -
$ cat bool_ambig.py
if True == not False: print "?"

- - - - snip: 8>< - - - - - - - - - - - - - - -
$ python bool_ambig.py
File "bool_ambig.py", line 1
if True == not False: print "?"
^
SyntaxError: invalid syntax
c.f. http://docs.python.org/library/stdtypes.html#boolean-operations-and-or-not

Apparently this is entirely logical.

It all reminds me a little of "Gödel's first incompleteness theorem"
which can be stated as:
> Any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete. In particular, for any consistent, effectively generated formal theory that proves certain basic arithmetic truths, there is an arithmetical statement that is true, but not provable in the theory (Kleene 1967, p. 250).
c.f. http://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems#First_incompleteness_theorem

Njoy
NevilleDNZ
--
To download Linux's Algol68 Compiler, Interpreter & Runtime:
* http://sourceforge.net/projects/algol68
From: Walter Roberson on
Zaphod wrote:

> I said parens can denote several things in a language, namely order,
> invocation, tuples, etc.

You are changing your story after the fact. Need I go back and
explicitly quote your postings about what you said when?

> The key point here is that in this expression:
>
> (@(x) x + 1)(2)
>
> the reasonable result is 3, whereas in Matlab it is a syntax error.

Is _that_ all we are discussing, a bit of syntactic sugar?

Apply = @(f,x) f(x);

Apply(@(x) x + 1, 2)


> Also, it is not true that Matlab's parens denote order to the same
> extent as C. In C, you could write (slightly abbreviated):
>
> void foo() {}
>
> void main() {
> void* f = (void*)foo;
> (f)();
> }

> In this case, the parens in (f) will be used for order of operation.
> The expression evaluates to a function pointer, which is then
> applied.

$ cat foo.c
void foo() {}

void main() {
void* f = (void*)foo;
(f)();
}

$ make foo
cc foo.c -o foo
foo.c: In function 'main':
foo.c:5: error: called object 'f' is not a function
foo.c:3: warning: return type of 'main' is not 'int'
make: *** [foo] Error 1

http://www.open-std.org/jtc1/sc22/WG14/www/docs/n1256.pdf
Section 6.9.1 Function Declarations
Paragraph 14 (PDF page 155, document page 143)

EXAMPLE 2 To pass one function to another, one might say
int f(void);
/*... */
g(f);
Then the definition of g might read
void g(int (*funcp)(void))
{
/*... */
(*funcp)(); /*or funcp(); ... */
}
or, equivalently,
void g(int func(void))
{
/* ... */
func(); /* or (*func)(); ... */
}


Thus in your example, the call can either be written as f() or as
(*f)(). The (*f)() form _does_ involve order of operations, in that function
calls have a higher precedence than dereferencing, so a sequence written
as *f() would mean to call f with no parameters and then to dereference
the result. There is no reason in C why (f)() cannot be written: it is
just redundant, since the normal order of operations would be to
evaluate the function pointer f first anyhow before making the call.

I note, by the way, that despite your earlier protestations about names,
you resorted to binding the function pointer to a name in order to use it...

Your code for the declaration of the function pointer is quite wrong for
C. In C, your code coerces foo into a generic void* pointer named f. In
C, void* pointers can only be copied and not operated upon; it would be
necessary to coerce the void* pointer into a function pointer before it
could be applied. Unfortunately for your purposes, in C, it is a
constraint violation to coerce a function pointer to a void pointer.
Section 6.2.3.2 (pdf page 59). Paragraph 1 allows conversion between
void* and any _object type_ -- but functions are not objects. Paragraph
8 (pdf page 60) allows conversions between function pointer types -- but
not to void pointers. This is a topic that has been discussed a number
of times in comp.lang.c, including contributions by people who were on
the standards committee, and this interpretation (i.e., that function
pointers cannot be coerced to void pointers) was reinforced, under the
reasoning that function pointers are allowed to be indefinitely larger
than object pointers, as function pointers are allowed to include
implementation defined information about parameter signatures, and
functions are allowed to be further apart in virtual memory than the
maximum total object size that is required to be supported.

> In
> Matlab, whether parens mean order of operation or syntax error (in cases
> where they unambiguously do not mean anything else), depends on if the
> enclosed expression is a function or not. That is the inconsistency.

(2:2:10)(3)

is _not_ 6 in Matlab, it is an error. In Matlab, an expression in () is
*always* invalid directly following _anything_ in (). For example,

>> v = 1:5;
>> (v)(3)

??? (v)(3)
|
Error: Unbalanced or unexpected parenthesis or bracket.


It matters not whether v evaluates to a function or not.

In Matlab, you can chain {} but not (), and {} cannot appear directly
after (). Once you have a () in an indexing expression, the only syntax
that allows further {} or () is if the portion leading up to and including
the () evaluates to a structure or structure field. For example, a
moment ago, I constructed these valid expressions (for different foo):

foo{1}(1).bar{1}(2)
foo.(biz)(3)

In the second expression, biz had to be a character vector whose value
was the name of a field in the structure foo, so foo.(biz) evaluates to
a structure field; indexing of that structure field is then allowed. You
could call this an inconsistency if you wanted, in that there -are- cases
where the value after () indexing determines whether a following () is
an error or not. On the other hand, these cases are syntactically
signaled in Matlab code by a '.' : .(value) is _always_ a structure
field access with the _value_ determining the field name, and '.' before
a name is _always_ a structure field access with a fixed field name;
these '.' allow static determination of whether further () will be
permitted or not (and if a structure or structure field was not arrived
at at run-time, it becomes a run-time error rather than a syntax error).



> Right... the authors of Matlab didn't study the history of languages
> such as Algol, and thus repeated the mistake of defining a language that
> cannot be described with a context free grammar (or made a language that
> can be described with a CFG, but implemented an ad-hoc parser).

They didn't even try. Matlab started as a _practical_ tool. A lot of
people found it useful, and it grew from there. Everything beyond 2D
matrices of double precision numbers was warted on to what was already
there.


> You are misinterpreting Goedel's theorem in the context of programming
> languages. What Goedel's theorem says about languages is that there
> exist languages such that deciding if a string belongs to that language
> is not computable. The take-home message as it applies to programming
> language design is not that one must accept inconsistency, but that one
> should avoid designing languages that cannot be parsed.

Goedel's theorems (plural) extended to prove that once a system was
"sufficiently powerful" to be able to use the system to describe itself,
it was impossible to refine the system to be completely
consistent and computable, no matter how many rules you added to the
system. Therefore, any computing language that meets Turing equivalency
(and even some that fall below that) must have inconsistencies in it.

In C and C++ you can see these inconsistencies in aspects such as
recursive type declarations:

struct linkedlist { struct linkedlist * next; double * valptr; }
linkedlist_t;

The type of the field 'next' is not finitely definable in this
construct, so this syntax silently makes the type of 'next' the union of
an infinite number of types, making it impossible for the C language to
be finitely statically type checked... and that's without even touching the
void* "Get Out Of Type Checking Free" construct.
From: Bruno Luong on
Walter Roberson <roberson(a)hushmail.com> wrote in message <hr27lf$svr$1(a)canopus.cc.umanitoba.ca>...
>

>
> Apply = @(f,x) f(x);
>
> Apply(@(x) x + 1, 2)
>

"Apply" has a built-in equivalent:

feval(@(x) x + 1, 2)

Bruno
From: Bruno Luong on
Walter Roberson <roberson(a)hushmail.com> wrote in message <hr27lf$svr$1(a)canopus.cc.umanitoba.ca>...
>
>
> Goedel's theorems (plural) extended to prove that once a system was
> "sufficiently powerful" to be able to use the system to describe itself,
> that it was impossible to refine the system to be completely
> consistent and computable, no matter how many rules you added to the
> system. Therefore, any computing language that meets Turing equivalency
> (and even some that fall below that) must have inconsistencies in it.

??????????????? That is the most bizarre statement I have read (admittedly my course on formal logic was long ago). Who claims a Turing machine can ever describe itself?

Bruno
From: Walter Roberson on
Bruno Luong wrote:
> Walter Roberson <roberson(a)hushmail.com> wrote in message
> <hr27lf$svr$1(a)canopus.cc.umanitoba.ca>...
>>
>>
>> Goedel's theorems (plural) extended to prove that once a system was
>> "sufficiently powerful" to be able to use the system to describe
>> itself, that it was impossible to refine the system to be completely
>> consistent and computable, no matter how many rules you added to the
>> system. Therefore, any computing language that meets Turing equivalency
>> (and even some that fall below that) must have inconsistencies in it.
>
> ??????????????? That is the most bizarre statement I read (Admittedly my
> course of formal logic was long ago). Who claims a Turing maching can
> ever describe itself?

Turing Machines to describe Turing Machines do tend to be glossed over
in formal logic courses, but they are also fundamental to a number of
the proofs involving Turing Machines. In particular, the proofs about
order of magnitude of computability always have in them that the problem
can be solved in such-and-such magnitude "+ k" time, where k is a linear
constant of undetermined value -- a constant that gets treated by
readers much the same as the "+ c" of an indefinite integration... which
is to say, often skipped and its purpose forgotten except by those who
are being rigidly formal.

The Turing Machine "k" is the adjustment of the algorithm being
discussed to the representation of a _particular_ Turing Machine, and
the exact representational efficiency is considered inconsequential
because given any _particular_ Turing Machine representation, one can
write a UTM, Universal Turing Machine, that will emulate the Turing
Machine of the problem in terms of the target Turing Machine.

This step is ideologically similar to writing a modern-day "Virtual
Machine" that will (for example) run MS Windows within (say) a Linux
system that might be based upon a different processor. Any particular
Virtual Machine might be slower or faster than another Virtual Machine,
but for order of magnitude computations that doesn't matter.

The *how* of writing a UTM is often treated pretty spottily in formal
logic: once it is demonstrated that it can be done in theory, everything
after that just assumes that it has been done.

If I recall correctly after these years (well, more from having
discussed Kolmogorov Complexity more recently than having studied Turing
Machines themselves), given any particular "target" Turing Machine, the
UTM needed to adapt a particular problem is given as input on the Tape
just before the usual input, and thus the UTM shim becomes the "program"
that is executed by the target Turing Machine, and which then knows how
to interpret the remaining input on the tape.


It is, of course, a step from knowing that (in theory) you can write a
Turing Machine to emulate any _other_ Turing Machine, to writing a
Turing Machine that can emulate _itself_. Regrettably I will need to
defer on this point, as my text (Enderton, An Introduction to Formal
Logic, if I recall correctly) is at work.