From: Andrew Poelstra on 12 Mar 2010 10:20

On 2010-03-12, Tim Bradshaw <tfb(a)tfeb.org> wrote:
> On 2010-03-12 06:02:54 +0000, Ron Garret said:
>
>> You're kidding, right? Are you seriously questioning the claim that C++
>> is harder to learn than Scheme?
>
> Yes.
>
>> The reason C++ is hard to learn is not that it's big per se. Common
>> Lisp is big too, but it's vastly easier to learn than C++. The reason
>> C++ is hard to learn is that it burdens the programmer with remembering
>> endless minutiae, all of which are essential to producing working code.
>> Google for "C++ coding standards" and compare the results to "Scheme
>> coding standards." There's an entire industry devoted to reminding
>> people of the myriad things they must keep in mind while coding in C++.
>
> This of course means rather little: there are a lot more C++ guides
> than Scheme ones because there are a lot more people programming in C++
> than in Scheme.
>
> But even so, I think it turns out that many people find it a lot easier
> to learn a large number of relatively easy things than a far smaller
> number of hard things. I know someone who, at the age of about 12,
> could recite apparently endless data about Pokemon, and he was in no
> way unusual - many of his friends knew all this stuff as well.

But C++ is not a "large number of relatively easy things". It is a
large number of counter-intuitive, hidden, absurdly complex, often
impossible-to-work-around things. And many of them produce either an
obscenely long-winded compilation error, or a segfault originating from
a function you never explicitly defined (or were even aware of).

> Similarly, people find it fairly easy to learn, say, entire plays or
> entire large pieces of music - actors and musicians do this all the
> time. And, significantly, it's not this ability which is interesting:
> no one says "gosh, it's amazing that they managed to *learn* that
> concerto", instead they talk about how sensitively they played it or
> what have you. The ability to learn vast amounts of stuff is just
> assumed, because people are very good at that.
>
> --tim

--
Andrew Poelstra
http://www.wpsoftware.net/andrew
From: bugbear on 12 Mar 2010 10:47

Jürgen Exner wrote:
> Saying something isn't harder than C pointers is like saying a disease
> isn't worse than the Bubonic plague: it gives very little comfort to
> people suffering from it.
> Actually C pointers are probably among the worst concepts ever invented
> in computer science.

Was 'C' invented to be a "great" language, or was it just a compromise -
easier (and more portable) than assembler, but still low-level enough
that a compiler of the era could generate decent code?

These days, it's quite common to see 'C' categorised as a portable
assembly language. Which may be a Good Thing, of course.

BugBear
From: Pascal J. Bourguignon on 12 Mar 2010 11:20

John Bokma <john(a)castleamber.com> writes:
> "Peter J. Holzer" <hjp-usenet2(a)hjp.at> writes:
>
>> On 2010-03-10 20:54, John Bokma <john(a)castleamber.com> wrote:
>
> [..]
>
>> I started with BASIC (think early 1980's here - line numbers and
>> goto),
>
> ZX Spectrum, 1983 here
>
>> then did a little bit of Pascal and assembly (6502 and Z80) before
>
> More or less same here, Z80, Comal, Pascal, 6800, 6809, 68000 ...
>
>>> I do agree, however, that it would've been nice if C had references like
>>> Perl, and (harder to get to) pointers as they are now.
>>
>> Actually, C pointers are quite ok in theory (for example, you can't make
>> a pointer into an array point outside of the array (except "just after"
>> it).
>
> How does C prevent this? Or I don't understand what a pointer into an
> array is.

Well, since C is weakly typed, you cannot enforce this at the variable
site. However, the standard does specify that it is invalid or undefined
to dereference a pointer that doesn't point to allocated memory, or to
compare pointers that don't point to elements of the same array (or one
past its end). This could be enforced with "heavy" pointers and run-time
checks. Of course, since implementors prefer their results fast rather
than correct, it is rarely enforced.
One way is to define pointers as:

typedef struct {
    address arrayBase;
    int elementSize;
    int elementCount;
    int index;
} Pointer;

Pointer NULL = {0, 0, 0, 0};

void Pointer_incr(Pointer* p) {
    if (p->index < p->elementCount) {
        p->index++;
    } else {
        error("Trying to increment a pointer out of bounds");
    }
}

void Pointer_decr(Pointer* p) {
    if (0 < p->index) {
        p->index--;
    } else {
        error("Trying to decrement a pointer out of bounds");
    }
}

int Pointer_minus(Pointer p, Pointer q) {
    if (p.arrayBase != q.arrayBase) {
        error("Incompatible pointers");
    } else {
        return p.index - q.index;
    }
}

bool Pointer_equalp(Pointer p, Pointer q) { return Pointer_minus(p, q) == 0; }
bool Pointer_lessp (Pointer p, Pointer q) { return Pointer_minus(p, q) <  0; }

T Pointer_deref(Type T, Pointer p) {
    if (p.index < p.elementCount) {
        return deref(T, p.arrayBase + p.index * p.elementSize);
    } else {
        error("Trying to dereference an out-of-bounds pointer.");
    }
}

char a;
char b;
int  c[10];
int  d[10];

char* p1 = &a;      /* <=> p1.arrayBase=address_of(a); p1.elementSize=1;
                           p1.elementCount=1;  p1.index=0; */
char* p2 = &b;      /* <=> p2.arrayBase=address_of(b); p2.elementSize=1;
                           p2.elementCount=1;  p2.index=0; */
int*  p3 = c;       /* <=> p3.arrayBase=address_of(c); p3.elementSize=sizeof(int);
                           p3.elementCount=10; p3.index=0; */
int*  p4 = &(d[9]); /* <=> p4.arrayBase=address_of(d); p4.elementSize=sizeof(int);
                           p4.elementCount=10; p4.index=9; */
int*  p5 = d;       /* <=> p5.arrayBase=address_of(d); p5.elementSize=sizeof(int);
                           p5.elementCount=10; p5.index=0; */
char* n  = NULL;

p2++;      /* <=> Pointer_incr(&p2); */
p3++;      /* <=> Pointer_incr(&p3); */
p1 == p2;  /* <=> Pointer_equalp(p1, p2); <=> error */
p3 < p4;   /* <=> Pointer_lessp(p3, p4);  <=> error */
p4 < p5;   /* <=> Pointer_lessp(p4, p5);  returns false. */
*p4 = *p5; /* <=> copies the int from d[0] to d[9]. */
*n = 0;    /* <=> error (dereferencing NULL) */

--
__Pascal Bourguignon__
From: Raffael Cavallaro on 12 Mar 2010 11:56

On 2010-03-12 10:11:52 -0500, Tim Bradshaw said:
> But even so, I think it turns out that many people find it a lot easier
> to learn a large number of relatively easy things than a far smaller
> number of hard things.

The issue isn't whether it's easy to store large amounts of easy-to-learn
things in long-term memory (it is); the issue is whether it's easy to keep
large amounts of easy-to-learn things in *short-term memory*, in your
brain's working set (it isn't).

People have a distinctly limited working set/short-term memory; estimates
vary from 7 to 10 chunks. That working set must also hold the actual
program logic one is working on, not just the grammatical encrustations of
the language at hand. C++ requires one to keep an inordinate amount of
language grammar and punctuation rules in one's short-term memory, thus
reducing room for, and distracting from, the actual task at hand. One is
constantly entering the

char *howDoI_SayWhatI_MeanCorrectlyInThisGrammarAndPunctuation(wasteOfTime *allFsckingDay);

subroutine, which forces the task-related set out of working memory. This
constant mental stack pushing and popping breaks up the flow of thought
about the actual task.

warmest regards,

Ralph

--
Raffael Cavallaro
From: Philip Potter on 12 Mar 2010 12:17
On 12/03/2010 02:14, John Bokma wrote:
> "Peter J. Holzer" <hjp-usenet2(a)hjp.at> writes:
>> Actually, C pointers are quite ok in theory (for example, you can't make
>> a pointer into an array point outside of the array (except "just after"
>> it).
>
> How does C prevent this? Or I don't understand what a pointer into an
> array is.

C *doesn't* prevent it. If you have a pointer to a member of an array,
which you keep iterating with p++ until it goes well beyond the end, the
behaviour is undefined. That means that *no matter what* the compiler
and resulting program do, it's a valid implementation of the C Standard.
It is certainly not required that a C compiler check that you don't do
something stupid like this.

http://catb.org/jargon/html/N/nasal-demons.html

Phil