From: Richard Heathfield on 5 Mar 2010 02:49

Seebs wrote:

<snip>

> When I see "if (x != y)" in C, I
> unconsciously perceive it to be the case that x could vary and y couldn't.

Why?

> Consider:
> for (i = 0; i < 10; ++i)
>
> Why do we write "i < 10" rather than "10 >= i"?

Good question.

> Because i's the one that
> varies, so "i is less than ten" is more idiomatic than "ten is greater than
> or equal to i".

Why?

> Now consider:
> for (i = 0; i < max; ++i)
>
> even though "max" may vary over time, the assumption is that, for this loop,
> i changes and max doesn't.

Why?

> If someone wrote this loop, then altered max
> within the loop while modifying i to keep it constant, it would be completely
> incoherent.

Why? Consider: an ordinary reversal loop:

while(f < h)
{
  e = a[f];
  a[f++] = a[h];
  a[h--] = e;
}

Which is the constant now? Should it be f<h, or h>f? (Strictly, they're
not equivalent, but in this case either will do.)

> So, now...
> for (l = head; l != NULL; l = l->next)
>
> Clearly, this follows the same idiom. If we flip the components of the
> middle expression, we've suddenly gone off the standard idiom

C&V please.

> for the condition in a for loop, and the reader is justifiably surprised.

Why?

> And if the for loop should have "l != NULL" rather than "NULL != l" (and
> it should), then so should an if statement, for consistency.

Emerson.

> The time when that technique caught something compilers wouldn't catch
> is long gone. I don't think it's needed anymore.

You'd be amazed at the antiquity of some compilers. At one recent site,
I was somewhat surprised to find an entire project team still using
MSC5.00a (and they seemed perfectly contented, too).

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Sig line vacant - apply within
From: Casper H.S. Dik on 5 Mar 2010 03:21

David Given <dg(a)cowlark.com> writes:

>On 04/03/10 18:22, Scott Lurndal wrote:
>[...]
>> gcc has -Wall and -Werror. Both are recommended.

>But for gods' sake remember to turn -Werror off before shipping your
>code. (-Werror causes any warnings to error out.)

>gcc has -Werror turned on in distributions. Unfortunately as later
>versions of gcc produce more warnings, you now have a situation where
>trying to build gcc with gcc can fail if the host compiler is different
>from the one the gcc authors were using.

>...

>Incidentally, the Unix I'm working on right now has the following code
>in one of the system header files:

>#define prfillset(x) {int __t; \
>        for (__t=0;__t<sizeof(*(x))/sizeof(ulong);__t++) \
>                ((ulong *)(x))[__t]=-1; }

>Yes, __t is an int, it's being compared against sizeof() which returns
>an unsigned int, which means that every invocation of prfillset()
>produces a compile-time warning. -Werror is not useful on this platform.

I'm not sure why they aren't using:

        memset(x, ~0, sizeof (*x))

Are you talking about Solaris?

Casper
--
Expressed in this posting are my opinions. They are in no way related
to opinions held by my employer, Sun Microsystems. Statements on Sun
products included here are not gospel and may be fiction rather than truth.
From: Ike Naar on 5 Mar 2010 03:42

In article <slrnhp1a11.hqg.usenet-nospam(a)guild.seebs.net>,
Seebs <usenet-nospam(a)seebs.net> wrote:
>I disagree. I was raised by mathematicians, but I view statements and
>expressions as often being written to communicate additional meaning.

But you must be careful not to assign additional meaning if it was
unintended by the author. Suppose I want to add a and b, where a and b
can have any value. I don't know (or care) which one of a and b is
larger. Still, when adding them together, I must make a (in this case,
arbitrary) choice between writing (a+b) or (b+a). Suppose I choose
(a+b). If you read my code, it would be unfortunate if you would
conclude, from the bare fact that I wrote the addition as (a+b), that
a is the larger of the two.

>"convince oneself" implies a volitional act taken contrary to evidence
>or experience. I don't think that's involved here. I wouldn't quite
>call it "unreadable", but it certainly reduces my chances of following
>code correctly on the first try. When I see "if (x != y)" in C, I
>unconsciously perceive it to be the case that x could vary and y couldn't.

But that perception could be misleading. Consider a binary search:
in this case, both operands can vary:

  left = lowerbound;
  right = upperbound;
  while (left < right)
  {
    inbetween = (left + right) / 2;
    if (some_condition) left = inbetween;
    else right = inbetween;
  }

>Consider:
> for (i = 0; i < 10; ++i)
>
>Why do we write "i < 10" rather than "10 >= i"? Because i's the one that
>varies, so "i is less than ten" is more idiomatic than "ten is greater than
>or equal to i".

Another reason might be that "i < 10" and "10 >= i" are not the same thing.
From: Seebs on 5 Mar 2010 03:45

On 2010-03-05, Richard Heathfield <rjh(a)see.sig.invalid> wrote:
> Seebs wrote:
>> When I see "if (x != y)" in C, I
>> unconsciously perceive it to be the case that x could vary and y couldn't.
> Why?

Because it's idiomatic, and most of the time, code follows that idiom.

>> Because i's the one that
>> varies, so "i is less than ten" is more idiomatic than "ten is greater than
>> or equal to i".
> Why?

Idioms don't have to have any reason other than "that's how it's been
done before". It's a communications tool; given a general pattern that
it's the varying part on the left, and the invariant part on the right,
that's what I expect whenever I see a comparison operator.

>> Now consider:
>> for (i = 0; i < max; ++i)
>> even though "max" may vary over time, the assumption is that, for this loop,
>> i changes and max doesn't.
> Why?

Because that's how the majority of code has been written. Why is that?
I don't know. It's probably some combination of the pronunciation
("while i is less than max" is more idiomatic than "while max is
greater than or equal to i") and the first few C books using it.

>> If someone wrote this loop, then altered max
>> within the loop while modifying i to keep it constant, it would be completely
>> incoherent.
> Why? Consider: an ordinary reversal loop:
> while(f < h)
> {
>   e = a[f];
>   a[f++] = a[h];
>   a[h--] = e;
> }
> Which is the constant now? Should it be f<h, or h>f? (Strictly, they're
> not equivalent, but in this case either will do.)

Indeed, in that case, either will do. But in many cases, there's a
clear preference, and even if you don't share it, you will understand
most code better and/or more quickly if you keep that pattern in mind.

>> So, now...
>> for (l = head; l != NULL; l = l->next)
>> Clearly, this follows the same idiom. If we flip the components of the
>> middle expression, we've suddenly gone off the standard idiom
> C&V please.

K&R. I don't think you'll find a single test in there which goes the
other way.

>> for the condition in a for loop, and the reader is justifiably surprised.
> Why?

Again, it's an idiom. It doesn't need a reason beyond the observation
that people tend to follow it. There's no objective reason for most
social norms, or linguistic conventions, but once we have them, it's
useful to use them to communicate -- both to be aware that other people
may be using them, and to use them ourselves to make communication
easier.

Even though it may not seem like much, in a complicated loop or set of
nested loops, having all the conditions follow a consistent idiom makes
it much easier to follow and comprehend code. I'm not sure that which
idiom was picked would have mattered -- at this point, though, I've
seen thousands of loops with "p != NULL" as a condition, and extremely
few with "NULL != p", and similarly, thousands of "i < limit" and very
few "limit >= i", so when I see a condition, I read it that way first,
and only try something else if that works badly. >90% of the time, the
heuristic is right, so I stick with it, and I encourage other people to
use it, because it's a very valuable tool.

It's the same reason I advocate "char *x" rather than "char* x" or
"char * x" or "char\n*\nx" as a declaration -- it's a convention and it
seems to generally help me and other readers understand the code.
Maybe it's not helpful for everyone, but I simply haven't seen it cause
any problems in living memory.

-s
--
Copyright 2010, all wrongs reversed. Peter Seebach / usenet-nospam(a)seebs.net
http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
From: Keith Thompson on 5 Mar 2010 04:02
Vladimir Jovic <vladaspams(a)gmail.com> writes:
> Casper H.S. Dik wrote:
>> Keith Thompson <kst-u(a)mib.org> writes:
>>
>>> ``pointer1 =! NULL'', of course, parses as ``pointer1 = !NULL''.
>>> ``!NULL'' evaluates to 1, and assigning an int value (other than a
>>> null pointer constant) to a pointer object requires a diagnostic.
>>
>> And for other types the compiler or lint will also create a
>> diagnostic.
>>
>> (4) warning: assignment operator "=" found where "==" was expected
>> (4) warning: constant operand to op: "!"
>
> Cool. I didn't know I would get a warning.

It depends on the compiler and on how you invoke it. The language
doesn't require warnings in these cases.

--
Keith Thompson (The_Other_Keith) kst-u(a)mib.org <http://www.ghoti.net/~kst>
Nokia
"We must do something. This is something. Therefore, we must do this."
    -- Antony Jay and Jonathan Lynn, "Yes Minister"