From: Walter Bright on
Francis Glassborow wrote:
> In article <ddydnXqDgMDef_jYnZ2dnUVZ_tidnZ2d(a)comcast.com>, Walter Bright
> <walter(a)digitalmars-nospamm.com> writes
>> Doing so would require:
>>
>> 1) adding user defined tokens
>> 2) adding user defined syntax
>
> Did you ever write a Forth vocabulary? I have and among other things
> implemented Logo on a Forth system.

I've never done anything with Forth. I know next to nothing about it.

If Forth is so powerful, why has it remained an obscure niche language?


From: Walter Bright on
Gabriel Dos Reis wrote:
> | BTW, Prof. Kahan is the person behind the design of the 8087 floating
> | point CPU chip, as well as the IEEE 754 floating point specification.
>
> As if I did not know.

That's great. Now we can move on to a technical discussion of where you
feel he's gone wrong in advocating a separate imaginary type.
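
To make Kahan's argument concrete, here is a small sketch (illustrative
only; whether a given standard library actually prints the NaN shown
depends on how it implements complex multiplication):

    #include <complex>
    #include <iostream>
    #include <limits>

    int main()
    {
        double inf = std::numeric_limits<double>::infinity();

        // Multiply the imaginary unit i by a real infinity.
        // Mathematically the result is a pure imaginary infinity, but
        // with only a complex type, i must be stored as (0, 1) and the
        // textbook formula
        //     (a,b)*(c,d) = (a*c - b*d, a*d + b*c)
        // computes 0*inf in the real part, which is NaN.
        std::complex<double> i(0.0, 1.0);
        std::complex<double> z(inf, 0.0);
        std::cout << i * z << '\n';   // often (nan,inf) rather than (0,inf)

        // With a separate imaginary type, i*inf involves no 0*inf terms
        // at all -- the case Kahan uses to argue for such a type.
        return 0;
    }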


> But, the 8087 floating point implementation does not meet unanimous
> consensus -- and this not just from ignorant programmers.

The objections to it I've seen are based on speed, not on its
suitability for accurate numerical work. If you know of other serious
objections, why not post them? Do you know of anyone building new
floating point designs that aren't based on IEEE 754?


> | If
> | there is any authority on computer floating point, it'd be him. Although
> | argument by authority is a logical fallacy,
>
> Yes, argument by authority is a logical fallacy; that is why I was
> expecting you to logically explain your points without having to
> appeal "but Prof. Kahan says so".

That would be beating a dead horse, since I've already provided such an
explanation, and you've seen it. I suggest we move on.


From: Lourens Veen on
Terry G wrote:

>> Assuming the underlying hardware doesn't define such behaviour, the
>> compiler would have to generate code like this for (X << Y):
>>
>> if (Y < 0)
>>     evaluate X >> -Y instead
>> else if (Y >= bit_sizeof(X))
>>     if (X < 0)
>>         return -1
>>     else
>>         return 0
>> else
>>     evaluate X << Y
>>
>> Not all programs need/can afford this overhead. When the input is
>> known to be in a certain range, the program can skip the check.
>> Otherwise, the program can do the check itself. This is the merit
>> that leaving some corner cases undefined gives.
>
> First, let me apologize for that post. Yes, I was only thinking of
> signed right shifts.
> Rereading my post, I could barely discern that. Sorry.
>
<snip>
>
> Not all projects can afford this undefined behavior.
> Most C++ programmers have never read the standard.
> They just expect reasonable-looking code that compiles to work.
> It's sad, but it's reality.

Hey, I resemble that remark. In fact, I was hit by exactly this
problem (well, it was a left shift rather than a right shift) a month
or two ago. Indeed, I haven't read the standard, and I did assume
that shifting by more than the number of bits in the variable would
simply yield 0.

The first thing that I noticed was that GCC emitted a warning saying
that I was shifting more than the number of bits in the operand.
Right, I thought, fine, so the operation is superfluous, but the
result of 0 is correct and it only occurs for this particular
combination of template arguments, so I'm just going to leave it and
everything will be fine.

The second thing that I noticed was that my programme crashed due to
data corruption. A bit of investigating revealed that it was the
shift that was causing problems. So I wrote a minimal test case,
experimented a bit, and found out that the shift operator in C++
didn't quite do what I expected it to do.

The next question was why. Google solved that one quickly. C++ says
it's undefined behaviour, and the x86 processor I'm working with does
something you wouldn't expect.

So, I added a check much like you describe above, recompiled, and it
worked. My programme gained some correctness, I gained some
knowledge, and all was well.
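
For anyone curious, the guard I ended up with was essentially this (the
names here are illustrative, not the actual code from my project):

    #include <climits>

    // A left shift that yields 0 once the count reaches the width of
    // the type, instead of relying on behaviour the standard leaves
    // undefined.
    inline unsigned safe_shl(unsigned x, unsigned n)
    {
        return n < sizeof(x) * CHAR_BIT ? (x << n) : 0u;
    }

So safe_shl(1u, 32) is 0, whereas 1u << 32 is undefined; on the x86
machine I was using it came out as 1, because the hardware masks the
shift count down to the low five bits.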

Was I surprised about all this? Yes at first, but in retrospect, now
knowing that hardware differs, I think the way it's defined is very
much the C++ way, and as such unsurprising. I may not have read the
standard, but I know that C++ is very powerful, often close to the
metal, and it doesn't have seatbelts unless you install them
yourself. I like it that way. I can understand that many people, for
many applications, don't.

Lourens



From: Nemanja Trifunovic on

Binglong X wrote:
> Hello folks,
>
> I am not sure if this was brought up before, but this language is quite
> interesting when compared to C++ (and others such as C, C#, Java):
> http://www.digitalmars.com/d/index.html
>
> The comparison:
> http://www.digitalmars.com/d/comparison.html
>
>

I have looked at it several times, but haven't seen anything that would
tempt me to invest time in learning it. If you came to a different
conclusion, I would be interested to see your reasons, though.



From: Andrei Alexandrescu (See Website For Email) on
Walter Bright wrote:
> Bo Persson wrote:
>>>> I am very happy when I can implement, or extend, some feature
>>>> without specific compiler support.
>>> There's no way you're going to get std::string, std::vector or
>>> std::complex to work as well as core support for such with existing
>>> C++ core features.
>> In that case I would very much prefer if we could add the necessary core
>> features, rather than add the types to the core language. (I have noticed
>> that you have done some of that in D, but not enough obviously).
>
> Doing so would require:
>
> 1) adding user defined tokens
> 2) adding user defined syntax
>
> While technically possible, I suspect that such a language would be
> unusable in practice. It'd be like trying to shave holding a razor blade
> in your hands. An infinitely customizable language would have no
> touchstone, no point of commonality between one user's code and another's.

Here's where we totally disagree on a philosophical level. D's response
to the usability/core language size dichotomy was, "bah, that's too
complicated. Just put whatever is useful in the language." I recall you
told me D will also include matrix infix operations as part of the core
language.

I think that's a cheap cop-out and a response coming from straight
within the box.

The out-of-the-box answer would be to look into ways to give programmers
abilities on a par with the compiler writers', and to solve the issues
that accompany that route. That would make news. "Language X adds infix
operators to the core" is not noteworthy. "Language X allows optimal
user-defined operators" would be.

I've done research for long enough (fortunately it's now starting to
bear fruit :o)) that I become as alert as a soldier lost in the Korean
DMZ whenever I hear things like "can't be done", "unusable", etc. It can
be done. Philosophically, C++ has walked a long distance in that
direction. Dylan does it. Scala does it. Heck, Lisp and Scheme do it in
their own ways (which are very distinct from one another, and very
different from all other ways). Arguing that these languages weren't as
successful as more rigid languages is not very convincing, because
languages of every ilk succeed or fail for a variety of imponderable
reasons.


Andrei
