From: Rob Warnock on
Vassil Nikolov <vnikolov(a)pobox.com> wrote:
+---------------
| Tim Bradshaw <tfb(a)tfeb.org> said:
| > I do think that Forth and Lisp should appeal to the same people
| > (well, they both appeal to me).
|
| I second this. As another view on it, consider the following
| (possibly well-known) thought experiment: what does one do if one
| has _only_ bare iron, but no software whatsoever (and no access to
| any, either)? A friend proposed to implement Forth as step 1 and
| based on that, to implement Lisp as step 2. (As opposed to
| "repeating philogenesis" and implementing an assembler as step 1.)
+---------------

Having been through quite a few bringups of various kinds of
"bare iron", I would have to say that the style of low-level
incremental bootstrapping you're talking about has been obsolete
for at *least* four decades.

Instead, the easiest way to bring up bare iron is to load the
absolute minimum binary boot loader you can get away with[1]
and then do *all* your software development by cross-compiling
from a separate fully-loaded system and loading full kernel
and/or filesystem images into your new platform[2]. The tools
on your cross-development system can then be in whatever language
you prefer.[3]

I know it's not as "heroic" as the '60s-style keying in of boot loaders
and even whole programs in hex or binary through the console
switches[4] (and, yes, I've done my share of *that*, too), but
it's a *lot* faster & more effective.


-Rob

[1] A *tiny* first-stage boot loader, just a few dozen bytes, something
even simpler than a typical PC MBR boot block, the kind of thing
you can write in less than a day and get onto your platform however
you can: program a boot ROM, use a "ROM emulator" [a RAM with a side
port to another system], shove instructions in through a CPU's JTAG
port [and stash from there into cache or main RAM], whatever.

[2] With the bits coming in from outside however you can arrange for them
to get onto your platform: via a serial port, parallel port, Ethernet
with PXE boot, USB port pretending to be a disk, in extremis even just
wiggling a few pins on some spare logic gate, *whatever*.

[3] And, yes, I like to use Common Lisp on the cross-development
system for hardware bringup/debugging. I believe I've mentioned
that here several times already. ;-} ;-}

[4] For "the" classic example, see <http://en.wikipedia.org/wiki/Mel_Kaye>
and <http://rixstep.com/2/2/20071015,01.shtml>. ;-}


-----
Rob Warnock <rpw3(a)rpw3.org>
627 26th Avenue <URL:http://rpw3.org/>
San Mateo, CA 94403 (650)572-2607

From: Eli Barzilay on
"joswig(a)corporate-world.lisp.de" <joswig(a)lisp.de> writes:

> On 11 Mrz., 09:52, Eli Barzilay <e...(a)barzilay.org> wrote:
>>
>> But in any case, I'm not arguing for any particular side in all of
>> this; the only thing I wanted to clarify is that the days where PLT
>> was "obviously slower" are gone.
>
> We have seen two different functions in PLT Scheme: one for generic
> arithmetic and one for fixnum arithmetic.
>
> In the generic case Allegro CL was almost three times faster.

If you're referring to the 9 second time from running at the REPL, then
no, it's not being generic that makes it slower -- it's the fact that
it is outside of a module, which means that the compiler doesn't do a
whole bunch of obvious optimizations. (The example where I put the
code in a module form *is* using generic arithmetic.)
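
To make the module point concrete, here is a toy stand-in written as a
current racket/base module (`fib' is just an illustration, not the
benchmark from the earlier posts).  The same definition typed at the
REPL, outside any module, loses the optimizations, largely because
there the compiler can't assume that names like `+' won't be redefined
later.

#lang racket/base

;; Plain generic arithmetic, but compiled as a module body, so the
;; usual optimizations apply.
(define (fib n)
  (if (< n 2)
      n
      (+ (fib (- n 1)) (fib (- n 2)))))

(time (fib 30))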


> In the fixnum case, you had to rewrite your program to use different
> unsafe, primitive,

That's no different than declarations to the same effect. For
example, I could define a specific `define' form which would translate

(define (foo n)
  (declare blah)
  ...code...)

to

(define foo
  (let ([+ unsafe-fx+] ...)
    (lambda (n) ...code...)))
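
For concreteness, here is a hand-expanded, runnable variant of that
second form, using racket/unsafe/ops (the module that provides
unsafe-fx+ in current Racket); `sum3' and its body are just made-up
illustrations, not the benchmark from earlier in the thread.  A real
`define' macro doing the translation automatically would also have to
arrange, hygiene-wise, for its rebound `+' to capture the `+' in the
user's code.

#lang racket/base
(require racket/unsafe/ops)

;; Hand-expanded version of the translation sketched above: within
;; `sum3', + means the unsafe fixnum addition, so this is only safe
;; when the caller guarantees fixnum arguments and no overflow.
(define sum3
  (let ([+ unsafe-fx+])
    (lambda (a b c)
      (+ (+ a b) c))))

(sum3 1 2 3)   ; => 6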


> non-standard, functions.

Seems that I need to clarify again: I have no interest in "standard
scheme". Like I said, if the options I had were "standard scheme" and
"some other language", then for almost all problems and for almost all
"other languages" I would *not* choose the former. Even more
explicitly,

> The question that interests me slightly more, is how fast generic,
> portable and standard code runs.

that's a question that does not interest me at all.

--
((lambda (x) (x x)) (lambda (x) (x x))) Eli Barzilay:
http://barzilay.org/ Maze is Life!
From: Helmut Eller on
* joswig(a)corporate-world.lisp.de [2010-03-11 10:24+0100] writes:

> In the fixnum case, you had to rewrite your program to use different
> unsafe, primitive, non-standard, functions.

The fixnum functions fx+, fx* etc. are part of the R6RS and are safe in
the sense that implementations must check for overflows and raise errors
accordingly. The nice thing is that the return types are known to be
fixnums, which should help compilers with type inference. Frankly,
that's more useful than (the fixnum (+ (the fixnum ..))), which
doesn't even guarantee that the return value is checked.
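
A minimal sketch of that point, assuming an R6RS system and the
composite (rnrs (6)) library: the overflow must be signalled rather
than silently wrapped.

#!r6rs
(import (rnrs (6)))

;; R6RS requires fx+ to raise an exception (with condition type
;; &implementation-restriction) when the true sum is not a fixnum.
(guard (e (#t (display "fx+ signalled the overflow\n")))
  (display (fx+ (greatest-fixnum) 1)))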

Helmut
From: joswig on
On 11 Mrz., 11:16, Helmut Eller <eller.hel...(a)gmail.com> wrote:
> * jos...(a)corporate-world.lisp.de [2010-03-11 10:24+0100] writes:
>
> > In the fixnum case, you had to rewrite your program to use different
> > unsafe, primitive, non-standard, functions.
>
> The fixnum functions fx+, fx* etc. are part of the R6RS and are safe in
> the sense that implementations must check for overflows and raise errors
> accordingly.  The nice thing is that the return types are known to be
> fixnums, which should help compilers with type inference.  Frankly,
> that's more useful than (the fixnum (+ (the fixnum ..))), which
> doesn't even guarantee that the return value is checked.

You can check it.

In my defense I could say that I consider R6RS to be non-standard. ;-)

From: Tim Bradshaw on
On 2010-03-11 09:41:45 +0000, Rob Warnock said:

> Having been through quite a few bringups of various kinds of
> "bare iron", I would have to say that the style of low-level
> incremental bootstrapping you're talking about has been obsolete
> for at *least* four decades.

Yes, I can't imagine anyone having done this for a very long time. When I
did embedded stuff it was all done with a combination of
cross-assembling, blowing EPROMs and then using a logic scope or (if
you could get time on it) an emulator to watch what the machine did.
This was 30 years ago, and the technology was mature then, in the sense
that you could just buy the tools you needed.

I guess one difference now is that, for anything but the very smallest
systems, people probably do not write things in assembler. I think
even then there may have been C compilers available, but not if you
wanted to fit everything into the 1 or (later) 2k of EPROM you had.