From: Vassil Nikolov on

On Thu, 11 Mar 2010 03:41:45 -0600, rpw3(a)rpw3.org (Rob Warnock) said:

> Vassil Nikolov <vnikolov(a)pobox.com> wrote:
> +---------------
> | Tim Bradshaw <tfb(a)tfeb.org> said:
> | > I do think that Forth and Lisp should appeal to the same people
> | > (well, they both appeal to me).
> |
> | I second this. As another view on it, consider the following
> | (possibly well-known) thought experiment: what does one do if one
> | has _only_ bare iron, but no software whatsoever (and no access to
> | any, either)? A friend proposed to implement Forth as step 1 and
> | based on that, to implement Lisp as step 2. (As opposed to
> | "repeating philogenesis" and implementing an assembler as step 1.)
> +---------------

> Having been through quite a few bringups of various kinds of
> "bare iron", I would have to say that the style of low-level
> incremental bootstrapping you're talking about has been obsolete
> for at *least* four decades.

> Instead, the easiest way to bring up bare iron is to load the
> absolute minimum binary boot loader you can get away with[1]
> and then do *all* your software development by cross-compiling
> from a separate fully-loaded system and loading full kernel
> and/or filesystem images into your new platform[2].

In any kind of practical situation, yes, certainly, both of the
above, but that means accessing software and breaks the constraint
imposed on the thought experiment.

---Vassil.


--
No flies need shaving.
From: Gergö Barany on
On 2010-03-09, Hugh Aguilar <hughaguilar96(a)yahoo.com> wrote:
> Lisp has always been described as "a
> programmable programming language." That is a pretty good description
> of Forth too. Other than Lisp and Forth, I don't know of any language
> that allows the programmer to write compile-time code --- our two
> languages are pretty much alone in that regard.

For whatever it's worth, many Prolog systems support the de-facto
standard term_expansion/2 mechanism. It lets one specify arbitrary code
transformations that are performed at read time, and the underlying
notion of "code as data" is quite similar to Lisp macros.


[What an inappropriate way to introduce myself to comp.lang.lisp! Well, here
I am, another Lisp newbie.]
From: Hugh Aguilar on
On Mar 11, 5:28 am, Tim Bradshaw <t...(a)tfeb.org> wrote:
> On 2010-03-11 09:41:45 +0000, Rob Warnock said:
>
> > Having been through quite a few bringups of various kinds of
> > "bare iron", I would have to say that the style of low-level
> > incremental bootstrapping you're talking about has been obsolete
> > for at *least* four decades.
>
> Yes, I can't imagine anyone doing this for a very long time. When I
> did embedded stuff it was all done with a combination of
> cross-assembling, blowing EPROMS and then using a logic scope or (if
> you could get time on it) an emulator to watch what the machine did.
> This was 30 years ago, and the technology was mature then, in the sense
> that you could just buy the tools you needed.
>
> I guess one difference now is that, for anything but the very smallest
> systems, people probably do not write things in assembler. I think
> even then there may have been C compilers available, but not if you
> wanted to fit everything into the 1 or (later) 2k of EPROM you had.

My own experience with programming on "bare iron" was when I worked at
Testra. They built a custom micro-processor called the MiniForth, based
on the Lattice 1048isp PLD. I wrote the assembler, Forth cross-compiler,
and simulator for it, collectively called MFX. MFX was then used to
compile a motion-control program for a laser etching machine. I wrote
some of the low-level assembly for the application (the distinction
between compiler and application is somewhat blurry in Forth), but most
of the port was done by a coworker of mine who had written the original
motion-control program (it had run on the Dallas 80c320). There
obviously was no C compiler available! There wasn't even a physical
micro-controller available until quite late in the project. I had most
of the development system working, using simulation on a desktop
computer, by the time the chip became available and a board was built.

The Forth cross-compiler and the assembler were my own design. The
assembler was the more difficult part. The MiniForth is a WISC (wide-
instruction-set-computer), meaning that several machine-code
instructions get packed into a single opcode, and they are all
executed simultaneously at run-time. My assembler would rearrange the
code in such a way as to pack as many instructions into each opcode as
possible, in order to minimize the number of NOP instructions that had
to be compiled. The assembler would move each instruction back as far
as possible, without messing up the register usage. An instruction
that uses a particular register can't get pushed back beyond the
opcode that contains the instruction that sets that register. An
instruction that sets a particular register can't get pushed back
beyond the opcode that contains an instruction that uses that
register.
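
In Lisp terms, the legality check behind that rule looks roughly like
this (a toy sketch only; it is not MFX, which had to deal with a lot
more than register dependencies):

    ;; Each instruction records which registers it reads and writes.
    (defstruct insn name reads writes)

    ;; INSN may be moved back past PREV only if it doesn't read a
    ;; register PREV writes (it would see a stale value) and doesn't
    ;; write a register PREV reads (it would clobber it too early).
    (defun may-move-past-p (insn prev)
      (and (null (intersection (insn-reads insn) (insn-writes prev)))
           (null (intersection (insn-writes insn) (insn-reads prev)))))

    ;; INSN can be packed into an earlier opcode only if it may move
    ;; past every instruction in between.
    (defun can-hoist-p (insn intervening)
      (every (lambda (prev) (may-move-past-p insn prev)) intervening))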

This was an extremely low-level assembly language. There was no
instruction to add integers, for example. This was accomplished with
half-adders and boolean logic. The function that added two 16-bit
integers was a page long. I didn't write it. I didn't write the
multiplication or division either, as those were quite complicated
functions that would likely have been beyond my ability. As a
historical note, the impetus for the whole project was fast integer
multiplication. The Dallas 80c320 was too slow at multiplying
integers, and this was the bottleneck in the motion-control program. A
second goal was fast interrupt response time.
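
To give a feel for what building addition out of boolean logic looks
like, here is a toy sketch in Lisp (an illustration only, nothing like
the actual page-long MiniForth routine): 16-bit addition using nothing
but single-bit XOR, AND, and OR, rippling the carry along.

    (defun add16 (a b)
      ;; Ripple-carry addition: sum bit = x XOR y XOR carry-in,
      ;; carry-out = (x AND y) OR (carry-in AND (x XOR y)).
      (let ((sum 0) (carry 0))
        (dotimes (i 16 (values sum carry))
          (let* ((x (ldb (byte 1 i) a))
                 (y (ldb (byte 1 i) b))
                 (s (logxor x y carry))
                 (c (logior (logand x y)
                            (logand carry (logxor x y)))))
            (setf (ldb (byte 1 i) sum) s
                  carry c)))))

    ;; (add16 40000 30000) => 4464 and a carry of 1, i.e. 70000
    ;; truncated to 16 bits.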

The assembler would also generate a Forth program that would run on
the desktop computer to simulate the target program. This wasn't a
traditional simulator that decodes each opcode and simulates what it
does. That would have been too slow and too complicated. I knew that
all the firmware being simulated had to have been assembled with my
own assembler, so it seemed easier to just have the assembler generate
a simulation program, considering that it had all the information
necessary to do this.
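
The idea, in Lisp terms (a hypothetical sketch, not what MFX actually
emitted), is that the assembler turns each target instruction into one
host-language form, and the host compiler does the rest; there is no
run-time decoding step at all.

    (defstruct sim-state (stack '()))

    ;; Given the assembled instructions, return a form that, once
    ;; compiled on the desktop machine, simulates the target program.
    (defun emit-simulation (instructions)
      `(lambda (state)
         ,@(mapcar (lambda (insn)
                     (ecase (first insn)
                       (:lit `(push ,(second insn)
                                    (sim-state-stack state)))
                       (:add `(let ((b (pop (sim-state-stack state)))
                                    (a (pop (sim-state-stack state))))
                                (push (logand #xFFFF (+ a b))
                                      (sim-state-stack state))))))
                   instructions)
         state))

    ;; (funcall (compile nil (emit-simulation
    ;;                         '((:lit 3) (:lit 4) (:add))))
    ;;          (make-sim-state))
    ;; => a SIM-STATE whose stack is (7)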

In regard to application programming, I don't much like cross-
compiling. I prefer to have an on-board Forth system. Interactive
development is one of Forth's virtues. With a micro-controller, you
want to have an interpretive mode (a REPL to use Lisp terminology) so
you can experiment with your functions. You can use an oscilloscope to
see the effect of your functions out in the real world. This is the
best way to write firmware! With a cross-compiler you don't get any of
this interactive development. I only wrote the development system as a
cross-compiler because there was no other option. When the micro-
controller is completely new, and doesn't actually physically exist
yet, you are pretty much obliged to program on a desktop computer.

I don't work for Testra anymore. I noticed on their website that they
now have an on-board interactive Forth system available. Presumably my
cross-compiler is now only used for writing assembly language.
From: Hugh Aguilar on
On Mar 11, 1:52 am, Eli Barzilay <e...(a)barzilay.org> wrote:
> I
> would be willing to put more work into optimizing some code (eg,
> writing bits of it in C) to keep on using PLT because I value the
> language overall, while others will happily move to an all-C code. I
> could also get more speed if I add tons of type annotations (in CL) or
> switch to Stalin (in Scheme) -- but in both cases I lose some of the
> benefits of the language.

I haven't profiled LC53, but it seems obvious that most of the time is
spent inside the PRNG function. This is a good example
of how a tiny dose of assembly language can make a huge difference.
Note that writing PRNG in C won't do much good though, as C doesn't
support mixed-precision arithmetic. If you did write PRNG in C you
would have to cast the single-precision integers to double-precision,
do the arithmetic in double-precision, and then cast the result back
to single-precision. Ugh!
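
For concreteness, here is the kind of step I mean, as a generic
linear-congruential sketch in Lisp (made-up constants, not the actual
LC53 code): the product of two single-cell numbers is a double-cell
number, which then gets reduced back down to a single cell.

    ;; Illustrative Lehmer-style constants (an assumption, not LC53's):
    (defconstant +lcg-m+ (1- (expt 2 31)))   ; modulus, 2^31 - 1
    (defconstant +lcg-a+ 16807)              ; multiplier

    (defun prng-step (x)
      ;; Lisp keeps the wide intermediate product exact automatically;
      ;; in C you would have to widen before the multiply and narrow
      ;; again after the mod, as described above.
      (mod (* +lcg-a+ x) +lcg-m+))

    ;; (prng-step 1)     => 16807
    ;; (prng-step 16807) => 282475249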

Does any Scheme or Common Lisp include a Pentium assembler?

I use SwiftForth from Forth Inc. for a lot of work. It doesn't
optimize very well, but it does include a pretty nice assembler, so I
just rewrite the critical portions after the whole program is complete
and speed becomes an issue. I like programming in assembly language!
To a large extent, I think of Forth as being an overgrown and super-
sophisticated macro-assembler. This viewpoint may seem alien to modern-
day programmers who only use high-level languages and who don't want
to know what is going on under the hood. Python programmers will
likely think that an unrepentant assembly-language programmer such as
myself must be the Anti-Christ, or the Anti-Guido, or something like
that. :-)
From: fortunatus on
On Mar 10, 2:28 pm, Eli Barzilay <e...(a)barzilay.org> wrote:
> fortunatus <daniel.elia...(a)excite.com> writes:
>
> > On the other hand, while PLT is a slow environment, Scheme does have
> > good compilers - look into Chicken for example.
> > http://www.call-with-current-continuation.org/
>
> Both of these claims have not been true for a while now.  
> ...

Wow - thanks for the update, examples, and links! I didn't realize
module'd code would be better optimized in PLT; good to know.