From: fortunatus on
Nicolas Neuss <lastn...(a)kit.edu> wrote:
> Since you are apparently interested in good performance I would surely
> prefer Common Lisp (at least PLT Scheme is not the way to go).

CL surely has much syntax to offer in the way of doing specific
optimizations - therefore I'd second this comment as an "end game".

On the other hand, while PLT is a slow environment, Scheme does have
good compilers - look into Chicken for example.
http://www.call-with-current-continuation.org/

Hugh Aguilar <hughaguila...(a)yahoo.com> wrote:
> Forth calculates the linear-congruential prng quickly because Forth
> has mixed-precision integer arithmetic. Forth can multiply single-

Both CL and Scheme have this feature - it's part of the "numerical
tower". The tower includes ratios (which are fabulous IMHO) built on
arbitrary-precision integers - integers and ratios together make up
the rationals.
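
A quick illustration (a hypothetical REPL session; any conforming CL
should behave this way, and the Scheme reports linked below describe
the analogous tower):

  (+ 1/3 1/6)                    ; => 1/2, exact
  (/ 10 4)                       ; => 5/2, never a rounded float
  (* most-positive-fixnum 1000)  ; => an exact bignum, no overflow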

CL:
http://www-prod-gif.supelec.fr/docs/cltl/clm/node16.html#SECTION00610000000000000000
http://www.lispworks.com/documentation/HyperSpec/Body/12_a.htm

Scheme:
r5rs: http://schemers.org/Documents/Standards/R5RS/HTML/r5rs-Z-H-9.html#%_sec_6.2
r6rs: http://www.r6rs.org/final/html/r6rs/r6rs-Z-H-6.html#node_chap_3
From: Nicolas Neuss on
fortunatus <daniel.eliason(a)excite.com> writes:

> CL surely has much syntax to offer in the way of doing specific
> optimizations - therefore I'd second this comment as an "end game".
>
> On the other hand, while PLT is a slow environment, Scheme does have
> good compilers - look into Chicken for example.
> http://www.call-with-current-continuation.org/

Although we don't exactly disagree, I want to emphasize that the
differences are enormous IMO:

* In CL I have a standard way of introducing type and optimization
declarations. They are pure additions - the code itself is not changed,
and the resulting code continues to work on every implementation
(although it will not be fast everywhere). (See the small example
after this list.)

* In some CLs (especially the CMUCL family, i.e. CMUCL, SBCL, SCL), I
do not even have to insert many type declarations because most are
derived automatically.

* OTOH, in Chicken I have to change my code to use special type-adapted
functions (fx+, fx*, ..., fp+, fp-, ...). The code will probably not
work anywhere else anymore (well, maybe in Bigloo, which uses a similar
approach IIRC).
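
To illustrate the first point, a sketch (standard CL declarations; how
much they actually buy depends on the implementation):

  (defun dot (xs ys)
    ;; The declarations are pure additions: delete them and the
    ;; function still runs, just possibly slower.
    (declare (optimize (speed 3) (safety 0))
             (type (simple-array double-float (*)) xs ys))
    (let ((sum 0d0))
      (declare (type double-float sum))
      (dotimes (i (length xs) sum)
        (incf sum (* (aref xs i) (aref ys i))))))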

Nicolas
From: Eli Barzilay on
Nicolas Neuss <lastname(a)kit.edu> writes:
>
> Since you are apparently interested in good performance I would
> surely prefer Common Lisp (at least PLT Scheme is not the way to
> go).

fortunatus <daniel.eliason(a)excite.com> writes:
>
> On the other hand, while PLT is a slow environment, Scheme does have
> good compilers - look into Chicken for example.
> http://www.call-with-current-continuation.org/

Neither of these claims has been true for a while now. As a stupid
example, I tried this in Allegro Lisp 8.1 on my machine (linux,
x86_64, exact details don't matter for obvious reasons):

(proclaim '(optimize (speed 3) (safety 1) (space 0) (debug 0)))
(defun fib (n) (if (<= n 1) n (+ (fib (- n 1)) (fib (- n 2)))))
(compile 'fib)
(time (fib 40))

It runs in ~3.5 seconds. I then added (declare (fixnum n)) and it
went down to ~3.3. With MzScheme (v4.2.4), I first measured

(define (fib n) (if (<= n 1) n (+ (fib (- n 1)) (fib (- n 2)))))
(time (fib 40))

which takes about 9 seconds, then did the same in a module (which is
the proper way to let mzscheme optimize code):

(module fib scheme/base (provide fib) (define (fib n) ...same...))
(require 'fib)
(time (fib 40))

and that goes down to ~3.2 seconds. And finally, to make it use
unsafe fixnum operations:

(module fib scheme/base
  (require scheme/unsafe/ops)
  (provide fib)
  (define (fib n)
    (if (unsafe-fx<= n 1)
        n
        (unsafe-fx+ (fib (unsafe-fx- n 1)) (fib (unsafe-fx- n 2))))))
(require 'fib)
(time (fib 40))

and that's now at ~2.6 seconds.

Some notes:

* As usual, benchmarks are hard to do properly, and `fib' is
especially bad. For a more thorough discussion and numbers, see
http://blog.plt-scheme.org/2010/01/benchmarks.html -- specifically,
have a look at
http://www.cs.utah.edu/%7Emflatt/benchmarks-20100126/log3/Benchmarks-Chicken.html
which has all numbers normalized to chicken and you'll see that it's
not one of the faster choices. This is all using safe generic
arithmetic operations, and there's another table on that page
comparing PLT, Bigloo, and Gambit with them.

* It's true that the last example is awkward to write, but adding this
to the module:

(require scheme/require
         (filtered-in
          (lambda (n)
            (cond [(regexp-match #rx"^unsafe-fx(.*)$" n) => cadr]
                  [else #f]))
          scheme/unsafe/ops))

makes the unsafe fixnum operations available under the usual names.
As usual, this can be bundled up in a macro.

* Obviously, none of this is "Standard Scheme" -- but that concept
should be taken in a similar way to "Lisp" -- a family of similar
languages. (And yes, a few people will disagree with that and will
try to write portable "Scheme" code. Under such conditions, my view
is that Scheme is not worth any serious non-toyish use.)


Nicolas Neuss <lastname(a)kit.edu> writes:
>
> * In some CLs (especially the CMUCL family, i.e. CMUCL, SBCL, SCL),
> I even do not have to insert many type declarations because most
> are automatically derived.

(Note that PLT has Typed Scheme, which is planned to eventually get
hooked up to the compiler.)


> * OTOH, in Chicken I have to change my code using special
> type-adapted functions (fx+, fx*, ..., fp+, fp-, ...). The code
> will probably not work anywhere else anymore (well, maybe in
> Bigloo which uses a similar approach IIRC).

(No, Bigloo has a different way to annotate types of identifiers.)

--
((lambda (x) (x x)) (lambda (x) (x x))) Eli Barzilay:
http://barzilay.org/ Maze is Life!
From: Hugh Aguilar on
On Mar 10, 12:29 am, Nicolas Neuss <lastn...(a)kit.edu> wrote:
> Since you are apparently interested in good performance I would surely
> prefer Common Lisp (at least PLT Scheme is not the way to go).

My most immediate goal is to port my slide-rule program from Forth
into some kind of Lisp and to also provide it with a GUI at the same
time. Performance isn't much of an issue in this case. It is true that
slide-rule.4th takes between 5 and 10 seconds to generate the files
under SwiftForth on my laptop, which is somewhat slow for an
interactive program. That is not too bad though, so if PLT Scheme were
in the same ballpark, I would be okay with it. Some of this speed
issue can be alleviated by generating the scales ahead of time and
storing them, so they don't have to be generated during interactive
use.

The slide-rule program is recursive --- the scales are broken up into
groups, which are broken up into sub-groups, and so forth. The program
(the function GROUP) moves through an array of reduction values to
determine how many sub-groups each group is broken up into. This
allows the scales to have different precisions in different parts
(more precision on the left than the right on the D scale, for
example). Generating the scales recursively like this is somewhat
slow, which is what causes the delay. I have heard that you Lisp
programmers generally prefer recursion to iteration, so this should be
a good fit for your style of programming.
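
Roughly, the recursion looks like this (a sketch in CL rather than my
Forth, with made-up names; the reductions list says how many sub-groups
each level is broken into):

  ;; Subdivide [lo, hi) according to a list of reduction counts,
  ;; calling EMIT at each leaf tick.  Different parts of a scale can
  ;; get different reduction lists, giving different precisions.
  (defun group (lo hi reductions emit)
    (if (null reductions)
        (funcall emit lo)
        (let* ((n    (first reductions))
               (step (/ (- hi lo) n)))
          (dotimes (i n)
            (group (+ lo (* i step)) (+ lo (* (1+ i) step))
                   (rest reductions) emit)))))

  ;; e.g. (group 1 10 '(9 10 2) (lambda (x) (print (float x))))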

A lot of programs that I write do a recursive-descent search. In my
novice package I have LC53.4th that cracks the linear-congruential
encryption system doing a recursive search through the possible keys.
I also have LowDraw.4th that analyzes low-draw poker probabilities
doing a recursive traversal through every possible hand that can be
dealt. Many of these recursive-descent programs are quite time-
consuming to run, so performance is an issue. This is why I think that
I will have to eventually upgrade to Common Lisp, which seems to be
more focused on practical programming, compared to Scheme, which seems
to be used mostly in schools.
From: Hugh Aguilar on
On Mar 9, 11:53 pm, Paul Donnelly <paul-donne...(a)sbcglobal.net> wrote:
> Hugh Aguilar <hughaguila...(a)yahoo.com> writes:
> > I wrote a program to crack a linear-congruential encryption system in
> > both Factor and Forth and obtained these times on my laptop:
> > assembly-language 17 seconds
> > SwiftForth 22 seconds
> > Factor 9 minutes 14 seconds
> > I expect that Common Lisp would be even worse than Factor, although I
> > haven't tried it yet.
>
> Why do you expect that? CL compilers are several decades more mature
> than Factor's compiler.

I shouldn't have said that I expected Common Lisp to be worse than
Factor, as I had no reason to believe that. If a language doesn't
support mixed-precision integer arithmetic however, then I would
expect it to be worse than Forth --- and I've never heard of any
language other than Forth providing this feature. Factor has a variety
of different number types including rationals, and it can cast between
these as necessary, but this is still not mixed-precision arithmetic.
For mixed-precision, you have to be able to multiply two single-
precision integers to produce a double-precision product. Slava did
tell me that he was disturbed by the poor showing of Factor in regard
to LC53 and that he would upgrade the compiler to handle integer
arithmetic better. This was some time ago, so Factor may now be able
to do better than I have described above.
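
If what fortunatus said above about the numerical tower is right, then
the Forth idiom  a b c */  would presumably just be something like this
in CL (my guess, untested):

  (defun star-slash (a b c)
    ;; Forth's  a b c */  : the intermediate product a*b is exact (a
    ;; bignum if it overflows a fixnum), so nothing is lost before the
    ;; divide.
    (truncate (* a b) c))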

The LC53 program is very short and simple. I would be interested in
seeing a port of LC53 into Common Lisp for speed comparison purposes.
I can describe the program here if anybody is interested:

In the linear-congruential prng, the high bits have more influence
than the low bits, so I start guessing at the high bit and work down
to the low bit. I try setting the bit at 1 and 0, and I run HOW-FAR
for each guess. HOW-FAR uses this guessed key to calculate as far as
possible, and it stops when it either decrypts the entire message or
it begins generating a crypt-stream that doesn't match the known
plaintext. If it decrypts the entire message, then we are done and
have found our key. Otherwise we compare the results of the two
HOW-FAR calls to determine whether the 1 or the 0 guess went further.
We then set this bit accordingly and recursively try again with the
next bit down. If this
doesn't result in finding the seed, we toggle the bit and try again.
If HOW-FAR always told us the correct path to take, we could crack a
32-bit key with only 32 tests. In practice, most of our early calls to
HOW-FAR result in the same value (zero), in which case we have to
arbitrarily choose to test the 1 or 0 first. A lot of these choices
are wrong, so our search is a dead-end. We are still much better off
than just doing a brute-force search though. A brute-force search
would find the key after testing about 50% of the total number of
keys. My algorithm is somewhere around 10%.
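
In pseudo-CL the search might look something like this (just the
description above written out, not my actual LC53 code; HOW-FAR and
the message length are assumed to be supplied):

  (defconstant +key-bits+ 32)

  (defun crack (how-far msg-len &optional (key 0) (bit (1- +key-bits+)))
    ;; HOW-FAR takes a guessed key and returns how far it decrypts;
    ;; reaching MSG-LEN means the whole message matched.
    (if (minusp bit)
        nil                                    ; out of bits: dead end
        (let* ((key1 (logior key (ash 1 bit))) ; guess this bit = 1
               (key0 key)                      ; guess this bit = 0
               (f1 (funcall how-far key1))
               (f0 (funcall how-far key0)))
          (cond ((>= f1 msg-len) key1)         ; full decrypt: done
                ((>= f0 msg-len) key0)
                (t
                 ;; Try the guess that went further first; backtrack to
                 ;; the other one only if that subtree is a dead end.
                 (let ((better (if (>= f1 f0) key1 key0))
                       (worse  (if (>= f1 f0) key0 key1)))
                   (or (crack how-far msg-len better (1- bit))
                       (crack how-far msg-len worse  (1- bit)))))))))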

The problem with my algorithm is that HOW-FAR often returns equal
values for the 1 and 0 guesses, in which case I have to arbitrarily
choose which guess to try first. An improvement would be that, if HOW-
FAR returns equal values, I would do a partial recursive search of
each guess to determine which side looks more promising. That would be
better than committing myself to a full recursive search of one side
of the tree based upon an arbitrary decision. I could do partial
recursive searches to give myself a more informed decision as to which
side is better, and then do a deeper recursive search of that side,
and so forth. For this to be efficient, I would have to save the
results of the partial recursive search on each side so that I don't
repeat that work when I do a deeper recursive search of whichever side
seems better. Lazy evaluation would be perfect for this. I would
pretend to have a full search of every possible key, but would
fill in the values of this search gradually as I did my partial
recursive searches. When I do deeper searches, I wouldn't have to redo
the work, but would just fetch the values that were set previously by
the partial search. If I were to upgrade the program from a 32-bit key
to a 64-bit key, I would have to also upgrade the program to do these
partial searches. This would pretty much require porting to a language
that supports lazy evaluation, as that would greatly simplify saving
that data.
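
What I have in mind is roughly a memoizing thunk per tree node,
something like this (written in CL syntax as far as I understand it;
untested):

  (defmacro delay (expr)
    "Wrap EXPR in a thunk that is evaluated at most once."
    (let ((value (gensym)) (done (gensym)))
      `(let ((,value nil) (,done nil))
         (lambda ()
           (unless ,done
             (setf ,value ,expr ,done t))
           ,value))))

  (defun force (promise)
    (funcall promise))

  ;; Each node of the search tree would hold (delay (search-deeper ...)),
  ;; so a deeper pass just FORCEs work a partial pass may already have done.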

I made a joke earlier about Haskell's syntax being hard to read.
Actually though, that doesn't bother me too much. What turned me off
on Haskell is that it seems to be oriented toward top-down
programming. The book I looked at said that Haskell programs are more
time-consuming to craft than programs in other languages, but they
generally work correctly the first time they are run. This is very
different from the way I usually program, which is bottom-up with a
lot of trial-and-error. I could be wrong about all of this though; all
I know about Haskell is what I picked up skimming through the book at
the bookstore. The top-down emphasis was just the impression that I
got.

What I have seen of Lisp syntax is pretty easy to read. Also, you guys
seem to embrace the same kind of bottom-up programming that I am
accustomed to in Forth. I would definitely be more interested in
learning Lisp than Haskell, but I would hope to have lazy evaluation
in Lisp as that seems to be pretty useful for the kinds of programs
that I often write. I've given some thought to implementing lazy
evaluation in Forth, and I might take a stab at that if I can't find a
Lisp that does what I want.