From: Eli Barzilay on
Vassil Nikolov <vnikolov(a)pobox.com> writes:
> On Wed, 10 Mar 2010 23:55:17 -0500, Eli Barzilay <eli(a)barzilay.org> said:
>> ...
>> I wasn't trying to get into any kind of a pissing contest
>
> No, you weren't, but if we discuss the declarations one could make
> in a Common Lisp program to speed it up,

I didn't.
--
((lambda (x) (x x)) (lambda (x) (x x))) Eli Barzilay:
http://barzilay.org/ Maze is Life!
From: Nicolas Neuss on
Eli Barzilay <eli(a)barzilay.org> writes:

> Nicolas Neuss <lastname(a)kit.edu> writes:
>>
>> Since you are apparently interested in good performance I would
>> surely prefer Common Lisp (at least PLT Scheme is not the way to
>> go).
>
> fortunatus <daniel.eliason(a)excite.com> writes:
>>
>> On the other hand, while PLT is a slow environment, Scheme does have
>> good compilers - look into Chicken for example.
>> http://www.call-with-current-continuation.org/
>
> Neither of these claims has been true for a while now. As a stupid
> example, I tried this in Allegro Lisp 8.1 on my machine (linux,
> x86_64, exact details don't matter for obvious reasons):
>
> (proclaim '(optimize (speed 3) (safety 1) (space 0) (debug 0)))
> (defun fib (n) (if (<= n 1) n (+ (fib (- n 1)) (fib (- n 2)))))
> (compile 'fib)
> (time (fib 40))

Yes, maybe I was not specific enough. It is clear that PLT Scheme and
CL will be in the same ballpark for some examples, such as this
recursive Fibonacci, where type declarations do not help much because
the larger part of the work is the recursive calls themselves.

But the topic at hand was a program implementing some cryptography
algorithm involving computations with 32-bit numbers and bit twiddling,
and I would simply be surprised if PLT did not do this much slower
than, say, SBCL. (My main interest lies in numerical analysis, where
fast floating point is important, and the problem is similar.)

> [...]
>> * In some CLs (especially the CMUCL family, i.e. CMUCL, SBCL, SCL),
>> I do not even have to insert many type declarations because most
>> are automatically derived.
>
> (Note that PLT has Typed Scheme, which is planned to eventually get
> hooked up to the compiler.)

OK, that's interesting.

>> * OTOH, in Chicken I have to change my code using special
>> type-adapted functions (fx+, fx*, ..., fp+, fp-, ...) . The code
>> will probably not work anywhere else anymore (well, maybe in
>> Bigloo which uses a similar approach IIRC).
>
> (No, Bigloo has a different way to annotate types of identifiers.)

Thank you for the information.

Nicolas
From: Nicolas Neuss on
Hugh Aguilar <hughaguilar96(a)yahoo.com> writes:

> My most immediate goal is to port my slide-rule program from Forth
> into some kind of Lisp and to also provide it with a GUI at the same
> time. Performance isn't much of an issue in this case. It is true that
> slide-rule.4th takes between 5 and 10 seconds to generate the files
> under SwiftForth on my laptop, which is somewhat slow for an
> interactive program. That is not too bad though, so if PLT Scheme were
> in the same ballpark, I would be okay with it. Some of this speed
> issue can be alleviated by generating the scales ahead of time and
> storing them, so they don't have to be generated during interactive
> use.
>
> The slide-rule program is recursive --- the scales are broken up into
> groups, which are broken up into sub-groups, and so forth. The program
> (the function GROUP) moves through an array of reduction values to
> determine how many sub-groups each group is broken up into. This
> allows the scales to have different precisions in different parts
> (more precision on the left than the right on the D scale, for
> example). Generating the scales recursively like this is somewhat
> slow, which is the delay. I have heard that you Lisp programmers
> generally prefer recursion to iteration, so this should be a good fit
> for your style of programming.
>
> A lot of programs that I write do a recursive-descent search. In my
> novice package I have LC53.4th that cracks the linear-congruential
> encryption system doing a recursive search through the possible keys.
> I also have LowDraw.4th that analyzes low-draw poker probabilities
> doing a recursive traversal through every possible hand that can be
> dealt. Many of these recursive-descent programs are quite time-
> consuming to run, so performance is an issue. This is why I think that
> I will have to eventually upgrade to Common Lisp, which seems to be
> more focused on practical programming, compared to Scheme, which
> seems to be mostly used in schools.

You will probably have to judge for yourself what kind of performance
your problem demands, and (better) perhaps even test how different
Scheme/CL implementations perform on some characteristic model problem.
If the work consists mostly of recursive function calls and/or file
input/output, there will probably not be much difference; if OTOH it
uses typed arithmetic heavily, then the difference might be very large
in favor of CL.
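For instance, a hedged sketch of such a model problem in portable CL
(the name and constants are mine, purely illustrative):

```lisp
;; Hypothetical model problem: a tight integer loop standing in for
;; the real workload.  Run the same definition through each candidate
;; implementation and compare the TIME output.
(defun model-kernel (n)
  (declare (type fixnum n) (optimize (speed 3)))
  (let ((acc 0))
    (declare (type fixnum acc))
    (dotimes (i n acc)
      ;; Mask to 24 bits so the accumulator stays a fixnum everywhere.
      (setf acc (logand #xFFFFFF (+ acc (* i i)))))))

(time (model-kernel 10000000))
```

The point is only to compare like with like: the same shape can be
timed in PLT with its `time` form.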

But[*]: I would estimate that the GUI part is much more fun with PLT
Scheme than with any of the free CL possibilities.

Nicolas

[*] CAVEAT: I don't know much about GUIs, and if someone knows both the
PLT GUI and things like CL-LTk, McCLIM, or Cells please feel free to
correct me.
From: Eli Barzilay on
Nicolas Neuss <lastname(a)kit.edu> writes:

> But the topic at hand was a program implementing some cryptography
> algorithm involving computations with 32-bit numbers and bit
> twiddling, and I would simply be surprised if PLT did not do this
> much slower than, say, SBCL. (My main interest lies in numerical
> analysis, where fast floating point is important, and the problem is
> similar.)

Unsurprisingly, the issue here is finding the right balance between a
sufficiently expressive language, your actual needs (in terms of
resources like runtime), and the work you're willing to put in to meet
these needs if you can't do that in this expressive language. This is
inherently subjective, so the tradeoffs are different. For example, I
would be willing to put more work into optimizing some code (eg,
writing bits of it in C) to keep on using PLT because I value the
language overall, while others will happily move to all-C code. I
could also get more speed if I add tons of type annotations (in CL) or
switch to Stalin (in Scheme) -- but in both cases I lose some of the
benefits of the language.

And BTW, you mention two situations -- fast FP and fast 32-bit
numbers. In the case of FP, PLT is doing a relatively good job (in
recent versions) since it can get to a point where FP numbers are
unboxed. For 32-bit numbers, my guess is that most CLs and most
Schemes would suffer from the same problem: giving up bits that are
used for tags, making straight-up integer arithmetic problematic.
(And it's also not surprising that there are side-solutions, like a
library that produces machine code, or hooking on to libgmp -- which I
did for the shootout pidigits benchmarks, with a result whose
scheme-ishness is questionable.)

But in any case, I'm not arguing for any particular side in all of
this; the only thing I wanted to clarify is that the days where PLT
was "obviously slower" are gone.

--
((lambda (x) (x x)) (lambda (x) (x x))) Eli Barzilay:
http://barzilay.org/ Maze is Life!
From: joswig on
On 11 Mar., 09:52, Eli Barzilay <e...(a)barzilay.org> wrote:
> Nicolas Neuss <lastn...(a)kit.edu> writes:
> > But the topic at hand was a program implementing some cryptography
> > algorithm involving computations with 32-bit numbers and bit
> > twiddling, and I would simply be surprised if PLT did not do this
> > much slower than, say, SBCL.  (My main interest lies in numerical
> > analysis, where fast floating point is important, and the problem
> > is similar.)
>
> Unsurprisingly, the issue here is finding the right balance between a
> sufficiently expressive language, your actual needs (in terms of
> resources like runtime), and the work you're willing to put in to meet
> these needs if you can't do that in this expressive language.  This is
> inherently subjective, so the tradeoffs are different.  For example, I
> would be willing to put more work into optimizing some code (eg,
> writing bits of it in C) to keep on using PLT because I value the
> language overall, while others will happily move to all-C code.  I
> could also get more speed if I add tons of type annotations (in CL) or
> switch to Stalin (in Scheme) -- but in both cases I lose some of the
> benefits of the language.

The amount of type declarations needed for fast code varies between
compilers; some compilers with better type inference don't need many
declarations.

Generally in the CL world there is also the wish to write portable
code: code that can be moved easily between different implementations.
Writing non-standard code for basic arithmetic functions is not
something I would want to do.
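To illustrate the contrast (my own sketch, not code from this thread):
the portable version uses only standard operators plus declarations,
while the Chicken-style rendering below it (shown as a comment, from
memory) bakes the type into the operator names.

```lisp
;; Portable ANSI CL: standard operators plus type declarations.  A
;; deriving compiler (CMUCL, SBCL, SCL) can specialize the arithmetic
;; from the declared types; any conforming implementation runs it.
(defun mix32 (a b)
  (declare (type (unsigned-byte 32) a b)
           (optimize (speed 3)))
  (logand #xFFFFFFFF (+ (logxor a b) #x9E3779B9)))

;; Chicken-style equivalent (illustrative): the fixnum type is encoded
;; in the operator itself, so the code is no longer standard Scheme:
;;   (define (mix32 a b)
;;     (bitwise-and (fx+ (fxxor a b) #x9E3779B9) #xFFFFFFFF))
```

The first version degrades gracefully on a non-inferring compiler; the
second does not move at all.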

>
> And BTW, you mention two situations -- fast FP and fast 32-bit
> numbers.  In the case of FP, PLT is doing a relatively good job (in
> recent versions) since it can get to a point where FP numbers are
> unboxed.  For 32-bit numbers, my guess is that most CLs and most
> Schemes would suffer from the same problem: giving up bits that are
> used for tags, making straight-up integer arithmetic problematic.
> (And it's also not surprising that there are side-solutions, like a
> library that produces machine code, or hooking on to libgmp -- which I
> did for the shootout pidigits benchmarks, with a result whose
> scheme-ishness is questionable.)
>
> But in any case, I'm not arguing for any particular side in all of
> this; the only thing I wanted to clarify is that the days where PLT
> was "obviously slower" are gone.

We have seen two different functions in PLT Scheme: one for generic
arithmetic and one for fixnum arithmetic.

In the generic case Allegro CL was almost three times faster.

In the fixnum case, you had to rewrite your program to use different
unsafe, primitive, non-standard functions. CL compilers often provide
that, too; I would avoid it most of the time. In many implementations
there are a lot of tricks to get code to run faster by using
implementation-specific optimizations (up to inline assembler). That
can be useful. You have not tried to do the same with Allegro CL (or
any other CL implementation).

The question that interests me slightly more is how fast generic,
portable, standard code runs.
