From: Joerg Hoehle
Subject: Re: performance: reduce+map vs loop.
Date: 
Message-ID: <5v9mro$nhi@omega.gmd.de>
SDS (···········@cctrading.com) wrote:

: Consider the two functions computing the dot product:

: (defun dot1 (l0 l1 &key (key #'identity))
[reduce+map]

: (defun dot2 (l0 l1 &key (key #'identity))
[map for side effect]

: One would expect that the second one would be more efficient due to less
: consing, and truly so, in CLISP, it is about 10% faster.
: BUT:
: 1. It does only 5% less garbage collecting. (so, where does the other 5%
: hit come from?)
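
The function bodies were elided in the quote; here is a plausible
reconstruction, purely for the discussion below (the exact code in
the original post is unknown):

(defun dot1 (l0 l1 &key (key #'identity))
  ;; reduce+map: conses an intermediate list of products
  (reduce #'+ (map 'list #'(lambda (x y)
                             (* (funcall key x) (funcall key y)))
                   l0 l1)))

(defun dot2 (l0 l1 &key (key #'identity))
  ;; map for side effect: accumulate into RES, no intermediate list;
  ;; point 2 below refers to adding a type declaration on RES
  (let ((res 0.0))
    (map nil #'(lambda (x y)
                 (incf res (* (funcall key x) (funcall key y))))
         l0 l1)
    res))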

You'll soon find out that CLISP won't often meet your expectations
about which code should be faster or cons less.  CLISP contains a
byte compiler; it does not compile to native code.  CLISP has around
500 built-in (i.e. compiled to native code) functions, MAP and REDUCE
among them.

Since interpreting byte code is slower than executing native code,
you can expect built-in functions in CLISP to run faster than
functions you write yourself.  This is very different from compilers
like CMUCL, which compile to native code: there you'll come closer
to meeting your expectations.
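
A quick way to observe this is a crude timing comparison (SUM-BUILTIN
and SUM-LOOP are hypothetical names of mine; the exact numbers will
of course vary):

(defun sum-builtin (l)
  ;; REDUCE is one of CLISP's built-in, native-code functions
  (reduce #'+ l))

(defun sum-loop (l)
  ;; an equivalent hand-written loop, which CLISP byte-compiles
  (let ((s 0))
    (dolist (x l s)
      (incf s x))))

(let ((l (make-list 100000 :initial-element 1)))
  (time (sum-builtin l))
  (time (sum-loop l)))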


Furthermore, CLISP will probably still need to create closure
objects for the lambdas you pass to MAP or REDUCE, whereas CMUCL may
analyze the closure use better and generate native loop code.
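
For instance (an illustrative sketch of mine, not from the original
post), a lambda that captures a variable forces a closure allocation
at run time:

(defun dot-scaled (l0 l1 c)
  ;; the lambda captures C, so a closure object must be allocated
  ;; before MAPCAR can be called
  (reduce #'+ (mapcar #'(lambda (x y) (* c x y)) l0 l1)))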


Also, your functions are not equivalent: REDUCE with #'+ and no
:INITIAL-VALUE starts from the first element (or 0 for an empty
sequence), whereas DOT2 starts its accumulator at 0.0.  Different
arithmetic may result in different consing (boxing, double-float
vs. integer speed, etc.).
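
To make the two comparable, one could give REDUCE the same
floating-point starting value (standard CL; the MAP form is my guess
at what DOT1 computes):

(reduce #'+ (map 'list #'* l0 l1) :initial-value 0.0)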

: 2. The type declaration of res in dot2 doesn't change the performance
: noticeably (1%?), and, probably, it should not. 
CLISP ignores type declarations entirely; it tries to be as safe as
possible and includes type checks everywhere.
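
So declarations like the following are no-ops in CLISP, while a
native compiler such as CMUCL can exploit them (an illustrative
sketch of mine, using vectors rather than lists):

(defun dot-declared (v0 v1)
  (declare (type (simple-array double-float (*)) v0 v1)
           (optimize (speed 3) (safety 0)))
  ;; CLISP accepts but ignores the declarations and still checks
  ;; types; CMUCL can use them to emit unboxed double-float code
  (let ((res 0d0))
    (declare (double-float res))
    (dotimes (i (length v0) res)
      (incf res (* (aref v0 i) (aref v1 i))))))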


: 3. On a second thought, it would seem that both versions should have the
: *same* performance: a smart compiler (in a lazy language, at least)
: should be able to recognize that no consing is really necessary in dot1
: in the first place. Of course, lisp is strict, and both map and reduce
This would require a *very* smart compiler IMHO.

I'm also looking for programming languages where the most elegant
program (using mapping, sequences, tail-recursion, whatever) generates
the best (fastest, least consing) code.  That doesn't seem to be the
case with all Lisp systems (or is it just a matter of the compiler?).

: are functions, so *technically* consing is inevitable. OTOH, WIBNI lisp
: could unfold things like this into loops?
Some compilers do.  It very much depends on the situation (which
patterns the compiler recognizes).

In CLISP, for example:
(defun foo (x l)
  (mapcar #'(lambda (y) (+ x y)) l))
> (disassemble #'foo)
Disassembly of function FOO
2 required arguments
0     (NIL&PUSH)
1     (LOAD&PUSH 2)
2     (JMP L17)
4     L4
4     (LOAD&CAR&PUSH 0)
6     (LOAD&PUSH 5)
7     (LOAD&PUSH 1)
8     (CALLSR&PUSH 2 53)                  ; +
11    (LOAD&CONS&STORE 2)
13    (SKIP 1)
15    (LOAD&CDR&STORE 0)
17    L17
17    (LOAD 0)
18    (JMPIFCONSP L4)
20    (SKIP 1)
22    (LOAD&PUSH 0)
23    (CALLS1 168)                        ; SYSTEM::LIST-NREVERSE
25    (SKIP&RET 4)
#<COMPILED-CLOSURE FOO>
You can see that the lambda was optimized away; the compiler indeed
generated loop code.
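
In other words, the byte code corresponds roughly to this
hand-written loop (my reconstruction from the disassembly above):

(defun foo-expanded (x l)
  ;; accumulate (+ X Y) onto ACC with CONS, then reverse at the
  ;; end, matching the LOAD&CONS&STORE / LIST-NREVERSE pattern
  (let ((acc '()))
    (dolist (y l (nreverse acc))
      (push (+ x y) acc))))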

	Jörg Höhle.
············@gmd.de		http://zeus.gmd.de/~hoehle/amiga-clisp.html