From: Cesare Rocchi
Subject: iteration vs recursion Performance viewpoint
Date: 
Message-ID: <3DB815C4.C26E9AF8@itc.it>
Let me start a sibling thread about iteration and recursion:

;; ### Here is the code

(defvar pippo nil)

(loop for x from 1 to 5000
      do (push x pippo))

(defun lunghezza (list)
  (if (null list)
      0
      (+ 1 (lunghezza (cdr list)))))

(defun lunghezza1 (list &optional (len 0))
  (if (null list)
      len
      (lunghezza1 (rest list) (+ 1 len))))

(defun lunghezza2 (list)
  (labels ((tmp (tmp-list len)
             (if (null tmp-list)
                 len
                 (tmp (rest tmp-list) (+ 1 len)))))
    (tmp list 0)))
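For the record, all three definitions compute the same thing; on the test list built above they should agree:

```lisp
;; All three should return the length of the 5000-element test list,
;; assuming PIPPO was built by the loop above.
(lunghezza pippo)   ; => 5000
(lunghezza1 pippo)  ; => 5000
(lunghezza2 pippo)  ; => 5000
```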

;; BENCHMARKS [real time (msec)]
;; 
;; Ten tests for each function
;;

(setf bench-lunghezza '(656 664 559 547 562 548 531 555 540 576))

(setf average-lunghezza (round (/ (reduce #'+ bench-lunghezza) 10)))

;; ##

(setf bench-lunghezza1 '(638 464 430 443 422 431 430 409 414 406))
      
(setf average-lunghezza1 (round (/ (reduce #'+ bench-lunghezza1) 10)))

;; ##

(setf bench-lunghezza2 '(472 493 461 461 482 455 475 479 484 486))

(setf average-lunghezza2 (round (/ (reduce #'+ bench-lunghezza2) 10)))

;; lunghezza1 WINS !! (real-time-wise)

Tested on an Ultra 5 SPARCstation with Allegro CL 6.0.
The figures above are the real-time results reported by

(time (function-name list))

;;####################

Here is why.

Lisp interpreters and compilers transform tail recursion into an
iterative process. If you use a tail-recursive programming style, the
stack doesn't grow.
To prove it, trace the above functions.

SOOO? Even if, from a syntactic viewpoint, the second and the third
functions ARE recursive, from a compiler viewpoint they ARE iterative
processes. That's why they are faster.
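The transformation can be sketched in Lisp itself. This hand-written loop is roughly what a tail-call-merging compiler makes of lunghezza1 (the function name is mine, for illustration only):

```lisp
;; Rough sketch of what a tail-call-merging compiler produces for
;; LUNGHEZZA1: the recursive call becomes a jump back to the top of
;; the loop, reusing the same stack frame instead of pushing a new one.
(defun lunghezza1-merged (list)
  (let ((len 0))
    (loop
      (when (null list)
        (return len))
      (setf len (+ 1 len)
            list (rest list)))))
```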

I want to point out that C compilers, for example, don't support tail
recursion.
That's why C programmers, sometimes and somehow, don't trust
recursion as a substitute for iteration.

Cheers,

-c.

From: Christopher C. Stacy
Subject: Re: iteration vs recursion Performance viewpoint
Date: 
Message-ID: <uvg3q948p.fsf@dtpq.com>
>>>>> On Thu, 24 Oct 2002 17:46:12 +0200, Cesare Rocchi ("Cesare") writes:
 Cesare> Lisp interpreters and compilers trasform tail recursion in an
 Cesare> iterative process. If u use tail-recursion-programming style
 Cesare> the stack doesn't grow.

That behaviour is not defined by the language Common Lisp,
and you cannot, in general, rely on any tail recursion optimization.

 Cesare> To prove it trace the above functions. 

Some implementations might perform the optimizations, but portable
code must not rely on that.  In some implementations, your program
will blow out the stack.

You cannot "prove" things about Common Lisp by empirical experiments
with any given implementation.

If you program in such a way that you require a particular
optimization, or any other particular extension of the language,
and you are willing to restrict yourself to some vendor's
implementation that provides it, then that is fine.
Vendors certainly do compete with each other in those areas!
From: ozan s yigit
Subject: Re: iteration vs recursion Performance viewpoint
Date: 
Message-ID: <vi4fzuuwp2d.fsf@blue.cs.yorku.ca>
······@dtpq.com (Christopher C. Stacy) on tail recursion
optimization:

> Some implementations might perform the optimizations, but portable
> code must not rely on that.  In some implementations, your program
> will blow out the stack.

this is true, but it is also true that this optimization was known
for what, about thirty years? i would be very curious to know which
implementations of CL are unable to perform the optimization as
well as others. 

oz
--- 
people can sell their souls and freedoms one tiny inconsequential
millimeter at a time. -- anthony wallis
From: Barry Margolin
Subject: Re: iteration vs recursion Performance viewpoint
Date: 
Message-ID: <6Rdu9.17$eR6.1885@paloalto-snr1.gtei.net>
In article <···············@blue.cs.yorku.ca>,
ozan s yigit  <··@blue.cs.yorku.ca> wrote:
>······@dtpq.com (Christopher C. Stacy) on tail recursion
>optimization:
>
>> Some implementations might perform the optimizations, but portable
>> code must not rely on that.  In some implementations, your program
>> will blow out the stack.
>
>this is true, but it is also true that this optimization was known
>for what, about thirty years? i would be very curious to know which
>implementations of CL are unable to perform the optimization as
>well as others. 

Some implementors *intentionally* choose not to perform the optimization,
so that you can see all the state in the debugger.

-- 
Barry Margolin, ······@genuity.net
Genuity, Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Bruce Hoult
Subject: Re: iteration vs recursion Performance viewpoint
Date: 
Message-ID: <bruce-C600E9.09471630102002@copper.ipg.tsnz.net>
In article <·················@paloalto-snr1.gtei.net>,
 Barry Margolin <······@genuity.net> wrote:

> In article <···············@blue.cs.yorku.ca>,
> ozan s yigit  <··@blue.cs.yorku.ca> wrote:
> >······@dtpq.com (Christopher C. Stacy) on tail recursion
> >optimization:
> >
> >> Some implementations might perform the optimizations, but portable
> >> code must not rely on that.  In some implementations, your program
> >> will blow out the stack.
> >
> >this is true, but it is also true that this optimization was known
> >for what, about thirty years? i would be very curious to know which
> >implementations of CL are unable to perform the optimization as
> >well as others. 
> 
> Some implementors *intentionally* choose not to perform the optimization,
> so that you can see all the state in the debugger.

Notice that this is possible only with a recursive formulation of the 
algorithm.

An optimized tail-recursive function has *exactly* the same state as an 
iterative version.  It is no harder and no easier to debug.  But turn 
off the tail-recursion optimization and suddenly the recursive version 
has a large amount of state sitting there for you to figure out how it 
works.  You can't do the same thing with a loop.

-- Bruce
From: Joe Marshall
Subject: Re: iteration vs recursion Performance viewpoint
Date: 
Message-ID: <8z0en05u.fsf@ccs.neu.edu>
Bruce Hoult <·····@hoult.org> writes:

> An optimized tail-recursive function has *exactly* the same state as an 
> iterative version.  It is no harder and no easier to debug.  But turn 
> off the tail-recursion optimization and suddenly the recursive version 
> has a large amount of state sitting there for you to figure out how it 
> works.  You can't do the same thing with a loop.

Sure you can.  You simply need to record more state on each iteration
of the loop.
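Joe's suggestion can be sketched directly; the extra history list and the function name below are my own, for illustration:

```lisp
;; Sketch of recording loop state for debugging, per Joe's point.
;; Each iteration conses the current (element . len) pair onto a
;; history list that the caller (or a debugger) can inspect afterwards.
(defun lunghezza-logged (list)
  (let ((len 0)
        (history '()))
    (dolist (x list)
      (push (cons x len) history)  ; snapshot of this iteration's state
      (incf len))
    (values len (nreverse history))))
```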
From: Bruce Hoult
Subject: Re: iteration vs recursion Performance viewpoint
Date: 
Message-ID: <bruce-DF5470.09535801112002@copper.ipg.tsnz.net>
In article <············@ccs.neu.edu>, Joe Marshall <···@ccs.neu.edu> 
wrote:

> Bruce Hoult <·····@hoult.org> writes:
> 
> > An optimized tail-recursive function has *exactly* the same state as an 
> > iterative version.  It is no harder and no easier to debug.  But turn 
> > off the tail-recursion optimization and suddenly the recursive version 
> > has a large amount of state sitting there for you to figure out how it 
> > works.  You can't do the same thing with a loop.
> 
> Sure you can.  You simply need to record more state on each iteration
> of the loop.

Sure, but that's more work.

-- Bruce
From: William D Clinger
Subject: Re: iteration vs recursion Performance viewpoint
Date: 
Message-ID: <b84e9a9f.0210260401.7bcb76fd@posting.google.com>
Speaking of tail call optimization, ozan s yigit wrote:

> this is true, but it is also true that this optimization was known
> for what, about thirty years? i would be very curious to know which
> implementations of CL are unable to perform the optimization as
> well as others. 

I have never encountered an implementation of Common Lisp that is
able to perform tail call optimization except for a few special
cases.  The behavior shown below is typical, so no adverse
conclusion should be drawn concerning this particular system.

Will

--------

% lisp
Allegro CL Enterprise Edition 5.0.1 [SPARC] (6/29/99 16:47)
Copyright (C) 1985-1999, Franz Inc., Berkeley, CA, USA.  All Rights Reserved.
;; Optimization settings: safety 1, space 1, speed 1, debug 2.
;; For a complete description of all compiler switches given the
;; current optimization settings evaluate (EXPLAIN-COMPILER-SETTINGS).
USER(1): (defun bottom ()
           (declare (optimize (speed 3) (debugging 0) (safety 0)))
           ((lambda (x) (funcall x x)) #'(lambda (x) (funcall x x))))
BOTTOM
USER(2): (bottom)
Error: Stack overflow (signal 1000)
  [condition type: SYNCHRONOUS-OPERATING-SYSTEM-SIGNAL]

Restart actions (select using :continue):
 0: continue computation
 1: Return to Top Level (an "abort" restart)
 2: Abort #<PROCESS Initial Lisp Listener>
[1c] USER(3): :continue 1
USER(4): (compile 'bottom)
BOTTOM
NIL
NIL
USER(5): (bottom)
Error: Stack overflow (signal 1000)
  [condition type: SYNCHRONOUS-OPERATING-SYSTEM-SIGNAL]

Restart actions (select using :continue):
 0: continue computation
 1: Return to Top Level (an "abort" restart)
 2: Abort #<PROCESS Initial Lisp Listener>
[1c] USER(6): :continue 2
From: Helmut Eller
Subject: Re: iteration vs recursion Performance viewpoint
Date: 
Message-ID: <m27kg5wbj9.fsf@stud3.tuwien.ac.at>
······@qnci.net (William D Clinger) writes:

> I have never encountered an implementation of Common Lisp that is
> able to perform tail call optimization except for a few special
> cases.  The behavior shown below is typical, so no adverse
> conclusion should be drawn concerning this particular system.
> 
> Will
> 
> --------
> 
> % lisp
> Allegro CL Enterprise Edition 5.0.1 [SPARC] (6/29/99 16:47)
> Copyright (C) 1985-1999, Franz Inc., Berkeley, CA, USA.  All Rights Reserved.
> ;; Optimization settings: safety 1, space 1, speed 1, debug 2.
> ;; For a complete description of all compiler switches given the
> ;; current optimization settings evaluate (EXPLAIN-COMPILER-SETTINGS).

I wonder what the output of (EXPLAIN-COMPILER-SETTINGS) looks like.
Especially the switches COMPILER:TAIL-CALL-NON-SELF-MERGE-SWITCH and 
COMPILER:TAIL-CALL-SELF-MERGE-SWITCH.

Helmut
From: Duane Rettig
Subject: Re: iteration vs recursion Performance viewpoint
Date: 
Message-ID: <4adl1f54e.fsf@beta.franz.com>
······@qnci.net (William D Clinger) writes:

> Speaking of tail call optimization, ozan s yigit wrote:
> 
> > this is true, but it is also true that this optimization was known
> > for what, about thirty years? i would be very curious to know which
> > implementations of CL are unable to perform the optimization as
> > well as others. 
> 
> I have never encountered an implementation of Common Lisp that is
> able to perform tail call optimization except for a few special
> cases.  The behavior shown below is typical, so no adverse
> conclusion should be drawn concerning this particular system.

Don't assume that it's the implementation's fault before you debug your
own test.

> USER(1): (defun bottom ()
>            (declare (optimize (speed 3) (debugging 0) (safety 0)))
; =========================================^^^^^^^^^
;                                     [What 's this declaration?]
>            ((lambda (x) (funcall x x)) #'(lambda (x) (funcall x x))))
> BOTTOM
> USER(2): (bottom)
> Error: Stack overflow (signal 1000)
>   [condition type: SYNCHRONOUS-OPERATING-SYSTEM-SIGNAL]
> 
> Restart actions (select using :continue):
>  0: continue computation
>  1: Return to Top Level (an "abort" restart)
>  2: Abort #<PROCESS Initial Lisp Listener>
> [1c] USER(3): :continue 1
> USER(4): (compile 'bottom)

You're right to try the function again compiled.  I don't know what
the philosophy is of other CL implementations which include an
interpreter, but in Allegro CL's case, the major (perhaps only) reason
to retain interpreted code is for enhanced debugging.  That, coupled
with the fact that the interpreter will obviously ignore any _compiler_
declarations, makes it the obvious choice _not_ to tail-merge any
interpreted code.

In case you don't see the problem with your code, try this one:

(defun bottom ()
           (declare (optimize (speed 3) (debug 0) (safety 0)))
           ((lambda (x) (funcall x x)) #'(lambda (x) (funcall x x))))

An additional note:  This is a perfect example of why tail merging is not
necessarily a Good Thing.  This code has a bug in the sense that it is
an infinite loop.  And because of the optimization levels you gave it
(i.e. speed = 3 and safety = 0, which we never recommend), once it becomes
a loop, it is unbreakable; you have to use the "5 control-C's" failsafe in
order to force-break out of it.  And although I wouldn't have personally
written a loop like this, nor would I have given the safety=0 optimization
for a loop, I would have preferred to see a stack overflow than to see an
un-breakable loop...

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Adam Warner
Subject: Re: iteration vs recursion Performance viewpoint
Date: 
Message-ID: <pan.2002.10.26.12.50.53.930509@consulting.net.nz>
Hi William D Clinger,

> Speaking of tail call optimization, ozan s yigit wrote:
> 
>> this is true, but it is also true that this optimization was known for
>> what, about thirty years? i would be very curious to know which
>> implementations of CL are unable to perform the optimization as well as
>> others.
> 
> I have never encountered an implementation of Common Lisp that is able
> to perform tail call optimization except for a few special cases.  The
> behavior shown below is typical, so no adverse conclusion should be
> drawn concerning this particular system.

Well now you have encountered such an implementation. CMUCL is properly
tail call optimising (except for a few special cases). It has no problem
running your example code. Furthermore the declaration is unnecessary. To
disable the optimisation the debug level has to be set greater than 2.

Here are the exceptions:
http://cvs2.cons.org/ftp-area/cmucl/doc/cmu-user/compiler-hint.html#toc133

Regards,
Adam
From: William D Clinger
Subject: Re: iteration vs recursion Performance viewpoint
Date: 
Message-ID: <b84e9a9f.0210261428.3037745@posting.google.com>
I offer my apologies to Allegro CL and to CMUCL.

After I wrote:
> I have never encountered an implementation of Common Lisp that is able
> to perform tail call optimization except for a few special cases.

Adam Warner replied:
> Well now you have encountered such an implementation. CMUCL is properly
> tail call optimising (except for a few special cases). It has no problem
> running your example code. Furthermore the declaration is unnecessary. To
> disable the optimisation the debug level has to be set greater than 2.

I am delighted to hear this.  Thank you.

Concerning the transcript I posted, Helmut Eller wrote:
> I wonder what the output of (EXPLAIN-COMPILER-SETTINGS) looks like.
> Especially the switches COMPILER:TAIL-CALL-NON-SELF-MERGE-SWITCH and
> COMPILER:TAIL-CALL-SELF-MERGE-SWITCH.

That's not very interesting (see below), but what is interesting is that
I had typed "debugging" instead of "debug" in the declaration.  (This is
something I do all too regularly, like typing "lamdba".)  It turns out
that Allegro Common Lisp handles this tail recursion just fine when
compiled:

Allegro CL Enterprise Edition 5.0.1 [SPARC] (6/29/99 16:47)
Copyright (C) 1985-1999, Franz Inc., Berkeley, CA, USA.  All Rights Reserved.
;; Optimization settings: safety 1, space 1, speed 1, debug 2.
;; For a complete description of all compiler switches given the
;; current optimization settings evaluate (EXPLAIN-COMPILER-SETTINGS).
USER(1): (defun omega ()
           (declare (optimize (speed 3) (safety 0) (space 0) (debug 0)))
           ((lambda (x) (funcall x x)) #'(lambda (x) (funcall x x))))
OMEGA
USER(2): (compile 'omega)
OMEGA
NIL
NIL
USER(3): (omega)
^C^C^C

Please accept my apologies.

Will

--------

USER(1): COMPILER:TAIL-CALL-NON-SELF-MERGE-SWITCH
#<Function SPEED > 1 AND DEBUG < 2 @ #x406651a>
USER(2): COMPILER:TAIL-CALL-SELF-MERGE-SWITCH
#<Function SPEED > 0 @ #x4072d8a>
USER(3): (explain-compiler-settings)
;; 
;; Compiler optimization quality settings:
;;   safety   1
;;   space    1
;;   speed    1
;;   debug    2
;; 
;; To instruct the compiler globally to produce the fastest but least
;; safe and least debuggable compiled code, evaluate the following
;; before compiling:
;; 
;;    (proclaim '(optimize (speed 3) (safety 1) (space 0) (debug 0)))
;; 
;; or put the following at the top of any file that you wish to be
;; compiled in this manner:
;; 
;;    (eval-when (compile)
;;      (declaim (optimize (speed 3) (safety 1) (space 0) (debug 0))))
;; 
;; It is recommended that a combination of (speed 3) and (safety 0)
;; never be used globally.
;; 
;; For the most debuggable (and yet reasonably fast) code, use
;; 
;;    (proclaim '(optimize (speed 2) (safety 1) (space 1) (debug 3)))
;; 
;; or its equivalent.
;; These are the values returned by the compiler switch functions given
;; the current speed, safety, space and debug optimization qualities:
;;
;; COMPILER:COMPILE-FORMAT-STRINGS-SWITCH           NIL
;; COMPILER:DECLARED-FIXNUMS-REMAIN-FIXNUMS-SWITCH  NIL
;; COMPILER:GENERATE-INLINE-CALL-TESTS-SWITCH       T
;; COMPILER:GENERATE-INTERRUPT-CHECKS-SWITCH        T
;; COMPILER:INTERNAL-OPTIMIZE-SWITCH                T
;; COMPILER:OPTIMIZE-FSLOT-VALUE-SWITCH             T
;; COMPILER:PEEPHOLE-OPTIMIZE-SWITCH                T
;; COMPILER:SAVE-ARGLIST-SWITCH                     T
;; COMPILER:SAVE-LOCAL-NAMES-SWITCH                 T
;; COMPILER:SAVE-LOCAL-SCOPES-SWITCH                NIL
;; COMPILER:TAIL-CALL-NON-SELF-MERGE-SWITCH         NIL
;; COMPILER:TAIL-CALL-SELF-MERGE-SWITCH             T
;; COMPILER:TRUST-DECLARATIONS-SWITCH               NIL
;; COMPILER:TRUST-DYNAMIC-EXTENT-DECLARATIONS-SWITCH T
;; COMPILER:VERIFY-ARGUMENT-COUNT-SWITCH            T
;; COMPILER:VERIFY-CAR-CDR-SWITCH                   T
;; COMPILER:VERIFY-NON-GENERIC-SWITCH               T
;; COMPILER:VERIFY-SYMBOL-VALUE-IS-BOUND-SWITCH     T
;;
;; The compiler is not reporting type propagation.
;; The compiler is not reporting failure to inline function calls.
;; The compiler is not reporting boxing of machine data types.
;; The compiler is not reporting variable location assignments.
;; These may be controlled with the declaration or proclamation
;;    (:explain [:types|:notypes] [:calls|:nocalls]
;;              [:boxing|:noboxing] [:variables|:novariables]))
;;
;;---
;; Source file recording information is being saved during compilation.
;;    (*RECORD-SOURCE-FILE-INFO* is T)
;;---
;; Cross-reference information is being saved during compilation.
;;    (*RECORD-XREF-INFO* is T)
From: Jochen Schmidt
Subject: Re: iteration vs recursion Performance viewpoint
Date: 
Message-ID: <apm5vk$f79$05$1@news.t-online.com>
William D Clinger wrote:

> I offer my apologies to Allegro CL and to CMUCL.
> 
> After I wrote:
>> I have never encountered an implementation of Common Lisp that is able
>> to perform tail call optimization except for a few special cases.
> 
> Adam Warner replied:
>> Well now you have encountered such an implementation. CMUCL is properly
>> tail call optimising (except for a few special cases). It has no problem
>> running your example code. Furthermore the declaration is unnecessary. To
>> disable the optimisation the debug level has to be set greater than 2.
> 
> I am delighted to hear this.  Thank you.

The example works in LispWorks too.

After a first test with CLISP it seems as if it doesn't do this 
optimization.

I'm sure SBCL and SCL work too, because both are based on CMUCL.

ciao,
Jochen

--
http://www.dataheaven.de
From: Kaz Kylheku
Subject: Re: iteration vs recursion Performance viewpoint
Date: 
Message-ID: <cf333042.0210251310.3e75b1b3@posting.google.com>
Cesare Rocchi <······@itc.it> wrote in message news:<·················@itc.it>...
> Let me start a sibling thread about iteration and recursion:
> 
> ;; ### Here is the code
> 
> (setf pippo nil)
> 
> (loop for x from 1 to 5000
>     collect (push x pippo))
> 
> (defun lunghezza (list)
> 	   (if (null list)
> 	       0
> 	     (+ 1 (lunghezza (cdr list)))))
> 
> (defun lunghezza1 (list &optional (len 0))
> 	   (if (null list)
> 	       len
> 	     (lunghezza1 (rest list) (+ 1 len))))

But now you have wiped out the readability and understandability
advantage of recursion by creating a context variable. You also have
done 90% of the work of making an iterative version:

  (defun lunghezza-iter (list)
    (do ((len 0 (1+ len))
         (list list (cdr list)))
        ((null list) len)))

This is easier to understand because it's obvious how the next value
of len is obtained from the prior value. In the recursive version this
is slightly obfuscated through the parameter passing mechanism; you
are instantiating a new len just to obtain a new version that is one
greater than the old.

The beauty of recursion is that you can eliminate some of the
distracting pieces of the iterative version, like dummy loop
variables. Thus the code for computing the object follows straight
from its abstract recursive definition: ``The length of a list is zero
if it is empty, or one plus the length of the rest of the list otherwise.''

How would you state the ``reverse engineered'' specification from the
tail-recursive version? Let's see: ``the length of a tuple consisting of
a list and an associated variable L is L if the list is empty, or else
it is the length of the rest of the list associated with L + 1.  When
L is zero, then the length is precisely the natural length of the
list, meaning the count of its items.'' :)

If you care about efficiency rather than elegance, you might as well
go for iteration. Elegance is not piecewise negotiable relative to
performance. :)
From: Barry Margolin
Subject: Re: iteration vs recursion Performance viewpoint
Date: 
Message-ID: <8%cu9.7$eR6.1385@paloalto-snr1.gtei.net>
In article <·················@itc.it>, Cesare Rocchi  <······@itc.it> wrote:
>I wanna remember that C compilers, for example, don't support tail
>recursion. 
>That's why C programmers, sometimes and somehow, don't believe in
>recursion/iteration.

I think GCC has an option to do tail-recursion optimization.

-- 
Barry Margolin, ······@genuity.net
Genuity, Woburn, MA
From: Tim Bradshaw
Subject: Re: iteration vs recursion Performance viewpoint
Date: 
Message-ID: <ey365vlidh1.fsf@cley.com>
* Barry Margolin wrote:

> I think GCC has an option to do tail-recursion optimization.

For many functions this must be very hard to get in C!  For instance,
something like this:

(defun foo (x)
  (let ((y (cons x nil)))
    (bar y)))

is pretty trivially TR, but is this:

void bar(char *y);

void foo (char x)
{
  char y[2];
  y[0] = x; y[1] = '\0';
  bar(y);
}

I'm not sure it is, since y must be deallocated after bar returns.
(But my C is pretty rusty, so I may well be wrong).

--tim
From: Vassil Nikolov
Subject: Re: iteration vs recursion Performance viewpoint
Date: 
Message-ID: <u1y69ib03.fsf@poboxes.com>
    On 29 Oct 2002 13:25:30 +0000, Tim Bradshaw <···@cley.com> said:

    [...]
    TB> void bar(char *y);

    TB> void foo (char x)
    TB> {
    TB>   char y[2];
    TB>   y[0] = x; y[1] = '\0';
    TB>   bar(y);
    TB> }

    TB> I'm not sure it is, since y must be deallocated after bar returns.

But isn't there a clever way still to manage the stack frames to
allow tail call optimization?  Anyway, how is the above different
(with regard to the tail call) from

  void bar (char y);

  void foo (char x) {
    char y;
    y = x;
    bar(y);
  }

(where y is not placed into a register, of course).

---Vassil.

-- 
For an M-person job assigned to an N-person team, only rarely M=N.
From: Thomas F. Burdick
Subject: Re: iteration vs recursion Performance viewpoint
Date: 
Message-ID: <xcvvg3lcbmj.fsf@conquest.OCF.Berkeley.EDU>
Tim Bradshaw <···@cley.com> writes:

> * Barry Margolin wrote:
> 
> > I think GCC has an option to do tail-recursion optimization.
> 
> For many functions this must be very hard to get in C!

A lot of things are hard to do in a C compiler without breaking C
semantics.  GCC's solution seems to be "don't break the semantics most
of the time".  I'm not sure about the specific example you posted, but
I have seen GCC generate obviously incorrect code from all sorts of
optimizations, including tail call merging, far too many times to ever
turn -O up above 1 for any of my code.  Somewhat amusingly, this means
it's often easier for me to write fast code in CL.  Well, that or Fortran.

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: Vassil Nikolov
Subject: Re: iteration vs recursion Performance viewpoint
Date: 
Message-ID: <uhefap572.fsf@poboxes.com>
    On Thu, 24 Oct 2002 17:46:12 +0200, Cesare Rocchi <······@itc.it> said:

    [...]
    CR> I wanna remember that C compilers, for example, don't support tail
    CR> recursion. 

C compilers support recursion, including tail recursion.  But
obviously you wanted to say that they don't support tail recursion
_optimization_.  Well, if I remember correctly, Sun's C compiler
was able to do that as early as the late 1980's (with -O3 or -O4).

---Vassil.

-- 
For an M-person job assigned to an N-person team, only rarely M=N.