From: David Steuber "The Interloper"
Subject: Tail recursion:  Is it just a fancy name for iteration?
Date: 
Message-ID: <362d211f.89120418@news.newsguy.com>
The Lisp FAQ (which can be chased down via the ALU website) shows an
example comparing a recursive factorial function with a tail-recursive
factorial function.  The tail-recursive one looks iterative to me.  Is
that all tail recursion is?  Just a fancy name for iteration?  What am
I missing?

--
David Steuber (ver 1.31.2a)
http://www.david-steuber.com
To reply by e-mail, replace trashcan with david.

So I have this chicken, see?  And it hatched from this egg, see?  But
the egg wasn't laid by a chicken.  It was cross-laid by a turkey.

From: Kellomäki Pertti
Subject: Re: Tail recursion:  Is it just a fancy name for iteration?
Date: 
Message-ID: <xfz67des92i.fsf@kuovi.cs.tut.fi>
········@david-steuber.com (David Steuber "The Interloper") writes:
> Is that all tail recursion is?  Just a fancy name for iteration?  What am
> I missing?

You are pretty much correct. A tail call is essentially a goto that
passes arguments. Tail calls are fairly easy for a compiler to detect,
so you can require that tail calls do not make you run out of
memory. See Section 3.5 "Proper tail recursion" in R5RS, available at
http://www.schemers.org/Documents/.
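
To make that concrete, a tiny sketch (in Common Lisp syntax, since that
is what I had to hand): the recursive call below is the last thing the
function does, so a tail-call-optimizing compiler can compile it as
"update the arguments and jump" -- a plain loop.

(defun sum-to (n acc)
  ;; Sum 1..N with an accumulator.  The self-call is in tail position,
  ;; so under tail-call optimization this runs in constant stack space.
  (if (zerop n)
      acc
      (sum-to (1- n) (+ acc n))))

;; (sum-to 1000000 0) => 500000500000, without the stack growing,
;; given such a compiler.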
-- 
pertti
From: Kellomäki Pertti
Subject: Re: Tail recursion:  Is it just a fancy name for iteration?
Date: 
Message-ID: <xfz4ssys8sc.fsf@kuovi.cs.tut.fi>
Kellomäki Pertti <··@kuovi.cs.tut.fi> writes:
> You are pretty much correct. A tail call is essentially a goto that
> passes arguments. Tail calls are fairly easy for a compiler to detect,
> so you can require that tail calls do not make you run out of
> memory. 

Before anyone else corrects me, I will correct myself. This is
comp.lang.lisp, not comp.lang.scheme as I thought, so I should
probably add that the requirement to make tail calls iterative is
specific to Scheme, not Lisp in general. This was discussed a while
back, I think.
-- 
pertti
From: Rainer Joswig
Subject: Re: Tail recursion:  Is it just a fancy name for iteration?
Date: 
Message-ID: <joswig-2110981405020001@pbg3.lavielle.com>
In article <···············@kuovi.cs.tut.fi>, Kellomäki Pertti
<··@kuovi.cs.tut.fi> wrote:

> Before anyone else corrects me, I will correct myself. This is
> comp.lang.lisp, not comp.lang.scheme as I thought, so I should
> probably add that the requirement to make tail calls iterative is
> specific to Scheme, not Lisp in general. This was discussed a while
> back, I think.

Some Common Lisp compilers will do this, too.

-- 
http://www.lavielle.com/~joswig
From: Barry Margolin
Subject: Re: Tail recursion:  Is it just a fancy name for iteration?
Date: 
Message-ID: <%3mX1.103$Sg6.782611@burlma1-snr1.gtei.net>
In article <·······················@pbg3.lavielle.com>,
Rainer Joswig <······@lavielle.com> wrote:
>Some Common Lisp compilers will do this, too.

But they're not required to, so portable programs can't depend on it.

-- 
Barry Margolin, ······@bbnplanet.com
GTE Internetworking, Powered by BBN, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
From: Rainer Joswig
Subject: Re: Tail recursion:  Is it just a fancy name for iteration?
Date: 
Message-ID: <joswig-2110981658500001@pbg3.lavielle.com>
In article <····················@burlma1-snr1.gtei.net>, Barry Margolin
<······@bbnplanet.com> wrote:

> In article <·······················@pbg3.lavielle.com>,
> Rainer Joswig <······@lavielle.com> wrote:
> >Some Common Lisp compilers will do this, too.
> 
> But they're not required to, so portable programs can't depend on it.

There should be a feature indicator (an entry on *FEATURES*) for this.

-- 
http://www.lavielle.com/~joswig
From: William Paul Vrotney
Subject: Re: Tail recursion:  Is it just a fancy name for iteration?
Date: 
Message-ID: <vrotneyF17LJ0.H9G@netcom.com>
In article <·······················@pbg3.lavielle.com> ······@lavielle.com
(Rainer Joswig) writes:

> 
> In article <···············@kuovi.cs.tut.fi>, Kellomäki Pertti
> <··@kuovi.cs.tut.fi> wrote:
> 
> > Before anyone else corrects me, I will correct myself. This is
> > comp.lang.lisp, not comp.lang.scheme as I thought, so I should
> > probably add that the requirement to make tail calls iterative is
> > specific to Scheme, not Lisp in general. This was discussed a while
> > back, I think.
> 
> Some Common Lisp compilers will do this, too.
> 

Yes, but it is not specified, so you may not portably rely on it.  Not that
I find this important in Common Lisp.  :-)


-- 

William P. Vrotney - ·······@netcom.com
From: Rainer Joswig
Subject: Re: Tail recursion:  Is it just a fancy name for iteration?
Date: 
Message-ID: <joswig-2210980722200001@194.163.195.67>
In article <·················@netcom.com>, ·······@netcom.com (William
Paul Vrotney) wrote:

> In article <·······················@pbg3.lavielle.com> ······@lavielle.com
> (Rainer Joswig) writes:
> 
> > 
> > In article <···············@kuovi.cs.tut.fi>, Kellomäki Pertti
> > <··@kuovi.cs.tut.fi> wrote:
> > 
> > > Before anyone else corrects me, I will correct myself. This is
> > > comp.lang.lisp, not comp.lang.scheme as I thought, so I should
> > > probably add that the requirement to make tail calls iterative is
> > > specific to Scheme, not Lisp in general. This was discussed a while
> > > back, I think.
> > 
> > Some Common Lisp compilers will do this, too.
> > 
> 
> Yes, but it is not specified, so you may not portably rely on it.  Not that
> I find this important in Common Lisp.  :-)

Everything that enables a developer to express efficient computation
in an organized fashion is important.

-- 
http://www.lavielle.com/~joswig
From: David Thornley
Subject: Re: Tail recursion:  Is it just a fancy name for iteration?
Date: 
Message-ID: <G02Y1.1344$a6.4078488@ptah.visi.com>
In article <·······················@194.163.195.67>,
Rainer Joswig <······@lavielle.com> wrote:
>In article <·················@netcom.com>, ·······@netcom.com (William
>Paul Vrotney) wrote:
>
>> In article <·······················@pbg3.lavielle.com> ······@lavielle.com
>> (Rainer Joswig) writes:
>> 
>> > 
>> > In article <···············@kuovi.cs.tut.fi>, Kellomäki Pertti
>> > <··@kuovi.cs.tut.fi> wrote:
>> > 
>> > > probably add that the requirement to make tail calls iterative is
>> > > specific to Scheme, not Lisp in general. This was discussed a while
>> > > back, I think.
>> > 
>> > Some Common Lisp compilers will do this, too.
>> > 
>> 
>> Yes, but it is not specified, so you may not portably rely on it.  Not that
>> I find this important in Common Lisp.  :-)
>
>Everything that enables a developer to express efficient computation
>in an organized fashion is important.
>
IMNSHO, it's just a part of the larger question: does this implementation
produce good enough machine code for my purposes?  There are a lot of
things an implementation might or might not do.  Tail recursion is
just one of them.

The reason I don't care that much about tail recursion, in general,
is that I don't use it much.  It is usually not the natural way
to write a recursive function, and if I'm going to contort the
function into tail-recursive form it is normally no more work,
and possibly less, to convert it to some sort of iterative form.
I believe that one difference between Common Lisp and Scheme is
that CL has umpteen different iterative forms, and Scheme has to
use tail recursion.

Consider (being written in an editor that doesn't grok Lisp, and
not tested):

(defun factorial1 (n)
  (if (< n 2)
      1
      (* n (factorial1 (1- n)))))

(defun factorial2 (n)
  (let ((result 1))
    (dotimes (i n result)
      (setf result (* result (1+ i))))))

(defun factorial-helper (result n)
  (if (< n 2) result
              (factorial-helper (* n result) (1- n))))
(defun factorial3 (n)
  (factorial-helper 1 n))

The first one is the naive recursion, easy to understand but expensive
to run.  The second one is iteration.  In this case, we see that for i from
0 to n-1 we're multiplying by (i + 1), so we're multiplying the numbers
from 1 to n.  I think the third is less clear, although somebody with
a lot more practice in tail recursion might disagree.  It isn't likely
to be more efficient than the second one.

Not to mention that, if I want really fast factorials, I either use
a pre-calculated table or memoize the function (see Norvig's book
for a discussion of memoization).
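
For the curious, a minimal sketch of the memoized version (my own toy,
not Norvig's code -- his MEMOIZE is a general-purpose wrapper):

(let ((cache (make-hash-table)))
  (defun memo-factorial (n)
    ;; Return the cached result if there is one; otherwise compute,
    ;; cache, and return it.
    (or (gethash n cache)
        (setf (gethash n cache)
              (if (< n 2)
                  1
                  (* n (memo-factorial (1- n))))))))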

This, IMNSHO, is likely why the CL committee declined to put this,
or any other code generation requirement, into the standard.  Tail
recursion has no effect on the semantics of a program, which is what
most standards restrict themselves to.  It only affects efficiency
(and then only with code written specifically for it), and almost
no language standard has anything to say about code generation.


--
David H. Thornley                        | These opinions are mine.  I
·····@thornley.net                       | do give them freely to those
http://www.thornley.net/~thornley/david/ | who run too slowly.       O-
From: Rainer Joswig
Subject: Re: Tail recursion:  Is it just a fancy name for iteration?
Date: 
Message-ID: <joswig-2310982148180001@194.163.195.67>
In article <·····················@ptah.visi.com>, ········@visi.com (David
Thornley) wrote:

> >Everything that enables a developer to express efficient computation
> >in an organized fashion is important.
> >
> IMNSHO, it's just a part of the larger question: does this implementation
> produce good enough machine code for my purposes?  There are a lot of
> things an implementation might or might not do.  Tail recursion is
> just one of them.

Hmm, I once saw a case in MCL where tail recursion produced the
fastest code.

> The reason I don't care that much about tail recursion, in general,
> is that I don't use it much.  It is usually not the natural way
> to write a recursive function, and if I'm going to contort the
> function into tail-recursive form it is normally no more work,
> and possibly less, to convert it to some sort of iterative forms.

Actually I think that when it comes to hairy iteration nothing
beats recursion in clarity. I can write tail-recursive
functions and I can read my code years later (well, I hope).

> Tail
> recursion has no effect on the semantics of a program,

Almost. The recursive program may not work, due to lack of
stack space.

> which is what
> most standards restrict themselves to.  It only affects efficiency

I'd like to avoid the obvious problems (stack overflow) and still code
using recursion.

-- 
http://www.lavielle.com/~joswig
From: Georg Bauer
Subject: Re: Tail recursion:  Is it just a fancy name for iteration?
Date: 
Message-ID: <gb-2510981144270001@jill.westfalen.de>
In article <·······················@194.163.195.67>, ······@lavielle.com
(Rainer Joswig) wrote:

>Actually I think that when it comes to hairy iteration nothing
>beats recursion in clarity. I can write tail-recursive
>functions and I can read my code years later (well, I hope).

And there is one very important application of tail-recursion: making a
complex recursive function partially tail-recursive, as in the well-known
optimization to Quicksort: one of the two recursive calls can be laid out
tail-recursively and so can be optimized. So if you sort the two halves and
make the call for the bigger one tail-recursive, you optimize away much
stack space and many function calls.

And as Quicksort shows, there isn't always a nice iterative form to a
recursive function.
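
To sketch what I mean (my own toy code, with naive last-element
pivoting to keep it short): the real recursive call goes to the
*smaller* partition, and the larger one is handled by looping -- the
hand-made tail call -- so the stack depth stays O(log n) even on
unlucky inputs.

(defun partition (vec lo hi)
  ;; Lomuto partition using VEC[HI] as the pivot; returns the pivot's
  ;; final index.
  (let ((pivot (aref vec hi))
        (i lo))
    (loop for j from lo below hi
          when (< (aref vec j) pivot)
            do (rotatef (aref vec i) (aref vec j))
               (incf i))
    (rotatef (aref vec i) (aref vec hi))
    i))

(defun quicksort (vec &optional (lo 0) (hi (1- (length vec))))
  (loop while (< lo hi)
        do (let ((p (partition vec lo hi)))
             (cond ((< (- p lo) (- hi p))
                    (quicksort vec lo (1- p))  ; left is smaller: recurse
                    (setf lo (1+ p)))          ; then loop on the right
                   (t
                    (quicksort vec (1+ p) hi)  ; right is smaller: recurse
                    (setf hi (1- p))))))       ; then loop on the left
  vec)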

bye, Georg

-- 
http://www.westfalen.de/hugo/
From: Barry Margolin
Subject: Re: Tail recursion:  Is it just a fancy name for iteration?
Date: 
Message-ID: <UEIY1.75$VI.1302270@burlma1-snr1.gtei.net>
In article <···················@jill.westfalen.de>,
Georg Bauer <··@hugo.westfalen.de> wrote:
>And there is one very important application of tail-recursion: making a
>complex recursive function partially tail-recursive, as in the well-known
>optimization to Quicksort: one of the two recursive calls can be laid out
>tail-recursively and so can be optimized. So if you sort the two halves and
>make the call for the bigger one tail-recursive, you optimize away much
>stack space and many function calls.

If it's not fully tail-recursive, what's the point?  The main benefit of
tail recursion is that you can write recursive calls without worrying about
stack overflows.  Since quicksort will recurse to the same depth on both
halves, making it tail-recursive only for one of the calls will not result
in fixed stack use.

-- 
Barry Margolin, ······@bbnplanet.com
GTE Internetworking, Powered by BBN, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
From: Marco Antoniotti
Subject: Re: Tail recursion:  Is it just a fancy name for iteration?
Date: 
Message-ID: <lwzpaj7kw2.fsf@galvani.parades.rm.cnr.it>
Barry Margolin <······@bbnplanet.com> writes:

> In article <···················@jill.westfalen.de>,
> Georg Bauer <··@hugo.westfalen.de> wrote:
> >And there is one very important application of tail-recursion: making a
> >complex recursive function partially tail-recursive, as in the well-known
> >optimization to Quicksort: one of the two recursive calls can be laid out
> >tail-recursively and so can be optimized. So if you sort the two halves and
> >make the call for the bigger one tail-recursive, you optimize away much
> >stack space and many function calls.
> 
> If it's not fully tail-recursive, what's the point?  The main benefit of
> tail recursion is that you can write recursive calls without worrying about
> stack overflows.  Since quicksort will recurse to the same depth on both
> halves, making it tail-recursive only for one of the calls will not result
> in fixed stack use.

Moreover, if you are really looking for SPACE optimizations as well
as TIME ones, you'd probably be better off using a well-coded version
of Heapsort.

Cheers

-- 
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - (0)6 - 68 10 03 16, fax. +39 - (0)6 - 68 80 79 26
http://www.parades.rm.cnr.it
From: Georg Bauer
Subject: Re: Tail recursion:  Is it just a fancy name for iteration?
Date: 
Message-ID: <gb-2610982108290001@jill.westfalen.de>
In article <··············@galvani.parades.rm.cnr.it>, Marco Antoniotti
<·······@galvani.parades.rm.cnr.it> wrote:

>Moreover, if you are really looking for SPACE optimizations as well
>as TIME ones, you'd probably be better off using a well-coded version
>of Heapsort.

Quicksort was an _example_ - of course, it is easy to avoid Quicksort (and
in most cases it is quite useful to avoid it), but there are often
situations where you can't avoid recursion. But you can reduce the
recursions by converting at least some of them to tail-recursive versions.
Think about recursive parsers to get a more real-world example. Especially
in an interpreted macro-language, extra recursion levels are often
annoying (since you share your stack with the application).

bye, Georg

-- 
http://www.westfalen.de/hugo/
From: Gareth McCaughan
Subject: Re: Tail recursion:  Is it just a fancy name for iteration?
Date: 
Message-ID: <86yaq3wep1.fsf@g.pet.cam.ac.uk>
Barry Margolin wrote:

> If it's not fully tail-recursive, what's the point?  The main benefit of
> tail recursion is that you can write recursive calls without worrying about
> stack overflows.  Since quicksort will recurse to the same depth on both
> halves, making it tail-recursive only for one of the calls will not result
> in fixed stack use.

No, but if you always tail-recurse on the larger sublist then
you get stack use that's O(log n) instead of O(n), and that's
well worth doing. (If e.g. you can guarantee that your array
is no bigger than 2^32 entries, and that a recursion depth of
32 won't overflow the stack -- which is pretty likely -- then
you win.)

It simply isn't true that "quicksort will recurse to the same
depth on both halves", because you can't guarantee that they
are actually "halves". That's the whole point.

-- 
Gareth McCaughan       Dept. of Pure Mathematics & Mathematical Statistics,
·····@dpmms.cam.ac.uk  Cambridge University, England.
From: Gareth McCaughan
Subject: Re: Tail recursion:  Is it just a fancy name for iteration?
Date: 
Message-ID: <86emrvurka.fsf@g.pet.cam.ac.uk>
Pierpaolo Bernardi wrote:

[I said:]
> : No, but if you always tail-recurse on the larger sublist then
>                                             ^^^^^^
> : you get stack use that's O(log n) instead of O(n),
> 
> <nitpick>
> Actually, you should recurse on the smaller sublist to get the
> O(log n) bound.
> </nitpick>

And iterate -- i.e., tail recurse -- on the larger, yes?
Hence my comment.

Or have I got myself backwards on this? It wouldn't be the
first time.

-- 
Gareth McCaughan       Dept. of Pure Mathematics & Mathematical Statistics,
·····@dpmms.cam.ac.uk  Cambridge University, England.
From: Pierpaolo Bernardi
Subject: Re: Tail recursion:  Is it just a fancy name for iteration?
Date: 
Message-ID: <909510102.334119@fire-int>
Gareth McCaughan (·····@dpmms.cam.ac.uk) wrote:
: Pierpaolo Bernardi wrote:

: [I said:]
: > : No, but if you always tail-recurse on the larger sublist then
: >                                             ^^^^^^
: > : you get stack use that's O(log n) instead of O(n),
: > 
: > <nitpick>
: > Actually, you should recurse on the smaller sublist to get the
: > O(log n) bound.
: > </nitpick>

: And iterate -- i.e., tail recurse -- on the larger, yes?
: Hence my comment.

: Or have I got myself backwards on this? It wouldn't be the
: first time.

You are right. I read your article before the one it refers to, so I
misunderstood your point.

Anyway, in this example the tail-recursiveness is irrelevant.  The
O(n) vs. O(log n) bounds are the same even if the calls are not t.r.

You just have to process the small part before the larger one to
have the O(n) bound.

Pierpaolo
From: Pierpaolo Bernardi
Subject: Re: Tail recursion:  Is it just a fancy name for iteration?
Date: 
Message-ID: <909510321.800691@fire-int>
Pierpaolo Bernardi (········@cli.di.unipi.it) (that's me) wrote:

: You just have to process the small part before the larger one to
: have the O(n) bound.
           ^^^^

should be:  O(log n)

Pierpaolo
From: Gareth McCaughan
Subject: Re: Tail recursion:  Is it just a fancy name for iteration?
Date: 
Message-ID: <86n26g4voz.fsf@g.pet.cam.ac.uk>
Pierpaolo Bernardi wrote:

> Anyway, in this example the tail-recursiveness is irrelevant.  The
> O(n) vs. O(log n) bounds are the same even if the calls are not t.r.
> 
> You just have to process the small part before the larger one to
> have the O(n) bound.
           ^^^^
           (corrected in a follow-up to O(log n) )

No, that's not true. Suppose you're "quicksort"ing an array
that starts off sorted, and you use the broken pivoting method
of using the first item in each subarray as the pivot for that
subarray. Then your call tree goes something like this:

qsort(1 2 3 4 5)
 |
 +- qsort(1)
 |
 `- qsort(2 3 4 5)
     |
     +- qsort(2)
     |
     `- qsort(3 4 5)
         |
         +- qsort(3)
         |
         `- qsort(4 5)
             |
             +- qsort(4)
             |
             `- qsort(5)

and no matter what order you do each pair of calls in, the
path (12345) (2345) (345) (45) (5) is always going to take
space of order n, unless those are properly optimised tail calls.

-- 
Gareth McCaughan       Dept. of Pure Mathematics & Mathematical Statistics,
·····@dpmms.cam.ac.uk  Cambridge University, England.
From: Pierpaolo Bernardi
Subject: Re: Tail recursion:  Is it just a fancy name for iteration?
Date: 
Message-ID: <909595513.958593@fire-int>
Gareth McCaughan (·····@dpmms.cam.ac.uk) wrote:
: Pierpaolo Bernardi wrote:

: [some rubbish]

: No, that's not true. Suppose you're "quicksort"ing an array
: that starts off sorted, and you use the broken pivoting method
: of using the first item in each subarray as the pivot for that
: subarray. Then your call tree goes something like this:

Yes, yes. You are right (again).

Don't know what I was thinking. 
I shouldn't post when I'm in a hurry.

Sorry.

Pierpaolo
From: Georg Bauer
Subject: Re: Tail recursion:  Is it just a fancy name for iteration?
Date: 
Message-ID: <gb-2610982055320001@jill.westfalen.de>
In article <···················@burlma1-snr1.gtei.net>, Barry Margolin
<······@bbnplanet.com> wrote:

>If it's not fully tail-recursive, what's the point?

You don't need the full stack resources that you would need for a fully
recursive version. Think about it: if only the "shallow" call hierarchies
are executed fully recursively, the maximum stack needed is only that of
these shallow calls accumulated. Every recursion you eliminate (even if
some other recursive calls are still there) reduces the needed resources.

Tail-recursion elimination removes tail-recursive calls. So every normal
recursion you convert into a tail-recursive version reduces the needed
resources. That's a clear winner situation in my book. As the example I
gave: Quicksort becomes _much_ faster and needs far fewer resources if you
tune it that way.

bye, Georg
From: Howard R. Stearns
Subject: Re: Tail recursion:  Is it just a fancy name for iteration?
Date: 
Message-ID: <36361962.B0C879B4@elwood.com>
Rainer Joswig wrote:
> 
> In article <·····················@ptah.visi.com>, ········@visi.com (David
> Thornley) wrote:
> ...
> > Tail
> > recursion has no effect on the semantics of a program,
> 
> Almost. The recursive program may not work, due to lack of
> stack space.

This depends on how you define semantics. (!)  ANSI CL doesn't actually
require garbage collection, either.  In both cases, any memory failure
of a correct program is due to implementation limits, not the language
definition.  

> 
> > which is what
> > most standards restrict themselves to.  It only affects efficiency
> 
> I'd like to avoid the obvious problems (stack overflow) and still code
> using recursion.

Fair enough, but like garbage collection, the idea is that this property
of the implementation is something for the market to work out.
From: Barry Margolin
Subject: Re: Tail recursion:  Is it just a fancy name for iteration?
Date: 
Message-ID: <gKpZ1.64$yc2.913030@burlma1-snr1.gtei.net>
In article <·················@elwood.com>,
Howard R. Stearns <······@elwood.com> wrote:
>This depends on how you define semantics. (!)  ANSI CL doesn't actually
>require garbage collection, either.  In both cases, any memory failure
>of a correct program is due to implementation limits, not the language
>definition.  

Good point.  I think language specifiers have rarely found the need to
specify garbage collection explicitly simply because the lack of a "free"
primitive is such an obvious indicator that it's needed in anything but toy
implementations.  But stack allocation of activation records is so
ingrained with implementors that it's necessary to make a special point to
make them do tail-call optimization.  Together, these two features allow
you to do pretty accurate analysis of the memory needs of an algorithm,
which allows you to determine whether it uses bounded or unbounded memory.

-- 
Barry Margolin, ······@bbnplanet.com
GTE Internetworking, Powered by BBN, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
From: David Steuber "The Interloper"
Subject: Stacks, lists, and the cost of consing (was Re: Tail recursion)
Date: 
Message-ID: <364086ab.95230824@news.newsguy.com>
Robert Sedgewick, in his book "Algorithms in C++", introduces the idea
of the Abstract Data Type and the Concrete Data Type.  The stack, as
an ADT, could be implemented as an array or as a list, each being a CDT.

As a C++ programmer, I've gotten used to the notion of the stack as a
fixed-size array that call arguments are pushed onto and the pointer to
which is then moved to make room for the automatic variables.  I've
been thinking the same way for Lisp.

While reading the posts in the tail recursion thread (who would have
thought a simple question would generate such interesting responses?),
I was struck by a thought.  I picked myself up, brushed myself off,
and had a quick look at it.

I know there is nothing I can think of that someone else hasn't
already implemented.  So it seems certain to me that people out there
will already have heard this idea, which I think of as mine, even though
someone else must have already tried it out.

The fundamental structure in Lisp (even though others are supported as
primitives) is the list.  Suppose the call stack was not implemented
as a typical fixed size array (although that is almost certainly the
most efficient implementation).  Suppose the cost of heap allocation
and recycling was so cheap that it made sense to replace the stack
with a call list.  For multithreading and SMP, there could be many
such lists.

The current state of the machine would be represented by the register
variables and the head element on the list that new things are pushed
onto.  The size limit of the 'call-stack would be dependent on
available memory rather than some pre-determined size.
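
Just to illustrate the shape of the idea (a toy model only, nothing
like what a real implementation would do), here is factorial with its
"call stack" kept as an ordinary heap-allocated list:

(defun factorial-with-call-list (n)
  ;; FRAMES plays the role of the call stack.  Its depth is bounded
  ;; only by available memory, and popped frames are simply left for
  ;; the garbage collector to reclaim.
  (let ((frames '()))
    (loop while (> n 1)              ; "calling": push a frame
          do (push n frames)
             (decf n))
    (let ((result 1))
      (dolist (frame frames result)  ; "returning": pop and combine
        (setf result (* result frame))))))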

The picture in my mind is of a bunch of chain links waving around in
some imaginary space.  They grow and shrink as the program runs,
vanishing when the last function returns.  The thread scheduler
attaches the head of the chain to the CPU when it gets allocated a
time slice.  If there are multiple CPUs, then each CPU has a chain
dangling off of it.

Meanwhile, each chain link has its own objects strung out to the
sides.  These objects are the data that the link can see.  As the link
pops off of the chain, it and all of the objects attached to it can be
reclaimed by the system.  The garbage collector floats around in this
imaginary space, itself a chain, looking at the other chains to see
what not to pick up.

It makes a pretty picture in my mind.  Sort of like the inner workings
of a cell.  The chains are like RNA strands that assemble nucleic
acids.  The acids are like the free memory.  Now you have had a
glimpse into my mind.  Please refrain from trying to shout, "hello!"
to hear your own voice echo back to you.

--
David Steuber (ver 1.31.2a)
http://www.david-steuber.com
To reply by e-mail, replace trashcan with david.

"Ignore reality there's nothing you can do about it..."
-- Natalie Imbruglia "Don't you think?"
From: Kellomäki Pertti
Subject: Re: Stacks, lists, and the cost of consing (was Re: Tail recursion)
Date: 
Message-ID: <xfz4sspw4b6.fsf@kuovi.cs.tut.fi>
········@david-steuber.com (David Steuber "The Interloper") writes:
> Suppose the cost of heap allocation
> and recycling was so cheap that it made sense to replace the stack
> with a call list.  For multithreading and SMP, there could be many
> such lists.

When a copying garbage collector is used, the amount of work done
depends solely on the amount of live data. Nothing special is done for
garbage, so the cost of a "free" is zero. With suitable parameters,
it becomes cheaper to use a garbage collected heap than a
stack. I don't know if the suitable parameters are within realistic
limits, though.

> The current state of the machine would be represented by the register
> variables and the head element on the list that new things are pushed
> onto.  The size limit of the 'call-stack would be dependent on
> available memory rather than some pre-determined size.

You have pretty much described an SML implementation. Neat stuff,
isn't it? Besides multitasking, this scheme also makes closures and
Scheme's call-with-current-continuation painless to implement.

Andrew Appel is the name to look for if you are interested in finding out
more.
-- 
pertti
From: David Steuber "The Interloper"
Subject: Re: Stacks, lists, and the cost of consing (was Re: Tail recursion)
Date: 
Message-ID: <363dd26e.180161158@news.newsguy.com>
On 28 Oct 1998 09:09:33 +0200, Kellomäki Pertti <··@kuovi.cs.tut.fi>
claimed or asked:

% You have pretty much described an SML implementation. Neat stuff,
% isn't it? Besides multitasking, this scheme also makes closures and
% Scheme's call-with-current-continuation painless to implement.

I never do think of anything new.  What is SML?

--
David Steuber (ver 1.31.2a)
http://www.david-steuber.com
To reply by e-mail, replace trashcan with david.

"Ignore reality there's nothing you can do about it..."
-- Natalie Imbruglia "Don't you think?"
From: Kellomäki Pertti
Subject: Re: Stacks, lists, and the cost of consing (was Re: Tail recursion)
Date: 
Message-ID: <xfzg1c7yinr.fsf@kuovi.cs.tut.fi>
········@david-steuber.com (David Steuber "The Interloper") writes:
> I never do think of anything new.  What is SML?

Standard ML, a descendant of ML (Meta Language) which was originally
designed as the extension language for a programmable theorem prover. 

SML is a mostly functional language that has many of the attractive
features of Lisp, along with static typing plus type inference.  See
comp.lang.ml for more information. It is certainly worth taking a look
at.
-- 
Pertti
From: Gareth McCaughan
Subject: Re: Stacks, lists, and the cost of consing (was Re: Tail recursion)
Date: 
Message-ID: <86btmw4lu1.fsf@g.pet.cam.ac.uk>
David Steuber wrote:

> The fundamental structure in Lisp (even though others are supported as
> primitives) is the list.  Suppose the call stack was not implemented
> as a typical fixed size array (although that is almost certainly the
> most efficient implementation).  Suppose the cost of heap allocation
> and recycling was so cheap that it made sense to replace the stack
> with a call list.  For multithreading and SMP, there could be many
> such lists.

This already gets done to some extent. You need something like it
to implement closures properly (if your environments are stored
on the stack, they can't survive being popped off); there are a
number of ways of dealing with this, one of which is to
heap-allocate all environments -- producing just the sort of effect
you describe.
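
A minimal example of such an environment, for concreteness:

(defun make-counter ()
  ;; The binding of COUNT must outlive the call to MAKE-COUNTER that
  ;; created it, so it can't live in a stack frame that is popped on
  ;; return -- it has to end up in the heap one way or another.
  (let ((count 0))
    (lambda () (incf count))))

;; (defparameter *c* (make-counter))
;; (funcall *c*) => 1, (funcall *c*) => 2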

In practice, I think it's been found that something like conventional
stack allocation works better when it's possible at all. It doesn't
require you to have a *fixed-size* stack; when the stack overflows
you can do something like walking along it and moving each stack
frame into the heap. This is just the same sort of thing as you
need to do when an environment gets captured in a closure and
therefore needs to continue to exist after it would normally
have been popped off the stack. You need to be careful in doing
this, but only in the same sort of way as you need to be careful
when doing GC.

There is (at least) one complication. Suppose your compiler assumes
that all environments get allocated on the stack. Then it may
be able to generate nice code for access to variables in
environments other than the innermost one, making use of the
fact that it knows where they are on the stack. You have to
cope with this possibility, somehow. (Simple approach: refrain
from generating such code for references across the boundary
of an environment that can be captured.)

The reason why stack allocation is better when it's possible
is that, actually, stack allocation is very very cheap. You
just increment a pointer to do the allocation, and decrement
it to deallocate. And environments are almost always allocated
in LIFO fashion, so it's usually possible to know for sure
when the deallocation needs to happen.

There's quite a bit of work on this sort of thing in the Scheme
literature; Schemers need even more of this kind of thing than
Common-Lisp-ers, because the "call-with-current-continuation"
function in Scheme can preserve the entire state of the call
stack in a way even more dramatic than closures can, and because
the definition of the language means that almost any function
call can, in principle, save a copy of the current continuation,
so that it's quite hard to be certain that anything can be
stack-allocated. I don't remember the authors or titles of any
of these papers right now.

You might find Henry Baker's paper "Cons should not cons its
arguments, part 2: Cheney on the MTA" interesting: it suggests
an interesting way to blur the distinction between the stack
and the heap. (You can get at Baker's papers -- many of which
are interesting, actually -- on his ftp site somewhere in
ftp.netcom.com. (ftp.netcom.com/pub/hb/hbaker/... ?))

-- 
Gareth McCaughan       Dept. of Pure Mathematics & Mathematical Statistics,
·····@dpmms.cam.ac.uk  Cambridge University, England.
From: Jrm
Subject: Re: Stacks, lists, and the cost of consing (was Re: Tail recursion)
Date: 
Message-ID: <363764ee$0$29749@nntp1.ba.best.com>
Gareth McCaughan wrote in message <··············@g.pet.cam.ac.uk>...
>
>The reason why stack allocation is better when it's possible
>is that, actually, stack allocation is very very cheap. You
>just increment a pointer to do the allocation, and decrement
>it to deallocate. And environments are almost always allocated
>in LIFO fashion, so it's usually possible to know for sure
>when the deallocation needs to happen.
>


Conventional machines also have hardware assists for this type
of allocation.
From: Tom Breton
Subject: Re: Stacks, lists, and the cost of consing (was Re: Tail recursion)
Date: 
Message-ID: <m31znsnfgl.fsf@world.std.com>
Gareth McCaughan <·····@dpmms.cam.ac.uk> writes:


> 
> You might find Henry Baker's paper "Cons should not cons its
> arguments, part 2: Cheney on the MTA" interesting: it suggests
> an interesting way to blur the distinction between the stack
> and the heap. (You can get at Baker's papers -- many of which
> are interesting, actually -- on his ftp site somewhere in
> ftp.netcom.com. (ftp.netcom.com/pub/hb/hbaker/... ?))

Specifically, ftp://ftp.netcom.com/pub/hb/hbaker/

He's got quite a few unique papers there.

-- 
Tom Breton
From: Rob Warnock
Subject: Re: Stacks, lists, and the cost of consing (was Re: Tail recursion)
Date: 
Message-ID: <719i2g$1gb49@fido.engr.sgi.com>
Gareth McCaughan  <·····@dpmms.cam.ac.uk> wrote:
+---------------
| The reason why stack allocation is better when it's possible
| is that, actually, stack allocation is very very cheap. You
| just increment a pointer to do the allocation, and decrement
| it to deallocate.
+---------------

With a generational copying garbage collector, almost all allocations
are just a pointer bump too, and don't require deallocation at all.
The "nursery" or "ephemeral" area memory is just reclaimed (in toto)
at the next minor collection. That is, the small amount of still-live
data in the ephemeral area is scavenged (copied), and the allocation
pointer is reset to the beginning of the ephemeral area. With a properly-
sized ephemeral area (and assuming typical allocation patterns), the
cost of the minor collection may be as low as *or lower than* the cost of
all those individual pointer decrementings in the stack case.

Note also that Baker's CPS/only-call-never-return/allocate-on-the-C-stack-
and-let-the-GC-longjmp is essentially the same method, but it (1) uses
the C stack pointer *as* the allocation pointer for the ephemeral area,
and (2) *requires* that the program be converted to CPS.

Some people [e.g., Appel] claim that you can get similar performance
with a separate ephemeral/generational GC, without forcing the CPS
conversion on the code.


-Rob

-----
Rob Warnock, 8L-855		····@sgi.com
Applied Networking		http://reality.sgi.com/rpw3/
Silicon Graphics, Inc.		Phone: 650-933-1673
2011 N. Shoreline Blvd.		FAX: 650-964-0811
Mountain View, CA  94043	PP-ASEL-IA
From: Bernhard Pfahringer
Subject: Re: Stacks, lists, and the cost of consing (was Re: Tail recursion)
Date: 
Message-ID: <719qam$8c9e$1@www.univie.ac.at>
In article <············@fido.engr.sgi.com>,
Rob Warnock <····@rigden.engr.sgi.com> wrote:
>
>With a generational copying garbage collector, almost all allocations
>are just a pointer bump too, and don't require deallocation at all.
>The "nursery" or "ephemeral" area memory is just reclaimed (in toto)
>at the next minor collection. That is, the small amount of still-live
>data in the ephemeral area is scavenged (copied), and the allocation
>pointer is reset to the beginning of the ephemeral area. With a properly-
>sized ephemeral area (and assuming typical allocation patterns), the
>cost of the minor collection may be as low as *or lower than* the cost of
>all those individual pointer decrementings in the stack case.
>

You also need to adapt your programming style to get the full benefit of
a generational copying garbage collector.

E.g. recently there has been a thread about "static" (in the C sense) arrays
for a function to avoid re-allocating at each call. So instead of

(defun foo ()
  (let ((ar (make-array n)))
    ...))

the following was proposed:

(defun foo ()
  (let ((ar (load-time-value (make-array n))))
    ...))

If you really need a general array (elements point to newly allocated stuff),
you're probably better off with the original solution. Why?

For the latter, each ephemeral collection must trace 'ar' and keep all 
pointed-to structure, whereas for the former this only happens if a
collection is triggered during the execution of foo. OTOH, if 'ar' is big
enough as to always trigger collections during foo, then the latter probably
wins. The latter *is* also clearly beneficial in cases where your intermediate
stuff is more specialized, say you want to store fixnums only. But then you
should also inform the compiler accordingly:

(defun foo ()
  (let ((ar (load-time-value (make-array n :element-type 'fixnum))))
    ...))

Now you only allocate once and there is no need for an ephemeral
collection to inspect 'ar'. Actually, if you are using Allegro CL,
you might even want to use their non-standard static array extension:

(defun foo ()
  (let ((ar (load-time-value (make-array n :element-type 'fixnum
                               #+allegro :allocation #+allegro :static))))
    ...))

Then no collection will ever see 'ar'.

So what's the moral? In the presence of an ephemeral GC, consing can be
cheaper than manual recycling of structure *and* may simplify your code.
Always profile if you need more speed.

Bernhard
-- 
--------------------------------------------------------------------------
Bernhard Pfahringer
Austrian Research Institute for  http://www.ai.univie.ac.at/~bernhard/
Artificial Intelligence          ········@ai.univie.ac.at 
From: Barry Margolin
Subject: Re: Stacks, lists, and the cost of consing (was Re: Tail recursion)
Date: 
Message-ID: <PcJZ1.97$yc2.1335983@burlma1-snr1.gtei.net>
In article <·················@news.newsguy.com>,
David Steuber "The Interloper" <········@david-steuber.com> wrote:
>The fundamental structure in Lisp (even though others are supported as
>primitives) is the list.  Suppose the call stack was not implemented
>as a typical fixed size array (although that is almost certainly the
>most efficient implementation).  Suppose the cost of heap allocation
>and recycling was so cheap that it made sense to replace the stack
>with a call list.  For multithreading and SMP, there could be many
>such lists.

While most real-world Lisp implementations aren't so simple, take a look at
the meta-circular Scheme implementation in SICP.  I don't recall if it uses
a list as the call stack (it may just use the underlying implementation's
call stack) but I think it implements the data stack this way.

-- 
Barry Margolin, ······@bbnplanet.com
GTE Internetworking, Powered by BBN, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Don't bother cc'ing followups to me.
From: William Paul Vrotney
Subject: Re: Tail recursion:  Is it just a fancy name for iteration?
Date: 
Message-ID: <vrotneyF1BLnw.EFC@netcom.com>
In article <·······················@194.163.195.67> ······@lavielle.com
(Rainer Joswig) writes:

> In article <·················@netcom.com>, ·······@netcom.com (William
> Paul Vrotney) wrote:
> 
> > In article <·······················@pbg3.lavielle.com> ······@lavielle.com
> > (Rainer Joswig) writes:
> > 
> > > 
> > > In article <···············@kuovi.cs.tut.fi>, Kellomäki Pertti
> > > <··@kuovi.cs.tut.fi> wrote:
> > > 
> > > > Before anyone else corrects me, I will correct myself. This is
> > > > comp.lang.lisp, not comp.lang.scheme as I thought, so I should
> > > > probably add that the requirement to make tail calls iterative is
> > > > specific to Scheme, not Lisp in general. This was discussed a while
> > > > back, I think.
> > > 
> > > Some Common Lisp compilers will do this, too.
> > > 
> > 
> > Yes, but it is not specified, so you may not portably rely on it.  Not that
> > I find this important in Common Lisp.  :-)
> 
> Everything that enables a developer to express efficient computation
> in an organized fashion is important.
> 

You missed my gist, which is that "Both iteration and recursion exist in the
universe".  Common Lisp reflects this.  Scheme culture reflects the view that
iteration is somehow a special manifestation of recursion.  I find this almost a
denial of iteration.  Every time that I've implemented recursion I've had to
use iteration.  Perhaps this is the result of having to implement on our
primitive Von Neumann computers.  But in the event that truly recursive
computers are built in the future, I suspect that iterative language
constructs will still be expressive even if implemented using recursion.

I use recursion where it seems appropriate.  But I find it unsettling to
code an iterative idea using recursion and to justify it by relying on a
particular Common Lisp implementation generating efficient iterative code
out of proper tail-recursive code.  I would rather code that particular
piece of Lisp code using iterative constructs, since that is what was
intended in the first place.  If a Common Lisp implementation wants to make
the most efficient code out of my intended recursive expression, kudos.


-- 

William P. Vrotney - ·······@netcom.com
From: Jrm
Subject: Re: Tail recursion:  Is it just a fancy name for iteration?
Date: 
Message-ID: <36325005$0$29749@nntp1.ba.best.com>
I think tail recursion (tail calling) is a bit more fundamental than
just a technique for getting rid of iterative constructs.

There are some iterations that are rather hard to express without
tail recursion.

You can't effectively use continuation-passing style without
having tail recursion.
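
For instance (a throwaway Common Lisp sketch of the style):

(defun fact-cps (n k)
  ;; Continuation-passing style: instead of returning, each step hands
  ;; its result to the continuation K.  *Every* call here is a tail
  ;; call, so without tail-call optimization the frames just pile up.
  (if (< n 2)
      (funcall k 1)
      (fact-cps (1- n)
                (lambda (r) (funcall k (* n r))))))

;; (fact-cps 10 #'identity) => 3628800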
From: Barry Margolin
Subject: Re: Tail recursion:  Is it just a fancy name for iteration?
Date: 
Message-ID: <qtBY1.61$VI.1167623@burlma1-snr1.gtei.net>
In article <················@nntp1.ba.best.com>, Jrm <···@rebol.com> wrote:
>You can't effectively use continuation-passing style without
>having tail recursion.

Aye, there's the rub.  I don't think most human programmers use CPS
directly.  However, it's likely to be used by computer-generated code.
Since program-generating programs are common in the Lisp family of
languages, tail-call optimization is necessary to make these work properly.

-- 
Barry Margolin, ······@bbnplanet.com
GTE Internetworking, Powered by BBN, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
From: Rob Warnock
Subject: Re: Tail recursion:  Is it just a fancy name for iteration?
Date: 
Message-ID: <70veca$79o6l@fido.engr.sgi.com>
Barry Margolin  <······@bbnplanet.com> wrote:
+---------------
| >You can't effectively use continuation-passing style without
| >having tail recursion.
| 
| Aye, there's the rub.  I don't think most human programmers use CPS
| directly.
+---------------

Not in the bulk of one's coding, no, but occasionally I find myself
quite naturally writing CPS code when I'm coding some sort of finite-state
machine (e.g., a lexical analyzer, a communications protocol, etc.).

When you consider that a tail call is really a "goto, with arguments",
a number of opportunities arise to employ it usefully. That is, the
fact that you can pass arguments with the goto partially negates the
disadvantage of the spaghetti-coding that gotos would otherwise encourage.

Or to say it another way: Dijkstra said that gotos were harmful; Wulf
asserted that mutable variables were exactly equivalent in harm to gotos
if you wrap a loop around the mutations; I now claim that for *either*
to harm you you really need both! Gotos hurt because you can get somewhere
with your global (or at least, shared-scope) variables set in who *knows*
what kind of shape; mutable variables (especially global ones) IN THE
PRESENCE OF LOOPS (iteration) have an isomorphic problem.

But by using goto-with-argument and *omitting* the shared-scope mutable
variables, the transfer of control is intimately tied to each change in
value bound to a lexical appearance of a variable. [Yes, I just realized
that I'm, rather awkwardly, describing functional programming, but I seem
to have come at it from an odd (to me) angle, and perhaps that angle might
be helpful to someone else as well...]

Finally, to circle back 'round again to the topic of "iterative" versus
"tail recursive": the classic iterative solutions require mutable variables,
while the tail-recursive solutions don't *mutate* anything; they simply
create *new* bindings of the same names with the new values. For anyone who
has ever written a generational copying garbage collector, the difference
can be extremely significant! [For those not familiar, the key phrase is
"maintaining the remembered set" of objects in older generations which
point to newer generations...]
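
A tiny side-by-side in Common Lisp, to make the contrast concrete:

;; Classic iteration: REV is *mutated* on every trip around the loop.
(defun reverse-iter (list)
  (let ((rev '()))
    (dolist (x list rev)
      (push x rev))))

;; Tail recursion: nothing is assigned; each call creates *new*
;; bindings of LIST and REV carrying the new values.
(defun reverse-tr (list &optional rev)
  (if (null list)
      rev
      (reverse-tr (cdr list) (cons (car list) rev))))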

So, yes, there *are* (possibly rare) cases where the tail-recursive style
can be *more* efficient than the apparently-equivalent iterative style.


-Rob

[p.s. Apologies in advance: Back from sabbatical 11/2/98, but
until then email will still get a "vacation" bounce message...]

-----
Rob Warnock, 8L-855		····@sgi.com
Applied Networking		http://reality.sgi.com/rpw3/
Silicon Graphics, Inc.		Phone: 650-933-1673
2011 N. Shoreline Blvd.		FAX: 650-964-0811
Mountain View, CA  94043	PP-ASEL-IA
From: Kellomäki Pertti
Subject: Re: Tail recursion:  Is it just a fancy name for iteration?
Date: 
Message-ID: <xfzhfwr1zg3.fsf@kuovi.cs.tut.fi>
·······@netcom.com (William Paul Vrotney) writes:
> I would rather code that particular piece of
> Lisp code using iterative constructs since that is what was intended in the
> first place.

One of the differences between a CL programmer and a Scheme programmer
is that for a Scheme programmer, tail recursion *is* an iterative
construct. As Russ McManus pointed out, there is an iterative form
called "do" in Scheme, but I find that I can express myself more
directly using tail recursion, and rarely use do. Many schemers also
like named let, which relieves you from having to write a "wrapper"
function often involved in the use of tail recursion.
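
On the CL side, LABELS gives much the same effect -- here is a rough
equivalent of a named-let factorial, though of course CL does not
promise that the tail call is optimized:

(defun factorial (n)
  (labels ((iter (i acc)
             ;; ITER is the local "loop"; the tail call to it stands
             ;; in for an explicit iteration construct.
             (if (> i n)
                 acc
                 (iter (1+ i) (* acc i)))))
    (iter 1 1)))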
-- 
pertti
From: Barry Margolin
Subject: Re: Tail recursion:  Is it just a fancy name for iteration?
Date: 
Message-ID: <HHgX1.82$Sg6.657889@burlma1-snr1.gtei.net>
In article <·················@news.newsguy.com>,
David Steuber "The Interloper" <········@david-steuber.com> wrote:
>The Lisp FAQ (which can be chased down via the ALU website) shows an
>example comparing a recursive factorial function with a tail-recursive
>factorial function.  The tail-recursive one looks iterative to me.  Is
>that all tail recursion is?  Just a fancy name for iteration?  What am
>I missing?

Tail recursion is basically a way to get the behavior of iteration using
the syntax of recursion.  Since people often think of processes in
recursive ways, this can permit the expression to mirror the way you think
of the algorithm, but not have the problem size limitations that would
occur if all the recursive calls were stacked (i.e. you can't get a stack
overflow with tail recursion).

My problem with this philosophy is that while many algorithms are naturally
recursive, the simple mapping is often not tail-recursive, and the code has
to be contorted to make it so.  For instance, the natural expression of
the recursion step in factorial is (* n (fact (1- n))), which is not a
tail-recursive call (the recursive call is embedded in an expression, it's
not the last call).


-- 
Barry Margolin, ······@bbnplanet.com
GTE Internetworking, Powered by BBN, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
From: Felix Schroeter
Subject: Re: Tail recursion:  Is it just a fancy name for iteration?
Date: 
Message-ID: <712e4r$n3j$1@c3po.schlund.de>
Hello!

In article <···················@burlma1-snr1.gtei.net>,
Barry Margolin  <······@bbnplanet.com> wrote:
>[...]

>My problem with this philosophy is that while many algorithms are naturally
>recursive, the simple mapping is often not tail-recursive, and the code has
>to be contorted to make it so.  For instance, the natural expression of
>the recursion step in factorial is (* n (fact (1- n))), which is not a
>tail-recursive call (the recursive call is embedded in an expression, it's
>not the last call).

But you can *automatically* transform this

(define (fact n)
  (if (= n 0)
      1
      (* n (fact (- n 1)))))

into something like this:

(define (fact n)
  (let loop ((ac 1)
             (i 1))
    (if (<= i n)
        (loop (* ac i) (+ i 1))
        ac)))

Transforming that to eliminate i shouldn't be too difficult, either.

A technique to achieve that is explained in

Field, Anthony J., and Peter G. Harrison.  Functional Programming.
Wokingham: Addison-Wesley, 1988.  (International Computer Science Series.)
ISBN 0-201-19249-7.

Regards, Felix.