From: Franz Kafka
Subject: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <b3b6b110.0304121624.1bec15d9@posting.google.com>
Is it possible to implement the Scheme operator call/cc, which provides
dynamic continuations, in ANSI CL? How much work would be involved to
extend CL to handle call/cc? Are Scheme continuations a more abstract
version of throw and catch, which operate with dynamic extent?

I think you might have to add environments and worlds to CL in order to
add call/cc, but I am not sure how to extend Lisp in this way. I am
not talking about writing a Scheme in ANSI CL--just adding the call/cc
function to ANSI CL using only ANSI CL code, to make it portable
between many different vendors' versions of ANSI CL.

Thanks for any pointers; thank Gawd Lisp does not force programmers to
use yucky pointers.
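
[For context on the throw/catch comparison above, here is a minimal
sketch of the dynamic-extent escapes ANSI CL already provides; the
function name is hypothetical. Unlike a call/cc continuation, the
escape point is dead as soon as the CATCH form exits.]

```lisp
;; CATCH/THROW give one-shot, upward-only escapes with dynamic
;; extent -- a weaker cousin of call/cc's first-class continuations.
(defun find-first-negative (list)
  (catch 'found
    (dolist (x list)
      (when (minusp x)
        (throw 'found x)))   ; abandon the loop; X becomes the CATCH's value
    nil))                    ; no negative element found

;; (find-first-negative '(3 1 -4 1 -5))  =>  -4
```

[A call/cc continuation, by contrast, can be stored and invoked after
the establishing form has returned, which CATCH/THROW cannot express.]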

From: Jim Bender
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <I23ma.3$tP1.5618924@newssvr11.news.prodigy.com>
Have a look at Jeffrey Mark Siskind's "Screamer"...
http://www.ece.purdue.edu/~qobi/

Jim Bender


"Franz Kafka" <·································@hotmail.com> wrote in
message ·································@posting.google.com...
> Is it possible to implement the Scheme operator call/cc, which provides
> dynamic continuations, in ANSI CL? How much work would be involved to
> extend CL to handle call/cc? Are Scheme continuations a more abstract
> version of throw and catch, which operate with dynamic extent?
>
> I think you might have to add environments and worlds to CL in order to
> add call/cc, but I am not sure how to extend Lisp in this way. I am
> not talking about writing a Scheme in ANSI CL--just adding the call/cc
> function to ANSI CL using only ANSI CL code, to make it portable
> between many different vendors' versions of ANSI CL.
>
> Thanks for any pointers; thank Gawd Lisp does not force programmers to
> use yucky pointers.
From: Phil Bewig
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <455f7154.0304130901.6e591cb1@posting.google.com>
·································@hotmail.com (Franz Kafka) wrote in message news:<····························@posting.google.com>...
> Is it possible to implement the Scheme operator call/cc, which provides
> dynamic continuations, in ANSI CL? How much work would be involved to
> extend CL to handle call/cc? Are Scheme continuations a more abstract
> version of throw and catch, which operate with dynamic extent?

Paul Graham implements something similar to call/cc in Chapter 20 of
his book On Lisp, which is available for download from
www.paulgraham.com/onlisp.html.

Phil
From: Mark Conrad
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <140420030650327747%nospam@iam.invalid>
In article <····························@posting.google.com>, Franz
Kafka <·································@hotmail.com> wrote:

> Is it possible to implement the Scheme operator call/cc, which provides
> dynamic continuations, in ANSI CL?

According to page 273 of Paul Graham's "On Lisp" book, the full power
of call/cc can be implemented in CL.


> How much work would be involved to extend CL to handle call/cc?

A heck of a lot of work, according to Paul Graham. (page 273)

According to Graham, once the necessary groundwork is done, call/cc
itself can be written as one simple CL macro.


Don't confuse what I wrote above with Graham's six macros on
page 267 of his book.  Those six macros implement a crippled version of
continuations, which has many drawbacks compared to "real"
continuations.

Mark-
From: tom berger
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <3e9d36f4$0$17708$afc38c87@news.easynet.co.uk>
Paul Graham has a very interesting chapter about implementing 
continuations on top of Common Lisp in his book On Lisp (which is 
freely available from his website, www.paulgraham.com).

Anyway, since Common Lisp doesn't give you access to continuations, 
you'll have to transform ALL the Scheme code to continuation-passing 
style to get the full effect.  Possibly there are ready-made packages 
for doing that floating around, although I don't know of any particular 
one.
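
[As a minimal illustration of the transformation described above -- a
hand-written sketch, not the output of any particular package -- a
factorial in continuation-passing style looks like this in CL:]

```lisp
;; Continuation-passing style by hand: FACT/K never "returns" in the
;; usual sense; it hands its result to the explicit continuation K,
;; which is an ordinary closure.
(defun fact/k (n k)
  (if (zerop n)
      (funcall k 1)
      (fact/k (1- n)
              (lambda (r) (funcall k (* n r))))))

;; (fact/k 5 #'identity)  =>  120
```

[Because the rest of the computation is reified as the closure K, it
can be captured, stored, or invoked more than once -- which is exactly
what call/cc exposes directly.]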

tom

Franz Kafka wrote:
> Is it possible to implement the Scheme operator call/cc, which provides
> dynamic continuations, in ANSI CL? How much work would be involved to
> extend CL to handle call/cc? Are Scheme continuations a more abstract
> version of throw and catch, which operate with dynamic extent?
> 
> I think you might have to add environments and worlds to CL in order to
> add call/cc, but I am not sure how to extend Lisp in this way. I am
> not talking about writing a Scheme in ANSI CL--just adding the call/cc
> function to ANSI CL using only ANSI CL code, to make it portable
> between many different vendors' versions of ANSI CL.
> 
> Thanks for any pointers; thank Gawd Lisp does not force programmers to
> use yucky pointers.
From: Fred Gilham
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <u7k7duxysc.fsf@snapdragon.csl.sri.com>
tom berger <········@ucl.ac.uk> wrote:
> Anyway, since Common Lisp doesn't give you access to continuations,
> you'll have to transform ALL the Scheme code to continuation-passing
> style to get the full effect.  Possibly there are ready-made packages
> for doing that floating around, although I don't know of any
> particular one.

People who are interested in continuation-like stuff in Common Lisp
should check out the Screamer system.  Here is some information from
the Screamer FAQ:

     Screamer is an extension of Common Lisp that adds support for
     nondeterministic programming.  Screamer consists of two levels.
     The basic nondeterministic level adds support for backtracking
     and undoable side effects.  On top of this nondeterministic
     substrate, Screamer provides a comprehensive constraint
     programming language in which one can formulate and solve mixed
     systems of numeric and symbolic constraints.  Together, these two
     levels augment Common Lisp with practically all of the
     functionality of both Prolog and constraint logic programming
     languages such as CHiP and CLP(R).  Furthermore, Screamer is
     fully integrated with Common Lisp. Screamer programs can coexist
     and interoperate with other extensions to Common Lisp such as
     CLOS, CLIM and Iterate.

     In several ways Screamer is more efficient than other
     implementations of backtracking languages.  First, Screamer code
     is transformed into Common Lisp which can be compiled by the
     underlying Common Lisp system.  Many competing implementations of
     nondeterministic Lisp are interpreters and thus are far less
     efficient than Screamer.  Second, the backtracking primitives
     require fairly low overhead in Screamer.  Finally, this overhead
     to support backtracking is only paid for those portions of the
     program which use the backtracking primitives.  Deterministic
     portions of user programs pass through the
     Screamer-to-Common-Lisp transformation unchanged.  Since in
     practice, only small portions of typical programs utilize the
     backtracking primitives, Screamer can produce more efficient code
     than compilers for languages in which backtracking is more
     pervasive.

Screamer's implementation is interesting because it uses a code-walker
to transform code into continuation-passing style, doing this more or
less behind the scenes with macros.  It's therefore exemplary of the
ability of Lisp to give you powerful features in a transparent way.
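
[As a rough illustration of the style -- a sketch built from
primitives named in Screamer's documentation (an-integer-between,
fail, all-values), untested here:]

```lisp
;; Nondeterministic search, Screamer-style: each AN-INTEGER-BETWEEN
;; creates a choice point, FAIL triggers backtracking, and
;; ALL-VALUES collects every successful outcome.
(in-package :screamer-user)

(defun a-pythagorean-triple (n)
  (let ((a (an-integer-between 1 n))
        (b (an-integer-between 1 n))
        (c (an-integer-between 1 n)))
    (unless (= (+ (* a a) (* b b)) (* c c))
      (fail))
    (list a b c)))

;; (all-values (a-pythagorean-triple 15))
;; collects every triple found by backtracking.
```

[The code walker compiles A-PYTHAGOREAN-TRIPLE nondeterministically
because it calls nondeterministic primitives; deterministic code
passes through unchanged, as the FAQ quoted above says.]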

-- 
Fred Gilham                                          ······@csl.sri.com
The Net interprets Microsoft products as damage, and routes around them.
From: Jeffrey Mark Siskind
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <a520654a.0304161048.31ed58c2@posting.google.com>
> Anyway, since Common Lisp doesn't give you access to continuations, 
> you'll have to transform ALL the Scheme code to continuation-passing 
> style to get the full effect.  Possibly there are ready-made packages for 
> doing that floating around, although I don't know of any particular one.

ftp://ftp.ecn.purdue.edu/qobi/screamer.tar.Z
From: Christopher C. Stacy
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <uvfxdn5ea.fsf@dtpq.com>
If the most important thing about your programming is that it use
continuation-passing style, then you would be better off using Scheme,
rather than Common Lisp.
From: Franz Kafka
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <b3b6b110.0304170944.6630382d@posting.google.com>
······@dtpq.com (Christopher C. Stacy) wrote in message news:<·············@dtpq.com>...
> If the most important thing about your programming is that it use
> continuation-passing style, then you would be better off using Scheme,
> rather than Common Lisp.

I am also interested in macros. Does Scheme have a standard way to
define macros -- ANSI CL has defmacro. (Does Scheme now have ANSI/IEEE
standard macros?)

Does Scheme have standard OO extensions? Is there a CLOS for Scheme?

Can I use Scheme to create Windows/GNU/Linux executables?

Is there a standard interface manager or GUI builder for Scheme?

ANSI CL has CLIM -- or should have it.

How portable is IEEE Scheme compared to ANSI CL?
From: Marco Antoniotti
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <CiDna.4$L4.365@typhoon.nyu.edu>
Franz Kafka wrote:

> ······@dtpq.com (Christopher C. Stacy) wrote in message news:...
>
> >If the most important thing about your programming is that it use
> >continuation-passing style, then you would be better off using Scheme,
> >rather than Common Lisp.
>
>
> I am also interested in macros. Does Scheme have a standard way to
> define macros -- ANSI CL has defmacro. (Does Scheme now have ANSI/IEEE
> standard macros?)

Of a sort.  It has a pattern-matching extension facility.  Pretty nice, 
but without the full power of CL macros.


> Does Scheme have standard OO extensions? Is there a CLOS for Scheme?


The short answer is NO.  The long answer is "a godzillion different 
subsets with incompatible features".  The keyword here is "subset".

> Can I use Scheme to create Windows/GNU/Linux executables?

Just like you can with Common Lisp.

>
>
> Is there a standard interface manager or GUI builder for Scheme?


Just like in Common Lisp.

>
>
> ANSI CL has CLIM -- or should have it.

CLIM is not ANSI.


>
>
> How portable is IEEE Scheme compared to ANSI CL?

IEEE Scheme is very portable.  Unless, of course, you want to use 2D 
arrays or record structures. :)

So, what is more important? Flexible and complete support for 
multidimensional arrays or call/cc?

Cheers

--
Marco Antoniotti
From: Jens Axel Søgaard
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <D7Ena.20062$y3.1841998@news010.worldonline.dk>
Marco Antoniotti wrote:
> Franz Kafka wrote:

>> I am also interested in macros. Does Scheme have a standard way to
>> define macros -- ANSI CL has defmacro. (Does Scheme now have
>> ANSI/IEEE standard macros?)
>
> Of a sort.  It has a pattern matching extension facility.  Pretty
> nice, but without the full power of CL Macros.

Which is why people use Dybvig's syntax-case.

http://library.readscheme.org/page3.html
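
[For readers unfamiliar with it, here is a small sketch of what
syntax-case looks like -- a hypothetical swap! macro, not an example
taken from Dybvig's papers:]

```scheme
;; syntax-case keeps syntax-rules' pattern language but lets the
;; transformer run arbitrary Scheme code on syntax objects.
(define-syntax swap!
  (lambda (stx)
    (syntax-case stx ()
      ((_ a b)
       #'(let ((tmp a))
           (set! a b)
           (set! b tmp)))))))

;; (define x 1) (define y 2) (swap! x y)  ; x => 2, y => 1
;; Hygiene ensures the introduced TMP cannot capture a user's TMP.
```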


And while digging up the reference to Dybvig's work, I found
the page on object systems:

   http://library.readscheme.org/page4.html

There *are* a godzillion different ones. There are some interesting
articles from the late nineties included here too.

--
Jens Axel Søgaard
From: felix
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <e36dad49.0304170325.4b0f4669@posting.google.com>
····@purdue.edu (Jeffrey Mark Siskind) wrote in message news:<····························@posting.google.com>...
> > Anyway, since Common Lisp doesn't give you access to continuations, 
> > you'll have to transform ALL the Scheme code to continuation-passing 
> > style to get the full effect.  Possibly there are ready-made packages for 
> > doing that floating around, although I don't know of any particular one.
> 
> ftp://ftp.ecn.purdue.edu/qobi/screamer.tar.Z

I haven't tried Screamer yet, but how can CPS work without
tail-recursion?


cheers,
felix
From: Pascal Costanza
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <b7m60t$h1u$1@f1node01.rhrz.uni-bonn.de>
felix wrote:
> ····@purdue.edu (Jeffrey Mark Siskind) wrote in message news:<····························@posting.google.com>...
> 
>>>Anyway, since Common Lisp doesn't give you access to continuations, 
>>>you'll have to transform ALL the Scheme code to continuation-passing 
>>>style to get the full effect.  Possibly there are ready-made packages for 
>>>doing that floating around, although I don't know of any particular one.
>>
>>ftp://ftp.ecn.purdue.edu/qobi/screamer.tar.Z
> 
> 
> I haven't tried Screamer yet, but how can CPS work without
> tail-recursion?

Most Common Lisp implementations optimize tail calls, or allow you to 
configure it. It's just not a requirement imposed by the specification. 
Note that in general, tail call optimized programs are harder to debug, 
so it doesn't make a lot of sense to require this optimization.
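
[To make the point concrete -- a trivial sketch; the behavior shown
depends entirely on the implementation and its optimization settings,
which ANSI CL leaves open:]

```lisp
;; The self-call below is in tail position.  An implementation that
;; merges tail calls runs this in constant stack space; one that
;; doesn't may exhaust the stack for large N.
(defun count-down (n)
  (if (zerop n)
      'done
      (count-down (1- n))))

;; Many compilers merge the call only under settings such as
;; (declaim (optimize (speed 3) (debug 0))), keeping the frames at
;; high DEBUG so that backtraces stay readable.
```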


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Michael Sperber [Mr. Preprocessor]
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <y9lfzohns37.fsf@informatik.uni-tuebingen.de>
>>>>> "Pascal" == Pascal Costanza <········@web.de> writes:

Pascal> Most Common Lisp implementations optimize tail calls, or allow you to
Pascal> configure it. It's just not a requirement imposed by the
Pascal> specification. Note that in general, tail call optimized programs are
Pascal> harder to debug, so it doesn't make a lot of sense to require this
Pascal> optimization.

That is nonsense, except when speaking in the context of certain
implementations.  As it's a pretty common misconception (as is the
general idea that proper tail recursion is merely an "optimization"),
you might want to check out

http://zurich.ai.mit.edu/pipermail/rrrs-authors/1998-January/002231.html

-- 
Cheers =8-} Mike
Friede, Völkerverständigung und überhaupt blabla
From: Duane Rettig
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <44r4xt8r1.fsf@beta.franz.com>
·······@informatik.uni-tuebingen.de (Michael Sperber [Mr. Preprocessor]) writes:

> >>>>> "Pascal" == Pascal Costanza <········@web.de> writes:
> 
> Pascal> Most Common Lisp implementations optimize tail calls, or allow you to
> Pascal> configure it. It's just not a requirement imposed by the
> Pascal> specification. Note that in general, tail call optimized programs are
> Pascal> harder to debug, so it doesn't make a lot of sense to require this
> Pascal> optimization.
> 
> That is nonsense, except when speaking in the context of certain
> implementations.  As it's a pretty common misconception (as is the
> general idea that proper tail recursion is merely an "optimization"),
> you might want to check out
> 
> http://zurich.ai.mit.edu/pipermail/rrrs-authors/1998-January/002231.html

This thread is crossposted.  The two newsgroups have different contexts.
Please don't use the term 'proper tail recursion' in comp.lang.lisp
without qualifying it, either by calling it "Scheme's 'proper tail
recursion'" or "'proper tail recursion' as defined by Scheme".  Outside
of Scheme, tail recursion is not proper or improper; it just is what it
is defined to be.  In fact, I have always objected to the term
"tail recursion", because it describes a state of being in the code,
and not what the compiler does with it - (defun foo (x) (bar x))
is tail-recursive by virtue of the position of the call to bar
within foo.  Whether the tail recursion is merged or not (i.e. a call
deeper into the stack changed to a jump with no new stack) is another
question.  I prefer the term "tail merging" or "tail-call merging"
instead, which is much more descriptive.  But I don't require Schemers
to use that phrase, as long as they qualify "tail recursion" with
the context in which they use the phrase, which is the Scheme context.

As for the link; you must take that whole message in context.  Guy
Steele is a language expert, experienced in many languages, and when
he talks of concepts and truths about concepts, you must understand
those truths in the context of which he is speaking.  I don't remember
the timing of when he worked on Scheme, and it doesn't really even
matter - the message you link to was clearly talking about Scheme,
and thus the context was clearly Scheme, otherwise his statement that
a goto is merely an impoverished function call would be ludicrous.  And
as I understand it, Scheme's "proper tail recursion" is truly not an
optimization, it is mandatory in Scheme.  But only in Scheme.

We've discussed before on comp.lang.lisp what tail call merging is and
isn't - Scheme doesn't define the most comprehensive tail call merging
requirement; as I understand it there are circumstances in Scheme where
tail calls are not merged, but since it is called out by definition in
Scheme, it poses no problem to Scheme users because they know when
their calls will not be merged.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: William D Clinger
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <b84e9a9f.0304180821.7930fd9c@posting.google.com>
Duane Rettig wrote:
> This thread is crossposted.  The two newsgroups have different contexts.
> Please don't use the term 'proper tail recursion' in comp.lang.lisp
> without qualifying it, either by calling it "Scheme's 'proper tail
> recursion'" or "'proper tail recursion' as defined by Scheme'.

Whatever the merits of the phrase "proper tail recursion" may be, it
is the phrase that was coined by the researchers who investigated and
defined this concept.  It means what they defined it to mean, as
clarified and elaborated by those of us who continued their line of
research.

The concept of proper tail recursion is not unique to Scheme.  That
and related concepts such as Appel's safe-for-space-complexity have
proved useful in quite a few languages.  Scheme is no longer the only
language that mandates proper tail recursion.

The compiler community has developed a separate tradition of tail call
optimization.  This has created some confusion, as many people assumed
that proper tail recursion is the same thing as tail call optimization.
They are not, however, the same.  There are cases in which it is not
possible to achieve the asymptotic space efficiency of proper tail
recursion while allocating variables on a stack.  In the compiler
community's definition of tail call optimization, these conflicts
between stack allocation and space efficiency are resolved by using
stack allocation at the expense of space efficiency.  Proper tail
recursion requires these conflicts to be resolved in favor of space
efficiency, at the expense of stack allocation.

So it is a technical fact that tail call optimization and proper tail
recursion are different things.  Nonetheless it remains easy for people
to confuse the two.  That is why people who understand the distinction
are inclined to object when someone proposes to replace the established
names of these concepts by names that are likely to create even more
confusion.

For example, it is not at all clear to me whether Duane intends for
his preferred terms, "tail merging" or "tail-call merging", to mean
tail call optimization or proper tail recursion.  It is not at all
clear that Duane even understands the difference.

Will
From: Erann Gat
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <gat-1804030943300001@k-137-79-50-101.jpl.nasa.gov>
In article <····························@posting.google.com>,
······@qnci.net (William D Clinger) wrote:

> For example, it is not at all clear to me whether Duane intends for
> his preferred terms, "tail merging" or "tail-call merging", to mean
> tail call optimization or proper tail recursion.  It is not at all
> clear that Duane even understands the difference.

I certainly don't understand the difference (didn't even know there was
one until just now).  Would you please explain it, or point us to an
explanation?  Thanks.

E.
From: William D Clinger
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <b84e9a9f.0304191015.346ea56e@posting.google.com>
Quoting me, Erann Gat wrote:
> > For example, it is not at all clear to me whether Duane intends for
> > his preferred terms, "tail merging" or "tail-call merging", to mean
> > tail call optimization or proper tail recursion.  It is not at all
> > clear that Duane even understands the difference.
>
> I certainly don't understand the difference (didn't even know there was
> one until just now).  Would you please explain it, or point us to an
> explanation?  Thanks.

The following paper contains several relevant citations:

    William D Clinger. Proper tail recursion and space efficiency.
    In the Proceedings of the 1998 ACM Conference on Programming
    Language Design and Implementation, June 1998, pages 174-185.
    ftp://ftp.ccs.neu.edu/pub/people/will/tail.ps.gz

    Abstract: The IEEE/ANSI standard for Scheme requires implementations
    to be properly tail recursive. This ensures that portable code can
    rely upon the space efficiency of continuation-passing style and
    other idioms. On its face, proper tail recursion concerns the
    efficiency of procedure calls that occur within a tail context. When
    examined closely, proper tail recursion also depends upon the fact
    that garbage collection can be asymptotically more space-efficient
    than Algol-like stack allocation.

    Proper tail recursion is not the same as ad hoc tail call optimization
    in stack-based languages. Proper tail recursion often precludes stack
    allocation of variables, but yields a well-defined asymptotic space
    complexity that can be relied upon by portable programs.

    This paper offers a formal and implementation-independent definition
    of proper tail recursion for Scheme. It also shows how an entire
    family of reference implementations can be used to characterize
    related safe-for-space properties, and proves the asymptotic
    inequalities that hold between them.

I regard this paper as a commentary on the classic papers by Steele
and Sussman from the early 1970s.  If you read those papers carefully,
you will see that the property they called "proper tail recursion" has
to do with space efficiency.  Somewhere (and I regret that I don't
remember the exact citation) Steele actually wrote that it should be
possible to formalize proper tail recursion as a matter of asymptotic
space complexity.  Several later authors (Appel, Harper, Morrisett,
Felleisen) have said the same thing, often adding that it should be
trivial to do this.  The chief technical contribution of my paper is
that I actually did this trivial thing.

The motivations for my paper were twofold.  First of all, in addition
to the very smart people who had said the formalization of proper tail
recursion should be trivial, there were other very smart people who
said it couldn't be done.  I got tired of arguing with them, so I did
it.

Having done this trivial thing, I decided to use it to explain the
distinction between proper tail recursion as defined by Steele and
Sussman and the various tail call optimizations that had sprung up
in the compiler community.

The problem here is that Steele and Sussman were not all that careful
to distinguish the space efficiency property that they called proper
tail recursion from the implementation techniques that they used to
achieve that property.  The compiler community zeroed in on the
techniques without understanding the property.  In the process, the
compiler community added restrictions as necessary to prevent these
techniques from interfering with stack allocation, which they took
for granted, completely missing the point that these added restrictions
destroyed the property that Steele and Sussman were at such pains to
describe.  This happened even in the Lisp community, where the terms
"tail merging" and "tail call merging" were used to refer to various
optimizations, not to the space efficiency property.

On the other hand, there are a few notable papers in which the authors
understood full well that proper tail recursion is a space efficiency
property, and their concern is whether a proposed optimization is legal
with respect to that property (that is, does the optimization preserve
proper tail recursion?).  See for example the papers I cited by Appel,
Chase, Hanson, and Serrano and Feeley.  These papers give real-world
examples of the conflict between stack allocation (or some related
optimization) and proper tail recursion.

The examples in my paper aren't at all realistic; they were the simplest
I could contrive.  Here is the example I gave:

    (define (f n)
      (let ((v (make-vector n)))
        (if (zero? n)
            0
            (f (- n 1)))))

If v is stack-allocated, then this will not be properly tail-recursive.
This is not a convincing example, because most compilers (though not most
interpreters) would recognize that v isn't used, and wouldn't allocate
any storage for it at all.  But you can modify this example in various
ways to make it more convincing.  For example:

    (define (f v)
      (let* ((n (vector-length v 0))
             (v (make-vector (- n 1))))
        (if (zero? n)
            0
            (f v))))

This still isn't very convincing in Lisp-like languages, because the
location allocated for v in f is quite clearly dead once it has been
dereferenced to evaluate the actual parameter for the tail call to f.
To keep that location alive beyond a tail call in Lisp or Scheme, we'd
have to introduce a lambda expression that closes over v.  In some
other languages we might pass v by reference or pass its address.
Either way, it becomes clear that allocating v on a stack would
interfere with proper tail recursion.

Will
From: Erann Gat
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <gat-2104031612340001@k-137-79-50-101.jpl.nasa.gov>
In article <····························@posting.google.com>,
······@qnci.net (William D Clinger) wrote:

[xnip]

> The examples in my paper aren't at all realistic; they were the simplest
> I could contrive.  Here is the example I gave:
> 
>     (define (f n)
>       (let ((v (make-vector n)))
>         (if (zero? n)
>             0
>             (f (- n 1)))))
> 
> If v is stack-allocated, then this will not be properly tail-recursive.
> This is not a convincing example, because most compilers (though not most
> interpreters) would recognize that v isn't used, and wouldn't allocate
> any storage for it at all.  But you can modify this example in various
> ways to make it more convincing.  For example:
> 
>     (define (f v)
>       (let* ((n (vector-length v 0))
>              (v (make-vector (- n 1))))
>         (if (zero? n)
>             0
>             (f v))))
> 
> This still isn't very convincing in Lisp-like languages, because the
> location allocated for v in f is quite clearly dead once it has been
> dereferenced to evaluate the actual parameter for the tail call to f.
> To keep that location alive beyond a tail call in Lisp or Scheme, we'd
> have to introduce a lambda expression that closes over v.

I think you have to do more than that.  You have to introduce the lambda
expression in some particular way.  For example, here I "introduce a
lambda expression that closes over v":

(define (f v)
  (let* ((n (vector-length v 0))
         (v (make-vector (- n 1))))
    (if (zero? n)
      (lambda () v)
      (f v))))

but that doesn't seem to me to change anything as far as illuminating the
difference between proper tail recursion and tail-call merging.

> In some
> other languages we might pass v by reference or pass its address.
> Either way, it becomes clear that allocating v on a stack would
> interfere with proper tail recursion.

But if you introduce a closure that closes over V (with the intention of
making V live somewhere that it isn't without the closure) then you will
probably force V to be allocated on the heap anyway.

Bottom line: even after carefully studying your post and your paper I
still don't understand what you and Duane are arguing about.

I also find it rather frustrating that you generate a whole series of
examples that you yourself admit are unconvincing, and then instead of
finally generating a convincing example you instead do some
hand-waving about "introducing a lambda expression that closes over V"
when it is pretty obvious to me that that's not all that is needed.  I am
beginning to understand why Erik Naggum would get so infuriated with
Schemers.

E.
From: William D Clinger
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <b84e9a9f.0304221122.2f00e59e@posting.google.com>
Erann Gat complained:
> I think you have to do more than that.  You have to introduce the lambda
> expression in some particular way.

True.  BTW, my last example wasn't tested, and contained two
unrelated bugs.

> I also find it rather frustrating that you generate a whole series of
> examples that you yourself admit are unconvincing, and then instead of
> finally generating a convincing example you instead do some
> hand-waving about "introducing a lambda expression that closes over V"
> when it is pretty obvious to me that that's not all that is needed.

Fair comment.  Here is a complete and tested example.  (For
an equivalent in Common Lisp, see the end of this message.)

    (define (f thunk)
      (let ((n (thunk)))
        (if (zero? n)
            0
            (let* ((v (make-vector (- n 1)))
                   (newthunk (lambda () (vector-length v))))
              (f newthunk)))))

    (define (g n) (f (lambda () n)))

What do you think is the time complexity of (g n)?  There are
n iterations, each of which allocates and initializes O(n)
storage, so I think most programmers would guess that (g n)
runs in O(n^2) time.  It might be fun to test that guess in
your favorite system before reading further.

In an implementation that is safe for space complexity, so its
space consumption is O(S_{sfs}), (g n) runs in O(n) space.

In an implementation that is properly tail-recursive but not
safe for space complexity, so its space consumption is in
O(S_{tail}), we don't know that (g n) runs in O(n) space.
All we know is that (g n) runs in O(n^2) space.  Furthermore
we might find that excessive garbage collection prevents the
program from running in O(n^2) time.

In implementations that are properly tail-recursive but not
safe for space complexity, we have to rewrite our code to
avoid performance problems.  Here, for example, is a version
of the program that will run in O(n) space (and really should
run in O(n^2) time) in any implementation that is properly
tail recursive:

    (define (f thunk)
      (let ((n (thunk)))
        (if (zero? n)
            0
            (f (make-thunk (- n 1))))))

    (define (make-thunk n)
      (let ((v (make-vector n)))
        (lambda () (vector-length v))))

    (define (g n) (f (lambda () n)))

In an implementation that uses garbage collection to reclaim
the storage occupied by variables but is not properly tail
recursive, so its space consumption is in O(S_{gc}), the above
version of the program still might not run in O(n) space.
To get it to run in O(n) space and O(n^2) time, we might
need to rewrite the program as something like

    (define (f thunk)
      (do ((thunk thunk (make-thunk (- (thunk) 1))))
          ((zero? (thunk)) 0)))

    (define (make-thunk n)
      (let ((v (make-vector n)))
        (lambda () (vector-length v))))

    (define (g n) (f (lambda () n)))

Using different examples, we can show that implementations
that use garbage collection to reclaim variable storage can
be asymptotically more efficient than implementations that
use Algol-like stack allocation.  These proper inclusions
are proved in my Theorems 24 and 25, and are summarized by
the line that precedes Definition 4.

> I am
> beginning to understand why Erik Naggum would get so infuriated with
> Schemers.

Was this an attempt to invoke Godwin's law?
:)

Will

--------

; Common Lisp code for the examples.

; First example.

    (defun f (thunk)
      (let ((n (funcall thunk)))
        (if (zerop n)
            0
            (let* ((v (make-array (- n 1)))
                   (newthunk #'(lambda () (length v))))
              (f newthunk)))))

    (defun g (n) (f #'(lambda () n)))

; Second example.

    (defun f (thunk)
      (let ((n (funcall thunk)))
        (if (zerop n)
            0
            (f (make-thunk (- n 1))))))

    (defun make-thunk (n)
      (let ((v (make-array n)))
        #'(lambda () (length v))))

    (defun g (n) (f #'(lambda () n)))

; Third example.

    (defun f (thunk)
      (do ((thunk thunk (make-thunk (- (funcall thunk) 1))))
          ((zerop (funcall thunk)) 0)))

    (defun make-thunk (n)
      (let ((v (make-array n)))
        #'(lambda () (length v))))

    (defun g (n) (f #'(lambda () n)))

; End of Common Lisp code.
From: Erann Gat
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <gat-2204031323170001@k-137-79-50-101.jpl.nasa.gov>
In article <····························@posting.google.com>,
······@qnci.net (William D Clinger) wrote:

> Here is a complete and tested example.

Thanks!

> Was this an attempt to invoke Godwin's law?

No, just an expression of frustration.  Perhaps a little more forceful
than I actually intended.

E.
From: Joe Marshall
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <8yu27b3v.fsf@ccs.neu.edu>
···@jpl.nasa.gov (Erann Gat) writes:

> Bottom line: even after carefully studying your post and your paper I
> still don't understand what you and Duane are arguing about.
> 
> I also find it rather frustrating that you generate a whole series of
> examples that you yourself admit are unconvincing, and then instead of
> finally generating a convincing example you instead do some
> hand-waving about "introducing a lambda expression that closes over V"
> when it is pretty obvious to me that that's not all that is needed.  I am
> beginning to understand why Erik Naggum would get so infuriated with
> Schemers.

The problem is that tail-recursion is a lot more subtle than most
people believed.  The obvious `stack frame based' explanation has
problems when you admit implementations that have no stack, and
although it is trivial to convert a program such that every call is a
tail call (by CPS conversion), that doesn't necessarily make the
program `tail recursive' (you've simply moved the stack into the
heap).
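
A sketch of that CPS point, in Common Lisp: every call below is
syntactically a tail call, yet the chain of continuation closures
grows with N, so the "stack" has simply moved into the heap.

```lisp
;; CPS-converted factorial.  K is the continuation: what to do with
;; the result.  Each recursive step wraps K in a new closure, so the
;; pending work lives in heap-allocated closures instead of stack
;; frames, even though every call is in tail position.
(defun fact-cps (n k)
  (if (zerop n)
      (funcall k 1)
      (fact-cps (- n 1)
                #'(lambda (r) (funcall k (* n r))))))

(fact-cps 5 #'identity)  ; => 120
```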

What Will and Duane are arguing about is implementation and
terminology.  Allegro Common Lisp has two compiler switches that
affect how function calls are compiled.  When set, the compiler is
able to compile *many* syntactically tail-recursive calls (i.e., calls
that are in `tail position') into calls that re-use the current stack
frame.  Duane likes to call this `tail merging', Will calls it `tail
call optimization'.  Duane doesn't like the term `proper tail
recursion' and wishes that people would not use the phrase, especially
when what they really mean is `tail call optimization'.  Will, on
the other hand, points out that `proper tail recursion' is widely
understood by programming language theorists to mean `asymptotically
space efficient', and that he does not mean `tail call optimization'.

Will asserts that simple `tail call optimization' does not guarantee
`asymptotic space efficiency', but programs that illustrate the
difference have to be rather complex because they have to `trick the
compiler'.  As it happens, however, (if I remember correctly) Allegro
Common Lisp will not by default allocate closures or closed over
variables on the stack, so the simple `tail call' compiler
optimization will result in `asymptotically space efficient' programs
in virtually every case.  (I'm sure there are screw cases where an
ever-growing structure is captured in a shared closure even though the
variable referencing it is `dead', and this *technically* disqualifies
the implementation from being `asymptotically space efficient'.)

My guess is that almost everyone that comes to comp.lang.lisp and asks
about `proper tail recursion' is in fact thinking only about simple
`tail call optimization' and not the weird screw cases.
From: Fred Gilham
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <u7fzoal978.fsf@snapdragon.csl.sri.com>
Joe Marshall <···@ccs.neu.edu> wrote:
> ....
> 
> My guess is that almost everyone that comes to comp.lang.lisp and
> asks about `proper tail recursion' is in fact thinking only about
> simple `tail call optimization' and not the weird screw cases.

Thanks for the explanation.  It seems to me that one can see this as
follows:

Proper tail recursion:  A property relating to space complexity
(bounded space consumption for tail calls)

Tail recursion: A syntactic property (somewhat complicated definition
which I omit)

Tail call: A syntactic property (more general than tail recursion)

Tail call merging: In essence automated use of the JRST hack for tail
calls.  The BASIC version of the JRST hack, which is where I first saw
it, is replacing

100 GOSUB 200
110 RETURN

with

100 GOTO 200


Tail call optimization: Optional use of tail call merging with the
intent of minimizing space complexity


Scheme requires tail call merging (which is why it's not an
optimization for Scheme) and requires that it have the property of
proper tail recursion.

Common Lisp makes no guarantees about tail call merging, which implies
both that it's an optimization and also that it's not necessarily
proper tail recursion.
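
In Lisp terms, the GOSUB/RETURN-to-GOTO rewrite above might be
sketched like this (HANDLE and DISPATCH are hypothetical functions I
made up; whether the call is actually merged depends entirely on the
implementation and its optimization settings):

```lisp
;; A stand-in callee; the point is only the position of the call.
(defun dispatch (x) (list :handled x))

(defun handle (x)
  (declare (optimize (speed 3) (debug 0)))
  ;; Tail position: with tail call merging, "call DISPATCH, then
  ;; return its value" can collapse into a bare jump to DISPATCH,
  ;; just as GOSUB 200 + RETURN collapses into GOTO 200.
  (dispatch x))

;; One can often check with (disassemble 'handle), looking for a
;; jump rather than a call -- but that output is entirely
;; implementation-specific.
```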

-- 
Fred Gilham                                 ······@csl.sri.com
[My tutors] got bored sooner than I, and laid down a general rule
that all statements about languages had to be in a higher level
language.  I thereupon asked in what level of language that rule was
formulated.  I got a very bad report.    -- J. R. Lucas
From: Duane Rettig
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <44r4qtnje.fsf@beta.franz.com>
Fred Gilham <······@snapdragon.csl.sri.com> writes:

> Joe Marshall <···@ccs.neu.edu> wrote:
> > ....
> > 
> > My guess is that almost everyone that comes to comp.lang.lisp and
> > asks about `proper tail recursion' is in fact thinking only about
> > simple `tail call optimization' and not the weird screw cases.
> 
> Thanks for the explanation.  It seems to me that one can see this as
> follows:
> 
> Proper tail recursion:  A property relating to space complexity
> (bounded space consumption for tail calls)

Yes, when taken as a whole (preferably quoted).

> Tail recursion: A syntactic property (somewhat complicated definition
> which I omit)

This one is problematic.  If you google for it, you find two major
definitions for it: one is the one you describe, and the other carries
with it the implication that besides the syntax, the implementation
turns the call into a jump.

> Tail call: A syntactic property (more general than tail recursion)

Sort of.  I think that this term is relatively unencumbered, so it
shouldn't refer to the encumbered term...

> Tail call merging: In essence automated use of the JRST hack for tail
> calls.  The BASIC version of the JRST hack, which is where I first saw
> it, is replacing
> 
> 100 GOSUB 200
> 110 RETURN
> 
> with
> 
> 100 GOTO 200

This is my term (I am not the first to use it, but I have been
promoting it on c.l.l for several years now).

> Tail call optimization: Optional use of tail call merging with the
> intent of minimizing space complexity

I do object to this term, because it is not clear what is being
"optimized", nor at the expense of what-else.  Consider heap space,
time, stack space, and debuggability as qualities that might be
optimized or pessimized, and then tell me what this term really
means...

> Scheme requires tail call merging (which is why it's not an
> optimization for Scheme) and requires that it have the property of
> proper tail recursion.

No, Scheme requires "proper tail recursion" alone.  Whatever tail
call merging is required is taken care of by this definitional
requirement.

> Common Lisp makes no guarantees about tail call merging, which implies
> both that it's an optimization and also that it's not necessarily
============^^^^
> proper tail recursion.

Mostly correct, but I would modify "it's" with "it can be"

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Joe Marshall
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <3ckapcub.fsf@ccs.neu.edu>
> Fred Gilham <······@snapdragon.csl.sri.com> writes:
> 
> > Tail call optimization: Optional use of tail call merging with the
> > intent of minimizing space complexity

Duane Rettig <·····@franz.com> writes:
> I do object to this term, because it is not clear what is being
> "optimized", nor at the expense of what-else.  Consider heap space,
> time, stack space, and debuggability as qualities that might be
> optimized or pessimized, and then tell me what this term really
> means...

I think most people understand it to mean optimization of stack
depth.  Very few people understand `highly optimized' to mean `very
debuggable'.  I understand your objection that tail call optimization
may very well lead to slower code.  But there are many `optimizations'
that can lead to slower code under certain circumstances.
From: Duane Rettig
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <4vfx6s3cm.fsf@beta.franz.com>
Joe Marshall <···@ccs.neu.edu> writes:

> > Fred Gilham <······@snapdragon.csl.sri.com> writes:
> > 
> > > Tail call optimization: Optional use of tail call merging with the
> > > intent of minimizing space complexity
> 
> Duane Rettig <·····@franz.com> writes:
> > I do object to this term, because it is not clear what is being
> > "optimized", nor at the expense of what-else.  Consider heap space,
> > time, stack space, and debuggability as qualities that might be
> > optimized or pessimized, and then tell me what this term really
> > means...
> 
> I think most people understand it to mean optimization of stack
> depth.

Perhaps, but at least quite a few people understand optimization
in only one way, and that is speed.

>  Very few people understand `highly optimized' to mean `very
> debuggable'.

Perhaps, but it is at least one of the OPTIMIZE qualities that can
be declared in Common Lisp.

>  I understand your objection that tail call optimization
> may very well lead to slower code.  But there are many `optimizations'
> that can lead to slower code under certain circumstances.

In order to be clear, optimizations are best qualified by whatever
one is optimizing toward (e.g. "optimize for speed", "optimize for space",
"optimize for shortest flight distance", "optimize for lowest price",
etc).  It is usually understood, of course, that whatever one optimizes
for might pessimize another related quality.   If what is being optimized
is unclear, though, then the cause and effect relationship between
the "optimization" and the quality that is in fact pessimized becomes
confusing to the observer.

From my reading, it seems that the "space" in "optimize for space" tends
to mean "stack space" to Schemers, whereas it tends to mean "heap space"
or "consing" to Common Lispers.  This overload of "space" when a Schemer
describes "tail call optimization" and even "proper tail recursion" is
confusing to a Common Lisper.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: William D Clinger
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <b84e9a9f.0304231250.94aea5@posting.google.com>
Duane Rettig wrote:
> From my reading, it seems that the "space" in "optimize for space" tends
> to mean "stack space" to Schemers, whereas it tends to mean "heap space"
> or "consing" to Common Lispers.

I think Schemers are more likely to use "space" to mean space
in general, without distinguishing between stack and heap,
partly because there are so many Scheme interpreters that
allocate all run-time structures in the heap, partly because
there has been a lot of dialogue between the Scheme, ML, and
FP communities, and partly because optimizing for heap space
usually goes hand-in-hand with optimizing for stack space.

That last bit has come as a surprise to some compiler people,
I think, but it is becoming common knowledge in the gc community.
When I first benchmarked Larceny against Java on the gc benchmark
that is most widely used in the Java world, I was surprised by
how much better Larceny performed.  Looking into it, however, it
turns out that this benchmark contains a non-tail call that a
compiler can easily turn into a tail call by removing an assignment
to a dead variable.  Then a properly tail recursive implementation
will use about half of the peak live storage for that phase of the
benchmark that an ordinary Java system would use.  On other gc
benchmarks, Lars Hansen and I have found that the variation
attributable to the garbage collectors is often less than the
differences in space efficiency that are attributable to the
compilers, even when we are comparing different implementations
of Scheme.  Lars ended up implementing five different garbage
collectors for Larceny so we could keep the compiled code constant
across the collectors whose performance we were trying to measure.
Other gc researchers have reported similar experiences.

Will
From: Duane Rettig
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <4d6jcoq9d.fsf@beta.franz.com>
······@qnci.net (William D Clinger) writes:

> Duane Rettig wrote:
> > From my reading, it seems that the "space" in "optimize for space" tends
> > to mean "stack space" to Schemers, whereas it tends to mean "heap space"
> > or "consing" to Common Lispers.
> 
> I think Schemers are more likely to use "space" to mean space
> in general, without distinguishing between stack and heap,
> partly because there are so many Scheme interpreters that
> allocate all run-time structures in the heap, partly because
> there has been a lot of dialogue between the Scheme, ML, and
> FP communities, and partly because optimizing for heap space
> usually goes hand-in-hand with optimizing for stack space.
> 
> That last bit has come as a surprise to some compiler people,
> I think, but it is becoming common knowledge in the gc community.
> When I first benchmarked Larceny against Java on the gc benchmark
> that is most widely used in the Java world, I was surprised by
> how much better Larceny performed.  Looking into it, however, it
> turns out that this benchmark contains a non-tail call that a
> compiler can easily turn into a tail call by removing an assignment
> to a dead variable.  Then a properly tail recursive implementation
> will use about half of the peak live storage for that phase of the
> benchmark that an ordinary Java system would use.  On other gc
> benchmarks, Lars Hansen and I have found that the variation
> attributable to the garbage collectors is often less than the
> differences in space efficiency that are attributable to the
> compilers, even when we are comparing different implementations
> of Scheme.  Lars ended up implementing five different garbage
> collectors for Larceny so we could keep the compiled code constant
> across the collectors whose performance we were trying to measure.
> Other gc researchers have reported similar experiences.

Yes, I can understand this.  Stack size becomes especially important
when dealing with real-time gc implementations; the depth of stack
traversal is critical in bounding the hard real-time guarantees.

However, in a language which does not allocate "stacks" on the heap,
the gc can be optimized to separate stack space from heap space,
because only the pointer-forwarding scanning need be done on stacks,
and the stacks themselves (as well as structures allocated on the
stack) are considered "static" wrt gc, and thus need not be moved
themselves.  This is probably why we do tend to make such a
distinction.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: William D Clinger
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <b84e9a9f.0304240854.33822415@posting.google.com>
Duane Rettig wrote:
> However, in a language which does not allocate "stacks" on the heap,
> the gc can be optimized to separate stack space from heap space,
> because only the pointer-forwarding scanning need be done on stacks,
> and the stacks themselves (as well as structures allocated on the
> stack) are considered "static" wrt gc, and thus need not be moved
> themselves.  This is probably why we do tend to make such a
> distinction.

I think you're missing the point here, which is that the space that
is occupied by unnecessary stack frames is seldom as large as the
heap space that the garbage collector considers live only because
it is reachable from unnecessary stack frames.

The O(1) versus O(n) space of loops that are implemented by tail
calls is easy to understand, so people tend to focus on it.  In
particular, the Steele and Sussman papers and most introductory
textbooks that use Scheme emphasize this case.  People who use
languages like Common Lisp therefore tend to think that proper
tail recursion is unimportant, because they don't see any advantage
to writing loops as tail calls.  They also don't see much use for
CPS, which is another pedagogical example.

So let's look at a more realistic example.  A lot of programs work
by rewriting some large recursive data structure.  Sometimes the
structure is rewritten in place, but sometimes the rewriting is
done by creating a modified copy, especially when independent or
concurrent modules may have been using the original version.  As
an abstraction of this common program structure, consider these
two ways to copy a binary tree:

; Given a tree, copies it by a postorder traversal.

(define (copy-tree-postorder t)
  (if (isLeaf? t)
      (make-leaf (leafInfo t))
      (make-tree (copy-tree-postorder (leftChild t))
                 (copy-tree-postorder (rightChild t)))))

; Given a tree, copies it by a preorder traversal.

(define (copy-tree-preorder t)
  (if (isLeaf? t)
      (make-leaf (leafInfo t))
      (let ((newtree (make-tree-node)))
        (copy-tree! (leftChild t) leftChild-set! newtree)
        (copy-tree! (rightChild t) rightChild-set! newtree)
        newtree)))

(define (copy-tree! t store-into-parent! parent)
  (if (isLeaf? t)
      (store-into-parent! parent (make-leaf (leafInfo t)))
      (let ((newtree (make-tree-node)))
        (store-into-parent! parent newtree)
        (copy-tree! (leftChild t) leftChild-set! newtree)
        (copy-tree! (rightChild t) rightChild-set! newtree))))

Now suppose the tree we want to copy is a balanced binary tree
that occupies 200 megabytes of storage.  If COPY-TREE-POSTORDER
is used to copy the tree, we will need about 400 megabytes of
heap storage during the copy: 200 for the original, and 200 for
the copy.  If COPY-TREE-PREORDER is used, and it turns out that
no other module is working on the original, then we will need
only 300 megabytes of heap storage, because the garbage collector
will discard the original as we make the copy.

The difference between 300 and 400 megabytes of heap storage can
make a big difference for the efficiency of garbage collection.
Note, however, that this efficiency will be realized only if the
implementation is properly tail recursive.  If the tail call to
COPY-TREE! is compiled just like a non-tail call, then the simpler
postorder copy will use no more space than the preorder copy, and
is likely to run at least as fast.

Now it is certainly possible to rewrite COPY-TREE! as a loop to
get rid of the tail call, but I doubt whether many Common Lisp
programmers would do that as a matter of course.  Furthermore it
might be very difficult to replace the tail call by an iterative
construct in the realistic programs that I am using COPY-TREE! to
model.  For one thing, the tail calls in realistic programs are
less likely to be self-tail calls, so getting rid of tail calls
becomes an interprocedural transformation.

So I am not at all convinced by the argument that proper tail
recursion doesn't matter to Common Lisp programmers because they
don't use tail calls to write loops.  I think this argument is
based in part on the belief that proper tail recursion is about
stack space, not space in general.  I think this belief comes
not only from the rather simplistic treatment of proper tail
recursion in introductory textbooks, but also from some of the
classic papers by Steele and Sussman, where they were concerned
to popularize these ideas by explaining them in the simplest
ways possible.  The use of simple examples is pedagogically
sound, but I think it has led a lot of people, in the Scheme
world as well as in the Common Lisp world, to underestimate the
subtlety and importance of this whole topic.

So here I am, cross-posting to both c.l.l. and c.l.s., trying
to make a point that I think is fairly subtle.
:)

Will
From: Duane Rettig
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <4he8npxbu.fsf@beta.franz.com>
······@qnci.net (William D Clinger) writes:

> Duane Rettig wrote:
> > However, in a language which does not allocate "stacks" on the heap,
> > the gc can be optimized to separate stack space from heap space,
> > because only the pointer-forwarding scanning need be done on stacks,
> > and the stacks themselves (as well as structures allocated on the
> > stack) are considered "static" wrt gc, and thus need not be moved
> > themselves.  This is probably why we do tend to make such a
> > distinction.
> 
> I think you're missing the point here, which is that the space that
> is occupied by unnecessary stack frames is seldom as large as the
> heap space that the garbage collector considers live only because
> it is reachable from unnecessary stack frames.

Yes, you're right.  I had forgotten about that kind of case.
I don't see it too often.  Many times when CLers are recursing,
we tend to be acutely aware of what structures are being copied and
which are being modified.  And for objects which have other references
to them besides what is on the stack, the only real gc cost is the
forwarding of the pointer.  We don't tend to look too much at
objects no longer pointed to, but that usually comes out in the
profiling/optimization phase of development.

> The O(1) versus O(n) space of loops that are implemented by tail
> calls is easy to understand, so people tend to focus on it.  In
> particular, the Steele and Sussman papers and most introductory
> textbooks that use Scheme emphasize this case.  People who use
> languages like Common Lisp therefore tend to think that proper
> tail recursion is unimportant, because they don't see any advantage
> to writing loops as tail calls.  They also don't see much use for
> CPS, which is another pedagogical example.

I think that you hit the mark later when you stated that we CLers don't
see the need for tail calls because we tend to choose iteration when it
is warranted.  Whether we always make the correct decision is possibly a
matter of discussion (and I would venture to say that we probably
in fact don't always make the right decision), but I do consider it a
Good Thing that CL gives us that choice anyway.

> So let's look at a more realistic example.  A lot of programs work
> by rewriting some large recursive data structure.  Sometimes the
> structure is rewritten in place, but sometimes the rewriting is
> done by creating a modified copy, especially when independent or
> concurrent modules may have been using the original version.

I'm not sure that a CLer would consider your example one that is
likely to come up (but at this point I must say that I definitely
don't speak for CLers for this situation, each should speak for
themselves).  The issue is that your reasoning about saving space
presumes that the bad case (where the structure is copied recursively
but not pointed to by any other objects) is likely.  In that case I
as a CLer might tend to decide that, if such knowledge is available,
then perhaps it _is_ better to modify the structure instead of copying
it; and if the fact that nobody points to the original is not knowable,
then it doesn't matter anyway.  But in CL this is in fact a matter
of optimization anyway, one which some CL implementations actually
handle efficiently, even though not required to by the spec.


Let me propose what I consider to be a better example for CLers to
understand:  The printing of bignums.  If a factorial function is 
defined, and (factorial 1000) is calculated, a very large number
is produced.  Now, the obvious way to convert a bignum (which usually
has bigits that each contain some power of 2 bits) to decimal notation
is to recursively divide the bignum by 10, saving off each digit until
the last digit is gotten.  However, with (factorial 1000), which has
2568 decimal digits, a large number of un-reused intermediate bignums
are retained on the stack, which is the kind of inefficiency to which
I think you are referring.

The first iteration [sic] in making such an algorithm more efficient
is to turn such a recursive algorithm into an iterative one.  This
causes only two bignums to be present on the stack at any one time:
the previous newly-dead bignum B1 and the new bignum B2 which is
(floor B1 10) - in the next iteration B2 becomes B1.

However, this is not the best we can do.  It is possible to do this
printing by knowing that these intermediate bignums are in fact never
reused, and thus we can establish a primitive which does the division
in place (i.e. modifying and pruning the contents of the bignum), thus
causing _no_ consing at all, other than an initial copy of the original
bignum and potentially a new result string.
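
The iterative version Duane describes might be sketched like this (a
simplified sketch: a real printer would also avoid the per-digit list
consing, and the in-place division he mentions needs access to the
bignum representation itself):

```lisp
;; Repeatedly take (FLOOR N 10), collecting digits most-significant
;; first.  At most two bignums -- Q and the shrinking N -- are live
;; at any moment, versus one intermediate bignum per recursion level
;; in the naive recursive version.
(defun decimal-digits (n)
  (let ((digits '()))
    (loop
      (multiple-value-bind (q r) (floor n 10)
        (push r digits)
        (when (zerop q) (return digits))
        (setq n q)))))

(decimal-digits 120)  ; => (1 2 0)
```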

> As
> an abstraction of this common program structure, consider these
> two ways to copy a binary tree:
> 
> ; Given a tree, copies it by a postorder traversal.
> 
> (define (copy-tree-postorder t)
>   (if (isLeaf? t)
>       (make-leaf (leafInfo t))
>       (make-tree (copy-tree-postorder (leftChild t))
>                  (copy-tree-postorder (rightChild t)))))
> 
> ; Given a tree, copies it by a preorder traversal.
> 
> (define (copy-tree-preorder t)
>   (if (isLeaf? t)
>       (make-leaf (leafInfo t))
>       (let ((newtree (make-tree-node)))
>         (copy-tree! (leftChild t) leftChild-set! newtree)
>         (copy-tree! (rightChild t) rightChild-set! newtree)
>         newtree)))
> 
> (define (copy-tree! t store-into-parent! parent)
>   (if (isLeaf? t)
>       (store-into-parent! parent (make-leaf (leafInfo t)))
>       (let ((newtree (make-tree-node)))
>         (store-into-parent! parent newtree)
>         (copy-tree! (leftChild t) leftChild-set! newtree)
>         (copy-tree! (rightChild t) rightChild-set! newtree))))

I think that if these were translated into CL, some or most
implementations would perform these copies with similar efficiency.
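
For reference, a direct CL rendering of the preorder version might
look like this (my sketch, not Will's code; the DEFSTRUCTs stand in
for the tree constructors and accessors the Scheme version assumes):

```lisp
(defstruct leaf info)
(defstruct node left right)

(defun copy-tree-preorder (tree)
  (if (leaf-p tree)
      (make-leaf :info (leaf-info tree))
      (let ((new (make-node)))
        (copy-tree-into (node-left tree)
                        #'(lambda (p c) (setf (node-left p) c)) new)
        (copy-tree-into (node-right tree)
                        #'(lambda (p c) (setf (node-right p) c)) new)
        new)))

(defun copy-tree-into (tree store! parent)
  (if (leaf-p tree)
      (funcall store! parent (make-leaf :info (leaf-info tree)))
      (let ((new (make-node)))
        (funcall store! parent new)
        (copy-tree-into (node-left tree)
                        #'(lambda (p c) (setf (node-left p) c)) new)
        ;; The final call below is the tail call whose merging is
        ;; at issue in this thread.
        (copy-tree-into (node-right tree)
                        #'(lambda (p c) (setf (node-right p) c)) new))))
```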

 [ ... ]

> Now it is certainly possible to rewrite COPY-TREE! as a loop to
> get rid of the tail call, but I doubt whether many Common Lisp
> programmers would do that as a matter of course.  Furthermore it
> might be very difficult to replace the tail call by an iterative
> construct in the realistic programs that I am using COPY-TREE! to
> model.  For one thing, the tail calls in realistic programs are
> less likely to be self-tail calls, so getting rid of tail calls
> becomes an interprocedural transformation.

I don't think that the CL programmer necessarily needs to do this
transformation, because (if I read this Scheme program correctly)
many of the CL implementations would do this for them, if asked.

> So I am not at all convinced by the argument that proper tail
> recursion doesn't matter to Common Lisp programmers because they
> don't use tail calls to write loops.  I think this argument is
> based in part on the belief that proper tail recursion is about
> stack space, not space in general.  I think this belief comes
> not only from the rather simplistic treatment of proper tail
> recursion in introductory textbooks, but also from some of the
> classic papers by Steele and Sussman, where they were concerned
> to popularize these ideas by explaining them in the simplest
> ways possible.  The use of simple examples is pedagogically
> sound, but I think it has led a lot of people, in the Scheme
> world as well as in the Common Lisp world, to underestimate the
> subtlety and importance of this whole topic.

Understood.  I think that this is a basic difference between the
CL and Scheme world - not that one is right and the other wrong,
but that different priority choices are made in each world which
do not translate easily into language which the other world
understands.

> So here I am, cross-posting to both c.l.l. and c.l.s., trying
> to make a point that I think is fairly subtle.
> :)

And the subtlety will be missed by some, but hopefully some will
see it.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: William D Clinger
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <b84e9a9f.0304241353.c836afe@posting.google.com>
Duane Rettig wrote:
> I don't think that the CL programmer necessarily needs to do this
> transformation, because (if I read this Scheme program correctly)
> many of the CL implementations would do this for them, if asked.

Right on.  If the leading implementations of a language are properly
tail recursive (as in Standard ML), or can be made so at the flip of
a switch (as in Common Lisp), then it doesn't matter so much whether
the definition of the language actually requires this.

I know Duane Rettig is one of the people who is most responsible for
making proper tail recursion (by whatever name) available to Common
Lisp programmers, and I thank him for that.

Will
From: Duane Rettig
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <47k9jpmk4.fsf@beta.franz.com>
······@qnci.net (William D Clinger) writes:

> Duane Rettig wrote:
> > I don't think that the CL programmer necessarily needs to do this
> > transformation, because (if I read this Scheme program correctly)
> > many of the CL implementations would do this for them, if asked.
> 
> Right on.  If the leading implementations of a language are properly
> tail recursive (as in Standard ML), or can be made so at the flip of
> a switch (as in Common Lisp), then it doesn't matter so much whether
> the definition of the language actually requires this.
> 
> I know Duane Rettig is one of the people who is most responsible for
> making proper tail recursion (by whatever name) available to Common
> Lisp programmers, and I thank him for that.

Thanks.  But hang on: it's not really "proper tail recursion" that we
do, the transformations we do are not "safe-for-space", but instead
they are just "less-unsafe-for-space" :-) than purely non-tail-merged
programs.  We do tail merging when it is convenient: when there are
no bound specials, and when there are no active catch-frames in the
stack frame.  The functions you showed did none of these, as far as
I could tell from my limited ability to read Scheme, so I guessed
that direct translations of these programs into CL would be tail-mergable
in our own implementation of CL.

I'm not ready to be a hero yet :-)

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Thomas F. Burdick
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <xcvhe8n1l2k.fsf@conquest.OCF.Berkeley.EDU>
Duane Rettig <·····@franz.com> writes:

> Thanks.  But hang on: it's not really "proper tail recursion" that we
> do, the transformations we do are not "safe-for-space", but instead
> they are just "less-unsafe-for-space" :-) than purely non-tail-merged
> programs.  We do tail merging when it is convenient: when there are
> no bound specials, and when there are no active catch-frames in the
> stack frame.  The functions you showed did none of these, as far as
> I could tell from my limited ability to read Scheme, so I guessed
> that direct translations of these programs into CL would be tail-mergable
> in our own implementation of CL.
> 
> I'm not ready to be a hero yet :-)

Heh -- maybe the compiler should raise a not-ready-for-heroics warning
if it comes across a situation where it would have merged a tail-call,
but was stopped by, e.g., a special binding.

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: Duane Rettig
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <43ck7pfet.fsf@beta.franz.com>
···@conquest.OCF.Berkeley.EDU (Thomas F. Burdick) writes:

> Duane Rettig <·····@franz.com> writes:
> 
> > Thanks.  But hang on: it's not really "proper tail recursion" that we
> > do, the transformations we do are not "safe-for-space", but instead
> > they are just "less-unsafe-for-space" :-) than purely non-tail-merged
> > programs.  We do tail merging when it is convenient: when there are
> > no bound specials, and when there are no active catch-frames in the
> > stack frame.  The functions you showed did none of these, as far as
> > I could tell from my limited ability to read Scheme, so I guessed
> > that direct translations of these programs into CL would be tail-mergable
> > in our own implementation of CL.
> > 
> > I'm not ready to be a hero yet :-)
> 
> Heh -- maybe the compiler should raise a not-ready-for-heroics warning
> if it comes across a situation where it would have merged a tail-call,
> but was stopped by eg a special binding.

I could do that :-)

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Matthias Blume
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <pan.2003.04.25.01.59.05.820709@shimizu-blume.com>
On Thu, 24 Apr 2003 15:17:47 -0700, Duane Rettig wrote:

> ······@qnci.net (William D Clinger) writes:
> 
>> Duane Rettig wrote:
>> > I don't think that the CL programmer necessarily needs to do this
>> > transformation, because (if I read this Scheme program correctly)
>> > many of the CL implementations would do this for them, if asked.
>> 
>> Right on.  If the leading implementations of a language are properly
>> tail recursive (as in Standard ML), or can be made so at the flip of
>> a switch (as in Common Lisp), then it doesn't matter so much whether
>> the definition of the language actually requires this.
>> 
>> I know Duane Rettig is one of the people who is most responsible for
>> making proper tail recursion (by whatever name) available to Common
>> Lisp programmers, and I thank him for that.
> 
> Thanks.  But hang on: it's not really "proper tail recursion" that we
> do, the transformations we do are not "safe-for-space", but instead
> they are just "less-unsafe-for-space" :-) than purely non-tail-merged
> programs.  We do tail merging when it is convenient: when there are
> no bound specials, and when there are no active catch-frames in the
> stack frame.  The functions you showed did none of these, as far as
> I could tell from my limited ability to read Scheme, so I guessed
> that direct translations of these programs into CL would be tail-mergable
> in our own implementation of CL.
> 
> I'm not ready to be a hero yet :-)

Hey, you could easily get your hero status by simply defining the problem
away.  (Remember that "safe-for-space" means nothing without the axiom
set, and you get to pick the axiom set.)

If there are active catch frames in the current frame (in SML lingo
this means that there is an exception handler around the expression, I
suppose -- correct me if I am wrong), then this was not a tail context.
(In SML, for example, the expression to the left of a "handle" is not
in a tail context either.)  Same thing when there are bound specials.
In other words, the calls you don't merge aren't tail calls to begin
with...

Congratulations, you are now officially a hero!  :-)

Matthias
From: Duane Rettig
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <4ptna36yh.fsf@beta.franz.com>
"Matthias Blume" <········@shimizu-blume.com> writes:

> On Thu, 24 Apr 2003 15:17:47 -0700, Duane Rettig wrote:
> 
> > ······@qnci.net (William D Clinger) writes:
> > 
> >> Duane Rettig wrote:
> >> > I don't think that the CL programmer necessarily needs to do this
> >> > transformation, because (if I read this Scheme program correctly)
> >> > many of the CL implementations would do this for them, if asked.
> >> 
> >> Right on.  If the leading implementations of a language are properly
> >> tail recursive (as in Standard ML), or can be made so at the flip of
> >> a switch (as in Common Lisp), then it doesn't matter so much whether
> >> the definition of the language actually requires this.
> >> 
> >> I know Duane Rettig is one of the people who is most responsible for
> >> making proper tail recursion (by whatever name) available to Common
> >> Lisp programmers, and I thank him for that.
> > 
> > Thanks.  But hang on: it's not really "proper tail recursion" that we
> > do, the transformations we do are not "safe-for-space", but instead
> > they are just "less-unsafe-for-space" :-) than purely non-tail-merged
> > programs.  We do tail merging when it is convenient: when there are
> > no bound specials, and when there are no active catch-frames in the
> > stack frame.  The functions you showed did none of these, as far as
> > I could tell from my limited ability to read Scheme, so I guessed
> > that direct translations of these programs into CL would be tail-mergable
> > in our own implementation of CL.
> > 
> > I'm not ready to be a hero yet :-)
> 
> Hey, you could easily get your hero status by simply defining the problem
> away.  (Remember that "safe-for-space" means nothing without the axiom
> set, and you get to pick the axiom set.)
> 
> If there are active catch frames in the current frame (in SML lingo
> this means that there is an exception handler around the expression, I
> suppose -- correct me if I am wrong), then this was not a tail context.
> In SML, for example, the expression to the left of a "handle" is not in
> a tail context either.) Same thing when there are bound specials.  In
> other words, the calls you don't merge aren't tail calls to begin with...
> 
> Congratulations, you are now officially a hero!  :-)

Thanks, and it's been gratifying to be able to wear the cape and
colored underwear for a day, but now I need to give it back to whom
it is due.

I assume that when people talk about me being the hero, it is
really Franz Inc. they are talking about, and that I just happen to
be a major voice of Franz Inc. on comp.lang.lisp (and, more recently,
comp.lang.scheme :-).  And it is true that I am the current keeper of
the compiler, and that I have made enhancements to tail merging.
However, the original tail merging code was written almost from the
beginning by John Foderaro, and the concepts and discussions that led
to splitting the behavior between self and non-self tail calls
involved John, Steve Haflich, other Franz developers, and former
colleagues Chris Richardson and Jim Larus.

Having searched through the archives, the oldest message I can see on
the subject of tail merging (we called it that back then as well) was
in 1989, and it looks like the tail-merge-switch was separated into
self and non-self switches in 1991.

I also have to point out a splotch on the cape; tail merging in
Allegro CL is not done if the argument list is complex (i.e. if
there are &optional, &rest, or &key arguments), or if the number of
arguments sent to the to-be-merged function is more than the max of
the number of args passed to this function and the number of args
normally passed in registers.  In other words:

(defun foo (x y z) ... (bar x y z))

might be tail-merged, but:

(defun foo (x y z) (let ((w ...)) (bar x y z w)))

might not be tail-merged on some architectures (e.g. the x86 passes
only the first two arguments in registers in Allegro CL) and

(defun foo (a b c d e f g h) (let ((i ... )) (bar a b c d e f g h i)))

will not be tail merged, because no architecture we support passes more
than 8 arguments in registers.  The circumstances for tail-merging in
the examples above are indicative of the way we allocate space on the
stack for arguments, and thus whether or not that space will be available
for reuse when tail-merging.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Joe Marshall
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <bryug10o.fsf@ccs.neu.edu>
"Matthias Blume" <········@shimizu-blume.com> writes:

> If there are active catch frames in the current frame (in SML lingo
> this means that there is an exception handler around the expression, I
> suppose -- correct me if I am wrong), then this was not a tail context.
> In SML, for example, the expression to the left of a "handle" is not in
> a tail context either.) Same thing when there are bound specials.  In
> other words, the calls you don't merge aren't tail calls to begin with...

But both of these cases have the `buried binding' problem:  the
specials that are being unbound (or the handlers that are being
de-established) can end up being unreachable, yet the stack frame and
associated heap structures are retained until the dynamic scope is
exited. 

> Congratulations, you are now officially a hero!  :-)

I'm nitpicking.  I don't expect any implementation to actually deal
with `buried bindings'.  Duane definitely gets points for tail merging
in ACL.


By the way, there is an interesting article by Darius Bacon at
http://www.accesscom.com/~darius/writings/dynatail.html
that discusses the dead binding problem and how to achieve proper tail
recursion in a dynamically scoped lisp.  He gives two compelling
examples for the utility of tail recursion:

    (defun read (stream)
      (let ((char (read-char stream)))
        (funcall (svref *readtable* (char-code char))
                 char
                 stream)))

    (defun install-read-macro (char handler)
      (setf (svref *readtable* (char-code char)) handler))

    (install-read-macro #\' 
      (lambda (char stream) 
        (list 'quote (read stream))))

    (dolist (whitespace '(#\space #\tab #\newline))
      (install-read-macro whitespace
        (lambda (char stream)
          (read stream))))

    Without tail recursion the stack depth grows in proportion to the
    length of the input.

    If you still lack faith, ponder the Windows event loop.
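
The read-macro dispatch above is a chain of mutual tail calls (read
calls the handler, which calls read again).  A minimal Python sketch
(names are illustrative, not from the article) of the same shape: since
Python does no tail merging, a trampoline keeps the stack depth
constant no matter how long the whitespace run is.

```python
READTABLE = {}

def install_read_macro(ch, handler):
    READTABLE[ch] = handler

def read(stream):
    # dispatch on the next character, exactly as in the Lisp version
    ch = stream.pop(0)
    return READTABLE[ch](ch, stream)

def trampoline(x):
    # bounce: keep invoking thunks until a handler returns a real value
    while callable(x):
        x = x()
    return x

# whitespace handlers "tail-call" read by returning a thunk...
for ws in ' \t\n':
    install_read_macro(ws, lambda ch, stream: (lambda: read(stream)))

# ...while ordinary characters read as themselves
for c in 'abc':
    install_read_macro(c, lambda ch, stream: ch)

result = trampoline(read(list('   a')))   # 'a', at constant stack depth
```

With proper tail calls the thunks and the trampoline would be
unnecessary; the handlers could simply call read directly.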
From: ····@pobox.com
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <7eb8ac3e.0304241749.5734813d@posting.google.com>
William Clinger wrote:
> ; Given a tree, copies it by a postorder traversal.
> 
> (define (copy-tree-postorder t)
>   (if (isLeaf? t)
>       (make-leaf (leafInfo t))
>       (make-tree (copy-tree-postorder (leftChild t))
>                  (copy-tree-postorder (rightChild t)))))

> (define (copy-tree-preorder t)
>   (if (isLeaf? t)
>       (make-leaf (leafInfo t))
>       (let ((newtree (make-tree-node)))
>         (copy-tree! (leftChild t) leftChild-set! newtree)
>         (copy-tree! (rightChild t) rightChild-set! newtree)
>         newtree)))

> Now suppose the tree we want to copy is a balanced binary tree
> that occupies 200 megabytes of storage.  If COPY-TREE-POSTORDER
> is used to copy the tree, we will need about 400 megabytes of
> heap storage during the copy: 200 for the original, and 200 for
> the copy.  If COPY-TREE-PREORDER is used, and it turns out that
> no other module is working on the original, then we will need
> only 300 megabytes of heap storage, because the garbage collector
> will discard the original as we make the copy.

I believe we can do better -- and without resorting to side
effects. Furthermore, our code would achieve the best memory
utilization without compromising correctness and without requiring any
extra knowledge as to who might be using our source tree concurrently
with us. Moreover, we require only tail-call merging.  The solution
is a "*light*" CPS conversion of copy-tree-postorder.  The result
retains the benefits of CPS (almost automatic conversion) without the
drawbacks of creating closures and abusing the heap.

(define (copy-tree-au t k args)
  (if (isLeaf? t)
      (apply k (make-leaf (leafInfo t)) args)
      ; When copy-left is about to be executed, the node t
      ; can be discarded (if nobody else uses our tree)
      (copy-left (leftChild t) (rightChild t) k args)))

(define (copy-left tl tr k args)
  (copy-tree-au tl 
    copy-right (list tr k args)))

; ntl is the copy of the left side of the tree. The original is no
; longer needed (nor referenced), and can be safely discarded
(define (copy-right ntl tr k args)
  (copy-tree-au tr 
    build (list ntl k args)))

; None of the trees that were originals for ntl and ntr are needed anymore.
(define (build ntr ntl k args)
  (apply k (make-node ntl ntr) args))

(define (copy-tree t)
  (copy-tree-au t id '()))

(define (id x) x)

As we can see, no closures are created. We need at most d *
sizeof(3-element-list) of extra storage, where d is the max depth of
the tree.  The following definitions make the code runnable:

(define make-leaf vector)
(define make-node vector)
(define (isLeaf? x) (and (vector? x) (= 1 (vector-length x))))
(define (leftChild x) (vector-ref x 0))
(define (rightChild x) (vector-ref x 1))
(define (leafInfo x) (vector-ref x 0))

(define (make-tree n)
  (let loop ((n n) (counter 0))
    (if (zero? n) (make-leaf counter)
      (make-node (loop (- n 1) (* 2 counter))
	         (loop (- n 1) (+ 1 (* 2 counter)))))))

; Here's the correctness test. It prints #t
(let* ((tree (make-tree 10))
       (tc  (copy-tree tree)))
  (and (not (eq? tree tc)) (equal? tree tc)))

Duane Rettig wrote:
> The issue is that your reasoning for saving the space
> presumes that the bad case (where the structure is copied recursively
> but not pointed to by any other objects) is likely, in which case I
> as a CLer might tend to decide that if such knowledge is available,
> then perhaps it _is_ better to modify the structure instead of copying
> it, and if the fact of nobody pointing to the original is not knowable,
> then it doesn't matter anyway. 

Unfortunately, such knowledge is not binary. In general it's rare
that we have exact knowledge that someone is definitely pointing to
the root of the tree -- or that someone is definitely not pointing to
any tree node at all. Frequently we know that there _might_ be a
client that points to some, probably small, part of the source
tree. Alas, we can't say which part of the tree the client is
pointing to -- if it points to any. Therefore, it's preferable to
write the tree-copying procedure in such a way that it automatically
behaves in the best possible way -- in all circumstances. Let the
garbage collector decide who points to what, and dispose of
unreferenced nodes.


Our procedure at best needs only d * 4 extra cells to copy the
tree. When the source tree is not referenced, we effectively replace
the tree in place, at an extra cost of only 4d cells. If the tree
occupied 256 MB (and we assume the size of the node is four 4-byte
cells), we need only 92 extra cells. In the worst case, when the
entire source tree is referenced by somebody, we need 256MB for the
copy -- and still 92 extra cells. If a client points to a part of the
source tree, the only extra space we need is equal to the size of that
extra part, plus 92 cells. We achieve the ultimate efficiency without
sacrificing correctness -- we never mutate data that are in use by
someone else -- in fact we never mutate.

Duane Rettig wrote:
> Let me propose what I consider to be a better example for CLers to
> understand:  The printing of bignums.  If a factorial function is 
> defined, and (factorial 1000) is calculated, a very large number
> is produced.  Now, the obvious way to convert a bignum (which usually
> has bigits that each contain some power of 2 bits) to decimal notation,
> is to recursively divide the bignum by 10, saving off each digit until
> the last digit is gotten.  However, with (factorial 1000), which has
> 2568 decimal digits, a large number of un-reused intermediate bignums
> are retained on the stack, which is the kind of inefficiency to which
> I think you are referring.  ... It is possible to do this
> printing by knowing that these intermediate bignums are in fact never
> reused, and thus we can establish a primitive which does the division
> in place (i.e. modifying and pruning the contents of the bignum), thus
> causing _no_ consing at all, other than an initial copy of the original
> bignum and potentially a new result string.


If bignums are lists of bigits, then there is no need to resort to
mutations. Let us consider a purely functional algorithm, which relies
on a function (bignum-div n) that takes n and returns two values,
floor(n/10) and the remainder. It helps to divide by 100, btw. Using
the traditional long division, we can divide bigit by bigit, from
the most significant down, using recursion. Each used bigit of
the dividend can be collected immediately. Our (bignum-div n) needs
only sizeof(n)+sizeof(one-bigit) of space -- at all
times. Thus the total (bignum->str n) needs only
sizeof(n)+sizeof(one-bigit) space in the best case (n is not
referenced) or 2*sizeof(n)+sizeof(one-bigit) space when n is
referenced. This is in the same space complexity class as your
in-place division primitive.
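
The long-division scheme described above can be sketched in Python (a
hedged illustration; the bigit size and the function names are
assumptions, not from the post).  Each call to bignum_div10 builds a
fresh quotient and the superseded one becomes garbage, mirroring the
claim that consumed bigits are immediately collectible:

```python
BASE = 2 ** 16   # assumed bigit size: 16 bits per bigit

def bignum_div10(bigits):
    """Divide by 10 with traditional long division, most significant
    bigit first.  Returns (quotient bigits, decimal-digit remainder)."""
    quotient, carry = [], 0
    for b in bigits:
        acc = carry * BASE + b
        quotient.append(acc // 10)
        carry = acc % 10
    # drop a now-zero leading bigit so the quotient shrinks as we go
    while len(quotient) > 1 and quotient[0] == 0:
        quotient.pop(0)
    return quotient, carry

def bignum_to_str(bigits):
    digits = []
    while True:
        bigits, r = bignum_div10(bigits)   # old quotient becomes garbage
        digits.append(str(r))
        if bigits == [0]:
            break
    return ''.join(reversed(digits))
```

The live data at any moment is one quotient plus one digit, which
matches the sizeof(n)+sizeof(one-bigit) bound above -- assuming the
collector reclaims the superseded lists promptly.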


The two algorithms here are self-adapting and have the best space
utilization in all circumstances. We don't need extra knowledge about
aliasing of arguments to our functions and about data that can be
safely mutated in-place.  We let the garbage collector decide, which
has all that information anyway. The absence of mutations should make
a generational collector happier.
From: Joe Marshall
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <n0ifiym3.fsf@ccs.neu.edu>
······@qnci.net (William D Clinger) writes:

> It might be very difficult to replace the tail call by an iterative
> construct in the realistic programs that I am using COPY-TREE! to
> model.  For one thing, the tail calls in realistic programs are less
> likely to be self-tail calls, so getting rid of tail calls becomes
> an interprocedural transformation.

Since `tail recursion' is a dynamic property (as per Matthias Blume's
usage of the term), it is virtually impossible to replace them with
iterations without whole-program analysis.  (Consider the hoops that
Scheme->C or Scheme->Java programs have to jump through to achieve
proper-tail-recursion).  In many cases there are obvious iterative
constructs that can be used, but in cases like your tree example the
difference between iteration and recursion may depend on the shape of
the tree at runtime.

(I am reminded of a Pascal compiler that used some sort of tree
internally.  It turned out that the tree was usually nearly linear, so
that iterating along one side was frequently possible and had much
higher performance.  The compiler writers re-wrote that part of the
algorithm as a state machine to achieve the performance.  When the
algorithm was adapted to Scheme, however, the original program would
recurse when necessary and iterate otherwise simply because the usual
path through the code was tail recursive.)

> So I am not at all convinced by the argument that proper tail
> recursion doesn't matter to Common Lisp programmers because they
> don't use tail calls to write loops.

Common Lisp is hardly alone in this.  Very few languages *require*
proper tail recursion.  There are even so-called Scheme
implementations that don't provide it (a far more egregious error than
not providing indefinite extent continuations!).  But since nearly
every serious Common Lisp implementation has the means to enable tail
merging, it seems to me to be rather pointless to get all worked up
about it.  It really doesn't matter to me whether Duane perceives tail
recursion as a simple compiler trick or a fundamental language
property so long as his compiler lets *me* elide the stack frames.

> I think this argument is based in part on the belief that proper
> tail recursion is about stack space, not space in general.  I think
> this belief comes not only from the rather simplistic treatment of
> proper tail recursion in introductory textbooks, and also from some
> of the classic papers by Steele and Sussman, where they were
> concerned to popularize these ideas by explaining them in the
> simplest ways possible.  The use of simple examples is pedagogically
> sound, but I think it has led a lot of people, in the Scheme world
> as well as in the Common Lisp world, to underestimate the subtlety
> and importance of this whole topic.

I agree.  I still see people complain that their programs aren't any
faster when they make them tail recursive!  The pragmatic standpoint
is this:  there exist a number of interesting programs that will not
run to completion unless the implementation is tail recursive.
Therefore a language in which tail recursion is simply not possible
(C, C++, Java, etc.) is simply less capable than one that supports
tail recursion either by requirement (Scheme, ML?), or by user choice
(Common Lisp), or even by accident (????).

> So here I am, cross-posting to both c.l.l. and c.l.s., trying
> to make a point that I think is fairly subtle.

You are posting to Usenet and you are worried about subtlety?
From: Matthias Blume
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <m2bryv912p.fsf@tti5.uchicago.edu>
Joe Marshall <···@ccs.neu.edu> writes:

> ······@qnci.net (William D Clinger) writes:
> 
> > It might be very difficult to replace the tail call by an iterative
> > construct in the realistic programs that I am using COPY-TREE! to
> > model.  For one thing, the tail calls in realistic programs are less
> > likely to be self-tail calls, so getting rid of tail calls becomes
> > an interprocedural transformation.
> 
> Since `tail recursion' is a dynamic property (as per Matthias Blume's
> usage of the term), it is virtually impossible to replace them with
> iterations without whole-program analysis.  (Consider the hoops that
> Scheme->C or Scheme->Java programs have jump through to achieve
> proper-tail-recursion).  In many cases there are obvious iterative
> constructs that can be used, but in cases like your tree example the
> difference between iteration and recursion may depend on the shape of
> the tree at runtime.

This is, indeed, the core of the problem which separates "Properly
Tail-Recursive" implementations from those that do not have this
property.  Basically, I see three major cases:

1. There are programs where local static analysis can identify cases
   of tail-recursion that *must* occur at runtime due to the way the
   program is written.  Compilers have an easy time implementing these
   cases of tail-recursion the same way they implement an explicit
   loop construct.

2. There are programs where it takes a global static analysis (e.g.,
   CFA) to identify where tail-recursion necessarily will occur at
   runtime.  Compilers will generally have a hard (but not impossible)
   time with these -- unless all tail-calls get "merged".

3. In general, the static analysis problem is in fact undecidable, so
   for any compiler there will be programs that are necessarily
   tail-recursive but which the compiler is unable to identify as
   such.  There is no chance to turn tail-recursion into an explicit
   loop.  However, an implementation that merges all tail-calls will
   guarantee the space safety requirement anyway by turning such
   tail-recursion into what amounts to an implicit loop.
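
Case 1 can be made concrete with a toy example (a sketch in Python,
which itself does not merge tail calls; the names are illustrative).
The self-tail call below is statically evident, so a tail-merging
compiler can treat it exactly like the explicit loop:

```python
def length_rec(xs, acc=0):
    if not xs:
        return acc
    return length_rec(xs[1:], acc + 1)   # a tail call, evident locally

def length_loop(xs, acc=0):
    # what tail merging amounts to: reuse the frame's variables
    while xs:
        xs, acc = xs[1:], acc + 1
    return acc
```

Cases 2 and 3 are the ones where no such local rewrite exists, which
is why merging all tail calls is the only general guarantee.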

> > I think this argument is based in part on the belief that proper
> > tail recursion is about stack space, not space in general.  I think
> > this belief comes not only from the rather simplistic treatment of
> > proper tail recursion in introductory textbooks, and also from some
> > of the classic papers by Steele and Sussman, where they were
> > concerned to popularize these ideas by explaining them in the
> > simplest ways possible.  The use of simple examples is pedagogically
> > sound, but I think it has led a lot of people, in the Scheme world
> > as well as in the Common Lisp world, to underestimate the subtlety
> > and importance of this whole topic.
> 
> I agree.  I still see people complain that their programs aren't any
> faster when they make them tail recursive!  The pragmatic standpoint
> is this:  there exist a number of interesting programs that will not
> run to completion unless the implementation is tail recursive.
> Therefore a language in which tail recursion is simply not possible
> (C, C++, Java, etc.) is simply less capable than one that supports
> tail recursion either by requirement (Scheme, ML?), or by user choice
> (Common Lisp), or even by accident (????).

For the record, the (S)ML definition makes no claim whatsoever about
space safety.  Appel formulated a space safety requirement in one of
his books (CwC, IIRC), and his definition happens to imply Proper Tail
Recursion.

There are in fact programming techniques (for example, those
exemplified by the Concurrent ML implementation) that inherently rely
on Proper Tail Recursion to be useful.  This usually happens when
control constructs are abstracted as higher-order values.  The generic
CML example is that of "events" and CML's event combinators (in
particular "wrap" and "choose").

Matthias
From: Lars Brinkhoff
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <85of2vhi6i.fsf@junk.nocrew.org>
Joe Marshall <···@ccs.neu.edu> writes:
> Therefore a language in which tail recursion is simply not possible
> (C, C++, Java, etc.)

GCC supports the JRST hack (literally):

  int fact (int n, int m)
  {
    return n <= 1 ? m : fact (n - 1, n * m);
  }

becomes

  fact:
  %3:!
          CAIG 1,1
          JRST %2
          MOVE 6,1
          IMUL 6,2
          SUBI 1,1
          MOVE 2,6
          JRST %3
  %2:!
          MOVE 1,2
          POPJ 17,
From: Joe Marshall
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <he8ng066.fsf@ccs.neu.edu>
Lars Brinkhoff <·········@nocrew.org> writes:

> Joe Marshall <···@ccs.neu.edu> writes:
> > Therefore a language in which tail recursion is simply not possible
> > (C, C++, Java, etc.)
> 
> GCC supports the JRST hack (literally):
> 
>   int fact (int n, int m)
>   {
>     return n <= 1 ? m : fact (n - 1, n * m);
>   }
> 
> becomes
> 
>   fact:
>   %3:!
>           CAIG 1,1
>           JRST %2
>           MOVE 6,1
>           IMUL 6,2
>           SUBI 1,1
>           MOVE 2,6
>           JRST %3
>   %2:!
>           MOVE 1,2
>           POPJ 17,

Ok, it is *possible* to do tail recursion in C, but the nature of the
language discourages it.  So long as you don't take the address of an
object on the stack, you're ok.

You *could* do it in Java as well, but the standard security
implementation requires that you leave dead frames on the stack so you
can see who your caller is.
From: ····@pobox.com
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <7eb8ac3e.0304251518.5a00dc2a@posting.google.com>
> I think you're missing the point here, which is that the space that
> is occupied by unnecessary stack frames is seldom as large as the
> heap space that the garbage collector considers live only because
> it is reachable from unnecessary stack frames.

The problem of logically dead data being physically alive and growing
is well recognized in the Java, Perl, and Python communities. They
usually just call it a "memory leak". For example, the following
article discusses the problem in great detail.

	How Do You Plug Java Memory Leaks?
	By Ethan Henry and Ed Lycklama 
	Dr. Dobb's Journal February 2000, pp. 115-121
	http://www.ddj.com/documents/s=888/ddj0002l/0002l.htm

See especially a section on "limbos"
	"Limbos occur when an object being referenced from the stack is
	pinned in memory by a long-running thread. The problem is that the
	garbage collector can't do what's referred to as "liveness analysis"
	where it would be able to find out that an object won't be used
	anywhere in the rest of a method, thus making it eligible for garbage
	collection."

A typical example (in pseudo-code)
	def foo:
	   big_object = make_big_object
	   small_object = condense big_object
	   return long_computation(small_object)

Although the long_computation doesn't care about the big object, the
latter is referenced from the stack frame of foo. The frame is 
preserved until the long_computation finishes. The long_computation
may find that all memory is exhausted.

Things are actually simpler in Java/Perl/Python. There is no notion of
tail-call, tail-recursion or tail-whatever. Therefore, there is
nothing to argue about. The problem doesn't disappear however. The
most common advice amounts to manual memory management:

	def foo:
	   big_object = make_big_object
	   small_object = condense big_object
	   big_object = null
	   return long_computation(small_object)

Needless to say, functions are often longer than three lines and often
include elaborate conditional statements. If we zero out a variable
too soon, we get the annoying null pointer exception.
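The pattern above can be made concrete as a small runnable Python sketch (the names `foo`, `big_object`, `small_object`, and `long_computation` are the illustrative ones from the pseudo-code, not a real API):

```python
def long_computation(small_object):
    # Stand-in for a long, memory-hungry computation; in a real program
    # it might exhaust memory while big_object is still pinned by foo's frame.
    return sum(small_object)

def foo():
    big_object = list(range(1_000_000))    # the "big" allocation
    small_object = big_object[:3]          # condensed summary we actually need
    big_object = None                      # manual "zeroing out": drop the only
                                           # strong reference so the GC can reclaim it
    return long_computation(small_object)  # the big list is collectable here

print(foo())  # -> 3  (0 + 1 + 2)
```

In CPython the big list happens to be freed as soon as its refcount drops, but the point stands for any tracing collector: without the `big_object = None` line, the object stays reachable from `foo`'s frame for the whole duration of `long_computation`.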
From: Matthias Blume
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <m2el3tkzpp.fsf@tti5.uchicago.edu>
Duane Rettig <·····@franz.com> writes:

> > Tail recursion: A syntactic property (somewhat complicated definition
> > which I omit)
> 
> This one is problematic.  If you google for it, you find two major
> definitions for it: one is the one you describe, and the other carries
> with it the implication that besides the syntax, the implementation
> turns the call into a jump.

Wrong.  It is neither.  (And googling for definitions is not a very
good idea given the amount of misinformation that's out there on the
web.)

Tail-recursion is a property of (a portion of) the dynamic execution
of a program -- namely when all function calls (within that portion)
are tail calls.
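A tiny Python sketch may make the tail-call/non-tail-call distinction concrete (Python itself does not merge tail calls, which is exactly why the definition above is about the program, not the implementation):

```python
def fact_plain(n):
    if n == 0:
        return 1
    # NOT a tail call: the multiplication still has to run
    # after fact_plain(n - 1) returns, so the frame must survive.
    return n * fact_plain(n - 1)

def fact_acc(n, acc=1):
    if n == 0:
        return acc
    # Tail call: the caller's result IS the callee's result,
    # so nothing in this frame is needed after the call.
    return fact_acc(n - 1, n * acc)
```

Both compute the same function (e.g. `fact_plain(5) == fact_acc(5) == 120`); only the second has an execution consisting solely of tail calls.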

> > Tail call optimization: Optional use of tail call merging with the
> > intent of minimizing space complexity
> 
> I do object to this term, because it is not clear what is being
> "optimized", nor at the expense of what-else.  Consider heap space,
> time, stack space, and debuggability as qualities that might be
> optimized or pessimized, and then tell me what this term really
> means...

Nonsense.  Most so-called "optimizations" can turn out to be
pessimizations in some situations.  Moreover, I don't see how
"debuggability" could possibly be "pessimized".  If I can't rely on
tail-call optimization, I have to write explicit loops -- which have the
same (or worse!) "debuggability" as the program with tail calls.

Matthias
From: Duane Rettig
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <4ist5nnnb.fsf@beta.franz.com>
Matthias Blume <········@shimizu-blume.com> writes:

> Duane Rettig <·····@franz.com> writes:
> 
> > > Tail recursion: A syntactic property (somewhat complicated definition
> > > which I omit)
> > 
> > This one is problematic.  If you google for it, you find two major
> > definitions for it: one is the one you describe, and the other carries
> > with it the implication that besides the syntax, the implementation
> > turns the call into a jump.
> 
> Wrong.  It is neither.

Saying it is wrong doesn't make it so.  Please give references for your
assertion.

>  (And googling for definitions is not a very
> good idea given the amount of misinformation that's out there on the
> web.)

It is precisely the amount of "misinformation" (i.e. disagreement)
which has caused this discussion.  William Clinger has been so good
as to discuss with me the issues involved with these terms, and we
have been discussing them from actual documents to which he referred.
If you wish to enter this discussion, please either point out references
to these documents that support your position, or else bring new
documents to light.

> Tail-recursion is a property of (a portion of) the dynamic execution
> of a program -- namely when all function calls (within that portion)
> are tail calls.

Not in Common Lisp.

> > > Tail call optimization: Optional use of tail call merging with the
> > > intent of minimizing space complexity
> > 
> > I do object to this term, because it is not clear what is being
> > "optimized", nor at the expense of what-else.  Consider heap space,
> > time, stack space, and debuggability as qualities that might be
> > optimized or pessimized, and then tell me what this term really
> > means...
> 
> Nonsense.

Only because you don't take the trouble to make sense of it.

>  Most so-called "optimizations" can turn out to be
> pessimizations in some situations.

Of course.  Further down this same thread, I explain further how
optimizing one quality might cause pessimization in another.

>  Moreover, I don't see how "debuggability" could possibly be "pessimized".

That's because when you write in Scheme, the tail-recursion is not an
optimization, thus there is nothing to optimize or pessimize.

>  If I can't rely on
> tail-call optimization, I have to write explicit loops -- which have the
> same (or worse!) "debuggability" than the program with tail calls.

Common Lispers don't consider it a Bad Thing to "have to write explicit
loops".  Also, we don't count on tail calls to be optimized to conserve
stack, so when such stack optimization is not done, there are more frames
on the stack that can be examined.  This constitutes an optimization for
debugging.  When the tail calls are optimized for stack space, fewer stack
frames are available for examination, thus debugging is pessimized.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Matthias Blume
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <m28yu1knbf.fsf@tti5.uchicago.edu>
Duane Rettig <·····@franz.com> writes:

> Matthias Blume <········@shimizu-blume.com> writes:
> 
> > Duane Rettig <·····@franz.com> writes:
> > 
> > > > Tail recursion: A syntactic property (somewhat complicated definition
> > > > which I omit)
> > > 
> > > This one is problematic.  If you google for it, you find two major
> > > definitions for it: one is the one you describe, and the other carries
> > > with it the implication that besides the syntax, the implementation
> > > turns the call into a jump.
> > 
> > Wrong.  It is neither.
> 
> Saying it is wrong doesn't make it so.  Please give references for your
> assertion.

Saying it is right doesn't make it so either.  Please give references
for your assertion (other than google).

> 
> >  (And googling for definitions is not a very
> > good idea given the amount of misinformation that's out there on the
> > web.)
> 
> It is precisely the amount of "misinformation" (i.e. disagreement)
> which has caused this discussion.

Exactly.  So why don't we just stop googling for the right definition
and refer to terms agreed upon by researchers of the matter (as
opposed to the many hobbyists out there).

>  William Clinger has been so good
> as to discuss with me the issues involved with these terms, and we
> have been discussing them from actual documents to which he referred.
> If you wish to enter this discussion, please either point out references
> to these documents that support your position, or else bring new
> documents to light.
> 
> > Tail-recursion is a property of (a portion of) the dynamic execution
> > of a program -- namely when all function calls (within that portion)
> > are tail calls.
> 
> Not in Common Lisp.

So what is it then, in CL, according to you?  Where are your
references?  Is "tail recursion" something else in Common Lisp?  In
that case we will just have to agree on some other phrase for the same
concept.  


> > > > Tail call optimization: Optional use of tail call merging with
> > > > the intent of minimizing space complexity
> > > 
> > > I do object to this term, because it is not clear what is being
> > > "optimized", nor at the expense of what-else.  Consider heap space,
> > > time, stack space, and debuggability as qualities that might be
> > > optimized or pessimized, and then tell me what this term really
> > > means...
> > 
> > Nonsense.
> 
> Only because you don't take the trouble to make sense of it.

Not so.  See below.

> >  Most so-called "optimizations" can turn out to be
> > pessimizations in some situations.
> 
> Of course.  Further down this same thread, I explain further how
> optimizing one quality might cause pessimization in another.

I know you do, but that is not the whole story.  Even compilers that
"optimize for speed" occasionally pessimize speed.  I bet that even CL
compilers (and even those that come out of Franz Inc.) have this
problem.  Otherwise one would be stuck with a set of fairly simple
program transformations which provably never pessimize.  In the real
world, compiler writers use heuristics, and heuristics can be wrong.

> >  Moreover, I don't see how "debuggability" could possibly be "pessimized".
> 
> That's because when you write in Scheme, the tail-recursion is not an
> optimization, thus there is nothing to optimize or pessimize.

That's not what I mean.  First of all, "tail-recursion" IS NOT AT ALL
AN OPTIMIZATION but a (dynamic) property of a program, independent of
its implementation.  Please, take the minor trouble and don't
intentionally use the wrong terminology, please.  Second, I was
talking about what you call "tail-merging" for the case of languages
where it is NOT mandatory (e.g., CL).

And what I am saying for that case is that it does not typically
pessimize debuggability because people will have a tendency to write
loops instead of tail-recursive programs, so in either case (case 1:
loops, case 2: tail-merging) there will not be much of a difference in
the amount of debugging information available.

To be fair, there is one case where there is a difference -- and that
is when one is in the middle of a chain of tail-calls where this chain
does not end up forming a loop, so the CL programmer couldn't have
used an explicit loop construct for them.  However, this difference
can be overcome, and I have personally implemented a debugging
facility for a compiler that is Properly Tail-Recursive(tm).

[FYI: It is a back-trace mechanism that shows all active non-tail
calls, all active tail-calls that do not (yet) form a loop, and
clusters of program points that have already formed a loop.  The only
information it loses is which paths were taken within loop
clusters.  Programs compiled with this debugging facility turned on
have the same asymptotic space- and time-complexity as their original
Properly Tail-Recursive form (so they are also Properly
Tail-Recursive).]

> >  If I can't rely on
> > tail-call optimization, I have to write explicit loops -- which have the
> > same (or worse!) "debuggability" than the program with tail calls.
> 
> Common Lispers don't consider it a Bad Thing to "have to write explicit
> loops".

I know.  All I was saying is that when you do, you don't have any
stack-frames for debugging around either.

Matthias
From: Duane Rettig
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <48yu0ops8.fsf@beta.franz.com>
Matthias Blume <········@shimizu-blume.com> writes:

> Duane Rettig <·····@franz.com> writes:
> 
> > Matthias Blume <········@shimizu-blume.com> writes:
> > 
> > > Duane Rettig <·····@franz.com> writes:
> > > 
> > > > > Tail recursion: A syntactic property (somewhat complicated definition
> > > > > which I omit)
> > > > 
> > > > This one is problematic.  If you google for it, you find two major
> > > > definitions for it: one is the one you describe, and the other carries
> > > > with it the implication that besides the syntax, the implementation
> > > > turns the call into a jump.
> > > 
> > > Wrong.  It is neither.
> > 
> > Saying it is wrong doesn't make it so.  Please give references for your
> > assertion.
> 
> Saying it is right doesn't make it so either.  Please give references
> for your assertion (other than google).

References have been given (not all by me) in the conversation that I
had with William Clinger, and I'm not going to repeat them here.  But
I am not a Scheme expert, so perhaps your argument over whether the
Scheme definition of tail recursion is a syntactic or dynamic property
should be discussed with him instead.

You, on the other hand, have not yet responded to my request for
references.

> > >  (And googling for definitions is not a very
> > > good idea given the amount of misinformation that's out there on the
> > > web.)
> > 
> > It is precisely the amount of "misinformation" (i.e. disagreement)
> > which has caused this discussion.
> 
> Exactly.  So why don't we just stop googling for the right definition
> and refer to terms agreed upon by researchers of the matter (as
> opposed to the many hobbyists out there).

Because the term "tail recursion" is not agreed upon.  Why do you assume
that it is hobbyists only that are found by googling?  And where are the
references that support your position that tail recursion is a dynamic
property of a program?

> >  William Clinger has been so good
> > as to discuss with me the issues involved with these terms, and we
> > have been discussing them from actual documents to which he referred.
> > If you wish to enter this discussion, please either point out references
> > to these documents that support your position, or else bring new
> > documents to light.
> > 
> > > Tail-recursion is a property of (a portion of) the dynamic execution
> > > of a program -- namely when all function calls (within that portion)
> > > are tail calls.
> > 
> > Not in Common Lisp.
> 
> So what is it then, in CL, according to you?  Where are your
> references?  Is "tail recursion" something else in Common Lisp?  In
> that case we will just have to agree on some other phrase for the same
> concept.  

Since the term "tail recursion" is not defined in CL, there are two
possibilities:

 1. Accept the definition of "tail recursion" as used by the designers
of Scheme.

 2. Build the concept of tail recursion by composition: by combining the
components "tail" (having to do with a call being in tail position, which
I don't think is at all in contention) and "recursion", which has to do with
dividing the problem into smaller parts, solving the sub-problem parts, and
combining the results.

The CL community tends to gravitate toward the second possibility.

> > > > > Tail call optimization: Optional use of tail call merging with
> > > > > the intent of minimizing space complexity
> > > > 
> > > > I do object to this term, because it is not clear what is being
> > > > "optimized", nor at the expense of what-else.  Consider heap space,
> > > > time, stack space, and debuggability as qualities that might be
> > > > optimized or pessimized, and then tell me what this term really
> > > > means...
> > > 
> > > Nonsense.
> > 
> > Only because you don't take the trouble to make sense of it.
> 
> Not so.  See below.
> 
> > >  Most so-called "optimizations" can turn out to be
> > > pessimizations in some situations.
> > 
> > Of course.  Further down this same thread, I explain further how
> > optimizing one quality might cause pessimization in another.
> 
> I know you do, but that is not the whole story.  Even compilers that
> "optimize for speed" occasionally pessimize speed.  I bet that even CL
> compilers (and even those that come out of Franz Inc.) have this
> problem.  Otherwise one would be stuck with a set of fairly simple
> program transformations which provably never pessimize.  In the real
> world, compiler writers use heuristics, and heuristics can be wrong.

Optimizations have many axes.  When you optimize along one axis, there
is a chance that another optimization axis is pessimized.  This is not
a question of right or wrong; it is a set of tradeoffs.

> > >  Moreover, I don't see how "debuggability" could possibly be
> > >  "pessimized".
> > 
> > That's because when you write in Scheme, the tail-recursion is not an
> > optimization, thus there is nothing to optimize or pessimize.
> 
> That's not what I mean.  First of all, "tail-recursion" IS NOT AT ALL
> AN OPTIMIZATION but a (dynamic) property of a program, independent of
> its implementation.

This would be funny if it weren't so sad;  I say "tail recursion is
not an optimization", and you shout "No! tail recursion is not at all an
optimization!".  Calm down, take a breath, and tell me what you really
objected to here.

>  Please, take the minor trouble and don't
> intentionally use the wrong terminology, please.  Second, I was
> talking about what you call "tail-merging" for the case of languages
> where it is NOT mandatory (e.g., CL).

You have not yet established that the terminology I am using is wrong.
If the trouble it will take you to do so is too major, then just
don't worry about it, and we'll just agree to disagree on terminology.

> And what I am saying for that case is that it does not typically
> pessimize debuggability because people will have a tendency to write
> loops instead of tail-recursive programs, so in either case (case 1:
> loops, case 2: tail-merging) there will not be much of a difference in
> the amount of debugging information available.

I understand this, but your premise is that tail-call-optimization does
not pessimize debugging, and in Common Lisp that does not involve
loop constructs;  the argument would be more to the point if you had
made case 1: non-merged tail calls, and case 2: merged tail calls.

> To be fair, there is one case where there is a difference -- and that
> is when one is in the middle of a chain of tail-calls where this chain
> does not end up forming a loop, so the CL programmer couldn't have
> used an explicit loop construct for them.  However, this difference
> can be overcome, and I have personally implemented a debugging
> facility for a compiler that is Properly Tail-Recursive(tm).
> 
> [FYI: It is a back-trace mechanism that shows all active non-tail
> calls, all active tail-calls that do not (yet) form a loop, and
> clusters of program points that have already formed a loop.  The only
> information it loses is which paths were taken within loop
> clusters.  Programs compiled with this debugging facility turned on
> have the same asymptotic space- and time-complexity as their original
> Properly Tail-Recursive form (so they are also Properly
> Tail-Recursive).]

Steele suggested something similar, and perhaps your mechanism is an
adaptation of his suggestion to use a circular buffer to record tail
recursions (in the Scheme sense here, of course) and thus allow the same
asymptotic space/time complexity as execution without such extra
debugging.
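The circular-buffer idea attributed to Steele above can be sketched in a few lines. This is an illustrative Python model under stated assumptions (a fixed `TRACE_DEPTH` and a hypothetical `record_tail_call` hook the compiler would emit at each merged tail call), not either implementation:

```python
from collections import deque

TRACE_DEPTH = 8                          # assumed, fixed-size history
tail_trace = deque(maxlen=TRACE_DEPTH)   # bounded buffer -> O(1) extra space

def record_tail_call(fn_name, arg):
    # Once the buffer is full, each new entry overwrites the oldest one,
    # so space stays constant no matter how long the tail-call chain runs.
    tail_trace.append((fn_name, arg))

def countdown(n):
    record_tail_call("countdown", n)
    if n == 0:
        return "done"
    return countdown(n - 1)              # would be merged to a jump in a
                                         # properly tail-recursive implementation

countdown(100)
print(list(tail_trace))                  # only the last 8 tail calls survive
```

After the run, the buffer holds only `countdown` with arguments 7 down to 0: the recent history a debugger could display without sacrificing the asymptotic space bound.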

> > >  If I can't rely on
> > > tail-call optimization, I have to write explicit loops -- which have the
> > > same (or worse!) "debuggability" than the program with tail calls.
> > 
> > Common Lispers don't consider it a Bad Thing to "have to write explicit
> > loops".
> 
> I know.  All I was saying is that when you do, you don't have any
> stack-frames for debugging around either.

Yes, we do agree on this.  However, when we write loops, we tend to have
the loop variables available for debugging.  This is one reason why we CL-ers
don't really consider loops and merged tail-calls to be equivalent.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Matthias Blume
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <pan.2003.04.23.23.23.57.904960@shimizu-blume.com>
On Wed, 23 Apr 2003 14:41:11 -0700, Duane Rettig wrote:

> Because the term "tail recursion" is not agreed upon.

What do you mean by "not agreed upon"?  Just because one person
(e.g., you) disagrees?  Well, then I guess nothing is "agreed upon"
in this world.

 
> Since the term "tail recursion" is not defined in CL, there are two
> possibilities:
> 
>  1. Accept the definition of "tail recursion" as used by the designers
> of Scheme.
> 
>  2. build the concept of tail recursion by composition; by combining the
> components "tail" (having to do with a call being in tail position, which
> I don't think is at all in contention) and "recursion", which has to do with
> dividing the problem into smaller parts, solving the sub-problem parts, and
> combining the results.
> 
> The CL community tends to gravitate toward the second possibility.

Don't be silly.  If someone designs a brand-new programming language,
then one might ask the question of whether certain programs in that
language are tail-recursive.  After nailing down what tail-calls are
(which is a syntactic property that, indeed, depends on the language),
and after nailing down the dynamic semantics, this is a valid endeavor.
But according to you, in a brand-new language, just like in CL (according
to you), the term "tail recursion" is not defined...

In other words, we cannot apply any PL theory to a new language, because
the old terminology is "not defined" there.

So, no, I don't care at all what the "CL community" tends to gravitate
towards.  Obviously, some very important parts of that community (for
example, I rank GLS much higher there than, say, Duane Rettig) do not
gravitate in this direction.  If it is indeed the case that some term
such as "tail recursion" is not defined in the CL setting, the sensible
thing is to assume that whoever uses it in this context means the
PL term -- properly adapted to fit the CL setting.

>> I know you do, but that is not the whole story.  Even compilers that
>> "optimize for speed" occasionally pessimize speed.  I bet that even CL
>> compilers (and even those that come out of Franz Inc.) have this
>> problem.  Otherwise one would be stuck with a set of fairly simple
>> program transformations which provably never pessimize.  In the real
>> world, compiler writers use heuristics, and heuristics can be wrong.
> 
> Optimizations have many axes.  When you optimize along one axis, there
> is a chance that another optimization axis is pessimized.  This is not
> a question of right or wrong; it is a set of tradeoffs.

What I was saying is that even when trying to optimize on a particular
axis (e.g., speed), one might end up pessimizing along this very axis.


>> That's not what I mean.  First of all, "tail-recursion" IS NOT AT ALL
>> AN OPTIMIZATION but a (dynamic) property of a program, independent of
>> its implementation.
> 
> This would be funny if it weren't so sad;  I say "tail recursion is
> not an optimization", and you shout "No! tail recursion is not at all an
> optimization!".  Calm down, take a breath, and tell me what you really
> objected to here.

You got me there.  No really.  Are you not capable of correcting for a
minor slipup in my English?  Of course, what I was trying to say was
"tail recursion is not some sort of implementation technology".  So,
*of course* it is no optimization -- it is in a completely different
category.  With a bit of good will, you would have known that this is
what I meant.  (Actually, my guess is that you did know that.)

>>  Please, take the minor trouble and don't
>> intentionally use the wrong terminology, please.  Second, I was
>> talking about what you call "tail-merging" for the case of languages
>> where it is NOT mandatory (e.g., CL).
> 
> You have not yet established that the terminology I am using is wrong.

You have said yourself that the terminology in question is not defined
in the CL setting.  So it is wrong outside CL and merely undefined within.
Well, I guess you got me again.

> If the trouble it will take you to do so is too major, then just
> don't worry about it, and we'll just agree to disagree on terminology.

Why does it take you so much trouble to adopt the
agreed-upon-outside-CL definition for a term that was not defined
(according to yourself) within?

>> And what I am saying for that case is that it does not typically
>> pessimize debuggability because people will have a tendency to write
>> loops instead of tail-recursive programs, so in either case (case 1:
>> loops, case 2: tail-merging) there will not be much of a difference in
>> the amount of debugging information available.
> 
> I understand this, but your premise is that tail-call-optimization does
> not pessimize debugging, and in Common Lisp that does not involve
> loop constructs;  the argument would be more to the point if you had
> made case 1: non-merged tail calls, and case 2: merged tail calls.

No.  You have to compare the programs that people write to solve the same
tasks.  When looping is natural, CL programmers will use a loop and Scheme
programmers will use tail recursion.  At runtime, the amount of debugging
information available on the basis of what stack frames exist will
be roughly the same.
(By the way, the whole debugging issue is a red herring.  When I am
debugging a Scheme program I am perfectly willing to turn on a few
switches that -- for the purpose of debugging -- turn off Proper
Tail-Recursion.)

> 
>> To be fair, there is one case where there is a difference -- and that
>> is when one is in the middle of a chain of tail-calls where this chain
>> does not end up forming a loop, so the CL programmer couldn't have
>> used an explicit loop construct for them.  However, this difference
>> can be overcome, and I have personally implemented a debugging
>> facility for a compiler that is Properly Tail-Recursive(tm).
>> 
>> [FYI: It is a back-trace mechanism that shows all active non-tail
>> calls, all active tail-calls that do not (yet) form a loop, and
>> clusters of program points that have already formed a loop.  The only
>> information it loses is which paths were taken within loop
>> clusters.  Programs compiled with this debugging facility turned on
>> have the same asymptotic space- and time-complexity as their original
>> Properly Tail-Recursive form (so they are also Properly
>> Tail-Recursive).]
> 
> Steele suggested something similar, and perhaps your mechanism is an
> adaptation of his suggestion to use a circular buffer to record tail
> recursions (in the Scheme sense here, of course) and thus allow the same
> asymptotic space/time complexity as execution without such extra
> debugging.

It is similar, but not the same.  Close enough, though, for the purpose
of this discussion.

> 
>> > >  If I can't rely on
>> > > tail-call optimization, I have to write explicit loops -- which have the
>> > > same (or worse!) "debuggability" than the program with tail calls.
>> > 
>> > Common Lispers don't consider it a Bad Thing to "have to write explicit
>> > loops".
>> 
>> I know.  All I was saying is that when you do, you don't have any
>> stack-frames for debugging around either.
> 
> Yes, we do agree on this.  However, when we write loops, we tend to have
> the loop variables available for debugging.  This is one reason why we CL-ers
> don't really consider loops and merged tail-calls to be equivalent.

I don't see what you think the difference would be.  Loop variables turn
into function arguments when you write the loop tail-recursively.  Surely,
those function arguments are available when debugging.
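The loop-variable/function-argument correspondence described here can be shown side by side; a minimal Python sketch of the two equivalent formulations:

```python
def sum_loop(xs):
    total = 0                      # "total" is a loop variable
    for x in xs:
        total += x
    return total

def sum_rec(xs, total=0):          # the same state, now a function argument
    if not xs:
        return total
    # The tail call threads the "loop variable" through as an argument,
    # so it is just as visible in a stack frame as in a loop body.
    return sum_rec(xs[1:], total + xs[0])
```

Both return the same result (e.g. 6 for `[1, 2, 3]`); a debugger stopping inside `sum_rec` sees `total` as an ordinary argument, exactly as it would see the loop variable in `sum_loop`.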

Matthias
From: Duane Rettig
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <4he8ospry.fsf@beta.franz.com>
"Matthias Blume" <········@shimizu-blume.com> writes:

> >> That's not what I mean.  First of all, "tail-recursion" IS NOT AT ALL
> >> AN OPTIMIZATION but a (dynamic) property of a program, independent of
> >> its implementation.
> > 
> > This would be funny if it weren't so sad;  I say "tail recursion is
> > not an optimization", and you shout "No! tail recursion is not at all an
> > optimization!".  Calm down, take a breath, and tell me what you really
> > objected to here.
> 
> You got me there.  No really.  Are you not capable of correcting for a
> minor slipup in my English?  Of course, what I was trying to say was
> "tail recursion is not some sort of implementation technology".  So,
> *of course* it is no optimization -- it is in a completely different
> category.  With a bit of good will, you would have known that this is
> what I meant.  (Actually, my guess is that you did know that.)

No, sorry, I still apparently don't know what you mean, since the way
you've rephrased it makes me believe that you are saying the same thing as
you said before.  And the sad and ironic thing is: it is precisely what I
said before that.  Let's go back and look at what was said originally:

MB: Moreover, I don't see how "debuggability" could possibly be "pessimized".

DR: That's because when you write in Scheme, the tail-recursion is not an
DR: optimization, thus there is nothing to optimize or pessimize.

Now, what I might have said to make it more clear was that I was taking
on the Scheme (really, the continuation-passing-style) point of view, and
of course with that point-of-view, tail recursion has nothing whatsoever
to do with optimization or pessimization.  But if you remove yourself
from the CPS point of view, then tail recursion is not defined as a term,
and thus it might be taken as some kind of optimization.

> >>  Please, take the minor trouble and don't
> >> intentionally use the wrong terminology, please.  Second, I was
> >> talking about what you call "tail-merging" for the case of languages
> >> where it is NOT mandatory (e.g., CL).
> > 
> > You have not yet established that the terminology I am using is wrong.
> 
> You have said yourself that the terminology in question is not defined
> in the CL setting.  So it is wrong outside CL and merely undefined within.
> Well, I guess you got me again.
> 
> > If the trouble it will take you to do so is too major, then just
> > don't worry about it, and we'll just agree to disagree on terminology.
> 
> Why does it take you so much trouble of adopting the
> agreed-upon-outside-CL definition for a term that was not defined
> (according to yourself) within?

Because "tail recursion" is not agreed-upon-outside-CL, it is only
agreed-upon-inside-the-CPS-community.  And with your contention above,
it apparently isn't even agreed upon there; you really do need to
resolve with William Clinger whether it is a dynamic property of a
program, or a syntactic one.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Matthias Blume
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <pan.2003.04.24.03.03.26.456899@shimizu-blume.com>
On Wed, 23 Apr 2003 17:27:45 -0700, Duane Rettig wrote:

> "Matthias Blume" <········@shimizu-blume.com> writes:
> 
>> >> That's not what I mean.  First of all, "tail-recursion" IS NOT AT ALL
>> >> AN OPTIMIZATION but a (dynamic) property of a program, independent of
>> >> its implementation.
>> > 
>> > This would be funny if it weren't so sad;  I say "tail recursion is
>> > not an optimization", and you shout "No! tail recursion is not at all an
>> > optimization!".  Calm down, take a breath, and tell me what you really
>> > objected to here.
>> 
>> You got me there.  No really.  Are you not capable of correcting for a
>> minor slipup in my English?  Of course, what I was trying to say was
>> "tail recursion is not some sort of implementation technology".  So,
>> *of course* it is no optimization -- it is in a completely different
>> category.  With a bit of good will, you would have known that this is
>> what I meant.  (Actually, my guess is that you did know that.)
> 
> No, sorry, I still apparently don't know what you mean, since the way
> you've rephrased it makes me believe that you are saying the same thing as
> you said before.  And the sad and ironic thing is: it is precisely what I
> said before that.  Let's go back and look at what was said originally:
> 
> MB: Moreover, I don't see how "debuggability" could possibly be "pessimized".
> 
> DR: That's because when you write in Scheme, the tail-recursion is not an
> DR: optimization, thus there is nothing to optimize or pessimize.

I take you to mean by the above that "in Scheme, tail recursion is
mandatory, so there is nothing to optimize".  My reply refers to the fact
that what's mandatory in Scheme is not called "tail recursion" (because
"tail recursion" is a program property, not a property of a language or
an implementation).  What's mandatory in Scheme is called "Proper
Tail-Recursion" (which is *not* a special kind of tail recursion!), aka.
"pervasive tail-call merging" or "tail-call elimination" or "tail-call
optimization" or "turning tail-calls into gotos" etc.

> Now, what I might have said to make it more clear was that I was taking
> on the Scheme (really, the continuation-passing-style) point of view, and
> of course with that point-of-view, tail recursion has nothing whatsoever
> to do with optimization or pessimization.

Huh?

>  But if you remove yourself
> from the CPS point of view, then tail recursion is not defined as a term,
> and thus it might be taken as some kind of optimization.

Huh?  "Recursion" is when a function calls itself (directly or
indirectly). "Tail recursion" is when that function does so in such
a way that the call chain leading to the call of itself consists of
tail calls only.  Where do you see "CPS" in this fairly simple
(albeit informal) definition?
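[Editorial illustration, not part of the original exchange: the informal definition above can be made concrete with a small sketch. Python is used here purely as neutral notation, and the function names are invented.]

```python
# Illustrative only: "tail recursion" is a property of the program text,
# not of the language or the implementation.

def last(xs, default=None):
    """Tail-recursive: the self-call is the very last thing done."""
    if len(xs) <= 1:
        return xs[0] if xs else default
    return last(xs[1:], default)        # tail call: its result is returned as-is

def length(xs):
    """Recursive but NOT tail-recursive: work remains after the call."""
    if not xs:
        return 0
    return 1 + length(xs[1:])           # non-tail call: the "1 +" is still pending
```

Both functions are recursive; only the first reaches its self-call through tail calls exclusively.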

Of course, there are some connections between CPS and tail recursion.
The most obvious one is that in CPS all recursion is trivially of the
"tail recursion" kind.  Another one (but this is already going into
implementation territory) is that tail-call optimization (what you
like to call "tail-call merging") in a DS program is nothing more and
nothing less than statically reducing certain \eta-redexes in the
corresponding CPS program.
In any case, all this has absolutely nothing to do with the definition
of tail recursion.  Tail recursion can be explained without ever
mentioning CPS.
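[Editorial sketch of the first connection mentioned above, in hypothetical Python rather than Scheme: converting a direct-style (DS) function to CPS turns its pending work into a continuation, so every call, including the recursive one, becomes a tail call.]

```python
def fact(n):
    """Direct style: the multiplication is still pending after the recursive call."""
    if n == 0:
        return 1
    return n * fact(n - 1)                       # non-tail call

def fact_cps(n, k):
    """CPS: the pending multiplication has moved into the continuation k,
    so the recursive call is now a tail call."""
    if n == 0:
        return k(1)
    return fact_cps(n - 1, lambda r: k(n * r))   # tail call
```

Invoking the CPS version with the identity continuation recovers the direct-style result.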

> Because "tail recursion" is not agreed-upon-outside-CL, it is only
> agreed-upon-inside-the-CPS-community.

Sorry to use a very strong word here, but this is utter bullshit.
Throwing in "CPS" here shows that you are simply confused (see above).

>  And with your contention above,
> it apparently isn't even agreed upon there; you really do need to
> resolve with William Clinger whether it is a dynamic property of a
> program, or a syntactic one.

I think that as far as practical matters are concerned, I am in perfect
agreement with Will.  In fact, I believe that it does not even matter
what one thinks what "tail recursion" is.  If we just agree on how
to distinguish tail calls from non-tail calls (and that *is* a syntactic
property), then we can already state what constitutes Proper Tail
Recursion.

(By the way, I sense that your whole line of argument is basically driven
by emotion:  You object to the somewhat less-than-politically-correct
phrase "*proper* tail recursion" -- which seems to suggest that anyone
who does not implement it is somehow "improper".  If that is indeed so,
I'd suggest that you try and ignore the english meaning of the adjective
"proper" and read a sentence such as "Scheme implementations must be
properly tail recursive" as "Scheme implementations must have property
ASR", with "property ASR" being defined the way we currently define
"proper tail recursion".

(Now be my guest and guess how I came up with the acronym ASR. I won't
spell it out because it is supposed to be abstract. :-)

Matthias
From: Duane Rettig
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <4fzo8qs23.fsf@beta.franz.com>
"Matthias Blume" <········@shimizu-blume.com> writes:

> Sorry to use a very strong word here, but this is utter bullshit.

I doubt that (that you're sorry - otherwise you wouldn't have felt the
need to say it...)

 [ ... ]

> By the way, I sense that your whole line of argument is basically driven
> by emotion

Well, I wasn't, but I do have to admit that you've now stirred up
the emotion of disgust.

And now, after replying ad hominem to ad hominem replies (for which I
am not now sorry, but I'm certain I will be in the morning), I now
disqualify myself from further discussions with you.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Jeffrey Mark Siskind
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <a520654a.0304240522.1a5c3e1f@posting.google.com>
In case anybody is wondering why one would want a nondeterministic LALR parser
that allows multiple possible actions per state-token pair, consider the fact
that the context-free grammar for C is ambiguous. Consider, for example, the
expression (x)&y. Either (x) is a cast, with & the unary address-of operator,
or & is the binary bitwise-AND operator. Which parse is correct depends on
whether x is a typedef name or an ordinary identifier, which is a
context-sensitive property of the input string. I.e.
  typedef int x; ... (x)&y ...
vs.
  int x; ... (x)&y ...
There are other analogous issues like this in C.
From: Matthias Blume
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <pan.2003.04.24.12.17.19.428508@shimizu-blume.com>
On Thu, 24 Apr 2003 00:21:24 -0700, Duane Rettig wrote:

> "Matthias Blume" <········@shimizu-blume.com> writes:
> 
>> Sorry to use a very strong word here, but this is utter bullshit.
> 
> I doubt that (that you're sorry - otherwise you wouldn't have felt the
> need to say it...)

Whatever.  (Now this is *truly* an ad hominem, uttered only to offend, not
to further the discussion.  See below.)

>  [ ... ]
> 
>> By the way, I sense that your whole line of argument is basically driven
>> by emotion
> 
> Well, I wasn't, but I do have to admit that you've now stirred up
> the emotion of disgust.

I am sorry (truly!) if my comment about emotion was so disgusting 
to you.  It certainly wasn't meant to be.  I am simply trying to
understand why we are having such a deep rift.  (And by "we" I don't mean
"you and I".)

> And now, after replying ad hominem to ad hominem replies (for which I
> am not now sorry, but I'm certain I will be in the morning),
> I now disqualify myself from further discussions with you.

Well, yes, I addressed your "emotion" -- which, I guess, technically
qualifies as "ad hominem".  If you feel offended by this, I am sorry.

However, I am not sure now how one could possibly have a meaningful
argument avoiding this when one side feels that 1. there is no factual
ground for the discussion and that 2. the reason for having the argument
in the first place seems to be emotional on part of the other side.  Can
the first side not at least try and bring this up?  Basically, what
I said was meant to further the discussion (by trying to take the
emotional element *out of it* -- an utter failure as it turns out).
I don't think this is the same as an ad hominem that first attacks
the character of the opponent and then concludes that *because of that
character* the opponent must be wrong.
In other words, while I indeed think that you are wrong, I don't think
that this is so because you are emotional about it.  What I do think is
that we have this discussion (and this style of discussion) because you
are emotional about it, that's all.

Anyway, you are probably right in that any further discussion will not
get us anywhere.

Matthias
From: Duane Rettig
Subject: Meta Discussion about emotion (Off Topic: was: Re: Implementing call/cc ...)
Date: 
Message-ID: <4llxzq1qe.fsf_-_@beta.franz.com>
[Readers: This is a meta-discussion, and is off-topic, so if you
don't want to read such a post, please skip this one.]

[Matthias, on the strength of your two apologies, I will make
an attempt to resolve the meta-issue at hand, so that we can decide
whether or not it is worth getting back to the actual discussion.]

"Matthias Blume" <········@shimizu-blume.com> writes:

> On Thu, 24 Apr 2003 00:21:24 -0700, Duane Rettig wrote:
> 
> > "Matthias Blume" <········@shimizu-blume.com> writes:

> >> By the way, I sense that your whole line of argument is basically driven
> >> by emotion
> > 
> > Well, I wasn't, but I do have to admit that you've now stirred up
> > the emotion of disgust.
> 
> I am sorry (truly!) if my comment about emotion was so disgusting 
> to you.  It certainly wasn't meant to be.  I am simply trying to
> understand why we are having such a deep rift.  (And by "we" I don't mean
> "you and I".)

The disgust I was feeling was not about your comment about my emotion
(though that comment was in fact wrong) but about your style of argument,
which is not conducive to discourse and understanding.  But I do accept
your apology, and hope that it is useful to have this metadiscussion
to resolve just what it is about your style of argumentation that I
find to be a problem.

> > And now, after replying ad hominem to ad hominem replies (for which I
> > am not now sorry, but I'm certain I will be in the morning),
> > I now disqualify myself from further discussions with you.
> 
> Well, yes, I addressed your "emotion" -- which, I guess, technically
> qualifies as "ad hominem".  If you feel offended by this, I am sorry.

I am not at all offended.  Offense and disgust are two different
emotions.

> However, I am not sure now how one could possibly have a meaningful
> argument avoiding this when one side feels that 1. there is no factual
> ground for the discussion and that 2. the reason for having the argument
> in the first place seems to be emotional on part of the other side.  Can
> the first side not at least try and bring this up?

OK, let's start with your factual bases.  I have continually asked you
for references to support your position, and you have supplied absolutely
none but your own word.  If you wish to continue this discussion, all I
ask for is _one_ substantial reference to describe any of the definitions
you keep saying are facts.

>  Basically, what
> I said was meant to further the discussion (by trying to take the
> emotional element *out of it* -- an utter failure as it turns out).

Yes, a failure.  You may not realize it, but you have been injecting
emotion into this discussion all along, not taking it out.  See below.

> I don't think this is the same as an ad hominem that first attacks
> the character of the opponent and then concludes that *because of that
> character* the opponent must be wrong.

I agree, of course.  But what "ad hominem" actually means is "to the man",
thus implying not to the conversation at hand.  Emotion is not a bad thing
in and of itself, but to the extent that it detracts from the conversation,
and especially to the extent that the person being emotional doesn't even
attempt to understand what the other is saying, it stalls progress in
the discourse.  My attempts to downplay your own emotional outbursts
(and/or to work around them to get to the heart of the conversation)
failed, and when I realized that I was getting emotional myself about
the way in which you were presenting yourself, I decided to stop the
conversation.  If in this meta-conversation we can agree to each stop
the emotional outbursts, to take a step back and to actually try to
understand the other, then we can resume the actual conversation.

> In other words, while I indeed think that you are wrong, I don't think
> that this is so because you are emotional about it.  What I do think is
> that we have this discussion (and this style of discussion) because you
> are emotional about it, that's all.

Interestingly, I have been thinking the same about you all along.

Here's why:

This whole subsection of thread started with my comment about the
problematic nature of Fred Gilham's attempt to define "tail recursion",
and the entire content of your first paragraph to me was:

! Wrong.  It is neither.  (And googling for definitions is not a very
! good idea given the amount of misinformation that's out there on the
! web.)

Starting a whole discourse with the single word "Wrong" is a
good way to shut off the discourse.  And disqualifying sources
of information without providing alternative such sources is also
a good way to shut off discourse.

Later in the same reply you started your thought with the word
"Nonsense", which was not necessary, nor was it conducive to
discussion and understanding.  Instead, it is an epithet, of sorts,
designed to put the other party in his place without argument.
If argument follows which backs up the epithet, then the epithet
was unnecessary and only serves to try to inflame.

Again, when I said:

= Because the term "tail recursion" is not agreed upon.

You replied:

! What do you mean by "not agreed upon"?  Just because one person
! (e.g., you) disagrees?  Well, then I guess nothing is "agreed upon"
! in this world.

This is arguably the first ad hominem argument, and in fact indicated to
me that you weren't interested either in doing any research on the subject
or in understanding the side of the argument I represent.  It speaks to me
of you throwing up your hands in resignation that I will never understand
your point of view (which has always been my goal in this discussion -
I have done quite a bit of research into the Scheme documents to try
to understand _where_ the apparent disagreements have been, and am still
in fact under the impression that William Clinger and I are very close
to understanding each other).

In that same message, after I had said

= The CL community tend to gravitate toward the second possibility.

You replied with

!  Don't be silly...

Another emotional outburst.  And yet again, in the same message, you
apparently misunderstood my statement attempting to take on the Scheme
point of view, and responded with an obviously defensive and emotional
outburst:

! You got me there.  No really.  Are you not capable of correcting for a
! minor slipup in my English?  Of course, what I was trying to say was
! "tail recursion is not some sort of implementation technology".  So,
! *of course* it is no optimization -- it is in a completely different
! category.  With a bit of good will, you would have known that this is
! what I meant.  (Actually, my guess is that you did know that.)

Note that it wasn't any slipup in your English; I understood what you had
said perfectly, but didn't understand why there was such emotion in
your statement, which completely agreed with mine.

> Anyway, you are probably right in that any further discussion will not
> get us anywhere.

It would truly be unfortunate if we could not resolve our differences
and get back to discussing the actual matter that was at hand.  I'm
willing to try, if you are willing to modify your style of discussion.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Matthias Blume
Subject: back to the object level [Re: Meta Discussion about emotion (Off Topic: was: Re: Implementing call/cc ...)]
Date: 
Message-ID: <m27k9j8wq8.fsf_-_@tti5.uchicago.edu>
[Ok, let me start with another apology for those short "Wrong"s and
"Nonsense"s.  Indeed, they are rude, and if I would argue with you (or
anyone) over a glass of beer, I probably wouldn't use them.  Of
course, this does not make the fact that I did use them here any
better.  I'll try to behave from now on. (Usenet will never be the
same again... :-)]

Anyway, let's focus on the substance:

You asked for references.  Well, here we go.  I'll try to discuss the
subject matter and embed references as I go:

1. "Tail call".  I think this one should be uncontroversial.  This is
   a syntactic property, and as such depends on the syntax of the
   language in question.

   Definite references for this are surprisingly hard to come by (at
   least on my bookshelf) -- many authors seem to simply assume that
   everyone already understands what a call in tail position is.  This
   is probably a reasonable thing.

   One reference I found is in Appel's "Modern Compiler Implementation in X"
   (where in my case X = ML).  Page 329 (chapter 15.6) states (referring to
   the Fun-Tiger language):

   "A function call f(x) within the body of another function g(y) is
   in tail position if 'calling f is the last thing that g will do
   before returning.'  More formally, in each of the following
   expressions, the B_i are in tail contexts, but the C_i are not:
      1. let var x := C_1 in B_1 end
      2. C_1 (C_2)
      3. if C_1 then B_1 else B_2
      4. C_1 + C_2

    ...

    If a function call f(x) is in a tail context with respect to its enclosing
    expression, and that expression is in a tail context, and so on all the
    way to the body of the enclosing function definition ..., then f(x) is
    a tail call."

    So "tail call" is a syntactically defined property of a particular occurrence
    of a function call within a given program.

    Anyway, I won't dwell on this any further unless someone complains and
    comes with a different notion of "tail call".

2.  "Tail recursion".  This one is not so clear.  Different authors
    have different ideas of what they mean by it.  In all cases I
    could find during a cursory search (for a sample, see, e.g.,
    Sethi's "Programming Languages, Concepts and Constructs",
    Reynolds' "Theories of Programming Languages", the Dragon Book)
    the author refers to a property of a program (or a program part
    such as a particular function).

    Also, in nearly all cases they in fact describe it as if it were a
    syntactic property.  Nobody seems to bother defining it formally
    enough, though.  Most of the time it is done by example, and
    examples tend to show things syntactically.  It is not clear
    whether or not they had the more general notion (see below) in
    mind, but given the context, some of them don't seem to.
    (If you have seen my other article where I
    listed 3 possible cases -- locally obvious, globally obvious,
    undecidable -- they always refer to the locally obvious case.
    The discussion invariably is about how to turn this case into
    explicit loops.)

    I also decided to at least see what Google comes up with.  To my
    surprise, there wasn't much, and most of it was from the Jargon
    file.

    Anyway, I don't think that a precise definition of "tail
    recursion" is even necessary for practical purposes.  In most
    common usage though, the term refers to programs and not to
    implementations or languages.  It also seems to be always
    consistent (albeit often quite less general) with the notion of a
    function f calling itself via a chain of tail calls (in the sense
    of 1. above) where each such call may or may not be syntactically
    apparent. (It might be mediated by some HOF instead.)

3.  "Tail-recursive".  This, in fact, I have seen used both in
    reference to programs (or program parts) and implementations (but,
    fortunately, the latter only in one case).  Most authors call a
    program (part) "tail recursive" if it uses tail recursion in the
    sense of 2.  Some reserve the word for those program (parts) that
    use tail-recursion exclusively (i.e., that do not exhibit recursion
    that involves non-tail calls).

    By the way, the one use of the adjective "tail-recursive" in
    reference to implementations that I have seen is in SICP where the
    authors use it the same way RnRS uses "Properly Tail-Recursive".
    (I find this unfortunate, in case you wonder.)

    I have not seen the adjective applied to languages, but I can
    think of one such usage that I would be comfortable with: A
    language is tail-recursive if all the programs that you write in
    it are tail-recursive (in the sense that tail recursion is the
    only expressible recursion).  As a matter of fact, I have just
    recently designed and implemented such a language: the language
    allows recursive tail calls and also non-tail calls if they are
    not recursive.  The non-recursion requirement in the non-tail case
    is guaranteed by a combination of syntactic restrictions and type
    checking. See our forthcoming PLDI paper for more information on
    this work.  (This was joint work with Lal George.)

4. "Proper Tail-Recursion".  This is, indeed, used mainly in Scheme
   circles.  I don't particularly like the term myself, and for a long
   time I objected to even the notion it tries to denote because that
   notion was in fact ill-defined. The wording used in RnRS still
   merely appeals to the reader's assumed knowledge of how one would
   implement Scheme.

   Related notions, however, have been used by a variety of authors,
   including Will Clinger, who very well may have exhausted this topic
   by now.  Earlier on, Appel suggested "safe-for-space" and showed an
   axiomatic approach.  What's especially nice about it is that it
   gives you a framework: changing the axioms one can derive different
   notions of what "safe-for-space" means.  A language definition can
   choose to require some sort of space safety, and implementations can
   be judged according to that.  But there is no need for a
   one-size-fits-all notion of space safety.

   Of course, every language implementation is safe-for-space for a
   sufficiently lenient set of axioms.  When comparing languages or
   their implementations based on space safety, one has to make sure
   that the axiom sets used in either case are sufficiently
   closely related.

   Space complexity measures in general, of course, are much more
   widely used throughout Computer Science.
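[Editorial sketch, not part of the original post: for readers outside Scheme circles, the practical content of these space-safety requirements can be imitated with a hand-rolled trampoline. Hypothetical Python; Python itself does not eliminate tail calls, which is exactly why the trampoline is needed here.]

```python
def trampoline(f, *args):
    """Drive a tail-call-as-thunk protocol in constant stack space --
    roughly what a properly tail-recursive implementation guarantees
    automatically for every tail call."""
    result = f(*args)
    while callable(result):   # each thunk performs one tail step
        result = result()
    return result

def countdown(n):
    if n == 0:
        return "done"
    return lambda: countdown(n - 1)   # the tail call, reified as a thunk

# Direct recursion to depth 100000 would overflow Python's call stack;
# the trampoline runs the same tail-recursive program in bounded space.
```

Each thunk invocation is at most a couple of frames deep, so the recursion depth never accumulates on the stack.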

Matthias
From: Duane Rettig
Subject: Re: back to the object level [Re: Meta Discussion about emotion (Off Topic: was: Re: Implementing call/cc ...)]
Date: 
Message-ID: <4bryvpn2u.fsf@beta.franz.com>
Matthias Blume <········@shimizu-blume.com> writes:

> [Ok, let me start with another apology for those short "Wrong"s and
> "Nonsense"s.  Indeed, they are rude, and if I would argue with you (or
> anyone) over a glass of beer, I probably wouldn't use them.

Apology accepted.

>  Of
> course, this does not make the fact that I did use them here any
> better.

The context in which they're used is understood and not a problem.

>  I'll try to behave from now on. (Usenet will never be the
> same again... :-)]

Maybe we're on to something here :-)

> Anyway, let's focus on the substance:
> 
> You asked for references.  Well, here we go.  I'll try to discuss the
> subject matter and embed references as I go:
> 
> 1. "Tail call".  I think this one should be uncontroversial.  This is
>    a syntactic property, and as such depends on the syntax of the
>    language in question.

I agree.

>    Definite references for this are surprisingly hard to come by (at
>    least on my bookshelf) -- many authors seem to simply assume that
>    everyone already understands what a call in tail position is.  This
>    is probably a reasonable thing.

I agree also on the scarcity of the references, for many of these
terms.  Perhaps that is one reason why users of the terms tend to
assume that everyone else is using them in the same way...

In the case of "tail call" I agree completely, and have never
seen it used any other way.

>    One reference I found is in Appel's "Modern Compiler Implementation in X"
>    (where in my case X = ML).  Page 329 (chapter 15.6) states (referring to
>    the Fun-Tiger language):

Thanks for this reference.

>    "A function call f(x) within the body of another function g(y) is
>    in tail position if 'calling f is the last thing that g will do
>    before returning.'  More formally, in each of the following
>    expressions, the B_i are in tail contexts, but the C_i are not:
>       1. let var x := C_1 in B_1 end
>       2. C_1 (C_2)
>       3. if C_1 then B_1 else B_2
>       4. C_1 + C_2
> 
>     ...
> 
>     If a function call f(x) is in a tail context with respect to its enclosing
>     expression, and that expression is in a tail context, and so on all the
>     way to the body of the enclosing function definition ..., then f(x) is
>     a tail call."
> 
>     So "tail call" is a syntactically defined property of a particular occurrence
>     of a function call within a given program.
> 
>     Anyway, I won't dwell on this any further unless someone complains and
>     comes with a different notion of "tail call".

Agreed.  I think we can put this one to bed.
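[Editorial sketch, not from the correspondence: Appel's tail-context rules quoted above are mechanical enough to check by program. The following is a hypothetical Python sketch over a toy AST mirroring his four example forms; all names are invented here, not taken from any of the cited books.]

```python
from dataclasses import dataclass

# Toy AST mirroring Appel's four example forms.
@dataclass
class Call:           # C_1 (C_2)
    fn: str
    arg: object
@dataclass
class Let:            # let var x := C_1 in B_1 end
    var: str
    init: object
    body: object
@dataclass
class If:             # if C_1 then B_1 else B_2
    cond: object
    then: object
    els: object
@dataclass
class Add:            # C_1 + C_2
    left: object
    right: object

def tail_calls(e, in_tail=True, acc=None):
    """Collect functions called in tail position, per the rules quoted above."""
    if acc is None:
        acc = []
    if isinstance(e, Call):
        if in_tail:
            acc.append(e.fn)
        tail_calls(e.arg, False, acc)       # an argument is never a tail context
    elif isinstance(e, Let):
        tail_calls(e.init, False, acc)      # C_1: not a tail context
        tail_calls(e.body, in_tail, acc)    # B_1: inherits the enclosing tail-ness
    elif isinstance(e, If):
        tail_calls(e.cond, False, acc)
        tail_calls(e.then, in_tail, acc)    # both branches inherit tail-ness
        tail_calls(e.els, in_tail, acc)
    elif isinstance(e, Add):
        tail_calls(e.left, False, acc)      # neither operand is a tail context
        tail_calls(e.right, False, acc)
    return acc
```

For an expression like "if p(x) then f(x) else g(x) + 1", only the call to f is in tail position.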



We should also add a 1.5: "recursive", or "recursion".  Most sources
say that recursion is the expression of a problem in terms of simpler
problems.  Many, if not most, of the sources I see now on the web say
that it always pertains to the same functionality (i.e. a self-call),
but at least one or two didn't make that requirement, and my recollection
from the early '70s, in my hardware days, was that some of the definitions
of "recursion" described it as nothing more than a synonym for
nesting, as in nested subroutine calls.

Some of the newer definitions I see tend to define "recursion" as
self-calling, and then they define "mutual recursion" as two or
more functions calling each other - the etymology of this second
term would probably have had to come from the more archaic usage
of recursion I was familiar with.

So as a 1.5 item, I am willing to live with the current common usage
of "recursion" as essentially meaning "self-call".

> 2.  "Tail recursion".  This one is not so clear.  Different authors
>     have different ideas of what they mean by it.  In all cases I
>     could find during a cursory search (for a sample, see, e.g.,
>     Sethi's "Programming Languages, Concepts and Constructs",
>     Reynolds' "Theories of Programming Languages", the Dragon Book)
>     the author refers to a property of a program (or a program part
>     such as a particular function).
> 
>     Also, in nearly all cases they in fact describe it as if it were a
>     syntactic property.  Nobody seems to bother defining it formally
>     enough, though.  Most of the time it is done by example, and
>     examples tend to show things syntactically.  It is not clear
>     whether or not they had the more general notion (see below) in
>     mind, but given the context, some of them don't seem to.
>     (If you have seen my other article where I
>     listed 3 possible cases -- locally obvious, globally obvious,
>     undecidable -- they always refer to the locally obvious case.
>     The discussion invariably is about how to turn this case into
>     explicit loops.)
> 
>     I also decided to at least see what Google comes up with.  To my
>     surprise, there wasn't much, and most of it was from the Jargon
>     file.
> 
>     Anyway, I don't think that a precise definition of "tail
>     recursion" is even necessary for practical purposes.  In most
>     common usage though, the term refers to programs and not to
>     implementations or languages.  It also seems to be always
>     consistent (albeit often quite less general) with the notion of a
>     function f calling itself via a chain of tail calls (in the sense
>     of 1. above) where each such call may or may not be syntactically
>     apparent. (It might be mediated by some HOF instead.)

So because it is not so well-defined, and since we do seem to agree
that a definition of "tail recursion" is not even necessary, let's
also agree to try not to use the term anymore in discussions, because
it becomes a loaded term which doesn't anchor the participants'
expectations of its meaning.

> 3.  "Tail-recursive".  This, in fact, I have seen used both in
>     reference to programs (or program parts) and implementations (but,
>     fortunately, the latter only in one case).  Most authors call a
>     program (part) "tail recursive" if it uses tail recursion in the
>     sense of 2.  Some reserve the word for those program (parts) that
>     use tail-recursion exclusively (i.e., that do not exhibit recursion
>     that involves non-tail calls).
> 
>     By the way, the one use of the adjective "tail-recursive" in
>     reference to implementations that I have seen is in SICP where the
>     authors use it the same way RnRS uses "Properly Tail-Recursive".
>     (I find this unfortunate, in case you wonder.)

Yes, William Clinger and I were discussing it, and I think he was also
referring to "tail recursion" as well.

>     I have not seen the adjective applied to languages, but I can
>     think of one such usage that I would be comfortable with: A
>     language is tail-recursive if all the programs that you write in
>     it are tail-recursive (in the sense that tail recursion is the
>     only expressible recursion).  As a matter of fact, I have just
>     recently designed and implemented such a language: the language
>     allows recursive tail calls and also non-tail calls if they are
>     not recursive.  The non-recursion requirement in the non-tail case
>     is guaranteed by a combination of syntactic restrictions and type
>     checking. See our forthcoming PLDI paper for more information on
>     this work.  (This was joint work with Lal George.)

By "non-recursion", I am assuming here that you mean "non-self-calling",
right?  If not, what is an example of the difference in this context of
"a non-tail call if they are non-recursive"?  Sorry for my density here,
but this one is confusing me.

> 4. "Proper Tail-Recursion".  This is, indeed, used mainly in Scheme
>    circles.  I don't particularly like the term myself, and for a long
>    time I objected to even the notion it tries to denote because that
>    notion was in fact ill-defined. The wording used in RnRS still
>    merely appeals to the reader's assumed knowledge of how one would
>    implement Scheme.

This is interesting; I had never liked the phrase either, but did not
realize that there were Schemers who also didn't like the phrase.  It
seems as though our dislike of the phrase is for similar reasons.  But
the reason why I had originally objected to the phrase is that outside
of Scheme (and related) circles, the whole term is not defined, and thus
those of us outside of these circles tend to form what we think is a
definition by composition - prepending the qualitative adjective "proper"
to the ill-defined phrase "tail-recursion" is a recipe for disaster
in understanding.

If we (the Computer Science academia and industry) can agree on a new
term for this concept, one which will make Schemers happy because it
starts to become more mnemonic, and one which will make CLers happy
because there are no built-in incorrect assumptions, then it would
become a lot easier to communicate the concepts across the boundaries
of these disciplines.

Yesterday I suggested "space bounded recursion" as a potential
alternative phrase which could be defined.  I'm not tied to it,
but perhaps it could serve as a starting point.  Another possibility
is simply "safe for space".  See below.

>    Related notions, however, have been used by a variety of authors,
>    including Will Clinger, who very well may have exhausted this topic
>    by now.  Earlier on, Appel suggested "safe-for-space" and showed an
>    axiomatic approach.  What's especially nice about it is that it
>    gives you a framework: changing the axioms one can derive different
>    notions of what "safe-for-space" means.  A language definition can
>    choose to require some sort of space safety, and implementations can
>    be judged according to that.  But there is no need for a
>    one-size-fits-all notion of space safety.

Yes, using a term that has to do with space safety is probably not a
problem for CLers, since we don't deal that much with such safety.
Of course, we do get bitten by stack overflows once in a while, but
that tends to be a tradeoff we are willing to make.  If there were
to be any problem with the term "safe", it is that the safety here
refers to asymptotic space usage behavior, and that is not necessarily
clear from the generic term "safety" or "safe".  But since "safe for space"
is an unencumbered term as far as I know, there seems to be no way
it could be misunderstood as something it is not.

>    Of course, every language implementation is safe-for-space for a
>    sufficiently lenient set of axioms.  When comparing languages or
>    their implementations based on space safety, one has to make sure
>    that the axiom sets used in either case are sufficiently
>    closely related.
> 
>    Space complexity measures in general, of course, are much more
>    widely used throughout Computer Science.
> 
> Matthias

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Matthias Blume
Subject: Re: back to the object level [Re: Meta Discussion about emotion (Off Topic: was: Re: Implementing call/cc ...)]
Date: 
Message-ID: <m23ck68f7s.fsf@tti5.uchicago.edu>
Duane Rettig <·····@franz.com> writes:

> We should also add a 1.5: "recursive", or "recursion".  Most sources
> say that recursion is the expression of a problem in terms of simpler
> problems.  Many, if not most, of the sources I see now on the web say
> that it always pertains to the same  functionality (i.e. a self-call),
> but at least one or two didn't make that requirement, and my recollections
> from the early '70s, in my hardware days, was that some of the definitions
> of "recursion" described it as nothing more than a synonym for
> nesting as in nested subroutine calls.
> 
> Some of the newer definitions I see tend to define "recursion" as
> self-calling, and then they define "mutual recursion" as two or
> more functions calling each other - the etymology of this second
> term would probably have had to come from the more archaic usage
> of recursion I was familiar with.

Actually, it is worse than that.  There is also "recursion theory" --
which applies this term to everything that is *computable*.

> >     I have not seen the adjective applied to languages, but I can
> >     think of one such usage that I would be comfortable with: A
> >     language is tail-recursive if all the programs that you write in
> >     it are tail-recursive (in the sense that tail recursion is the
> >     only expressible recursion).  As a matter of fact, I have just
> >     recently designed and implemented such a language: the language
> >     allows recursive tail calls and also non-tail calls if they are
> >     not recursive.  The non-recursion requirement in the non-tail case
> >     is guaranteed by a combination of syntactic restrictions and type
> >     checking. See our forthcoming PLDI paper for more information on
> >     this work.  (This was joint work with Lal George.)
> 
> By "non-recursion", I am assuming here that you mean "non-self-calling",
> right?  If not, what is an example of the difference in this context of
> "a non-tail call if they are non-recursive"?  Sorry for my density here,
> but this one is confusing me.

Yes, I mean non-self-calling (directly or indirectly).  In other
words, a function f may call itself, but the call chain leading to
this self-call must consist of tail-calls only.

The language also allows Pascal-style function values (downward funarg
only), so functions are not quite first-class, and neither are what we
call exceptions in this language.  But the bottom line is -- with all
our restrictions on the above in place -- that the language can be
compiled without any stack or any heap to a Fortran-like runtime
model.

Matthias
From: Joe Marshall
Subject: Re: back to the object level [Re: Meta Discussion about emotion (Off Topic: was: Re: Implementing call/cc ...)]
Date: 
Message-ID: <65p2g0u7.fsf@ccs.neu.edu>
Matthias Blume <········@shimizu-blume.com> writes:

> 4. "Proper Tail-Recursion".  This is, indeed, used mainly in Scheme
>    circles.  I don't particularly like the term myself, and for a long
>    time I objected to even the notion it tries to denote because that
>    notion was in fact ill-defined. The wording used in RnRS still
> merely appeals to the reader's assumed knowledge of how one would
>    implement Scheme.

This is true.  The term seems to be useful in differentiating between
those implementations that `pop the stack' upon any exit from a
function versus those that merely notice `self calls' and change them
into loops.  The latter is better than nothing, but isn't `proper tail
recursion'.
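A minimal sketch (my example, not from the post) of why self-call detection falls short is mutual recursion:

```scheme
;; Hypothetical example: these two procedures tail-call each other,
;; but neither ever calls itself directly, so an implementation that
;; only turns self-calls into loops still grows the stack without
;; bound, while a properly tail-recursive implementation runs this
;; in constant space.
(define (my-even? n)
  (if (= n 0) #t (my-odd? (- n 1))))

(define (my-odd? n)
  (if (= n 0) #f (my-even? (- n 1))))
```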
From: ozan s yigit
Subject: Re: back to the object level [Re: Meta Discussion about emotion (Off Topic: was: Re: Implementing call/cc ...)]
Date: 
Message-ID: <vi4el3jpj8t.fsf@blue.cs.yorku.ca>
Matthias Blume: [tail call/tail recursion/proper tail recursion
notes and references elided]

i thought i'd add that appel's "compiling with continuations" talks
specifically about the scheme report's "mysterious injunction" on tail
recursion under the "space complexity chapter" with examples.

oz
---
In der Kunst ist es schwer etwas zu sagen,
was so gut ist wie: nichts zu sagen. - wittgenstein
From: Fred Gilham
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <u78yu1km22.fsf@snapdragon.csl.sri.com>
Matthias Blume <········@shimizu-blume.com> wrote:
> Tail-recursion is a property of (a portion of) the dynamic execution
> of a program -- namely when all function calls (within that portion)
> are tail calls.

This is interesting, but how does it affect compilation?  Are you
saying that tail recursion is more general than the syntactic property
I alluded to, because sometimes something will be a tail call even
though it is not syntactically apparent?  So the idea is that a
compiler can transform the syntactically apparent tail calls but not
the other kind?  I thought all tail call merging or reduction or
whatever one wants to call it operated syntactically.  If it's a
run-time property then it seems you can't compile it, at least all the
time.

If you mean something else, I'd be curious to hear it.

-- 
Fred Gilham                                        ······@csl.sri.com
America does not know the difference between sex and money. It treats
sex like money because it treats sex as a medium of exchange, and it
treats money like sex because it expects its money to get pregnant and
reproduce.                                   --- Peter Kreeft
From: Matthias Blume
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <m24r4pkjse.fsf@tti5.uchicago.edu>
Fred Gilham <······@snapdragon.csl.sri.com> writes:

> Matthias Blume <········@shimizu-blume.com> wrote:
> > Tail-recursion is a property of (a portion of) the dynamic execution
> > of a program -- namely when all function calls (within that portion)
> > are tail calls.
> 
> This is interesting, but how does it affect compilation?

It doesn't.

>  Are you
> saying that tail recursion is more general than the syntactic property
> I alluded to, because sometimes something will be a tail call even
> though it is not syntactically apparent?

No.  All tail calls are syntactically apparent.  But most programs
contain both tail-calls and non-tail calls.  Tail-recursive behavior
is exhibited by certain programs, and whether or not a program is
tail-recursive has to do with the execution of those tail calls and
non-tail calls.

Roughly speaking, a particular program execution is tail-recursive if
every non-tail call eventually returns.  (Notice that tail-calls do
not have to ever return for that.)  A program is tail-recursive if
every possible execution of that program is tail-recursive.(*)

To give an example (in Scheme):

   (define (f i n)
     (if (> i n) '()
         (if (goldbach-counterexample? i)
             (begin (f (+ i 2)) (print "counterexample!\n"))
             (f (+ i 2)))))

where the predicate goldbach-counterexample? returns true iff its
argument is even, greater than 3, and not the sum of two primes.  Now,
Goldbach and Euler believe that this function f is tail-recursive, but
we don't know for sure because it depends on whether or not what's
known as the Goldbach Conjecture is true or not.  Individual
executions of this program are tail-recursive, though, at least those
with a relatively small n (no counterexample is known up to
2 * 10^16 as of this writing [MathWorld]).

So far, nobody said anything about implementations.  Tail-calls and
tail-recursion are properties of programs -- the first one being
syntactic.

Finally, an *implementation* is Properly Tail-Recursive if the amount
of space allocated on behalf of procedure calls is asymptotically
proportional to the number of pending non-tail calls (= non-tail calls
that have not yet returned).
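A small sketch (my example, not part of the original post) of what this definition permits and requires:

```scheme
;; sum1 makes one pending non-tail call per element, so even a
;; properly tail-recursive implementation may use space linear in
;; the length of the list:
(define (sum1 lst)
  (if (null? lst)
      0
      (+ (car lst) (sum1 (cdr lst)))))     ; non-tail call

;; sum2 makes only tail calls; a properly tail-recursive
;; implementation must run it in constant space, since there are
;; never any pending non-tail calls:
(define (sum2 lst acc)
  (if (null? lst)
      acc
      (sum2 (cdr lst) (+ acc (car lst))))) ; tail call
```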

[Side note: I have complained about the Proper Tail-Recursion
requirement in RnRS many times because for the above "definition" to
make sense one must say what "on behalf of procedure calls" means --
but RnRS doesn't.  Will Clinger's paper closes this gap, but it has
not made it into the Scheme definition.]

Matthias

(*) I can think of another plausible definition of "tail recursion": A
program execution is tail-recursive if it makes *no* non-tail calls.
Notice that this is a trivial instance of the former definition.
Moreover, you perhaps noticed that the definition of "Proper
Tail-Recursion" -- which is a property ascribed to implementations as
opposed to programs -- does not mention "tail recursion" but only
"tail call" (actually: "non-tail call"), so it does not depend on the
subtleties of what one wishes to mean by "tail recursion".
From: Matthias Blume
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <pan.2003.04.23.23.03.16.251363@shimizu-blume.com>
On Wed, 23 Apr 2003 16:04:17 -0500, Matthias Blume wrote:

> Fred Gilham <······@snapdragon.csl.sri.com> writes:
> 
>> Matthias Blume <········@shimizu-blume.com> wrote:
>> > Tail-recursion is a property of (a portion of) the dynamic execution
>> > of a program -- namely when all function calls (within that portion)
>> > are tail calls.
>> 
>> This is interesting, but how does it affect compilation?
> 
> It doesn't.
> 
>>  Are you
>> saying that tail recursion is more general than the syntactic property
>> I alluded to, because sometimes something will be a tail call even
>> though it is not syntactically apparent?
> 
> No.  All tail calls are syntactically apparent.  But most programs
> contain both tail-calls and non-tail calls.  Tail-recursive behavior
> is exhibited by certain programs, and whether or not a program is
> tail-recursive has to do with the execution of those tail calls and
> non-tail calls.
> 
> Roughly speaking, a particular program execution is tail-recursive if
> every non-tail call eventually returns.  (Notice that tail-calls do
> not have to ever return for that.)  A program is tail-recursive if
> every possible execution of that program is tail-recursive.(*)
> 
> To give an example (in Scheme):
> 
>    (define (f i n)
>      (if (> i n) '()
>          (if (goldbach-counterexample? i)
>              (begin (f (+ i 2)) (print "counterexample!\n"))
>              (f (+ i 2)))))
> 
> where the predicate goldbach-counterexample? returns true iff its
> argument is even, greater than 3, and not the sum of two primes.  Now,
> Goldbach and Euler believe that this function f is tail-recursive, but
> we don't know for sure because it depends on whether or not what's
> known as the Goldbach Conjecture is true or not.  Individual
> executions of this program are tail-recursive, though, at least those
> with a relatively small n (no counterexample is known up to
> 2 * 10^16 as of this writing [MathWorld]).
> 

Actually, I just realized that this would be tail-recursive 
according to my own definition above because of the
upper bound on the recursion (n).

... which just goes to show that "tail recursion" is, indeed,
a somewhat fuzzy thing.  Fortunately, however, one does not
actually have to nail this one down in practice (but I already
said that).

Matthias
From: Erann Gat
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <gat-2304031831240001@192.168.1.51>
In article <······························@shimizu-blume.com>, "Matthias
Blume" <········@shimizu-blume.com> wrote:

> > To give an example (in Scheme):
> > 
> >    (define (f i n)
> >      (if (> i n) '()
> >          (if (goldbach-counterexample? i)
> >              (begin (f (+ i 2)) (print "counterexample!\n"))
> >              (f (+ i 2)))))
> > 
> > where the predicate goldbach-counterexample? returns true iff its
> > argument is even, greater than 3, and not the sum of two primes.  Now,
> > Goldbach and Euler believe that this function f is tail-recursive, but
> > we don't know for sure because it depends on whether or not what's
> > known as the Goldbach Conjecture is true or not.  Individual
> > executions of this program are tail-recursive, though, at least those
> > with a relatively small n (no counterexample is known up to
> > 2 * 10^16 as of this writing [MathWorld]).
> > 
> 
> Actually, I just realized that this would be tail-recursive 
> according to my own definition above because of the
> upper bound on the recursion (n).

Actually, your code as given will generate an error, which I suppose also
makes it tail recursive according to your definition.

> ... which just goes to show that "tail recursion" is, indeed,
> a somewhat fuzzy thing.

I'd say it goes to show that your definition is pretty silly and useless.

E.
From: Matthias Blume
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <pan.2003.04.24.03.19.40.733274@shimizu-blume.com>
On Wed, 23 Apr 2003 18:31:24 -0700, Erann Gat wrote:

> Actually, your code as given will generate an error, which I suppose also
> makes it tail recursive according to your definition.

Sorry, I didn't test it.  I guess I forgot to pass n to the recursive
calls of f.  Silly me.  

>> ... which just goes to show that "tail recursion" is, indeed,
>> a somewhat fuzzy thing.
> 
> I'd say it goes to show that your definition is pretty silly and useless.

True, it did occur to me later that it doesn't quite capture what I wanted
it to capture.  A much simpler "definition" (which I also just posted in
another reply) is this:

  - A function f is recursive if it directly or indirectly calls itself.
  - A recursive function f is tail-recursive if the following holds:
    Whenever f calls f, all pending calls between the two calls of
    f -- including the second call of f, but excluding the first one --
    are tail calls.
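A sketch (mine, not Matthias's) of a function that is recursive but fails this definition:

```scheme
;; f calls itself only indirectly, through g.  The call (g (- n 1))
;; is a non-tail call (its result feeds into +), and it is pending
;; between any two calls of f, so by the definition above f is
;; recursive but not tail-recursive -- even though g's call of f
;; is itself a tail call.
(define (f n)
  (if (= n 0)
      0
      (+ 1 (g (- n 1)))))   ; non-tail call of g

(define (g n)
  (f n))                    ; tail call of f
```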

This is somewhat better, but, admittedly, I can already see a number of
problems with it, too.  But notice that it really does not matter what we call
"tail recursion".  What matters is "tail calls" vs. "non-tail calls",
which is easy and should be uncontroversial.  The contentious issue was
the one of "Proper Tail Recursion" -- which, contrary to what its name
suggests, is not a special kind of "tail recursion" and which can, indeed, be
explained entirely without reference to "tail recursion" (whatever we
decide the latter to mean).

Matthias
From: Jeffrey Mark Siskind
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <a520654a.0304240446.60e93a9c@posting.google.com>
Let me ask a substantive question: Does RnRS say anything about whether
for-each tail calls its first argument on its second argument? I.e. is the
following a valid implementation of for-each?

(define (for-each procedure list)
 (if (not (null? list))
     (begin (procedure (car list))
            (for-each procedure (cdr list)))))

Or must it be defined as:

(define (for-each procedure list)
 (if (not (null? list))
     (if (null? (cdr list))
         (procedure (car list))
         (begin (procedure (car list))
                (for-each procedure (cdr list))))))

If it doesn't, should it? Has anybody thought about this before?

This actually came up in practice in the following situation. The Prolog
community has long realized that the issue of tail call and potential for tail
merging comes up two ways in Prolog: one in the same sense that it occurs in
Scheme and one in the case of backtracking. The latter corresponds to a tail
call in a CPS converted program. I have been writing Prolog-like code in a
nondeterministic dialect of Scheme for years. A standard thing to do is:

(define (a-member-of list)
 (if (null? list)
     (fail)
     (either (first list) (a-member-of (rest list)))))

I was writing an LALR parser that would allow for ambiguous grammars through
nondeterminism. The essence of the code looks like:

(let* ((actions (possible-actions top-of-state-stack next-token))
       (action (a-member-of actions)))
 (case (kind action)
  ((shift) ...)
  ((reduce) ...)
  ((accept) ...)
  ((error) ...)
  (else (fuck-up))))

Traditional LALR parsers only handle the case where there is a single possible
action for each state-token pair. Mine handled the case of multiple possible
actions by nondeterministic backtracking. But I gave it unambiguous grammars
where there was only a single possible action, represented as a singleton
list. I (naively) expected the trail to be constant size, independent of the
input-string length because my implementation of either does `tail-call its
last argument'. What I mean by that is that the top element of the trail aka
the choice-point stack (actually implemented on the heap) is popped off before
evaluating the last alternative of an either. But despite this, I was still
getting growth of the trail, linear in the length of the input string. I
realized that I needed to change the definition of a-member-of to:

(define (a-member-of list)
 (if (null? list)
     (fail)
     (if (null? (rest list))
         (first list)
         (either (first list) (a-member-of (rest list))))))

to get a constant-sized trail.

Later on, for various reasons, I rewrote the parser in CPS without using the
nondeterministic primitives. And the essence of the parser looked like:

(let ((actions (possible-actions top-of-state-stack next-token)))
 (for-each
  (lambda (action)
   (case (kind action)
    ((shift) ...)
    ((reduce) ...)
    ((accept) ...)
    ((error) ...)
    (else (fuck-up))))
  actions))

And I observed non-constant-space behavior of the program in several
Scheme implementations that I knew were properly tail recursive.  I thought
that there were bugs in those implementations.  But I then realized that the
problem was not whether those implementations were properly tail recursive
but how they implemented for-each, analogous to my issue with my
definition of a-member-of. When I wrote my own version of for-each, along the
lines of the second definition above, I again got constant-space behavior
independent of the input-string length.

Does RnRS say anything about the implementation of for-each? Should it?
Has anybody thought about this before?
From: Joe Marshall
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <y920hut8.fsf@ccs.neu.edu>
····@purdue.edu (Jeffrey Mark Siskind) writes:

> Does RnRS say anything about the implementation of for-each? 

Nope.  (Well, not about tail recursive properties of for-each.  It
certainly says *something* about how for-each is supposed to behave.)

> Should it?

Yes.  I think so.

> Has anybody thought about this before?

Yes.

These sorts of things crop up not infrequently.  My favorite example
is in the standard implementation of streams (lazy lists) using DELAY
and FORCE.  The simple implementation of FILTER ends up consuming
(temporarily) a large amount of resources that it shouldn't.  The use
of DELAY and FORCE, however, obscures the problem quite well.
(Essentially, the tail-call ends up being DELAYED, and the structure
necessary to hold the intermediate result gets allocated on the heap.
Neither the compiler nor the interpreter can easily determine that
the result will simply be returned to the next intermediate DELAYED
call, so a chain of useless pending memoization is built up.  This is
directly analogous to the useless pending stack frames built up when
tail recursion is used in a system that does not optimize them out.)
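A rough reconstruction (not Joe's actual code) of the shape of the problem, representing a stream as a promise for either '() or a pair of an element and the rest of the stream:

```scheme
;; Naive stream filter.  In the rejection branch the recursive call
;; sits under the outer DELAY, so forcing the result over a long run
;; of rejected elements builds a chain of promises, each pending
;; only to hand its value back to the previous FORCE -- the heap
;; analogue of unmerged tail calls.
(define (stream-filter pred s)
  (delay
    (let ((v (force s)))
      (cond ((null? v) '())
            ((pred (car v))
             (cons (car v)
                   (stream-filter pred (cdr v))))
            (else
             (force (stream-filter pred (cdr v))))))))
```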

Another example:  imagine a DOTIMES macro in Scheme.  Should the last
iteration call the body tail recursively?
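A sketch of such a macro (hypothetical, not from any standard) makes the question concrete:

```scheme
;; Simplified DOTIMES.  As written, the body is never in tail
;; position: even on the last iteration, LOOP still re-tests the
;; counter after the body returns.  Making the final body call a
;; tail call would require testing whether this is the last
;; iteration before running the body, much like the second
;; definition of FOR-EACH earlier in this thread.
(define-syntax dotimes
  (syntax-rules ()
    ((_ (var n) body ...)
     (let loop ((var 0))
       (when (< var n)
         body ...
         (loop (+ var 1)))))))
```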
From: Don Geddis
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <m3lly1ar6p.fsf@maul.geddis.org>
Matthias Blume <········@shimizu-blume.com> writes:
> Moreover, I don't see how "debuggability" could possibly be "pessimized".  If
> I can't rely on tail-call optimization, I have to write explicit loops --
> which have the same (or worse!) "debuggability" than the program with tail
> calls.

Why must you write loops?  Why can't you keep the recursive definition, but
not merge the tail calls?  (Yes, obviously you might run out of stack space
for large inputs, but surely there are situations where that isn't the primary
concern.)

Wouldn't it be easier for a novice to understand and debug a recursive function
if all the stack frames remained, and the student could trace the exact
sequence of calls?

Optimizing (aka merging) the tail calls sure saves on stack space, but it
seems to make debugging slightly worse (just like writing an explicit loop
would).

        -- Don
_______________________________________________________________________________
Don Geddis                    http://don.geddis.org              ···@geddis.org
From: Matthias Blume
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <m2ist5l02u.fsf@tti5.uchicago.edu>
Fred Gilham <······@snapdragon.csl.sri.com> writes:

> Tail recursion: A syntactic property (somewhat complicated definition
> which I omit)

No.  Tail recursion is not a syntactic property.  See below.

> Tail call: A syntactic property (more general than tail recursion)

Being a tail call is, indeed, a syntactic property.  "Tail recursion"
is a dynamic property of a particular program execution.  A program
behaves tail-recursively on a segment of its dynamic execution if all
function calls that get executed within this segment are tail calls.

Matthias
From: ····@sonic.net
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <3EA6AACB.134EF051@sonic.net>
Fred Gilham wrote:
> 
> Joe Marshall <···@ccs.neu.edu> wrote:
> > ....
> >
> > My guess is that almost everyone that comes to comp.lang.lisp and
> > asks about `proper tail recursion' is in fact thinking only about
> > simple `tail call optimization' and not the weird screw cases.

Keep in mind that CL doesn't have the same kind of 
continuations (and therefore the same kind of stack 
behavior) as Scheme.  I think tail merging really 
would achieve PTR in a language like CL, so if CL 
people don't understand the difference, cut them a 
little slack because they've never needed to.
 
> Proper tail recursion:  A property relating to space complexity
> (bounded space consumption for tail calls)

works for me.
 
> Tail recursion: A syntactic property (somewhat complicated definition
> which I omit)

I think it's a semantic property rather than syntactic, but 
even scheme doesn't require you to distinguish the syntax 
from the semantics here, so I'll go with it.
 
> Tail call: A syntactic property (more general than tail recursion)

Ditto.
 
> Tail call merging: In essence automated use of the JRST hack for tail
> calls.  The BASIC version of the JRST hack, which is where I first saw
> it, is replacing
 
> 100 GOSUB 200
> 110 RETURN
> 
> with
> 
> 100 GOTO 200

Wow, you must have been doing this longer than me.  :-)
 
> Tail call optimization: Optional use of tail call merging with the
> intent of minimizing space complexity

Tail Call Optimization is, AFAIK, not a technically loaded phrase 
the way "Proper Tail Recursion" is.  You can use it to describe any
optimization you want that applies to tail calls.  Usually that's 
going to be an optimization for space, but other things are 
conceivable. 
 
> Scheme requires tail call merging (which is why it's not an
> optimization for Scheme) and requires that it have the property of
> proper tail recursion.

Actually, scheme doesn't require tail call merging.  Basically
what it requires is that you can execute any number of tail 
calls without requiring more than a bounded number of stack 
frames be allocated at any time.  You can do close to that 
with tail call merging, and most schemes do tail call 
merging for efficiency.  And tail call merging would be 
sufficient to do it (I think) in a language with CL's semantics
and stack discipline.

But tail call merging isn't going to handle the allocation 
strategy for continuations of the type generated by scheme's 
call/cc (ie, continuations with unlimited extent, which can 
be called multiple times).

And you can achieve the property which scheme requires by 
allocating call frames on the heap, "snapping" the return 
pointer in the allocated frames if it's a tail call, and 
just garbage collecting the heap whenever you run out of
space.  It's not the most efficient way to meet the requirement, 
but that system by itself can achieve PTR, and tail call 
merging by itself can't.  

Hope this helps...

			Bear
From: Fred Gilham
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <u71xztklii.fsf@snapdragon.csl.sri.com>
····@sonic.net wrote:
> Actually, scheme doesn't require tail call merging....
>
> But tail call merging isn't going to handle....
>
> And you can achieve the property which scheme requires by allocating
> call frames on the heap....
>
> Hope this helps...

I see your point.  Thanks, I think it helps. :-)

-- 
Fred Gilham                                          ······@csl.sri.com
The Net interprets Microsoft products as damage, and routes around them.
From: David Rush
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <b83o4n$p8h$2@pixie.nscp.aoltw.net>
···@jpl.nasa.gov (Erann Gat) writes:
> In article <····························@posting.google.com>,
> ······@qnci.net (William D Clinger) wrote:
> > The examples in my paper aren't at all realistic; they were the simplest
> > I could contrive.  Here is the example I gave:

> Bottom line: even after carefully studying your post and your paper I
> still don't understand what you and Duane are arguing about.
> 
> I also find it rather frustrating that you generate a whole series of
> examples that you yourself admit are unconvincing, and then instead of
> finally generating a convincing example you instead do some
> hand-waving 

He also pointed you at some excellent papers by people who have
covered this whole topic in a lot of detail. In fact, the full TCO bar
was raised to the standard of "safe-for-space" quite a few years
ago. A quick google search for "safe-for-space andrew appel" returned
107 references, quite a few of which go into some detail on the topic.

> beginning to understand why Erik Naggum would get so infuriated with
> Schemers.

You don't really want to go there...

david rush
-- 
Mathematicians stay away from actual numbers as much as possible. We
like to talk about numbers without actually exposing ourselves to
them - that's what computers are for.
	-- Randy Waterhouse in _Cryptonomicon_
From: Erann Gat
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <gat-2204031055480001@k-137-79-50-101.jpl.nasa.gov>
In article <············@pixie.nscp.aoltw.net>, David Rush <·····@aol.net>
wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> > In article <····························@posting.google.com>,
> > ······@qnci.net (William D Clinger) wrote:
> > > The examples in my paper aren't at all realistic; they were the simplest
> > > I could contrive.  Here is the example I gave:
> 
> > Bottom line: even after carefully studying your post and your paper I
> > still don't understand what you and Duane are arguing about.
> > 
> > I also find it rather frustrating that you generate a whole series of
> > examples that you yourself admit are unconvincing, and then instead of
> > finally generating a convincing example you instead do some
> > hand-waving 
> 
> He also pointed you at some excellent papers by people who have
> covered this whole topic in a lot of detail. In fact, the full TCO bar
> was raised to the standard of "safe-for-space" quite a few years
> ago. A quick google search for "safe-for-space andrew appel" returned
> 107 references, quite a few of which go into some detail on the topic.

You miss the point.  I have no doubt that if I took the time to do the
research that the answer would be there.  But Will was already taking the
time to write an explanation rather than just pointing people at papers. 
Why would he do that if his aim was not to make the topic more accessible,
so that people without the time or the inclination to do the research
could also understand the issue?  My intention was not to say: you owe me
an explanation.  My intention was to say: it appeared to me that you were
trying to communicate something, and from my point of view you failed;
thought you might like to know.  (Personally, I like getting feedback from
people I'm trying to explain things to, to let me know if I'm getting
through.)

E.
From: ozan s yigit
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <vi465ox8k2j.fsf@blue.cs.yorku.ca>
······@qnci.net (William D Clinger) writes:
> 
> The following paper contains several relevant citations:
> 
>     William D Clinger. Proper tail recursion and space efficiency.
>     In the Proceedings of the 1998 ACM Conference on Programming
>     Language Design and Implementation, June 1998, pages 174-185.
>     ftp://ftp.ccs.neu.edu/pub/people/will/tail.ps.gz
> 

> [abstract and discussion elided]

i think the paper and the discussion close the lid on this topic; one
would hope rettig (or whomever) would attach this note into some sort of a
FAQ for the group, if for no other reason than to reduce the now exceedingly
silly "define what you really really really mean" discussions that emerge
when basic literacy requirements are not met...

oz
---
humans occasionally make mistakes, even programmers do. -- gerard j holzmann
From: Madhu
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <m3u1che18d.fsf@robolove.meer.net>
Helu

* ozan s yigit  <···············@blue.cs.yorku.ca>

| i think the paper and the discussion closes the lid on this topic; one
| would hope rettig (or whomever) would attach this note into some sort of a
| FAQ for the group, if for no other reason than to reduce the now exceedingly
| silly "define what you really really really mean" discussions that emerge
| when basic literacy requirements are not met...

The basic literacy requirement that's not being met seems to be on
your part, in (not) following the subsequent discussion on this
thread.


Regards
Madhu

--
Open system or closed system, enlightenment or ideology, those are the
questions.	         	    "John C. Mallery" <····@ai.mit>
From: ozan s yigit
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <vi4n0i983s8.fsf@blue.cs.yorku.ca>
Madhu <·······@meer.net> writes:

> The basic literacy requirement thats not being met seems to be on
> your part, in (not) following the subsequent discussion on this
> thread.

i have followed the same thread for several years in a row by now. :)
the comment had to do with published work and was more general than you
seem to think. if you are new to the medium, you may find that the usual usenet
discussions rarely have any deep consequences of literacy; most people
who have important, consequential points to make [if they haven't made
them elsewhere already] don't post them on usenet anymore...

oz
--- 
bosh requires a good deal of care. - edward lear
From: Duane Rettig
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <48yu5litk.fsf@beta.franz.com>
······@qnci.net (William D Clinger) writes:

> Duane Rettig wrote:
> > This thread is crossposted.  The two newsgroups have different contexts.
> > Please don't use the term 'proper tail recursion' in comp.lang.lisp
> > without qualifying it, either by calling it "Scheme's 'proper tail
> > recursion'" or "'proper tail recursion' as defined by Scheme'.
> 
> Whatever the merits of the phrase "proper tail recursion" may be, it
> is the phrase that was coined by the researchers who investigated and
> defined this concept.  It means what they defined it to mean, and as
> clarified and elaborated by those of us who continued their line of
> research.
> 
> The concept of proper tail recursion is not unique to Scheme.  That
> and related concepts such as Appel's safe-for-space-complexity have
> proved useful in quite a few languages.  Scheme is no longer the only
> language that mandates proper tail recursion.
> 
> The compiler community has developed a separate tradition of tail call
> optimization.  This has created some confusion, as many people assumed
> that proper tail recursion is the same thing as tail call optimization.
> They are not, however, the same.  There are cases in which it is not
> possible to achieve the asymptotic space efficiency of proper tail
> recursion while allocating variables on a stack.  In the compiler
> community's definition of tail call optimization, these conflicts
> between stack allocation and space efficiency are resolved by using
> stack allocation at the expense of space efficiency.  Proper tail
> recursion requires these conflicts to be resolved in favor of space
> efficiency, at the expense of stack allocation.

So far you are describing a Scheme-based technique (whether Scheme was
first or last to use it or not) for turning recursion into iteration.
That's fine; I don't have any problem with you using your term, but when
you take it into the CL community, please don't call it "proper tail
recursion" without explaining what it means, either by a reference, or
by a brief description.  In the CL community, "tail call" and "tail
recursion" have a certain meaning, and it has nothing to do with iteration.
You may want to turn recursion into iteration.  We don't have that
obsession.

> So it is a technical fact that tail call optimization and proper tail
> recursion are different things.  Nonetheless it remains easy for people
> to confuse the two.  That is why people who understand the distinction
> are inclined to object when someone proposes to replace the established
> names of these concepts by names that are likely to create even more
> confusion.

You are making my case perfectly for me.  When a coined phrase is
confusing, many times it is the specialized etymology of the domain
which is causing the confusion, and a few extra words are necessary
to clarify in a broader community.

> For example, it is not at all clear to me whether Duane intends for
> his preferred terms, "tail merging" or "tail-call merging", to mean
> tail call optimization or proper tail recursion.

Then you aren't reading very well, because I defined it in my previous
article (it's neither, by the way).

>  It is not at all
> clear that Duane even understands the difference.

If I didn't understand the difference, why would I have objected
to its use unqualified?

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: William D Clinger
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <b84e9a9f.0304201448.7d7cdae3@posting.google.com>
Duane Rettig wrote:
> So far you are describing a Scheme-based technique (whether Scheme was
> first or last to use it or not) for turning recursion into iteration.

What I described is a general technique that can be and is used
to specify and to reason about the asymptotic space complexity of
programs.  For example, my definition of O(S_{GC}) is probably the
best extant formalization of the space complexity of recursion in
Common Lisp.

> If I didn't understand the difference, why would I have objected
> to its use unqualified?

Because there are none so blind as they who will not see?

Will
From: Duane Rettig
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <4y924paf5.fsf@beta.franz.com>
······@qnci.net (William D Clinger) writes:

> Duane Rettig wrote:
> > So far you are describing a Scheme-based technique (whether Scheme was
> > first or last to use it or not) for turning recursion into iteration.
> 
> What I described is a general technique that can be and is used
> to specify and to reason about the asymptotic space complexity of
> programs.  For example, my definition of O(S_{GC}) is probably the
> best extant formalization of the space complexity of recursion in
> Common Lisp.

Only for the continuation-passing style.  Common Lisp, which does not
require (nor even define) continuation-passing style, is not required
to make the transformations of tail calls that your paper suggests,
nor is it even required in practice to regard stack space as space
usage.  The idea that a "space leak" could have anything to do with
the stack is foreign to the CL community, and stack allocation is
generally considered to be "free".

[Lest my usage of the term "tail call" be misunderstood, I don't
mean it in the sense defined by your paper, but in the more general
sense, where a tail call simply means a call in tail position, where
the caller has no more work to do other than to return the value[s]
returned by the callee.]
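
[An illustrative CL sketch of that general sense -- an editorial aside, with
made-up functions, not code from the original post:

;; A call in tail position: the caller returns the callee's values
;; directly, with no further work of its own.
(defun lookup (key alist)
  (cond ((null alist) nil)
        ((eql key (caar alist)) (cdar alist))
        (t (lookup key (cdr alist)))))   ; tail call

;; NOT a tail call: 1+ and MAX still run after DEPTH returns.
(defun depth (tree)
  (if (atom tree)
      0
      (1+ (max (depth (car tree))
               (depth (cdr tree))))))
]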

> > If I didn't understand the difference, why would I have objected
> > to its use unqualified?
> 
> Because there are none so blind as they who will not see?

So call me blind, and indulge me by referring to it as "Scheme's
proper tail recursion" or "proper tail recursion as required by
the continuation-passing style" when speaking on comp.lang.lisp.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: William D Clinger
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <b84e9a9f.0304210639.1562f4a@posting.google.com>
Duane Rettig wrote:
> > What I described is a general technique that can be and is used
> > to specify and to reason about the asymptotic space complexity of
> > programs.  For example, my definition of O(S_{GC}) is probably the
> > best extant formalization of the space complexity of recursion in
> > Common Lisp.
> 
> Only for the continuation-passing style.

Nonsense.  O(S_{GC}) captures the space complexity of general recursion
in Common Lisp, Java, and pretty much the entire class of languages that
require garbage collection but do not require proper tail recursion or
any similar property.

> The idea that a "space leak" could have anything to do with
> the stack is foreign to the CL community, and stack allocation is
> generally considered to be "free".

No doubt.  There wouldn't be much point to a research paper that only
told people what they already knew.

> [Lest my usage of the term "tail call" be misunderstood, I don't
> mean it in the sense defined by your paper, but in the more general
> sense, where a tail call simply means a call in tail position, where
> the caller has no more work to do other than to return the value[s]
> returned by the callee.]

That is exactly how I defined "tail call"; see Definition 2.

> So call me blind, and indulge me by referring to it as "Scheme's
> proper tail recursion" or "proper tail recursion as required by
> the continuation-passing style" when speaking on comp.lang.lisp.

No.  I will call established technical concepts by their established
names.  I will not use the names you prefer, "tail merging" and "tail
call merging", because those names have historically been used to
refer to code transformations and optimizations, not to the space
efficiency property that is known as proper tail recursion.  To use
the same names to refer to these different things would exacerbate
the confusion that your posts on this subject have exemplified.

Will
From: Duane Rettig
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <41xzvdets.fsf@beta.franz.com>
······@qnci.net (William D Clinger) writes:

> Duane Rettig wrote:
> 
> > [Lest my usage of the term "tail call" be misunderstood, I don't
> > mean it in the sense defined by your paper, but in the more general
> > sense, where a tail call simply means a call in tail position, where
> > the caller has no more work to do other than to return the value[s]
> > returned by the callee.]
> 
> That is exactly how I defined "tail call"; see Definition 2.

Definition 2 is defined on Definition 1, which only allows for
lambda expressions and the tails of "if" expressions.  It does not,
for example, allow for a progn form, which in CL is not a lambda form
or an if form, but whose last subform is in tail position in CL.
Also, Definition 1 identifies itself as being appropriate for programs
written in Core Scheme.  This is fine for definitional purposes in
Scheme, but it does not account for different basic definitions in CL.
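
[A minimal sketch of the progn point -- an illustrative aside; STEP-1 and
STEP-2 are hypothetical helpers, not code from the thread:

;; In CL, the last subform of PROGN is in tail position, even though
;; PROGN is neither a lambda form nor an IF form:
(defun run (x)
  (progn
    (step-1 x)     ; not in tail position
    (step-2 x)))   ; tail call: RUN returns its value directly
]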

Your definition lacks expository quality from a CL perspective,
and that's why I disagree with the statement that you have defined
"tail call" exactly the same as I have.

> > So call me blind, and indulge me by referring to it as "Scheme's
> > proper tail recursion" or "proper tail recursion as required by
> > the continuation-passing style" when speaking on comp.lang.lisp.
> 
> No.  I will call established technical concepts by their established
> names.

They are only established within their context.  CL is outside
the scope of the establishment of your terminology.

> I will not use the names you prefer, "tail merging" and "tail
> call merging", because those names have historically been used to
> refer to code transformations and optimizations,

Transformations, yes, but they are not necessarily optimizations.
It depends on what one is trying to optimize.  If the answer is
"always the stack", then yes, tail merging is an optimization.
But sometimes a merged tail call offers a pessimization in time,
because the stack must be unlinked before the next function is
entered.  This pessimization is often not desired in a CL setting.

I'm not asking you to use the terms I prefer, and said so in my
original post.  My objection is and has always been your use of
your definitional phrase "proper tail recursion" without qualifying
where it comes from.

> not to the space
> efficiency property that is known as proper tail recursion.

What I object to is the hijacking of the term "proper tail recursion"
to mean more (or indeed less) than the sum of its parts.  In communities
in which "tail call" and "tail recursion" are understood in general
terms, and not according to the definitions ascribed to Scheme and
scheme thought, the prepending of the adjective "proper" is meaningless,
but newbies tend to become confused because they look at the adjective
and try to make a judgement call.  In CL tradition, there is no such
thing as "proper" tail recursion or "improper" tail recursion, there is
only tail recursion without adjectival modification.

When personal computers first came out, I wanted to buy one, and
started saving up for one.  But very soon afterward, people were
confused when I told them I wanted to buy a "personal computer",
because IBM had hijacked the term, and I didn't want an IBM - I
wanted an Altair...

> To use
> the same names to refer to these different things would exacerbate
> the confusion that your posts on this subject have exemplified.

Instead of ad hominem argument, let's finish the discussion first
and perhaps we'll discover just who is doing the confusing.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: William D Clinger
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <b84e9a9f.0304211425.25c7078e@posting.google.com>
Duane Rettig wrote in message news:<·············@beta.franz.com>...
> Definition 2 is defined on Definition 1, which only allows for
> lambda expressions and the tails of "if" expressions.  It does not,
> for example, allow for a progn form, which in CL is not a lambda form
> or an if form, but whose last subform is in tail position in CL.
> Also, Definition 1 identifies itself as being appropriate for programs
> written in Core Scheme.  This is fine for definitional purposes in
> Scheme, but it does not account for different basic definitions in CL.

Agreed.  In the language and compiler communities, pages are saved by
explaining a concept in terms of some core language, while assuming
that everyone understands how the full language can be translated into
that core, or how the concepts can be extended from the core to the
full language.  In Scheme this is easy, because the defining documents
specify a translation into the core, so a specification of the concept
for the core language induces a specification of the concept for the
full language.

The defining documents of Common Lisp do not specify a core language,
or any translation from full CL into a core.  Nonetheless some CL
compilers, such as the CMU and S1 CL compilers, operate by translating
CL into some core subset, and the subset used by the S1 CL compiler is
pretty much an extension of core Scheme IIRC.  In a discussion of space
efficiency, the most important difference I see between a suitable core
subset of CL and the core of Scheme is that the CL core would have to
have some iteration construct(s) as primitive(s).  Extending the
operational semantics of my paper to deal with a couple of iteration
constructs would be very easy, however, so I don't think the lack of
an official CL core is a serious problem.  In short, I think it's
pretty obvious how to remedy this defect.

> What I object to is the hijacking of the term "proper tail recursion"
> to mean more (or indeed less) than the sum of its parts.

Yes.  The problem is that the phrase is not compositional: the word
"proper" does not modify "tail recursion".  Instead, the entire
phrase "proper tail recursion" refers to a property that is achieved
by implementing (syntactic) tail calls in any of several systematic
ways, and is not achieved by several other systematic ways or by any
unsystematic approach.

I don't like that either, but it is the way Steele and Sussman used
the phrase.  OTOH, their usage was better than prior usage, which
overloaded "tail recursion" to mean three distinct things: syntactic
tail calls, a certain approach to implementing them, and the space
efficiency property.  That they themselves often overloaded "proper
tail recursion" to mean both a certain approach to implementing them
and the space efficiency property helped cause the confusion between
tail call optimization and the space efficiency property.  Now that
we have an established literature on "tail call optimizations" that
are seldom systematic enough to achieve the space efficiency property,
we have arrived at the kind of three-fold terminology that, in an
ideal world, we would have had from the start:

    a syntactic concept (tail calls, aka tail recursion)
    implementation techniques (tail call optimization, tail merging)
    space efficiency property (proper tail recursion)

It is indeed unfortunate that one of the common terms for the first
is a subphrase of the established term for the third.  The situation
is much like that surrounding the abominable parsing technique known
as "recursive descent with backup", which is not a form of the
excellent parsing technique known as "recursive descent".  With the
parsing techniques, the confusion has led to alternative terms such
as "definite clause grammar (DCG) parser" for the abominable technique,
and "hand-coded LL(1) parser" for the excellent technique.

Thus I prefer "tail call" to "tail recursion" for the syntactic
concept.  I would be willing to consider an alternative name for the
space efficiency property if one can be found.  I do not, however,
believe that it is helpful to deny that "proper tail recursion" is
the established name for that property, or to claim that Steele and
Sussman hijacked it.  So far as I know, they were the first to use
it, and their terminology was definitely an improvement over what
had come before.

Will
From: Duane Rettig
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <48yu2tokk.fsf@beta.franz.com>
I think we are coming to terms [sic].

I spent some time last night skimming through the S&S papers
(I found them easily at http://library.readscheme.org/page1.html).
In them I found a few surprises in terminology.

······@qnci.net (William D Clinger) writes:

> Duane Rettig wrote in message news:<·············@beta.franz.com>...
> > Definition 2 is defined on Definition 1, which only allows for
> > lambda expressions and the tails of "if" expressions.  It does not,
> > for example, allow for a progn form, which in CL is not a lambda form
> > or an if form, but whose last subform is in tail position in CL.
> > Also, Definition 1 identifies itself as being appropriate for programs
> > written in Core Scheme.  This is fine for definitional purposes in
> > Scheme, but it does not account for different basic definitions in CL.
> 
> Agreed.  In the language and compiler communities, pages are saved by
> explaining a concept in terms of some core language, while assuming
> that everyone understands how the full language can be translated into
> that core, or how the concepts can be extended from the core to the
> full language.  In Scheme this is easy, because the defining documents
> specify a translation into the core, so a specification of the concept
> for the core language induces a specification of the concept for the
> full language.

Yes, and the same is true in Mathematics, where the same Greek symbols
can be used over and over again because the context in which the symbols
are used is understood by both presenter and receiver, _or_ the
symbols are locally defined for the purposes of the presentation, and
thus there is no conflict with other definitional terms.  I have, for
example, seen the lambda symbol used as a straight variable, where it
had nothing to do with Lisp or the Lambda calculus.  Because it was
locally defined, though, it presented no problem in understanding the
concept being presented, which was of course the goal of the presenter.

> The defining documents of Common Lisp do not specify a core language,
> or any translation from full CL into a core.  Nonetheless some CL
> compilers, such as the CMU and S1 CL compilers, operate by translating
> CL into some core subset, and the subset used by the S1 CL compiler is
> pretty much an extension of core Scheme IIRC.

I think you might get some objections to that by CLers.  I think that
the "core" of CL was left undefined purposefully, because of its goal
in being more inclusive than restrictive, thus allowing other older Lisp
dialects to be absorbed into the larger "mushy" core of Common Lisp...

>  In a discussion of space
> efficiency, the most important difference I see between a suitable core
> subset of CL and the core of Scheme is that the CL core would have to
> have some iteration construct(s) as primitive(s).

That is probably true.

>  Extending the
> operational semantics of my paper to deal with a couple of iteration
> constructs would be very easy, however, so I don't think the lack of
> an official CL core is a serious problem.  In short, I think it's
> pretty obvious how to remedy this defect.

> > What I object to is the hijacking of the term "proper tail recursion"
> > to mean more (or indeed less) than the sum of its parts.
> 
> Yes.  The problem is that the phrase is not compositional: the word
> "proper" does not modify "tail recursion".  Instead, the entire
> phrase "proper tail recursion" refers to a property that is achieved
> by implementing (syntactic) tail calls in any of several systematic
> ways, and is not achieved by several other systematic ways or by any
> unsystematic approach.

This would not be a problem for me if Schemers would always quote the
phrase to keep it monolithic; then it could be recognized as a unit,
and readers would tend to search for a definition (which is easy to
find).  However, when proper tail recursion is used in straight English
prose, as I'm doing here, it is not clear that I mean the monolithic
definitional term, and in fact it is only clear if the reader already
knows what proper tail recursion is.  This is why I object to its use
unqualified in places where the readers may not know the definition.

> I don't like that either, but it is the way Steele and Sussman used
> the phrase.  OTOH, their usage was better than prior usage, which
> overloaded "tail recursion" to mean three distinct things: syntactic
> tail calls, a certain approach to implementing them, and the space
> efficiency property.

Actually, I was surprised last night to discover that Steele was not even
consistent in his use of the concept of "tail recursion".  This is an even
larger issue than the longer, more qualified phrase, because
"tail recursion" is overloaded by everyone, and yet Steele seems to expect
his readers to understand tail recursion to mean calls which are really
jumps, rather than the syntactic concept that we are discussing here.

S&S are mostly consistent in their use of "tail recursion" to describe
not only the syntactic concept of the position of the call within the
function body, but also the implementational concept (in Scheme) that it
is really turned into a jump.

Note for example in [Guy Lewis Steele, Jr.. "RABBIT: A Compiler for SCHEME".
Masters Thesis. MIT AI Lab. AI Lab Technical Report AITR-474. May 1978],
on Page 4, Section A: Background, he says "Tail recursion implies
that loops written in an apparently recursive form will actually be
executed in an iterative fashion."  

Also as an example, in [Guy Lewis Steele, Jr. and Gerald Jay Sussman.
"Lambda: The Ultimate Imperative". MIT AI Lab. AI Lab Memo AIM-353.
March 1976.], on page 3, Section 1.1, he shows an example of a for-loop
in Algol and a do-loop in Scheme, and says "In reality, the SCHEME DO
loop is not a primitive; it is a macro which expands into a function
which performs iteration by tail recursion".
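
[The DO-to-tail-recursion expansion Steele describes can be sketched
roughly as follows -- an illustrative reconstruction, not the actual
RABBIT macroexpansion:

;; A SCHEME DO loop...
(do ((i 0 (+ i 1))
     (sum 0 (+ sum i)))
    ((= i 10) sum))

;; ...expands into something like a function that iterates by
;; tail recursion:
(letrec ((loop (lambda (i sum)
                 (if (= i 10)
                     sum
                     (loop (+ i 1) (+ sum i))))))
  (loop 0 0))

;; both evaluate to 45
]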

There are other examples.  Interestingly, I found very few uses of the
term "proper tail recursion" in the S&S papers I skimmed.  Perhaps some
are missing from the site I found, or perhaps I just missed them in
my skimming.

So the above are examples where at least Steele uses "tail recursion"
to take in the whole concept of Scheme's recursion-as-iteration style.

On the other hand, Steele shows that he knows what the rest of the
computing world thinks of the concept of "tail recursion"; in
[Guy Lewis Steele, Jr.. "Debunking the 'Expensive Procedure Call' Myth,
or, Procedure Call Implementations Considered Harmful, or, Lambda: The
Ultimate GOTO". MIT AI Lab. AI Lab Memo AIM-443. October 1977.], he is
clearly talking to the Yourdon/structured-programming crowd, to Algol,
Pascal, PL/I, and COBOL (with mention of Fortran).  On page 14, he is
discussing objections that programmers might have over the technique he
is encouraging to iterate by calling procedures, and under item 3, which
discusses the problem that the chain of procedure calls would eventually
overflow the stack, he says "If tail recursive procedure calls are compiled
as JUMP instructions, then this problem does not arise."  And again
on page 16 and throughout, he talks about tail recursive procedure calls.

Now of course, "tail recursive procedure call" and "tail recursion" are
two different phrases, but it is hard for me to draw enough of a
distinction in the English parsing of the two phrases to allow that
the latter can become definitional.

>  That they themselves often overloaded "proper
> tail recursion" to mean both a certain approach to implementing them
> and the space efficiency property helped cause the confusion between
> tail call optimization and the space efficiency property.  Now that
> we have an established literature on "tail call optimizations" that
> are seldom systematic enough to achieve the space efficiency property,
> we have arrived at the kind of three-fold terminology that, in an
> ideal world, we would have had from the start:
> 
>     a syntactic concept (tail calls, aka tail recursion)
>     implementation techniques (tail call optimization, tail merging)
>     space efficiency property (proper tail recursion)
> 
> It is indeed unfortunate that one of the common terms for the first
> is a subphrase of the established term for the third.

I agree with this, but would go even farther to say that the first is
somewhat tainted as well, and that "tail recursion" should not be
used (as you say below, but for a different reason).

>  The situation
> is much like that surrounding the abominable parsing technique known
> as "recursive descent with backup", which is not a form of the
> excellent parsing technique known as "recursive descent".  With the
> parsing techniques, the confusion has led to alternative terms such
> as "definite clause grammar (DCG) parser" for the abominable technique,
> and "hand-coded LL(1) parser" for the excellent technique.

Yes, perhaps it would be good to follow such a pattern and come up with
terminology that would be descriptive and not overloading for both the
first and third kinds, and thus acceptable to all users of the terminology,
as well as understandable to those who have not been taught the
distinctions.

> Thus I prefer "tail call" to "tail recursion" for the syntactic
> concept.

And I would prefer just "tail call", for the reasons above.

>  I would be willing to consider an alternative name for the
> space efficiency property if one can be found.

Agreed.  Let's start searching for appropriate phrases.

>  I do not, however,
> believe that it is helpful to deny that "proper tail recursion" is
> the established name for that property, or to claim that Steele and
> Sussman hijacked it.  So far as I know, they were the first to use
> it, and their terminology was definitely an improvement over what
> had come before.

Perhaps hijacking is too strong a term, due to the intention.  I
vaguely recall a movie, a comedy, where the hero (perhaps Bob Hope
or Jack Benny) went into a bank with a bag and his attention was
not on what he was doing; what he said was "I'd like to make a
withdrawal", but what the teller saw was a gun in his hand (it
may have been a prop) and a note that was suggestive (it may have
said something like "put it all into the bag"), and of course the
teller emptied her drawer.  Now, did this guy intend to rob the
bank?  Not at all.  Did he in fact rob the bank?  Well, I don't
know; it depends on your point of view - to the teller, who
_misunderstood_ his actions, he certainly did.


-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: William D Clinger
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <b84e9a9f.0304231318.2d6d3c6a@posting.google.com>
Duane Rettig wrote:
> I think we are coming to terms [sic]

Even if we can't agree on the terms we use, knowing that our disagreements
are mostly terminological sounds like progress to me.

> I spent some time last night skimming through the S&S papers
> (I found them easily at http://library.readscheme.org/page1.html)
> In them I found a few surprises in terminology.

I appreciate your effort doing this.  I haven't had time to go back
to the rrrs-authors archives to see if anything there bears on what
I'm about to say.

> This would not be a problem for me if Schemers would always quote the
> phrase, to keep it monolithic, then it could be recognized as a unit,
> and readers would tend to search for a definition (which is easy to
> find).

I don't like to have to put quotes around technical terms, as if I'm
quoting someone else's prose.  How about using hyphenation instead?

    safe-for-space-complexity            to mean O(S_{sfs})
    properly-tail-recursive              to mean O(S_{tail})
    consistently-garbage-collected       to mean O(S_{gc})
    Algol-like                           to mean O(S_{stack})

> There are other examples.  Interestingly, I found very few uses of the
> term "proper tail recursion" in the S&S papers I skimmed.  Perhaps some
> are missing from the site I found, or perhaps I just missed them in
> my skimming.

Although Steele and Sussman invented this phrase, I don't think it
really caught on until the R3RS.  IIRC the RRRS, which hardly anyone
has read, didn't say a word about tail calls or tail recursion.
Several people, including Sussman, objected to this omission, so the
following paragraph was added in the R3RS:

    Implementations of Scheme are required to be properly
    tail-recursive.  This allows the execution of an iterative
    computation in constant space, even if the iterative
    computation is described by a syntactically recursive
    procedure.  Thus with a tail-recursive implementation,
    iteration can be expressed using the ordinary procedure-call
    mechanics, so that special iteration constructs are useful
    only as syntactic sugar.
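
[The property this paragraph mandates can be seen in a tiny sketch -- an
editorial illustration, not part of the original post; any properly
tail-recursive implementation must run it in constant space for any n:

;; Syntactically recursive, but iterative in space: the recursive
;; call is a tail call, so no stack growth is required.
(define (count-down n)
  (if (zero? n)
      'done
      (count-down (- n 1))))
]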

IIRC, Sussman drafted this paragraph for the R3RS.  I'm not sure I'm
remembering this correctly, but I think he left the word "properly"
out of his first draft, and I (as one of the editors) objected on
the grounds that "tail recursion" is a syntactic notion.  I am quite
sure, however, that Sussman was the one who suggested using the word
"properly", explaining that its role was to indicate that we were
talking about the space property, not the syntactic notion.

If I am wrong, and "properly tail recursive" occurs within the RRRS,
then what I am remembering is a difference between some draft of the
RRRS and the actual RRRS.  But I think it was a difference between
the RRRS and R3RS.  I just don't have time to check that right now.

> >  I would be willing to consider an alternative name for the
> > space efficiency property if one can be found.
> 
> Agreed.  Let's start searching for appropriate phrases.

Agreed.  I'll be revising my PLDI '98 paper soon, so I'm looking
for alternative phrases in the fairly near term.

Will
From: Duane Rettig
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <44r4oooux.fsf@beta.franz.com>
······@qnci.net (William D Clinger) writes:

> Duane Rettig wrote:
> > I think we are coming to terms [sic]
> 
> Even if we can't agree on the terms we use, knowing that our disagreements
> mostly terminological sounds like progress to me.

Yes, I agree that we are making progress.

> > I spent some time last night skimming through the S&S papers
> > (I found them easily at http://library.readscheme.org/page1.html)
> > In them I found a few surprises in terminology.
> 
> I appreciate your effort doing this.

No problem; it has been an interesting exercise.

>  I haven't had time to go back
> to the rrrs-authors archives to see if anything there bears on what
> I'm about to say.

It seems consistent with what I see in R5RS (which, I assume, is the most
recent version of the Scheme spec).  As we get closer to resolution, I
become less concerned about history and etymology and more concerned with
current definitions (and how they interplay in different contexts).

> > This would not be a problem for me if Schemers would always quote the
> > phrase, to keep it monolithic, then it could be recognized as a unit,
> > and readers would tend to search for a definition (which is easy to
> > find).
> 
> I don't like to have to put quotes around technical terms, as if I'm
> quoting someone else's prose.  How about using hyphenation instead?
> 
>     safe-for-space-complexity            to mean O(S_{sfs})
>     properly-tail-recursive              to mean O(S_{tail})
>     consistently-garbage-collected       to mean O(S_{gc})
>     Algol-like                           to mean O(S_{stack})

This seems reasonable to me.

> > There are other examples.  Interestingly, I found very few uses of the
> > term "proper tail recursion" in the S&S papers I skimmed.  Perhaps some
> > are missing from the site I found, or perhaps I just missed them in
> > my skimming.
> 
> Although Steele and Sussman invented this phrase, I don't think it
> really caught on until the R3RS.

  [ ... ]

> If I am wrong, and "properly tail recursive" occurs within the RRRS,
> then what I am remembering is a difference between some draft of the
> RRRS and the actual RRRS.  But I think it was a difference between
> the RRRS and R3RS.  I just don't have time to check that right now.

Yes, I see only references to "proper tail recursion" in R5RS, and no
references to "tail recursion".  How it got to be that way is not
as important to me, although I can understand why you want to trace
the etymology yourself.

> > >  I would be willing to consider an alternative name for the
> > > space efficiency property if one can be found.
> > 
> > Agreed.  Let's start searching for appropriate phrases.
> 
> Agreed.  I'll be revising my PLDI '98 paper soon, so I'm looking
> for alternative phrases in the fairly near term.

I'll start thinking and probing for appropriate phrases.

Thanks.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Matthias Blume
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <m2n0igj7ns.fsf@tti5.uchicago.edu>
Duane Rettig <·····@franz.com> writes:

> > > >  I would be willing to consider an alternative name for the
> > > > space efficiency property if one can be found.
> > > 
> > > Agreed.  Let's start searching for appropriate phrases.
> > 
> > Agreed.  I'll be revising my PLDI '98 paper soon, so I'm looking
> > for alternative phrases in the fairly near term.
> 
> I'll start thinking and probing for appropriate phrases.

My favorite has always been "safe-for-space" -- which, of course, must
be accompanied by some sort of axiomatic definition of what that means.

For example, one could look at an operational (or denotational)
dynamic semantics of the language in question and explicitly
incorporate a notion of what it means for data to be "live", and how
much space is used by such live data.

To be somewhat more concrete, suppose we express our (deterministic)
semantics as a sequence of state transitions.  S(p,x,i) would give the
state of program p on input x at step i.  Now we also want to have a
function L where L(p,x,i) gives the amount of space required to store
the live data of p on x at step i.  An implementation would then be
"safe-for-space" if p on x at step i requires O(L(p,x,i)) bytes of
memory.

The problem with such a definition is that for it to make sense it
requires a step-by-step correspondence between the abstract dynamic
semantics and the actual implementation.

A much weaker (but frequently adopted) notion of "safe-for-space"
would be to require that p on x can run in O(max{i \in
0..n}(L(p,x,i))) bytes of memory, where n is the index of the final
state of the execution.

Instead of considering the entire run of the program and taking the
maximum amount of live data as the basis for comparison, we can
strengthen this by considering intervals k..l of the index range of i
where S(p,x,k) and S(p,x,l) are "externally observable" program states
(i.e., points in the program's execution where it synchronizes with an
outside observer -- usually by doing I/O).  We just require that
between points k and l, the program requires no more than
O(max{i \in k..l}(L(p,x,i))) bytes of memory.

In the non-deterministic case, things get a bit messier but not any
more difficult at the conceptual level.
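Spelling the three deterministic variants out side by side may help; here
space_I is my own name (an assumption, not notation from above) for the
memory an implementation I actually uses running p on x at step i, with
S and L as defined above:

```latex
\begin{align*}
\text{step-by-step:} \quad
  & \mathrm{space}_I(p,x,i) \in O\bigl(L(p,x,i)\bigr)
    \quad \text{for every step } i \\
\text{whole-run:} \quad
  & \max_{0 \le i \le n} \mathrm{space}_I(p,x,i)
    \in O\Bigl(\max_{0 \le i \le n} L(p,x,i)\Bigr) \\
\text{between observations:} \quad
  & \max_{k \le i \le l} \mathrm{space}_I(p,x,i)
    \in O\Bigl(\max_{k \le i \le l} L(p,x,i)\Bigr)
\end{align*}
```

The first condition is the one that needs a step-by-step correspondence
between the semantics and the implementation; the other two only compare
maxima over a run or over an interval between externally observable states.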

Matthias
From: ····@sonic.net
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <3EA6ADB0.67115E2F@sonic.net>
Duane Rettig wrote:

> This would not be a problem for me if Schemers would always quote the
> phrase, to keep it monolithic, then it could be recognized as a unit,
> and readers would tend to search for a definition (which is easy to
> find).  However, when proper tail recursion is used in straight English
> prose as I'm doing here, it is not clear that I'm meaning the monolithic
> definitional term, and in fact it is only clear if the reader already
> knows what proper tail recursion is.  This is why I object to its use
> unqualified in places where the readers may not know the definition.
 
I think I agree.  I'm reasonably consistent in using capitals when 
referring to Proper Tail Recursion (because it's a specific proper 
name) but I haven't really paid attention to it and been careful about 
it, and I certainly should.  I have occasionally abbreviated it to 
PTR, but usually only after the term has already come up in discussion
and people are aware of what we're talking about. 

				Bear
From: Ray Blaak
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <un0ihnq2b.fsf@telus.net>
Duane Rettig <·····@franz.com> writes:
> >  I would be willing to consider an alternative name for the
> > space efficiency property if one can be found.
> 
> Agreed.  Let's start searching for appropriate phrases.

"Tail jump".

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
·····@telus.net                                The Rhythm has my soul.
From: Dorai Sitaram
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <b86jt8$812$1@news.gte.com>
In article <·············@telus.net>, Ray Blaak  <·····@telus.net> wrote:
>Duane Rettig <·····@franz.com> writes:
>> >  I would be willing to consider an alternative name for the
>> > space efficiency property if one can be found.
>> 
>> Agreed.  Let's start searching for appropriate phrases.
>
>"Tail jump".

Tail return elimination?  
Tail return-address elimination? 
From: William D Clinger
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <b84e9a9f.0304231446.4f8d9b06@posting.google.com>
Dorai Sitaram wrote:
> In article <·············@telus.net>, Ray Blaak  <·····@telus.net> wrote:
[snip]
> >"Tail jump".
> 
> Tail return elimination?  
> Tail return-address elimination?

These sound like alternative names for the implementation technique
known as tail merging.  We don't need an alternative name for tail
merging, because "tail merging" is a perfectly good name for it.

The thing we need an alternative name for is the space efficiency
property known as "proper tail recursion".  If we're going to change
that name, let's try to make it a little clearer that we're talking
about a space efficiency property, not an implementation technique.

Will
From: Duane Rettig
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <4n0igsr68.fsf@beta.franz.com>
······@qnci.net (William D Clinger) writes:

> Dorai Sitaram wrote:
> > In article <·············@telus.net>, Ray Blaak  <·····@telus.net> wrote:
> [snip]
> > >"Tail jump".
> > 
> > Tail return elimination?  
> > Tail return-address elimination?
> 
> These sound like alternative names for the implementation technique
> known as tail merging.  We don't need an alternative name for tail
> merging, because "tail merging" is a perfectly good name for it.
> 
> The thing we need an alternative name for is the space efficiency
> property known as "proper tail recursion".  If we're going to change
> that name, let's try to make it a little clearer that we're talking
> about a space efficiency property, not an implementation technique.

How about "space bounded recursion"?

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Pascal Costanza
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <costanza-A76270.01003724042003@news.netcologne.de>
In article <····························@posting.google.com>,
 ······@qnci.net (William D Clinger) wrote:

> The thing we need an alternative name for is the space efficiency
> property known as "proper tail recursion".  If we're going to change
> that name, let's try to make it a little clearer that we're talking
> about a space efficiency property, not an implementation technique.

What about "strictly space-efficient tail call" or "strongly 
space-efficient tail call"? Just brainstorming.

BTW, I have followed this fruitful discussion, and learned quite a bit 
from it. Thanks for that to everyone involved.


Pascal

-- 
"If I could explain it, I wouldn't be able to do it."
A.M.McKenzie
From: Ray Blaak
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <u7k9jxea5.fsf@telus.net>
······@qnci.net (William D Clinger) writes:

> Dorai Sitaram wrote:
> > In article <·············@telus.net>, Ray Blaak  <·····@telus.net> wrote:
> [snip]
> > >"Tail jump".
> > 
> > Tail return elimination?  
> > Tail return-address elimination?
> 
> These sound like alternative names for the implementation technique
> known as tail merging.  We don't need an alternative name for tail
> merging, because "tail merging" is a perfectly good name for it.
> 
> The thing we need an alternative name for is the space efficiency
> property known as "proper tail recursion".  If we're going to change
> that name, let's try to make it a little clearer that we're talking
> about a space efficiency property, not an implementation technique.
> 
> Will

"bounded tail recursion"

--
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
·····@telus.net                                The Rhythm has my soul.
From: William D Clinger
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <b84e9a9f.0304251247.534b2696@posting.google.com>
Pascal Costanza suggested:
> What about "strictly space-efficient tail call" or "strongly
> space-efficient tail call"? Just brainstorming.

I like "space-efficient tail calls".

Duane Rettig suggested:
> How about "space bounded recursion"?

I think the recursion theorists have already appropriated this to
mean something else.

Ray Blaak suggested:
> "bounded tail recursion"

The word "tail" takes it away from the recursion theorists.  Still,
we don't want people to think that the "bounded" or "space bounded"
means that the amount of tail recursion is being bounded by the size
of some stack.

Overall, my favorite is "space-efficient tail calls".  Comparing
it to "proper tail recursion", it replaces the vaguely moralistic
word "proper" with the specific thing we mean, and it replaces the
word "recursion", which many people nowadays interpret to mean
self-recursion or some simple form of mutual recursion, with the
more widely understood word "calls".

Comments?

Will
From: Duane Rettig
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <4y91yb5tw.fsf@beta.franz.com>
······@qnci.net (William D Clinger) writes:

> Pascal Costanza suggested:
> > What about "strictly space-efficient tail call" or "strongly
> > space-efficient tail call"? Just brainstorming.
> 
> I like "space-efficient tail calls".
> 
> Duane Rettig suggested:
> > How about "space bounded recursion"?
> 
> I think the recursion theorists have already appropriated this to
> mean something else.

OK, too bad...

> Ray Blaak suggested:
> > "bounded tail recursion"
> 
> The word "tail" takes it away from the recursion theorists.  Still,
> we don't want people to think that the "bounded" or "space bounded"
> means that the amount of tail recursion is being bounded by the size
> of some stack.
> 
> Overall, my favorite is "space-efficient tail calls".  Comparing
> it to "proper tail recursion", it replaces the vaguely moralistic
> word "proper" with the specific thing we mean, and it replaces the
> word "recursion", which many people nowadays interpret to mean
> self-recursion or some simple form of mutual recursion, with the
> more widely understood word "calls".
> 
> Comments?

Sounds fine to me.  I wonder if it would stick though; would your
constituency accept such a long name?

Another possibility: drop the "tail":

 "space-efficient-calls"

or

 "space-safe-calls"

The "tail" seems redundant to me, since the term would be known by
definition in the Scheme community, and the only way it would be
misunderstood in the CL (etc.) community is if someone here
misunderstood "space" as not pertaining to the stack.  But the same
problem would occur anyway with the longer definition...

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Sander Vesik
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <1052175287.626351@haldjas.folklore.ee>
In comp.lang.scheme Dorai Sitaram <····@goldshoe.gte.com> wrote:
> In article <·············@telus.net>, Ray Blaak  <·····@telus.net> wrote:
>>Duane Rettig <·····@franz.com> writes:
>>> >  I would be willing to consider an alternative name for the
>>> > space efficiency property if one can be found.
>>> 
>>> Agreed.  Let's start searching for appropriate phrases.
>>
>>"Tail jump".
> 
> Tail return elimination?  
> Tail return-address elimination? 

tail frame elimination? After all, tail call optimisation is
applicable to, say, C, and respectable compilers do employ it.

-- 
	Sander

+++ Out of cheese error +++
From: Siegfried Gonzi
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <3ECBB9EC.8080102@kfunigraz.ac.at>
Duane Rettig wrote:

> Instead of ad hominem argument, let's finish the discussion first
> and perhaps we'll discover just who is doing the confusing.

I do not have any clue what you guys are talking about when you talk 
about "tail recursion" and other definitions. But wouldn't Prof. 
Clinger's paper be a good starting point for saying that Scheme is very 
different from Common Lisp and only superficially resembles Common Lisp? 
I think such a discussion is very fruitful and could cut short a 
long-standing debate. I mean, isn't it compelling to have some hooks 
which point to an important difference between Lisp and Scheme - or is it?

S. Gonzi
From: Duane Rettig
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <4smsbbus4.fsf@beta.franz.com>
Siegfried Gonzi <···············@kfunigraz.ac.at> writes:

> Duane Rettig wrote:
> 
> > Instead of ad hominem argument, let's finish the discussion first
> > and perhaps we'll discover just who is doing the confusing.
> 
> I do not have any clue what you guys are talking about when you talk
> about "tail recursion" and other definitions.

Because the url to his paper was given in a different fork of this
thread, I will repeat it here, so those interested can follow along:

ftp://ftp.ccs.neu.edu/pub/people/will/tail.ps.gz

> But wouldn't
> Prof. Clinger's paper be a good starting point for saying that Scheme is
> very different from Common Lisp and only superficially resembles Common
> Lisp? I think such a discussion is very fruitful and could cut short a
> long-standing debate. I mean, isn't it compelling to have some hooks
> which point to an important difference between Lisp and Scheme - or is it?

Yes, I believe it does point out such a fundamental difference, and
that is why I'd like to continue the discussion to find out how much
common ground we can find ourselves on.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Christopher C. Stacy
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <un0ikdvwf.fsf@dtpq.com>
>>>>> On 21 Apr 2003 00:38:38 -0700, Duane Rettig ("Duane") writes:

 Duane> [Lest my usage of the term "tail call" be misunderstood, I don't
 Duane> mean it in the sense defined by your paper, but in the more general
 Duane> sense, where a tail call simply means a call in tail position, where
 Duane> the caller has no more work to do other than to return the value[s]
 Duane> returned by the callee.]

Maybe we should go back to PDP-10 assembler terminology
and just call it PJRST :== JRST.
From: Pascal Costanza
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <b7mhai$ov4$1@f1node01.rhrz.uni-bonn.de>
Michael Sperber [Mr. Preprocessor] wrote:
>>>>>>"Pascal" == Pascal Costanza <········@web.de> writes:
> 
> 
> Pascal> Most Common Lisp implementations optimize tail calls, or allow you to
> Pascal> configure it. It's just not a requirement imposed by the
> Pascal> specification. Note that in general, tail call optimized programs are
> Pascal> harder to debug, so it doesn't make a lot of sense to require this
> Pascal> optimization.
> 
> That is nonsense, except when speaking in the context of certain
> implementations.  As it's a pretty common misconception (as is the
> general idea that proper tail recursion is merely an "optimization"),
> you might want to check out
> 
> http://zurich.ai.mit.edu/pipermail/rrrs-authors/1998-January/002231.html

I am not convinced. A tail-recursive function call is a special case of 
(general) function calls. Tail-recursive function calls have certain 
properties that allow for more efficient execution than in the general 
case. What's wrong with calling this an optimization? I didn't say "mere 
optimization".

And as it is easier not to perform this optimization than to do it, it 
is also harder to retain a certain kind of debugging support once you 
introduce that optimization.
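To make the distinction concrete, here is a sketch (my own example, not
from the paper under discussion; behavior depends on the implementation
and its optimization settings):

```lisp
;; Non-tail call: after FACT returns, the caller still has work to do
;; (the multiplication), so each recursive call needs its own frame.
(defun fact (n)
  (if (zerop n)
      1
      (* n (fact (1- n)))))

;; Tail call: the recursive call is the very last thing FACT-ITER does,
;; so an implementation that merges tail calls can reuse the frame --
;; but a merged frame also disappears from any backtrace, which is the
;; debugging trade-off being discussed.
(defun fact-iter (n acc)
  (if (zerop n)
      acc
      (fact-iter (1- n) (* n acc))))

(fact-iter 5 1)   ; => 120
```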


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: William D Clinger
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <b84e9a9f.0304170924.6c280db6@posting.google.com>
·································@hotmail.com (Franz Kafka) inquired:
> Is it possible to implement the Scheme operator call/cc which provides
> dynamic continuations in ANSI CL?

Yes.  This is trivial.

> How much work would be involved to
> extend CL to handle call/cc?

I don't believe it is possible to extend an arbitrary implementation
of Common Lisp with a call/cc that interoperates perfectly with
legacy code.

Screamer sounds like a good compromise between the trivial and the
impossible.

Will
From: ····@sonic.net
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <3E9EFD0E.2664576@sonic.net>
William D Clinger wrote:
> 
> ·································@hotmail.com (Franz Kafka) inquired:
> > Is it possible to implement the Scheme operator call/cc which provides
> > dynamic continuations in ANSI CL?
> 
> Yes.  This is trivial.


Really?  You can store continuations in data structures and 
call them multiple times, even after returning from the 
context once or more?  I didn't know CL could do that.  That 
and its more standardized networking and binary-manipulation
libraries might make it the right language for a project I'm 
on now, where I've been using scheme. 
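For anyone following along, the behavior I mean is the classic call/cc
demonstration (a sketch; at a REPL, any R5RS implementation should behave
this way):

```scheme
;; Capture a continuation, return normally, then re-enter it later.
(define saved-k #f)

(define (capture)
  (+ 1 (call-with-current-continuation
         (lambda (k)
           (set! saved-k k)   ; stash the continuation in a data structure
           1))))

(capture)      ; => 2   (normal return)
(saved-k 10)   ; => 11  (re-enters capture's context after it returned)
```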

				Bear
From: Joe Marshall
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <r8809aej.fsf@ccs.neu.edu>
····@sonic.net writes:

> William D Clinger wrote:
> > 
> > ·································@hotmail.com (Franz Kafka) inquired:
> > > Is it possible to implement the Scheme operator call/cc which provides
> > > dynamic continuations in ANSI CL?
> > 
> > Yes.  This is trivial.
> 
> Really?  You can store continuations in data structures and 
> call them multiple times, even after returning from the 
> context once or more?  

I'm not exactly sure how Will is interpreting the original poster's
question, but if he is interpreting `dynamic continuations' as meaning
`continuations with dynamic extent', then it *is* trivial to implement
in CL (hint, close over a lexical block), but you would not be able to
invoke them more than once because of the dynamic extent.
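A sketch of what "close over a lexical block" looks like (names are mine;
this should work in any conforming CL):

```lisp
;; A dynamic-extent escape "continuation" via BLOCK/RETURN-FROM.
(defun call-with-escape (f)
  (block escape
    (funcall f (lambda (value) (return-from escape value)))))

(call-with-escape
 (lambda (k)
   (funcall k 42)                 ; escapes immediately with 42
   (error "never reached")))
;; => 42

;; But the extent is dynamic: once CALL-WITH-ESCAPE has returned, the
;; block's exit point no longer exists, so invoking a saved closure K
;; afterwards has undefined consequences (typically an error).
```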

(Another *possible* interpretation of the original question would be
`Would an implementation be able to provide Scheme-like continuations
without violating ANSI CL?'  To which the answer is also `Yes')

(A third interpretation is that the original poster was not using
`dynamic' in a technical sense but rather because it sounded cool.)
From: William D Clinger
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <b84e9a9f.0304171817.68194adf@posting.google.com>
Joe Marshall wrote:
> I'm not exactly sure how Will is interpreting the original poster's
> question....

I was trying, unsuccessfully as it appears, to point out the radical
difference between

    1.  implementing feature X in some language L
and
    2.  extending language L to support feature X

Suppose, for example, that feature X is "the syntax of C++".  You
can implement the syntax of C++ in just about any language L by
writing a C++ parser in L, but there aren't too many languages that
can be extended to support the syntax of C++ without introducing
many syntactic ambiguities and outright conflicts.

One reason my explanation was unsuccessful is that it is indeed
possible to extend the Common Lisp language with call/cc; the
thing that is impossible is to write an implementation of call/cc
in portable CL that will work in all conforming implementations
of CL with all legacy CL code.  Had I noticed that this thread was
cross-posted to comp.lang.lisp, I wouldn't have tried to say
anything so subtle.

Will
From: Fred Gilham
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <u78yu7zqun.fsf@snapdragon.csl.sri.com>
······@qnci.net (William D Clinger) writes:
> Had I noticed that this thread was cross-posted to comp.lang.lisp, I
> wouldn't have tried to say anything so subtle.

Hey!  What's that supposed to mean?  :-)

-- 
Fred Gilham                                         ······@csl.sri.com
When trying to reach the lowest common denominator, you have to be
prepared for the occasional division by zero.
From: William D Clinger
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <b84e9a9f.0304191055.7c05cfa2@posting.google.com>
Fred Gilham wrote:
> ······@qnci.net (William D Clinger) writes:
> > Had I noticed that this thread was cross-posted to comp.lang.lisp, I
> > wouldn't have tried to say anything so subtle.
> 
> Hey!  What's that supposed to mean?  :-)

It wasn't supposed to mean what you may think.  In particular,
I didn't mean to suggest that the readers of comp.lang.lisp
were any less likely to understand or to appreciate subtlety
than readers of comp.lang.scheme.

What I did mean to suggest is that subtlety requires the reader
to exercise more imagination, and gives the reader more room to
interpret the message in ways that I may not have intended.  In
light of the heightened sensitivity some readers of these two
newsgroups have developed for insults coming from the other
newsgroup, I think it is better to speak plainly than subtly
when cross-posting, so as not to give so much opportunity for
a reader to misinterpret what I say as an insult when none is
intended.  Similarly, I would not want an intended insult to
be misinterpreted as benign.

It is ironic that my explanation of this was overly subtle, so
that it could itself be interpreted as an insult.  That was not
my intent.

Will
From: ····@sonic.net
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <3E9F3E99.5EE41AAB@sonic.net>
Joe Marshall wrote:
> 
> ····@sonic.net writes:
> 
> > Really?  You can store continuations in data structures and
> > call them multiple times, even after returning from the
> > context once or more?
> 
> I'm not exactly sure how Will is interpreting the original poster's
> question, but if he is interpreting `dynamic continuations' as meaning
> `continuations with dynamic extent', then it *is* trivial to implement
> in CL (hint, close over a lexical block), but you would not be able to
> invoke them more than once because of the dynamic extent.

Ah.  Darn.  Doing binary manipulation to compose datagrams in 
portable (R5RS) scheme is like kicking dead whales down the beach. 
Think slow, ugly, disgusting, smelly, and requiring far more 
effort than is reasonable.  Most implementations provide binary 
operations that are efficient, so you have fast binary ops in 
any given implementation, but code depending on them won't be 
portable.  I had to detect the implementation and conditionally 
define a bunch of macros in R5RS-compatible language to do 
things the REALLY SLOW way. 

But continuations of the type provided by scheme's call/cc (ie, 
which you can call more than once and can call after returning 
from the context once or more) make recovering from protocol 
errors, a serious pain in most languages, totally trivial.  

If I could have had schemish call/cc-like continuations with 
CL's standard binary operations, that would have been really neat.  

Hmmm.  I don't know exactly what "dynamic" means in the context 
of continuations.  My mind grabbed onto the "like call/cc" part 
of the question.  But if we're talking about extent, "like 
call/cc" isn't dynamic extent, it's unlimited extent.  Is a 
"dynamic continuation" one with dynamic extent?

			Bear
From: Jochen Schmidt
Subject: Re: Implementing call/cc or Dynamic Continuations in ANSI CL
Date: 
Message-ID: <b7nkub$cne$07$1@news.t-online.com>
····@sonic.net wrote:

> But continuations of the type provided by scheme's call/cc (ie,
> which you can call more than once and can call after returning
> from the context once or more) make recovering from protocol
> errors, a serious pain in most languages, totally trivial.

I think CLs condition system and restarts would be a much better and
comfortable fit for this problem.

ciao,
Jochen