From: S. Robert James
Subject: Simple recursive functions in Lisp
Date: 
Message-ID: <1170887378.180146.206080@k78g2000cwa.googlegroups.com>
As a Lisp newbie, I implemented reverse as follows:

(defun reverse. (lst)
  "Returns lst reversed"
  (if (null (cdr lst))
      (list (car lst))
      (append (reverse. (cdr lst)) (list (car lst)))))

I noticed that Paul Graham (On Lisp, p.30) implements it as:

(defun good-reverse (lst)
  (labels ((rev (lst acc)
     (if (null lst)
          acc
          (rev (cdr lst) (cons (car lst) acc)))))
      (rev lst nil)))

Why? My implementation seems clearer.  How is Graham's superior?

From: Kaz Kylheku
Subject: Re: Simple recursive functions in Lisp
Date: 
Message-ID: <1170889699.846491.32160@p10g2000cwp.googlegroups.com>
On Feb 7, 2:29 pm, "S. Robert James" <············@gmail.com> wrote:
> As a Lisp newbie, I implemented reverse as follows:
> (defun reverse. (lst)

You can use LIST as a variable name in Common Lisp. Only the function
binding of the symbol COMMON-LISP:LIST is reserved.

>   "Returns lst reversed"
>   (if (null (cdr lst))
>       (list (car lst))
>       (append (reverse. (cdr lst)) (list (car lst)))))
>
> I noticed that Paul Graham (On Lisp, p.30) implements it as:
>
> (defun good-reverse (lst)
>   (labels ((rev (lst acc)
>      (if (null lst)
>           acc
>           (rev (cdr lst) (cons (car lst) acc)))))
>       (rev lst nil)))
>
> Why? My implementation  seems clearer.  How is Graham's superior?

Graham's is superior because of the extra ACC parameter taken by the
local recursive function REV. Note how the accumulator is used
together with CONS to build up the list without traversing it,
contrary to your approach using APPEND, which has to scan the new list
every time. Graham's version therefore runs in O(N) time.

Secondly, note that his version is tail-recursive. This means that a
good Lisp compiler can turn it into a loop, in which the ACC variable
becomes a loop variable. This means that when Graham's version is
compiled, it will not only run in linear time, but in constant O(1)
stack space.

Your version is not tail recursive because the return value of the
recursive call is used for further computation (doing the APPEND). So
your version requires O(N^2) time (due to the repeated re-scanning of
the growing list to append more items to it) and O(N) stack space (due
to the naive car/cdr recursion).

(Strictly speaking, both approaches use O(N) space, since they are
consing up a list that is just as long as the input list. However,
stack space nevertheless has to be considered separately, because it
may be in limited supply, and it is still wasted by a constant factor
even if you don't run out.)

Another way to express Graham's algorithm is iteration.

(defun reverse* (list)
  (let ((acc ()))
    (dolist (item list acc)
      (push item acc))))

The PUSH macro call is equivalent to (setf acc (cons item acc)).

The loop has two variables. The explicit ACC that we set up for
accumulating the reversed list, as well as a hidden variable which
marches through the list, and sets up the value of ITEM on every
iteration.

The recursive version like Graham's is formed by visualizing each
iteration of the loop as a function. The loop variables become that
function's parameters, so that each iteration has its own ACC and its
own variable for accessing the next element in the list. That is to
say, under iteration, each iteration computes the new values of the
iteration variables and prepares the next values by directly assigning
to them, before branching back to the top of the loop. Under the
equivalent tail recursion, each ``iteration'' computes the new values
of the loop variables and then passes them as function arguments to
the next iteration, where they become the next iteration's variables.
The call simultaneously performs the role of the backward branch, and
the variable initialization.
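Both formulations can be checked side by side; a self-contained sketch
(definitions restated from above so the snippet stands alone):

```lisp
(defun good-reverse (lst)
  (labels ((rev (lst acc)
             (if (null lst)
                 acc
                 (rev (cdr lst) (cons (car lst) acc)))))
    (rev lst nil)))

(defun reverse* (list)
  (let ((acc ()))
    (dolist (item list acc)
      (push item acc))))

(assert (equal (good-reverse '(1 2 3)) '(3 2 1)))
(assert (equal (reverse* '(1 2 3)) '(3 2 1)))
;; Both handle the empty list, returning NIL; note that the original
;; REVERSE. instead returns (NIL) when given '().
(assert (null (good-reverse '())))
(assert (null (reverse* '())))
```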
From: S. Robert James
Subject: Re: Simple recursive functions in Lisp
Date: 
Message-ID: <1170893768.594770.36870@l53g2000cwa.googlegroups.com>
Thanks for that great analysis, Kaz.

Your equivalent iterative implementation raises the question: assuming
all assignments (i.e. side effects) are local and immediate, is there a
real advantage to functional-style recursion over iteration?  It seems
to me that it's just as hard, even harder, to keep track of the
recursive state / accumulator than to keep track of the iterative
assignments.

Dogma aside, how have I gained by writing it in the recursive version?

(I'm referring to advantages for the programmer, not differences in
execution speed).

On Feb 7, 6:08 pm, "Kaz Kylheku" <········@gmail.com> wrote:
> On Feb 7, 2:29 pm, "S. Robert James" <············@gmail.com> wrote:
>
> > As a Lisp newbie, I implemented reverse as follows:
> > (defun reverse. (lst)
>
> You can use LIST as a variable name in Common Lisp. Only the function
> binding of the symbol COMMON-LISP:LIST is reserved.
>
> >   "Returns lst reversed"
> >   (if (null (cdr lst))
> >       (list (car lst))
> >       (append (reverse. (cdr lst)) (list (car lst)))))
>
> > I noticed that Paul Graham (On Lisp, p.30) implements it as:
>
> > (defun good-reverse (lst)
> >   (labels ((rev (lst acc)
> >      (if (null lst)
> >           acc
> >           (rev (cdr lst) (cons (car lst) acc)))))
> >       (rev lst nil)))
>
> > Why? My implementation  seems clearer.  How is Graham's superior?
>
> Graham's is superior because of the extra ACC parameter taken by the
> local recursive function REV. Note how the accumulator is used
> together with CONS to build up the list without traversing it,
> contrary to your approach using APPEND, which has to scan the new list
> every time. Graham's version therefore runs in O(N) time.
>
> Secondly, note that his version is tail-recursive. This means that a
> good Lisp compiler can turn it into a loop, in which the ACC variable
> becomes a loop variable. This means that when Graham's version is
> compiled, it will not only run in linear time, but in constant O(1)
> stack space.
>
> Your version is not tail recursive because the return value of the
> recursive call is used for further computation (doing the APPEND). So
> your version requires O(N^2) time (due to the repeated re-scanning of
> the growing list to append more items to it) and O(N) stack space (due
> to the naive car/cdr recursion).
>
> (Strictly speaking, both approaches use O(N) space, since they are
> consing up a list that is just as long as the input list. However,
> stack space nevertheless has to be considered separately, because it
> may be in limited supply, and it is still wasted by a constant factor
> even if you don't run out.)
>
> Another way to express Graham's algorithm is iteration.
>
> (defun reverse* (list)
>   (let ((acc ()))
>     (dolist (item list acc)
>       (push item acc))))
>
> The PUSH macro call is equivalent to (setf acc (cons item acc)).
>
> The loop has two variables. The explicit ACC that we set up for
> accumulating the reversed list, as well as a hidden variable which
> marches through the list, and sets up the value of ITEM on every
> iteration.
>
> The recursive version like Graham's is formed by visualizing each
> iteration of the loop as a function. The loop variables become that
> function's parameters, so that each iteration has its own ACC and its
> own variable for accessing the next element in the list. That is to
> say, under iteration, each iteration computes the new values of the
> iteration variables and prepares the next values by directly assigning
> to them, before branching back to the top of the loop. Under the
> equivalent tail recursion, each ``iteration'' computes the new values
> of the loop variables and then passes them as function arguments to
> the next iteration, where they become the next iteration's variables.
> The call simultaneously performs the role of the backward branch, and
> the variable initialization.
From: Harold Lee
Subject: Re: Simple recursive functions in Lisp
Date: 
Message-ID: <1170894385.175846.174480@p10g2000cwp.googlegroups.com>
On Feb 7, 4:16 pm, "S. Robert James" <············@gmail.com> wrote:

snip...

> Dogma aside, how have I gained by writing it in the recursive version?

snip...

You can prove the correctness of a recursive program more easily.
Check out this intro, with examples in Scheme and C:
http://www-128.ibm.com/developerworks/linux/library/l-recurs.html?ca=dgr-lnxw06Recursion
From: Harald Hanche-Olsen
Subject: Re: Simple recursive functions in Lisp
Date: 
Message-ID: <pco8xf8blut.fsf@shuttle.math.ntnu.no>
+ "Harold Lee" <·······@gmail.com>:

| You can prove the correctness of a recursive program more easily.

I don't think it's any harder to prove a well-written iterative
algorithm correct than the corresponding recursion.  In either case
it's typically a case of finding the proper invariant and then proving
that it is in fact invariant, and that it implies the desired exit
condition at termination.

Spaghetti code can of course be hard to prove correct, even if it is.
Maybe the functional style makes it harder to write spaghetti code,
and if so that could be the reason it is claimed to be more easily
proved correct.

-- 
* Harald Hanche-Olsen     <URL:http://www.math.ntnu.no/~hanche/>
- It is undesirable to believe a proposition
  when there is no ground whatsoever for supposing it is true.
  -- Bertrand Russell
From: John Thingstad
Subject: Re: Simple recursive functions in Lisp
Date: 
Message-ID: <op.tnhivubkpqzri1@pandora.upc.no>
On Thu, 08 Feb 2007 18:27:54 +0100, Harald Hanche-Olsen  
<······@math.ntnu.no> wrote:

> + "Harold Lee" <·······@gmail.com>:
>
> | You can prove the correctness of a recursive program more easily.
>
> I don't think it's any harder to prove a well-written iterative
> algorithm correct than the corresponding recursion.  In either case
> it's typically a case of finding the proper invariant and then proving
> that it is in fact invariant, and that it implies the desired exit
> condition at termination.
>
> Spaghetti code can of course be hard to prove correct, even if it is.
> Maybe the functional style makes it harder to write spaghetti code,
> and if so that could be the reason it is claimed to be more easily
> proved correct.
>

It is harder!
There are methods for both.
Back when I took Verifiable Programming, Hoare logic ruled.
I introduced functional reasoning to ease the reasoning about
invariants in loops. It takes a tenth of the space!
Take a look at ACL2.

-- 
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
From: S. Robert James
Subject: Re: Simple recursive functions in Lisp
Date: 
Message-ID: <1170895861.010675.105110@l53g2000cwa.googlegroups.com>
According to SICP ( http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-11.html#%_sec_1.2.1
), Graham's version should be considered iterative, *not* recursive.
It's functional, but iterative nonetheless.


On Feb 7, 7:16 pm, "S. Robert James" <············@gmail.com> wrote:
> Thanks for that great analysis, Kaz.
>
> Your equivalent iterative implementation raises the question: assuming
> all assignments (i.e. side effects) are local and immediate, is there a
> real advantage to functional-style recursion over iteration?  It seems
> to me that it's just as hard, even harder, to keep track of the
> recursive state / accumulator than to keep track of the iterative
> assignments.
>
> Dogma aside, how have I gained by writing it in the recursive version?
>
> (I'm referring to advantages for the programmer, not differences in
> execution speed).
>
> On Feb 7, 6:08 pm, "Kaz Kylheku" <········@gmail.com> wrote:
>
> > On Feb 7, 2:29 pm, "S. Robert James" <············@gmail.com> wrote:
>
> > > As a Lisp newbie, I implemented reverse as follows:
> > > (defun reverse. (lst)
>
> > You can use LIST as a variable name in Common Lisp. Only the function
> > binding of the symbol COMMON-LISP:LIST is reserved.
>
> > >   "Returns lst reversed"
> > >   (if (null (cdr lst))
> > >       (list (car lst))
> > >       (append (reverse. (cdr lst)) (list (car lst)))))
>
> > > I noticed that Paul Graham (On Lisp, p.30) implements it as:
>
> > > (defun good-reverse (lst)
> > >   (labels ((rev (lst acc)
> > >      (if (null lst)
> > >           acc
> > >           (rev (cdr lst) (cons (car lst) acc)))))
> > >       (rev lst nil)))
>
> > > Why? My implementation  seems clearer.  How is Graham's superior?
>
> > Graham's is superior because of the extra ACC parameter taken by the
> > local recursive function REV. Note how the accumulator is used
> > together with CONS to build up the list without traversing it,
> > contrary to your approach using APPEND, which has to scan the new list
> > every time. Graham's version therefore runs in O(N) time.
>
> > Secondly, note that his version is tail-recursive. This means that a
> > good Lisp compiler can turn it into a loop, in which the ACC variable
> > becomes a loop variable. This means that when Graham's version is
> > compiled, it will not only run in linear time, but in constant O(1)
> > stack space.
>
> > Your version is not tail recursive because the return value of the
> > recursive call is used for further computation (doing the APPEND). So
> > your version requires O(N^2) time (due to the repeated re-scanning of
> > the growing list to append more items to it) and O(N) stack space (due
> > to the naive car/cdr recursion).
>
> > (Strictly speaking, both approaches use O(N) space, since they are
> > consing up a list that is just as long as the input list. However,
> > stack space nevertheless has to be considered separately, because it
> > may be in limited supply, and it is still wasted by a constant factor
> > even if you don't run out.)
>
> > Another way to express Graham's algorithm is iteration.
>
> > (defun reverse* (list)
> >   (let ((acc ()))
> >     (dolist (item list acc)
> >       (push item acc))))
>
> > The PUSH macro call is equivalent to (setf acc (cons item acc)).
>
> > The loop has two variables. The explicit ACC that we set up for
> > accumulating the reversed list, as well as a hidden variable which
> > marches through the list, and sets up the value of ITEM on every
> > iteration.
>
> > The recursive version like Graham's is formed by visualizing each
> > iteration of the loop as a function. The loop variables become that
> > function's parameters, so that each iteration has its own ACC and its
> > own variable for accessing the next element in the list. That is to
> > say, under iteration, each iteration computes the new values of the
> > iteration variables and prepares the next values by directly assigning
> > to them, before branching back to the top of the loop. Under the
> > equivalent tail recursion, each ``iteration'' computes the new values
> > of the loop variables and then passes them as function arguments to
> > the next iteration, where they become the next iteration's variables.
> > The call simultaneously performs the role of the backward branch, and
> > the variable initialization.
From: Barry Margolin
Subject: Re: Simple recursive functions in Lisp
Date: 
Message-ID: <barmar-00496A.20073107022007@comcast.dca.giganews.com>
In article <························@l53g2000cwa.googlegroups.com>,
 "S. Robert James" <············@gmail.com> wrote:

> According to SICP ( 
> http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-11.html#%_sec_1.2.1
> ), Graham's version should be considered iterative, *not* recursive.
> It's functional,  but iterative nonetheless.

Only in an implementation that performs tail-call elimination.

-- 
Barry Margolin, ······@alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
*** PLEASE don't copy me on replies, I'll read them in the group ***
From: Bob Felts
Subject: Re: Simple recursive functions in Lisp
Date: 
Message-ID: <1ht6mcx.65jhe7zm5y0uN%wrf3@stablecross.com>
Barry Margolin <······@alum.mit.edu> wrote:

> In article <························@l53g2000cwa.googlegroups.com>,
>  "S. Robert James" <············@gmail.com> wrote:
> 
> > According to SICP ( 
> > http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-11.html#%_sec_1.2.1
> > ), Graham's version should be considered iterative, *not* recursive.
> > It's functional,  but iterative nonetheless.
> 
> Only in an implementation that performs tail-call elimination.

I don't see why the definition of a recursive function depends on
implementation details.  If a function calls itself, it's recursive.

Looking at the link to SICP, it says:

In contrasting iteration and recursion, we must be careful not to
confuse the notion of a recursive process with the notion of a recursive
procedure.   ***When we describe a procedure as recursive, we are
referring to the syntactic fact that the procedure definition refers
(either directly or indirectly) to the procedure itself.*** (emphasis
mine)  But when we describe a process as following a pattern that is,
say, linearly recursive, we are speaking about how the process evolves,
not about the syntax of how a procedure is written. It may seem
disturbing that we refer to a recursive procedure such as fact-iter as
generating an iterative process. However, the process really is
iterative: Its state is captured completely by its three state
variables, and an interpreter need keep track of only three variables in
order to execute the process.

So SICP also makes the distinction between the form of the function and
its implementation.
From: Frode Vatvedt Fjeld
Subject: Re: Simple recursive functions in Lisp
Date: 
Message-ID: <2h8xf92kjg.fsf@vserver.cs.uit.no>
"S. Robert James" <············@gmail.com> writes:

> Dogma aside, how have I gained by writing it in the recursive
> version?

Not much, in my opinion: normal iteration is almost always clearer
(easier to read and write) than recursion, and therefore better. It
seems to me that recursion is highly overrated in (non-theoretical) CS.

-- 
Frode Vatvedt Fjeld
From: Joe Marshall
Subject: Re: Simple recursive functions in Lisp
Date: 
Message-ID: <1170967661.529678.257740@j27g2000cwj.googlegroups.com>
`Recursion' and `iteration' are both special cases of the more general
concept of a fixed-point.  It is kind of nonsensical to say that one is
better than the other, because the salient point is that you transfer
control back to code that has already been run.  Whether you manipulate
the stack or the heap on the way is largely an implementation detail.
From: Frode Vatvedt Fjeld
Subject: Re: Simple recursive functions in Lisp
Date: 
Message-ID: <2h4ppv2v51.fsf@vserver.cs.uit.no>
"Joe Marshall" <··········@gmail.com> writes:

> `Recursion' and `iteration' are both special cases of the more
> general concept of a fixed-point.  It is kind of non-sensical to say
> that one is better than the other because the salient point is that
> you transfer control back to code that has already been run.
> Whether you manipulate the stack or the heap on the way is largely
> an implementation detail.

I don't think I understand what you mean here. I mean, everything is
an implementation detail when seen from some viewpoint that is
sufficiently distant. But here I understood the context to be
programmers working with (writing, reading, debugging..) code. And
then I think it makes perfect sense to say, e.g., that

  (defun sum (list)
    (loop for x in list sum x))

is vastly better than

  (defun sum (list)
    (if (null list)
        0
        (+ (car list)
           (sum (cdr list)))))

or, for better and/or worse:

  (defun sum (list &optional (accumulator 0))
    (if (null list)
        accumulator
        (sum (cdr list)
             (+ accumulator
                (car list)))))


Whether these are all equivalent in the eyes of a (sufficiently smart)
compiler, end-user or correctness proof, I don't see as relevant.
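The three variants above can be checked side by side; a minimal sketch,
with the functions renamed (SUM-LOOP, SUM-REC and SUM-ACC are names
invented here) so all three can coexist:

```lisp
(defun sum-loop (list)
  (loop for x in list sum x))

;; Naive recursion: not tail-recursive, uses O(N) stack.
(defun sum-rec (list)
  (if (null list)
      0
      (+ (car list) (sum-rec (cdr list)))))

;; Accumulator version: tail-recursive.
(defun sum-acc (list &optional (accumulator 0))
  (if (null list)
      accumulator
      (sum-acc (cdr list) (+ accumulator (car list)))))

;; All three agree on the same inputs:
(assert (= (sum-loop '(1 2 3 4)) 10))
(assert (= (sum-rec  '(1 2 3 4)) 10))
(assert (= (sum-acc  '(1 2 3 4)) 10))
(assert (= (sum-loop '()) 0))
```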

-- 
Frode Vatvedt Fjeld
From: Joe Marshall
Subject: Re: Simple recursive functions in Lisp
Date: 
Message-ID: <1171056828.616324.114410@a75g2000cwd.googlegroups.com>
On Feb 9, 7:44 am, Frode Vatvedt Fjeld <······@cs.uit.no> wrote:
> "Joe Marshall" <··········@gmail.com> writes:
> > `Recursion' and `iteration' are both special cases of the more
> > general concept of a fixed-point.  It is kind of non-sensical to say
> > that one is better than the other because the salient point is that
> > you transfer control back to code that has already been run.
> > Whether you manipulate the stack or the heap on the way is largely
> > an implementation detail.
>
> I don't think I understand what you mean here. I mean, everything is
> an implementation detail when seen from some viewpoint that is
> sufficiently distant.

I was trying to be succinct and not let myself get drawn into the
recursion vs. iteration vs. tail-recursion debate.  The point I was
trying to make is that you can consider all function calls and loops as
`gotos that pass arguments'.  In the case of a traditionally `recursive'
function, there is an implicit continuation argument that must be
allocated on each call.  In the case of `tail recursion', the implicit
continuation is re-used.  Not all languages have a mechanism that
allows you to transfer control to an arbitrary function while still
re-using the continuation.  Traditional `iteration' re-uses the
continuation, but generally doesn't allow argument passing.  The
arguments must therefore be stored somewhere that persists across
the control transfer.

> But here I understood the context to be
> programmers working with (writing, reading, debugging..) code.

That's a different ball of wax.  What you are used to or comfortable
with is often a matter of taste and experience.

> And
> then I think it makes perfect sense to say e.g that
>
>   (defun sum (list)
>     (loop for x in list sum x))
>
> is vastly better than
>
>   (defun sum (list)
>     (if (null list)
>         0
>         (+ (car list)
>            (sum (cdr list)))))
>
> or, for better and/or worse:
>
>   (defun sum (list &optional (accumulator 0))
>     (if (null list)
>         accumulator
>         (sum (cdr list)
>              (+ accumulator
>                 (car list)))))
>
> Whether these are all equivalent in the eyes of a (sufficiently smart)
> compiler, end-user or correctness proof, I don't see as relevant.

I guess I was making an irrelevant statement.
From: Pascal Bourguignon
Subject: Re: Simple recursive functions in Lisp
Date: 
Message-ID: <87k5yrw8ok.fsf@thalassa.informatimago.com>
Frode Vatvedt Fjeld <······@cs.uit.no> writes:

> "Joe Marshall" <··········@gmail.com> writes:
>
>> `Recursion' and `iteration' are both special cases of the more
>> general concept of a fixed-point.  It is kind of non-sensical to say
>> that one is better than the other because the salient point is that
>> you transfer control back to code that has already been run.
>> Whether you manipulate the stack or the heap on the way is largely
>> an implementation detail.
>
> I don't think I understand what you mean here. I mean, everything is
> an implementation detail when seen from some viewpoint that is
> sufficiently distant. But here I understood the context to be
> programmers working with (writing, reading, debugging..) code. And
> then I think it makes perfect sense to say e.g that
>
>   (defun sum (list)
>     (loop for x in list sum x))
>
> is vastly better than
>
>   (defun sum (list)
>     (if (null list)
>         0
>         (+ (car list)
>            (sum (cdr list)))))

Of course, since LOOP is a higher-level abstraction.  You should
compare with this iterative form:

(defun sum (list)
  (do ((current list (cdr current))
       (sum 0))
      ((null current) sum)
   (setf sum (+ sum (car current)))))

or even:

(defun sum (list)
  (let ((sum 0))
    ;; assuming an UNTIL macro (not in standard CL)
    (until (null list)
      (setf sum (+ sum (car list)))
      (setf list (cdr list)))
    sum))

To prove the recursive function, you can say that (sum '()) == 0 by
the base case, and that given a list named (cdr list) and a number
named (car list):
  (sum (cons (car list) (cdr list))) == (+ (car list) (sum (cdr list)))
which is all you need to prove that the recursive form of sum works.


Now, the problem with these iterative forms is that they must use SETF
(DO uses SETF implicitly in the update forms).  Hence the difficulties
in the proof: you have to develop the assertions between each
instruction, tracking the "value" bound to each variable at each time.
Also note the use of the word "instruction".  In Lisp, we have
"expressions", and they are combined and embedded one in another.  But
to develop the assertions needed to prove a procedure, you need a more
"sequential" form, separating the expressions (binding the
intermediary results to additional variables).  To prove the
iterative function, you must introduce names for the values, and
virtual "variables", since the bindings change over time.  I'll let
you write the Hoare pre- and post-conditions to prove the iterative
version...
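The invariant-based reasoning described above can be sketched concretely
by instrumenting the DO loop with its invariant (SUM-CHECKED is a name
invented for this illustration; REDUCE is used only to supply the ground
truth for the run-time check):

```lisp
(defun sum-checked (list)
  (let ((total (reduce #'+ list)))        ; ground truth for the check
    (do ((current list (cdr current))
         (sum 0))
        ((null current) sum)
      ;; Invariant: SUM plus the sum of the not-yet-processed tail
      ;; equals the sum of the whole list.
      (assert (= (+ sum (reduce #'+ current)) total))
      (setf sum (+ sum (car current))))))

(assert (= (sum-checked '(1 2 3 4)) 10))
(assert (= (sum-checked '()) 0))
```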

-- 
__Pascal Bourguignon__                     http://www.informatimago.com/

"Logiciels libres : nourris au code source sans farine animale."
From: Frode Vatvedt Fjeld
Subject: Re: Simple recursive functions in Lisp
Date: 
Message-ID: <2hzm7n1aak.fsf@vserver.cs.uit.no>
Frode Vatvedt Fjeld <······@cs.uit.no> writes:

> > [..] But here I understood the context to be
> > programmers working with (writing, reading, debugging..) code. And
> > then I think it makes perfect sense to say e.g that
> >
> >   (defun sum (list)
> >     (loop for x in list sum x))
> >
> > is vastly better than
> >
> >   (defun sum (list)
> >     (if (null list)
> >         0
> >         (+ (car list)
> >            (sum (cdr list)))))

Pascal Bourguignon <···@informatimago.com> writes:

> Of course, since LOOP is a higher level abstraction.  You should
> compare with this iterative form:
> 
> (defun sum (list)
>   (do ((current list (cdr current))
>        (sum 0))
>       ((null current) sum)
>    (setf sum (+ sum (car current)))))

Why should I, when I'd never ever write something like that in
practice? Why is loop disqualified as "higher level"? (And why stop at
'do', when tagbody/go will do the job too?) I'm very grateful that
loop exists so I don't have to brain-parse the unreasonably cryptic
'do' forms. Anyway, I'd have written

  (defun sum (list)
    (let ((sum 0))
      (dolist (x list sum)
        (incf sum x))))

...assuming reduce etc. are too specific to be relevant to the
debate. I think this is still much better than either recursive
variant.

> To prove the recursive function [..]

I don't really know about everyone else, but personally I "prove"
functions about once every odd leap year. I believe the overall
read/writeability of the code to be of more value.


As it happens, I tend to believe that "program proofs" is a concept
that is one big misunderstanding. What is a proof? It is a device to
convince people that something is true, typically by means of more or
less formalized rules of reduction and composition ("divide and
conquer"). In the context of this debate, "something" is about the
workings of pieces of program code. Now, the text "(loop for x in list
sum x)" does a very good job of convincing me that that program will
return the sum of the elements of the list (if you'll allow me the
axioms that the implementation of loop is correct, along with the
lisp, OS, and hardware implementations etc.). Programs _are_ proofs,
and should be written to be as convincing as possible. Trying to prove
the proofs is just silly, or at best a euphemism for hand-compiling
unreadable (i.e. bad) programs into something the brain can handle.


-- 
Frode Vatvedt Fjeld
From: Pascal Bourguignon
Subject: Re: Simple recursive functions in Lisp
Date: 
Message-ID: <877iurw17o.fsf@thalassa.informatimago.com>
Frode Vatvedt Fjeld <······@cs.uit.no> writes:

> Frode Vatvedt Fjeld <······@cs.uit.no> writes:
>
>> > [..] But here I understood the context to be
>> > programmers working with (writing, reading, debugging..) code. And
>> > then I think it makes perfect sense to say e.g that
>> >
>> >   (defun sum (list)
>> >     (loop for x in list sum x))
>> >
>> > is vastly better than
>> >
>> >   (defun sum (list)
>> >     (if (null list)
>> >         0
>> >         (+ (car list)
>> >            (sum (cdr list)))))
>
> Pascal Bourguignon <···@informatimago.com> writes:
>
>> Of course, since LOOP is a higher level abstraction.  You should
>> compare with this iterative form:
>> 
>> (defun sum (list)
>>   (do ((current list (cdr current))
>>        (sum 0))
>>       ((null current) sum)
>>    (setf sum (+ sum (car current)))))
>
> Why should I, when I'd never ever write something like that in
> practice? Why is loop disqualified as "higher level"? (And why stop at
> 'do', when tagbody/go will do the job too?) I'm very grateful that
> loop exists so I don't have to brain-parse the unreasonably cryptic
> 'do' forms. [...]


If you want to compare with LOOP, 

(defun sum (list)
  (loop :for x :in list :sum x))


then use a macro such as RECURSE that has the same knowledge of lists and
sums, to let you write:

(defun sum (list)
  (recurse :for x :in list :sum x))



(defmacro recurse (&key for in sum)
  (let ((fname (gensym))
        (vlist (gensym))
        (vresu (gensym)))
    `(labels ((,fname (,vlist ,vresu)
                (if (null ,vlist)
                    ,vresu
                    (let ((,for (car ,vlist)))
                      (,fname (cdr ,vlist) (+ ,vresu ,sum))))))
       (,fname ,in 0))))



> I don't really know about everyone else, but personally I "prove"
> functions about once every odd leap year. I believe the overall
> read/writeability of the code to be of more value.

Of course. That's why we're using Common Lisp, not Scheme or Haskell...


-- 
__Pascal Bourguignon__                     http://www.informatimago.com/

NEW GRAND UNIFIED THEORY DISCLAIMER: The manufacturer may
technically be entitled to claim that this product is
ten-dimensional. However, the consumer is reminded that this
confers no legal rights above and beyond those applicable to
three-dimensional objects, since the seven new dimensions are
"rolled up" into such a small "area" that they cannot be
detected.
From: Neil Cerutti
Subject: Re: Simple recursive functions in Lisp
Date: 
Message-ID: <slrnespfqe.1ag.horpner@FIAD06.norwich.edu>
On 2007-02-09, Pascal Bourguignon <···@informatimago.com> wrote:
> Of course, since LOOP is a higher level abstraction.  You
> should compare with this iterative form:
>
> (defun sum (list)
>   (do ((current list (cdr current))
>        (sum 0))
>       ((null current) sum)
>    (setf sum (+ sum (car current)))))
>
> or even:
>
> (defun sum (list)
>   (let ((sum 0))
>     ;; assuming an UNTIL macro (not in standard CL)
>     (until (null list)
>       (setf sum (+ sum (car list)))
>       (setf list (cdr list)))
>     sum))

The iterative process that SICP is looking for is created by
something like the following recursive procedure:

(defun sum (accum list)
  (if (null list)
      accum
      (sum (+ (car list) accum) (cdr list))))

It creates an iterative process, but the procedure is just as
easy to prove correct as another recursive procedure that created
a recursive process would be.
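A quick check of the procedure above (note that the accumulator is the
first argument):

```lisp
(defun sum (accum list)
  (if (null list)
      accum
      (sum (+ (car list) accum) (cdr list))))

(assert (= (sum 0 '(1 2 3 4)) 10))
(assert (= (sum 0 '()) 0))
```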

-- 
Neil Cerutti
From: Thomas A. Russ
Subject: Re: Simple recursive functions in Lisp
Date: 
Message-ID: <ymiodo4a7s0.fsf@sevak.isi.edu>
"S. Robert James" <············@gmail.com> writes:

> Dogma aside, how have I gained by writing it in the recursive version?

For the case of computations that are essentially iterative in nature
(like REVERSE), you haven't gained anything by writing a recursive
version -- at least as far as any practical effects are concerned.

Sometimes an iterative solution is clearer, and sometimes a recursive
solution seems more natural.  For reverse, I think an iterative
algorithm is clearer.  For factorial, this becomes a more even argument,
but it could go either way.

But there are some algorithms that are inherently recursive.  The most
common example is a tree-walker.  Recursive functions are known
theoretically to be more powerful computationally than non-recursive
functions.  But this only applies to cases where you can't reduce the
function to a tail-recursive form.

As an example of such a function, consider a function to sum the numeric
values in an arbitrarily nested list structure:

  (sum-leaves '(1 2 3))          =>  6
  (sum-leaves '(1 (2) ((3) 0)))  =>  6
  (sum-leaves '(((1 1) 1 (1 (1 (1 1 1)))) 1 ((1))))  =>  10

This is inherently a recursive procedure and it cannot be reduced to an
iterative form.*  In fact, this would be a good little exercise for new
lisp programmers.

OK, so why even bother with recursive solutions to things like REVERSE
or FACTORIAL?  One answer is that it introduces the notion of recursive
functions with a simple example, and (for REVERSE) also demonstrates
manipulation of list structure that you would need to know how to do in
order to implement SUM-LEAVES.  One could perhaps argue against using
REVERSE as an example, on the grounds that it encourages really
inefficient code -- as others have pointed out.  But the general goal of
all of this is pedagogical.


* Although someone is likely to produce an alleged "iterative" version
that explicitly manages its own recursive stack.  But that just moves
the recursion management into the algorithm itself.
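
To make the footnote's point concrete, here is a sketch of such an "iterative" version (SUM-LEAVES-ITER is an illustrative name, not from the thread): the loop manages an explicit stack of pending sublists, so the recursion has not disappeared, it has merely moved into the algorithm.

```lisp
;; "Iterative" SUM-LEAVES with an explicit, hand-managed stack.
;; The stack of deferred sublists is exactly the recursion that a
;; recursive version would keep on the call stack.
(defun sum-leaves-iter (lst)
  (let ((sum 0)
        (stack (list lst)))
    (loop
      (when (null stack) (return sum))
      (let ((item (pop stack)))
        (cond ((null item))              ; NIL: nothing to add
              ((consp item)              ; defer both halves
               (push (car item) stack)
               (push (cdr item) stack))
              (t (incf sum item)))))))   ; a numeric leaf
```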

-- 
Thomas A. Russ,  USC/Information Sciences Institute
From: S. Robert James
Subject: Re: Simple recursive functions in Lisp
Date: 
Message-ID: <1170968036.152650.159960@h3g2000cwc.googlegroups.com>
On Feb 8, 12:17 pm, ····@sevak.isi.edu (Thomas A. Russ) wrote:
> Sometimes an iterative solution is clearer, and sometimes a recursive
> solution seems more natural....
>
> there are some algorithms that are inherently recursive.  The most
> common example is a tree-walker....
>
> As an example of such a function, consider a function to sum the numeric
> values in an arbitrarily nested list structure:
>
>   (sum-leaves '(1 2 3))          =>  6
>   (sum-leaves '(1 (2) ((3) 0)))  =>  6
>   (sum-leaves '(((1 1) 1 (1 (1 (1 1 1)))) 1 ((1))))  =>  10
>
> This is inherently a recursive procedure and it cannot be reduced to an
> iterative form.*  In fact, this would be a good little exercise for new
> lisp programmers.

Thanks for the clear analysis.  Looking at SICP and especially http://
72.14.209.104/search?q=cache:kP4Y_FAWhZkJ:people.csail.mit.edu/trevor/
6.001/recitation-20040917.pdf , I would argue there are three forms:

1. Recursive
2. Iterative
3. Iterative process implemented in FP/recursive semantics

#2 and #3 may indeed compile to the same thing, whereas #1 and #2/3
are fundamentally different solutions.  As Dr. Russ pointed out, which
type is clearer depends on the problem.  (Between #2 and #3, I would
argue that it's a matter of which one you are more experienced with.)
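
For concreteness, the three forms can be put side by side on factorial (the function names here are illustrative, not from the thread):

```lisp
;; 1. Recursive: the process itself is recursive -- multiplications
;;    are deferred on the stack until the base case is reached.
(defun fact-rec (n)
  (if (zerop n) 1 (* n (fact-rec (1- n)))))

;; 2. Iterative: an explicit loop with assignment.
(defun fact-iter (n)
  (let ((acc 1))
    (dotimes (i n acc)
      (setf acc (* acc (1+ i))))))

;; 3. Iterative process in recursive clothing: a tail call threading
;;    an accumulator, which a good compiler turns into form #2.
(defun fact-acc (n &optional (acc 1))
  (if (zerop n) acc (fact-acc (1- n) (* n acc))))
```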

BTW, I did the problem you suggested, and found it educational:

(defun sum-leaves (lst)
  (cond ((null lst) 0)
        ((atom lst) lst)
        (t (+ (sum-leaves (car lst)) (sum-leaves (cdr lst))))))
From: Kaz Kylheku
Subject: Re: Simple recursive functions in Lisp
Date: 
Message-ID: <1170986167.477000.121760@v33g2000cwv.googlegroups.com>
On Feb 7, 4:16 pm, "S. Robert James" <············@gmail.com> wrote:
> Thanks for that great analysis, Kaz.
>
> Your equivalent iterative implenetation begs the question: assuming
> all assignemnts (ie side effects) are local and immediate, is there a
> real advantage to functional style recursion over iteration?

I think that in this example you gain nothing.

But there are complex algorithms that are built on mutual tail
recursion among many different functions. These would be hard to
rewrite as a loop, because each function has a different set of
parameters. These would all have to be clumped together into one big
state. Keeping them as separate little functions makes it easier to
analyze.
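
A toy illustration of the idea (COUNT-WORDS and its helpers are illustrative names, not from the thread): a scanner whose "between words" and "inside a word" states are separate, mutually tail-recursive functions. Merging them into a single loop would mean folding both states' parameters into one clump of state variables.

```lisp
;; Count words in a string via two mutually tail-recursive "states".
(defun count-words (string)
  (in-gap string 0 0))

(defun in-gap (s i n)
  ;; Between words; N words seen so far.
  (cond ((>= i (length s)) n)
        ((char= (char s i) #\Space) (in-gap s (1+ i) n))
        (t (in-word s (1+ i) (1+ n)))))   ; a new word begins

(defun in-word (s i n)
  ;; Inside a word.
  (cond ((>= i (length s)) n)
        ((char= (char s i) #\Space) (in-gap s (1+ i) n))
        (t (in-word s (1+ i) n))))
```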

> It seems
> to me that it's just as hard - even harder - to keep track of the
> recursive state / accumulator than to keep track of the iterative
> assignments.

It's exactly the same. In fact the compiler will reduce ACC to a
single memory location which is assigned, just as in the iterative
version.
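
Hand-compiling Graham's REV makes this visible (REV-AS-LOOP is an illustrative name): this is roughly the loop a tail-call-optimizing compiler produces, with ACC and LST as loop variables in a single stack frame.

```lisp
;; What REV becomes after tail-call optimization: a loop whose
;; variables ACC and LST are re-assigned on each iteration.
(defun rev-as-loop (lst)
  (let ((acc nil))
    (loop
      (when (null lst) (return acc))
      (setf acc (cons (car lst) acc)
            lst (cdr lst)))))
```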
From: Alan Crowe
Subject: Re: Simple recursive functions in Lisp
Date: 
Message-ID: <86odo3oxko.fsf@cawtech.freeserve.co.uk>
"S. Robert James" <············@gmail.com> writes:

> Thanks for that great analysis, Kaz.
> 
> Your equivalent iterative implenetation begs the question: assuming
> all assignemnts (ie side effects) are local and immediate, is there a
> real advantage to functional style recursion over iteration?  It seems
> to me that it's just as hard - even harder - to keep track of the
> recursive state / accumulator than to keep track of the iterative
> assignments.
> 
> Dogma aside, how have I gained by writing it in the recursive version?

Substantial programs involve christmas trees of pointers to
structures of pointers to structures.

CL wins a factor of two over C because everything is a
pointer, but the real issue is how do you avoid structure
sharing bugs. I think that there are three main approaches

1)Functional - just say no to mutant christmas
  trees! Looking at what functional programs actually do,
  they generally copy deeply enough to reach the deepest
  change and share the rest. Since you only change things by
  making an inexact copy, structure sharing bugs are
  eliminated by adopting the functional style.

2)Defensive deep copying. If you program in C you can avoid
  structure sharing bugs by doing unnecessary
  copying. A worse approach than functional programming.

3)Exercise omniscience. You could be careful about which
  structures are shared and avoid bugs by being clever. This
  begs the important questions.
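
Approach 1 is easy to see in a small example (REPLACE-NTH is an illustrative name, not from the thread): a functional update copies only the cells on the path down to the change and shares everything past it with the original.

```lisp
;; Functionally "replace" the Nth element of a list: copy the first
;; N cells, share the rest.  The original list is never mutated.
(defun replace-nth (n new list)
  (if (zerop n)
      (cons new (cdr list))
      (cons (car list) (replace-nth (1- n) new (cdr list)))))
```

After (replace-nth 1 'x old), the new list's tail beyond the change is EQ to the old list's tail, yet the old list is untouched, so no structure-sharing bug is possible.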

Sometimes the way that changing a shared data structure
propagates behind the scenes is what makes an algorithm
fast. For example marking the nodes in a graph so that
different routes into the same node see that you have
already been there. So I don't think you can do without
imperative programming entirely.

If you could entirely do without imperative programming then
I think that you could simplify your life by always
programming in a functional style. Since imperative
programming has to be in your tool box, what do you do about
ordinary little functions in which "all assignemnts (ie side
effects) are local and immediate"? Providing you
maintain the discipline needed to eliminate structure
sharing bugs by functional programming I think you obtain
all the benefits that functional programming has to offer,
and should simply exercise your taste to write the code,
functional or imperative, that reads most easily.

Alan Crowe
Edinburgh
Scotland
From: Harold Lee
Subject: Re: Simple recursive functions in Lisp
Date: 
Message-ID: <1170893839.268108.127200@m58g2000cwm.googlegroups.com>
On Feb 7, 2:29 pm, "S. Robert James" <············@gmail.com> wrote:
> As a Lisp newbie, I implemented reverse as follows:
>
...
>
> Why? My implementation  seems clearer.  How is Graham's superior?

The stack space and execution time arguments above are important to
understand. You should also test boundary cases:

(reverse. nil) => (nil)
(good-reverse nil) => nil
From: Bill Atkins
Subject: Re: Simple recursive functions in Lisp
Date: 
Message-ID: <not-a-real-email-665397.17424507022007@host86-26-113-128.not-set-yet.ntli.net>
In article <························@k78g2000cwa.googlegroups.com>,
 "S. Robert James" <············@gmail.com> wrote:

> As a Lisp newbie, I implemented reverse as follows:
> 
> (defun reverse. (lst)
>   "Returns lst reversed"
>   (if (null (cdr lst))
>       (list (car lst))
>       (append (reverse. (cdr lst)) (list (car lst)))))
> 
> I noticed that Paul Graham (On Lisp, p.30) implements it as:
> 
> (defun good-reverse (lst)
>   (labels ((rev (lst acc)
>      (if (null lst)
>           acc
>           (rev (cdr lst) (cons (car lst) acc)))))
>       (rev lst nil)))
> 
> Why? My implementation  seems clearer.  How is Graham's superior?

Graham's implementation is tail-recursive, meaning that it uses the same 
amount of stack space no matter how big its argument is.  The REVERSE. 
function, on the other hand, will consume stack proportional to the size 
of the list.  I don't have On Lisp handy, but this is probably described 
in the book.  Look for "tail recursion" in the index.

You're right that your version is clearer, though.  Making a function 
tail-recursive often makes it less pretty, but the practical benefits 
are significant.

HTH,
Bill
From: Pascal Bourguignon
Subject: Re: Simple recursive functions in Lisp
Date: 
Message-ID: <87bqk51tki.fsf@thalassa.informatimago.com>
"S. Robert James" <············@gmail.com> writes:

> As a Lisp newbie, I implemented reverse as follows:
>
> (defun reverse. (lst)
>   "Returns lst reversed"
>   (if (null (cdr lst))
>       (list (car lst))
>       (append (reverse. (cdr lst)) (list (car lst)))))
>
> I noticed that Paul Graham (On Lisp, p.30) implements it as:
>
> (defun good-reverse (lst)
>   (labels ((rev (lst acc)
>      (if (null lst)
>           acc
>           (rev (cdr lst) (cons (car lst) acc)))))
>       (rev lst nil)))
>
> Why? My implementation  seems clearer.  How is Graham's superior?

1- APPEND will copy all its arguments but the last.

   Since your recursive call is done on (CDR LST), with 1 less element
   each time, you will be calling APPEND N times (N = (length lst)),
   with sublists of 0, 1, 2, ... N-1 elements.  
   In total you will be consing (N²+N)/2 CONS cells,
   to reverse a list of N elements.

2- the recursive call of your reverse is not a tail call, so it cannot
   be optimized out, therefore you will be using N stack frames.

Conclusion, your reverse needs (N²+N)/2 cons cells in the heap, plus N
frames on the stack, to reverse a simple list of N elements...



On the other hand the good reverse uses a tail recursive function,
REV, that can be optimized out, therefore it will use only one stack
frame (two with the one of good-reverse itself).

And since REV is called with (CDR LST), it's called N times like your
function, but it also clearly calls CONS only N times.

Conclusion, GOOD-REVERSE needs 2 stack frames, and N cons cells to
reverse a list of N elements.




-- 
__Pascal Bourguignon__                     http://www.informatimago.com/

COMPONENT EQUIVALENCY NOTICE: The subatomic particles (electrons,
protons, etc.) comprising this product are exactly the same in every
measurable respect as those used in the products of other
manufacturers, and no claim to the contrary may legitimately be
expressed or implied.