From: Kent M Pitman
Subject: Re: modifying top level structure in &rest
Message-ID: <sfwlnzbdfjl.fsf@world.std.com>
SDS <···········@cctrading.com> writes:

> How dangerous is it to modify the top-level structure of the &rest list?

Potentially very.  (It depends on undocumented aspects of your
implementation, and it can create an enormously-hard-to-detect/debug
portability pitfall.)

> CLtL2 p 78 implies that this list is not necessarily new, so what? 

So you can have awful effects on other data in your program that you
may not realize you are sharing structure with.

> Can I nreverse it for more convenient looping?

I strongly recommend not doing so.

> Like in the following:
> 
> (defun poly (var &rest coeffs)
>  "Compute the polynomial with the given coefficients.
>   Use the Horner scheme.
>   COEFFS are (a0 a1 a2 ...) for a0 + a1 x + a2 x^2..."
>  (declare (number var) (type (list number) coeffs))
>  (do ((res 0) (cc (nreverse coeffs) (cdr cc)))
>      ((null cc) res)
>    (setq res (+ (car cc) (* var res)))))
>
> I know about #'reverse, I just don't want extra consing.

You're not going to win, because inside POLY you can't tell whether
the implementation consed a fresh list for you or not.

It is permissible for an implementation to do:

  (defvar *foo* (list 2 3 5 7 9))

  (apply #'poly *foo*) => 113
   
  *foo* => (2 3 5 7 9) or (2 3) or (2 9 7 5 3) or (2 5 3) ...

The point is that NREVERSE is allowed to trash the backbone of the
list in a creative way, and APPLY is NOT REQUIRED to allocate fresh
structure.  Normally, APPLY does allocate fresh structure, but in the
case that it already has a list in its hand, it is permitted to just
use that list.  There is NO WAY in general to detect this in the
called program, so you simply can't assume you have modifiable
structure in your hands inside POLY.

Some implementations might always copy anyway, but you can't depend
on that from a portable or conforming ANSI CL program.
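
If you really want the reversed-order loop, the portable thing to do
is just pay for the copy.  A minimal sketch:

 (defun poly (var &rest coeffs)
   ;; REVERSE copies the list spine, so the (possibly shared) &rest
   ;; list is never destructively modified.
   (do ((res 0) (cc (reverse coeffs) (cdr cc)))
       ((null cc) res)
     (setq res (+ (car cc) (* var res)))))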

Incidentally, (list number) is not a valid type declaration in portable CL.
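
What was probably intended can be written portably as (a sketch):

 (declare (type number var)
          (type list coeffs))  ; CL has no "list of numbers" specifier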

> Another solution is, of course, to change the order of arguments for the
> function...

Uh, yeah, I guess.  Or you could just compute the series the "obvious"
way...

 (defun poly (var &rest coeffs)
   (do ((res 0) (xn 1 (* xn var)) (cc coeffs (cdr cc)))
       ((null cc) res)
    (setq res (+ res (* (car cc) xn)))))

Yeah, it's got an extra multiply but I'm not sure if that's any worse
than the cost of the nreverse.  The algorithmic complexity doesn't seem
to be any different.
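
For the record, it agrees with the earlier example:

  (poly 2 3 5 7 9) => 113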

For more information on this, see X3J13 issue REST-LIST-ALLOCATION in
the CLHS.  In its most primitive form, you might say this issue is
about whether (APPLY #'LIST X) is the same as (COPY-LIST X) or
(IDENTITY X) or whether that fact is undefined.  There were three
options: NEWLY-ALLOCATED, MUST-SHARE, or MAY-SHARE.  I suspect
MAY-SHARE was adopted simply because it was the status quo, not
because anyone thought it was an especially great situation from the
user's point of view.  When performance was involved and implementations
already differed, we tended to leave it to implementation discretion
and just mark it as such.  And it's not like we didn't know the user-end
problem.  Even in 1988, Gail Zacharias wrote:

 gz> If Common Lisp doesn't require unshared &rest lists, then I think
 gz> it must provide a declarative version of this idiom so the
 gz> COPY-LIST can be portably avoided when it's redundant.  Seems to
 gz> me that the fact that this is a common case where users find a
 gz> need for conditionalization indicates a real deficiency in Common
 gz> Lisp's specification.
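
For the curious, a one-liner shows which choice a given implementation
actually makes (the answer is implementation-dependent, which is
exactly the point):

  (let ((x (list 1 2 3)))
    (eq x (apply #'list x)))  ; => T if APPLY shares X, NIL if it copies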

From: Bruno Haible
Subject: Re: modifying top level structure in &rest
Message-ID: <totoa2134879@clisp.cons.org>
Kent Pitman <······@world.std.com> wrote about polynomial evaluation:
> Or you could just compute the series the "obvious" way...
>
> (defun poly (var &rest coeffs)
>   (do ((res 0) (xn 1 (* xn var)) (cc coeffs (cdr cc)))
>       ((null cc) res)
>    (setq res (+ res (* (car cc) xn)))))
>
> Yeah, it's got an extra multiply but I'm not sure if that's any worse
> than the cost of the nreverse.  The algorithmic complexity doesn't seem
> to be any different.

The algorithmic complexity is not different, but the numerical stability
is. Moreover, with this "obvious" algorithm, you are more likely to run
into floating point overflow problems.
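
A contrived example of the overflow point: take coefficients
(0 0 1d-10) and var = 1d155.  The true value is 1d300, comfortably
inside double-float range, and the Horner intermediates never leave
that range; but the "obvious" loop must first form var^2 = 1d310,
which exceeds MOST-POSITIVE-DOUBLE-FLOAT, so it signals
FLOATING-POINT-OVERFLOW or returns an infinity, depending on the
implementation and its trap settings:

  (poly 1d155 0 0 1d-10)
    => 1.0d300 (give or take rounding) with the Horner version
    => overflow (or an infinity) with the "obvious" version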

If the original poster is worried about space allocation, I'd recommend
a vector representation for the polynomials instead of a list representation.
Except on Lisp machines, which have hardware support for CDR-coding, a list
with n elements usually takes up 2*n machine words, whereas a vector with n
elements takes n+1 or n+2 machine words.
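
For instance, a sketch (POLY-VEC is just a made-up name):

  (defun poly-vec (var coeffs)
    ;; COEFFS is a vector #(a0 a1 ... an).  Walking it from the high
    ;; end gives the Horner scheme with no reversal and no consing.
    (declare (type number var) (type simple-vector coeffs))
    (let ((res 0))
      (loop for i from (1- (length coeffs)) downto 0
            do (setq res (+ (svref coeffs i) (* var res))))
      res))

  (poly-vec 2 #(3 5 7 9)) => 113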

                        Bruno
From: Kent M Pitman
Subject: Re: modifying top level structure in &rest
Message-ID: <sfwpvoezgqg.fsf@world.std.com>
······@ilog.fr (Bruno Haible) writes:

> Kent Pitman <······@world.std.com> wrote about polynomial evaluation:
> > Or you could just compute the series the "obvious" way...
> > (defun poly (var &rest coeffs)
> >   (do ((res 0) (xn 1 (* xn var)) (cc coeffs (cdr cc)))
> >       ((null cc) res)
> >    (setq res (+ res (* (car cc) xn)))))
> > Yeah, it's got an extra multiply but I'm not sure if that's any worse
> > than the cost of the nreverse.  The algorithmic complexity doesn't seem
> > to be any different.
> The algorithmic complexity is not different, but the numerical stability
> is. Moreover, with this "obvious" algorithm, you are more likely to run
> into floating point overflow problems.

Ah, good point.

> If the original poster is worried about space allocation, I'd
> recommend a vector representation for the polynomials instead of a
> list representation.

I agree with this, if float precision/overflow is a concern, as
inevitably I suppose it is.

> Except on Lisp machines which have hardware support for CDR-coding,
> a list with n elements usually takes up 2*n machine words, whereas a
> vector with n elements takes n+1 or n+2 machine words.

Indeed.  And if you have an array, you can also access any element
efficiently.

Btw, even though cdr-coding gets you the compactness of an array, I
should point out, for those who think it would make accessing the nth
element efficient, that it doesn't.  Cdr-coding only gives you 2 bits
of information about each cdr: cdr-next, cdr-nil, cdr-normal, or
cdr-error.  So even though the data might be laid out like:

	+----------+------+
  +-->  | CDR-NEXT |  +------> A
	+----------+------+
	| CDR-NEXT |  +------> B
	+----------+------+
	| CDR-NEXT |  +------> C
	+----------+------+
	| CDR-NIL  |  +------> D
	+----------+------+

representing (A B C D), you can't do some sort of `LIST-REF' into
that list to get the fourth element, even though it sits 3 cells after
the first one, because that would require you to be psychic about the
way the list is laid out--e.g., to know that it hasn't been RPLACD'd
or whatever.  For all you know, the data in (A B C D) is laid out like:

        +----------+------+
  +-->  | CDR-NEXT |  +------> A
        +----------+------+
        | CDR-NORM |  +------> B
        +----------+------+
        | CDR-ERR  |  +------------+
        +----------+------+        |
        | CDR-NIL  |  +------> E   |
        +----------+------+        |
                                   |
  +--------------------------------+
  |
  |     +----------+------+
  +-->  | CDR-NEXT |  +----------> C
        +----------+------+
        | CDR-NIL  |  +----------> D
        +----------+------+

and the 3rd word following the initial pointer may not be what
you expect at all.  Cdr-coding is really a lower-level mechanism that
you can't usefully rely on except in that it tends to make your data
more compact.  But you can't, for example, treat this like an array.

Also, if memory serves me right, the Lisp Machine does offer
ZL:ART-Q-LIST arrays, which can be accessed either as arrays or as
lists.  But all of
this stuff works best with hardware or microcode support.  In CL,
I agree with Bruno that simple vectors (1-d arrays) have a lot of 
appeal.  And once you have them, you can do computations in either
direction at the same cost.
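
Concretely, a sketch of both directions over the same vector:

  (let ((var 2) (coeffs #(3 5 7 9)))
    (values
     ;; upward, carrying a running power of VAR
     (loop for a across coeffs
           for xn = 1 then (* xn var)
           sum (* a xn))
     ;; downward, Horner-style, at the same cost
     (loop with res = 0
           for i from (1- (length coeffs)) downto 0
           do (setq res (+ (svref coeffs i) (* var res)))
           finally (return res))))
  => 113, 113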