From: Sam Steingold
Subject: (APPLY function sequence)
Date: 
Message-ID: <m3k7ipb61t.fsf@loiso.podval.org>
ANSI CL specifies the last argument of APPLY to be a list.
Why not a sequence (i.e., a vector or a list)?
Apparently some implementations (although not CLISP :-) might have a
hard time implementing this efficiently - is there any other reason?

(I think someone mentioned this here recently...)

-- 
Sam Steingold (http://www.podval.org/~sds) running RedHat8 GNU/Linux
<http://www.camera.org> <http://www.iris.org.il> <http://www.memri.org/>
<http://www.mideasttruth.com/> <http://www.palestine-central.com/links.html>
Never let your schooling interfere with your education.

From: Steven M. Haflich
Subject: Re: (APPLY function sequence)
Date: 
Message-ID: <3DEECAFA.8030806@alum.mit.edu>
Sam Steingold wrote:
> ANSI CL specifies the last argument of APPLY to be a list.
> Why not a sequence (i.e., a vector or a list)?
> Apparently some implementations (although not CLISP :-) might have a
> hard time implementing this efficiently - is there any other reason?

I don't think there is any good reason based on language
semantics, but apply is essentially a primitive function
in the language, probably implemented in some lower-level
implementation language and perhaps also opencoded by the
compiler.  The vector type includes not only simple-vectors
= (simple-array t 1) but also displaced adjustable bit
vectors with fill pointers.  Imposing upon the low-level
implementation of the primitive apply, and/or on the
compiler's code generator, to handle this kind of
high-level nonsense would be tedious for cl implementors,
and would likely have unexpected performance characteristics
for application programmers.
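
[A portable workaround already exists, of course: coerce the sequence to a
list before calling APPLY. A minimal sketch (APPLY-TO-SEQUENCE is a
hypothetical helper name, not part of ANSI CL):

```lisp
;; Coerce any sequence (vector, list, string...) to a list, then APPLY.
;; APPLY-TO-SEQUENCE is a hypothetical helper, not a standard function.
(defun apply-to-sequence (function sequence)
  (apply function (coerce sequence 'list)))

(apply-to-sequence #'+ #(1 2 3))   ; => 6
(apply-to-sequence #'+ '(4 5 6))   ; => 15
```

This makes the copy explicit and visible at the call site, rather than
hiding it inside the primitive. -ed.]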
From: Kent M Pitman
Subject: Re: (APPLY function sequence)
Date: 
Message-ID: <sfw65u96jis.fsf@shell01.TheWorld.com>
"Steven M. Haflich" <·················@alum.mit.edu> writes:

> Sam Steingold wrote:
> > ANSI CL specifies the last argument of APPLY to be a list.
> > Why not a sequence (i.e., a vector or a list)?
> > Apparently some implementations (although not CLISP :-) might have a
> > hard time implementing this efficiently - is there any other reason?
> 
> I don't think there is any good reason based on language
> semantics, [...]

Well, sort of.

The callee is allowed in the case of &REST to share structure with 
the caller.  But &REST must provide a list, and so if you supplied 
a vector the implementation would have to copy it out into a list
anyway... or &REST would have to be prepared to receive a sequence 
instead of a list.
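
[To illustrate the sharing point: the spec permits, but does not require, the
&REST list in the callee to be the very list the caller passed to APPLY. A
sketch (COLLECT is an illustrative function, and the result of the EQ test is
implementation-dependent):

```lisp
;; The &REST list may share structure with APPLY's last argument.
(defun collect (&rest args) args)

(let ((spine (list 1 2 3)))
  ;; An implementation is permitted (but not required) to hand SPINE
  ;; itself to the callee, so this form may return either T or NIL.
  (eq (apply #'collect spine) spine))
```

A vector last argument could never be shared this way, since &REST must bind
a list; the implementation would be forced to copy. -ed.]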

> I don't think there is any good reason based on language
> semantics, but apply is essentially a primitive function
> in the language, probably implemented in some lower-level
> implementation language and perhaps also opencoded by the
> compiler.  The vector type includes not only simple-vectors
> = (simple-array t 1) but also displaced adjustable bit
> vectors with fill pointers.  Imposing upon the low-level
> implementation of the primitive apply, and/or on the
> compiler's code generator, to handle this kind of
> high-level nonsense would be tedious for cl implementors,
> and would likely have unexpected performance characteristics
> for application programmers.

Yeah, in general, the entire language thinks that forms are lists, and
by making sequences accepted here, you open the door to ask why
sequences are not accepted everywhere.  Lots of code already expects lists,
and you'd just be breaking all that code by changing anything--and I'm not
sure to what end.  No one has ever asserted lists were the BEST datatype
to use for forms; it is simply expedient to have the decision settled and
everyone know the answer.  The value (in the sense of 'product utility') 
is in the predictability, stability, and regularity, NOT in the bestness.
Changing the type would not be improving things because it would not, on
the whole, improve the predictability, stability, or regularity.  It would
make things less predictable, less stable, and less regular.  What
teensy tiny performance or notational convenience would be gained could not,
in my opinion, possibly justify the disadvantage it would cause in a
decades-old language where this facet has been stable and happy and not
complained about (to several decimal places of percentages anyway) at all
in all that time.

If one were designing another language, of course one would reconsider this.
There's nothing magic about the present choice.  In a new language, there
would be no stability to worry about, and predictability/regularity would
come from the new definition of that new language, so you could make it be
any way that made sense.  But within the context of our language, stability
is enough reason.

The reason clisp would have no problem is presumably its byte-compiled nature,
btw.  Implementations that inline the particular code would have to inline
the tests for multiple possible data reps, which would be harder.  I assume
that a (properly maintained) lisp machine architecture would, like clisp,
also not have a problem, since hardware type dispatch could accommodate vectors
there too with a "mere matter of a chip rev".  But it's worth having the 
language take a broader view of efficiency.  Making it hard to compile
any implementation efficiently makes it easier for someone to get a bad
opinion of lisp, and people who have a bad opinion of lisp are notoriously
bad at remembering what vendor and version it was they had a problem with.
So it creates a problem for clisp and lisp machines even if they implement
it efficiently.

Or so I claim.  My opinion is just that--one person's opinion.