From: Bruno Haible
Subject: Re: Comparison: Beta - Lisp
Date: 
Message-ID: <34t6t1$1ep@nz12.rz.uni-karlsruhe.de>
Lenny Gray <········@netcom.com> wrote:
>
> Also, I was interested in Beta until one minute ago, because of this.
> Are there intrinsic reasons for this that will prevent it from ever
> improving?

I don't think so. If their compiler did more optimizations, the Mjolner
timings certainly would be much closer to the C timings.


                    Bruno Haible
                    ······@ma2s2.mathematik.uni-karlsruhe.de

From: Jeff Dalton
Subject: Re: Comparison: Beta - Lisp
Date: 
Message-ID: <Cw31Dx.6JE@cogsci.ed.ac.uk>
In article <······················@roke.cse.psu.edu> ········@roke.cse.psu.edu (Scott Schwartz) writes:
>······@ma2s2.mathematik.uni-karlsruhe.de (Bruno Haible) writes:
>   Lenny Gray <········@netcom.com> wrote:
>   > Also, I was interested in Beta until one minute ago, because of this.
>
>   If their compiler did more optimizations, the Mjolner
>   timings certainly would be much closer to the C timings.
>
>Contrast with the strategy of the Sather group: their compiler started
>out by doing better than C++ on some microbenchmarks.  That's what it
>takes to win supporters in real life.

There was a Prolog a while back that did better than C for some
numeric stuff.  Did this win supporters?  Nooooooooooo... 
From: Scott Schwartz
Subject: Re: Comparison: Beta - Lisp
Date: 
Message-ID: <SCHWARTZ.94Sep13183351@roke.cse.psu.edu>
····@aiai.ed.ac.uk (Jeff Dalton) writes:
   There was a Prolog a while back that did better than C for some
   numeric stuff.  Did this win supporters?  Nooooooooooo... 

It's a necessary but not sufficient condition.  Sather and C are both in
the Algol vein; crossover is easy.  Prolog is utterly different from C;
performance is almost the least of one's worries.
From: Jeff Dalton
Subject: Re: Comparison: Beta - Lisp
Date: 
Message-ID: <Cw6IGM.G50@cogsci.ed.ac.uk>
In article <······················@roke.cse.psu.edu> ········@roke.cse.psu.edu (Scott Schwartz) writes:
>····@aiai.ed.ac.uk (Jeff Dalton) writes:
>   There was a Prolog a while back that did better than C for some
>   numeric stuff.  Did this win supporters?  Nooooooooooo... 
>
>It's a necessary but not sufficient condition.

I'd say it's neither.
From: Christian Lynbech
Subject: Re: Comparison: Beta - Lisp
Date: 
Message-ID: <LYNBECH.94Sep15223604@xenon.daimi.aau.dk>
>>>>> "Jeff" == Jeff Dalton <····@aiai.ed.ac.uk> writes:

Jeff> In article <······················@roke.cse.psu.edu>
Jeff> ········@roke.cse.psu.edu (Scott Schwartz) writes:
>> ····@aiai.ed.ac.uk (Jeff Dalton) writes: There was a Prolog a while
>> back that did better than C for some numeric stuff.  Did this win
>> supporters?  Nooooooooooo...
>> 
>> It's a necessary but not sufficient condition.

Jeff> I'd say it's neither.

Are we about to open the `C vs. Lisp' thread again, with BETA on the
side?

(for those puzzled: this has been a raging debate for the last couple
of months in comp.lang.lisp)


------------------------------------------------------------------------------
Christian Lynbech               | Hit the philistines three times over the 
office: R0.33 (phone: 3217)	| head with the Elisp reference manual.
email: ·······@daimi.aau.dk	|        - ·······@hal.com (Michael A. Petonic)
------------------------------------------------------------------------------
From: Scott McLoughlin
Subject: Re: Comparison: Beta - Lisp
Date: 
Message-ID: <os2Psc1w165w@sytex.com>
·······@xenon.daimi.aau.dk (Christian Lynbech) writes:

> Jeff> I'd say it's neither.
> 
> Are we about to open the `C vs. Lisp' thread again, with BETA on the
> side?
> 
> (for those puzzled: this has been a raging debate for the last couple
> of months in comp.lang.lisp)
> 
> 
> 

Howdy,
        Sure, let's open it up again ;-) But no, really -- skip all this
talk of "realtime" this and that and concerns about competing with
Fortran on floating point.  I'm still _VERY_ curious (concerned?)
about why Lisp isn't more popular in "the trenches". Go look at
the Borland C++ compiler's output in large model (typical Windows
app) - not too slick. Now go run a typical Windows app (or Unix
workstation app for that matter).  It pages like all get out when
you open a window or switch to another task. Now go install a
Windows _personal utility app_ -- 40 meg disk footprint or more.
We're not talking big DB servers, just a nice word processor or
spreadsheet.
        So why don't folks use Lisp to write this stuff? Blazing
speed, space, etc. ain't that critical. What gives?

=============================================
Scott McLoughlin
Conscious Computing
=============================================
From: Patrick D. Logan
Subject: Re: Why do people like C?
Date: 
Message-ID: <patrick_d_logan.100.00101479@ccm.jf.intel.com>
In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:
>I'd suggest:

>1. Unix.
>2. Pascal is too restrictive.
>3. Positive feedback effects.

My guess: (nothing else to talk about so,...)

I think there is peer pressure to understand and use C. Everyone else does, so 
it would be bad not to do so as well.

Lisp has a reputation of being different and not widely accepted for various 
reasons. Therefore it is acceptable not to understand and use Lisp.
From: Jeff Dalton
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <CwFxAA.M34@cogsci.ed.ac.uk>
In article <·····················@cabell.vcu.edu> ·······@cabell.vcu.edu (Adrian L. Flanagan) writes:
>·····@aero.org (John Doner) writes:
>
>>In article <············@sytex.com>, Scott McLoughlin <····@sytex.com> wrote:
>>>I'm still _VERY_ curious (concerned?)
>>>about why Lisp isn't more popular in "the trenches".
>>...
>>>        So why don't folks use Lisp to write this stuff? Blazing
>>>speed, space, etc. ain't that critical. What gives?
>
>[long abstract theory deleted]
>
>>I invite criticism of this theory.
>
>>John Doner
>
>I must strenuously disagree with the original poster.  "Blazing
>speed, space, etc." are that critical. 

Then why are so many things so large and slow?  Sure, there are
some cases where speed, space, etc are critical, but there must
be many others where they aren't.  I think you are right to an
extent, but it can't be the whole story.

>Particularly in the PC DOS
>world with its 640K restriction, program size and efficiency of
>compiled code made a tremendous market difference in acceptance of
>early commercial programs.  Programmers writing in C had a large
>advantage over programmers using the early Lisp systems, [...]

>The (relative) failure of Lisp has everything to do with Lisp
>vendors' failure to understand (even now) the needs of their
>marketplace.  Call it Ivory Tower Syndrome.

Did anyone really think Lisp would occupy the place C now has?
If so, they sure went about it in a bizarre way!

Most Common Lisp vendors at least did not seem to see the PC DOS world
as their market.  (Or, again, if they did, they approached it in a very
strange way.)  There is a market that commercial Common Lisps served
fairly well.  It was more restricted than it could have been, even if
we look only at reasonably powerful "workstations".  Perhaps the vendors
didn't realize how much people would still want to do Unix stuff
rather than just live in the Lisp World.  I don't know.

A strange thing is that it sometimes looks like only success in the
PC market counts at all.  The PC market is a rather odd place.
The OS technology would have been laughed out of town in the 70s.
And yet people put up with 640K restrictions, no virtual memory,
no proper multi-tasking, etc, for ages.  I think it's reasonable
that someone might not have predicted that things would develop
the way they did in the PC market, much less that the PC market
would start to dominate other markets (at least so far as
perception of success is concerned).

Moreover, I think it's surprising that _any_ language has so
dominant a position.

-- jeff
From: Cyber Surfer
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <780164672snz@wildcard.demon.co.uk>
In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk "Jeff Dalton" writes:

> >I must strenuously disagree with the original poster.  "Blazing
> >speed, space, etc." are that critical. 
> 
> Then why are so many things so large and slow?  Sure, there are
> some cases where speed, space, etc are critical, but there must
> be many others where they aren't.  I think you are right to an
> extent, but it can't be the whole story.

The problem is that real Lisp systems don't compete well with
C/C++ systems. It doesn't matter that they _could_, it only
matters that they don't do it well enough. I blame it on byte
counting, but that doesn't help much.

> A strange thing is that it sometimes looks like only success in the
> PC market counts at all.  The PC market is a rather odd place.

I agree. (Oh no, not again! (-; ) It's the market that gets the
highest publicity. I have a friend who regularly slags off the
Mac, which is a machine he doesn't use. It may not be #1 on the
list of popular machines, but it's _there_. He doesn't see it
that way, of course. He has a nasty habit of attending those
multimedia events that Microsoft like to organise, especially when
Bill Gates makes an appearance.

Just blame the media and strong marketing. :-)

> The OS technology would have been laughed out of town in the 70s.
> And yet people put up with 640K restrictions, no virtual memory,
> no proper multi-tasking, etc, for ages.  I think it's reasonable
> that someone might not have predicted that things would develop
> the way they did in the PC market, much less that the PC market
> would start to dominate other markets (at least so far as
> perception of success is concerned).

Someone has a wonderful quote in their sigfile:

"640K outta be enough for  anyone" -- Bill Gates.

No. You can never have enough memory. It's obvious why. At
the very least, they'll want to sell you software with more
features, and features eat memory. Microsoft have been very
good to people who make RAM chips...

> Moreover, I think it's surprising that _any_ language has so
> dominant a position.

Agreed. "There can only be one." Once it is there, why should it
change? Think about when it changed, and why. For micros, it was
when we switched from 8-bit machines to 16-bit. I used a C compiler
on an 8-bit machine, but then, I was odd like that. I didn't hear
about C being used to write best-selling apps until much later.

Martin Rodgers
-- 
Future generations are relying on us
It's a world we've made - Incubus	
We're living on a knife edge, looking for the ground -- Hawkwind
From: Cyber Surfer
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <780163663snz@wildcard.demon.co.uk>
In article <·····················@cabell.vcu.edu>
           ·······@cabell.vcu.edu "Adrian L. Flanagan" writes:

> The (relative) failure of Lisp has everything to do with Lisp
> vendors' failure to understand (even now) the needs of their
> marketplace.  Call it Ivory Tower Syndrome.

I agree about the need for speed and a small "footprint". At one
time, all the reviews of C compilers that I found would measure
the number of bytes in a "Hello, World" program. This was to see
how much library overhead there was, which means very little when
you see object code sizes of more than a megabyte. The object code
for many leading apps these days tends to be at least this big.

The IDDE, the graphical front end for the C++ compiler I use, is
about 1 MB, and that's not counting the compiler and linker etc,
which are in their own DLL files. My machine can barely run the
IDDE, as it only has 8 MB of RAM. It does not run well, and yet
this is nothing compared to VC++.

Someone once said that "Lisp programmers know the value of
everything, and the cost of nothing". This may be a gross
generalisation, but there may be some truth in it. I might say
that C programmers know the cost of everything, and the value
of nothing, and that would also be a gross generalisation. There
also might be some truth in it. Think of those byte-counting
compiler reviews. Now imagine if Lisp systems were reviewed with
the same attention to object code size.

I know, there's a different culture. In Lisp, you might not want
stand-alone programs. You might simply call a function, instead
of launching an application. It might even look the same! That
was how Smalltalk-80 was intended to be used, but today, a modern
Smalltalk works a little differently. Smalltalk/V is a good example
of what I mean, and yet it is still judged by the standards set
by C programmers, who are still counting bytes.

Also note how the user interface ideas, like the desktop metaphor,
were taken from Smalltalk, but the idea of all code being part of
the system and not making a special distinction between system
code and app code wasn't adopted by Apple, Microsoft, etc.

If you underestimate that cultural difference, you may have to
pay for it. I hope you don't, but I can't find as many jobs
offered for Smalltalk/Lisp/Prolog as there are for C/C++/VB.
I could just be looking in the wrong places, but I would still
advise any vendor to look closely at what makes their product
different from, let's say, Microsoft's.

Perhaps Apple have thought about this. The design of the Dylan
language suggests to me that they have. I hope that it succeeds,
as I'm told that Smalltalk programmers are paid more than C/C++
programmers, and Dylan might well do better than Smalltalk as
a pure object-oriented language. ;-)

Martin Rodgers
-- 
Future generations are relying on us
It's a world we've made - Incubus	
We're living on a knife edge, looking for the ground -- Hawkwind
From: Mike Fischbein
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <CwJ82u.7nL@csfb1.fir.fbc.com>
John Doner (·····@aero.org) wrote:
: In article <············@sytex.com>, Scott McLoughlin <····@sytex.com> wrote:
: >I'm still _VERY_ curious (concerned?)
: >about why Lisp isn't more popular in "the trenches".
: ...
: >        So why don't folks use Lisp to write this stuff? Blazing
: >speed, space, etc. ain't that critical. What gives?

:   None of them can explain why C won out over Pascal,
: for example.

Pascal has many limitations imposed in an attempt to mandate "good"
coding practices on beginners; C removes those limitations.  Many
Pascal compilers remove some of them, but use of those extensions
renders code non-portable.

:   My latest theory is that the answer lies in cognitive
: effects arising from the conception and structure of the language.
: People make up mental models of how things work, and interpret the
: programs they write in terms of those models.  For experienced
: programmers, compiler writers perhaps, these models are complete and
: accurate, closely corresponding to the objective reality.  Novice
: programmers have poor models that are incomplete, poorly related to the
: actual computing machines, and perhaps even inconsistent.

This is an excellent basis for the discussion.

:   The
: intellectual effort required to develop a good model for Lisp or Ada is
: much greater than that required to develop one for C.

This sentence I must disagree with completely.  Lisp can be
conceptualized more easily than most computer languages, certainly the
ones under discussion.  One might leave out significant chunks of the
language, but that is frequently what novice programmers do -- in whatever
language they work in.

:   There are more
: abstractions involved.  Thus, C is more easily comprehended by
: inexperienced programmers.

Which renders this conclusion invalid.  There are not more abstractions
involved; rather, the abstractions involved in Lisp are *different* from
those involved in C, and *different* from those most novice programmers
have encountered.

C's conceptual machine model is similar to that of the most common
beginner/teaching languages, BASIC and Pascal, without many of the
limitations of those languages.  C's conceptual machine model is also
similar to most common CPUs.  This makes it easier for novice
programmers, who've been working in BASIC and studying the 8086
instruction set, to map their conceptualizations to C than to Lisp.
Similar handwaving for Pascal and 68000.

A programmer who can work comfortably with C, BASIC, Pascal, Fortran,
et al, has really learned one conceptual machine with different (and
varying amounts of) syntactic sugar.  This makes it easy to shift from
one Algol-like language to another; having learned BASIC or Pascal, the
novice programmer finds C to be an extension of already known
concepts.  Lisp presents a different way of thinking about the problem
that does not fit comfortably with what the novice already "knows"
about programming.

Lisp, Forth, Smalltalk, awk, and APL (and other non-Algol-like
languages) all have different conceptual machines.  All require
significant shifts in the way the programmer thinks about solving the
problem at hand (compared to the Algol-like family).  Gross conceptual
shifts are much more difficult for most people than the relatively
minor syntactical shifts required for staying in a single language
family.  This applies to Lisp also; it is easier for a hypothetical
programmer who knows only KCL, say, to learn Scheme than it would be
for that same individual to learn C. But beginning programmers
generally work with simple languages usually designed for beginners,
such as BASIC, Pascal, Shell and Rexx; and these languages are almost
all part of the Algol family.

	mike
From: Geoffrey P. Clements
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <gclements-2309940758110001@155.50.21.58>
Hi,

Pardon me for jumping in the middle of this discussion. The question I
have is: what is Lisp used for? Is there anyone out there using Lisp to
develop commercial applications? The only application I've ever seen
written in Lisp "in the real world" was a FORTRAN to Ada translator. You
couldn't buy it. You gave the company your FORTRAN code and they gave you
back Ada code.

I've heard all the reasons why Lisp is such a great language, but no one
seems to be using it for anything but research projects. (I think. Correct
me if I'm wrong.)

I've played with a few small Lisps. Power Lisp and xLisp for the
Macintosh. I don't see what use they are over Metrowerks CodeWarrior C++
for developing useful applications. (Read commercially saleable.)

geoff
From: J W Dalton
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <CxBFF1.28C@festival.ed.ac.uk>
············@wildcard.demon.co.uk (Cyber Surfer) writes:

>In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk "Jeff Dalton" writes:

>> There are 32 bits in a longword at some location.  They can be
>> interpreted as an int, as a float, as a pointer, etc.  The type
>> of the var that corresponds to that location says what they are.

>Assuming a CPU with a 32bit word for longs, of course. That implies
>that an int would be 16bit. You're also assuming that a long will
>be the same size as a pointer. I never make explicit assumptions
>like that in my C code, as it will be wrong on some machines.

I know all that.  But writing something that's exactly right
in general, rather than describing a simple example, would take
too long.  I assumed people could vary it to suit.

Now, in fact, 32 bits in a longword does not imply 16-bit ints.
Think of a VAX.  "Longword" is VAX terminology.  "Word" continued
to mean 16 bits, as on a PDP-11.

>> That's all I meant anyway.  Again I meant in C, not CL.  Or,
>> rather, in the model used to understand C.

>In one model used to understand C.

Well, what are the hardware models people keep talking about?
I still await an explanation.

> Sadly, I have to use more than
>one model, as some CPUs are rather odd. The Intel x86 family, for
>example. ;-) I usually use "real mode", but when I can, I prefer
>to use the 32bit protected mode. The sizes of pointers and ints
>are different, depending on the CPU mode chosen for the program,

The idea that hardware fits C seems rather odd to me.  For
instance, Basic's "if <cond> then <line number>" is closer to
what the hardware's like.

Note too that the hardware's data is (usually) typeless.  Whether
some bits are a float or a pointer depends on what instructions
are used to operate on them.  This is similar to what happens
in an "everything is a list" Lisp.  This analogy was frequently
made in the past, but went out of favor as the various freedoms
of assembly code (self-modifying, typeless) were increasingly
seen as "bad".

>CL hides all of this from the programmer. An implementation might
>reveal some of it, but I've yet to use a Lisp that does that. If
>CL code run by CLISP can distinguish between 32bit "flat" pointers
>and 32bit far segmented pointers, I've yet to find a way to do it,
>and I'd be very surprised if it could use both.

C code can't distinguish either.  All the type info is compiled
away, as you no doubt know.  I don't really understand what you're
getting at here.

>> The ANSI doc is on the net, somewhere.  I think Barmar said where
>> a short while back.

>Yes, but 3 MB is too big for me. What file format is it in, anyway?

LaTeX and .dvi.

>> That's interesting.  In my experience, you could typically redim the
>> array but only within its original total size.  So only the address
>> calculation, not the size, was dynamic.

>I don't remember that feature, but it's still only a variation of
>static allocation. If I understand what you're saying correctly.

Just so.  What I find interesting is that arrays were treated so
differently from strings.

>> For some reason, a number of languages (C, some Basics, some Lisps,
>> ...) treat arrays in a more "static" way than similar types (structs
>> in C, strings in Basic).

>With Basic, it really depends on the compiler. There are so many
>dialects that I've never seen two compatible systems. With CL, I
>might distinguish between the language and the compiler, but with
>Basic, I could only do that with ANSI Basic and an implementation.
>I've yet to ever use an implementation of ANSI Basic, so I can't
>comment on it.

There was always a language description that could have been
followed: Basic Nth edition, from Dartmouth.  This is roughly
analogous to the Lisp 1.5 book plus various manuals for particular
Lisp languages/implementations.  But for both Basic and Lisp,
implementors seldom felt constrained to follow such descriptions.
I find this interesting as well and would be interested in comments.

I suspect that the ease of implementing Basic and Lisp was an
important factor.  That is, it was fairly easy to implement a
Basic or a Lisp.  Why try to match another exactly when you
could make your task easier -- or add your own neat features --
by inventing a variant?

>> I find it difficult to judge how much difference it makes to learn
>> assembler.  I did learn an assembly language fairly early on, and
>> I'm not sure how difficult things would have been if I hadn't.

>I wouldn't want anyone to begin to learn programming at that
>level. I'm currently reading Roger Bailey's Hope tutorial, and
>the idea of teaching programming with Hope is one that appeals
>to me a lot. It might not have features like arrays, but that
>might actually _help_ the teaching process, rather than hinder it.

But some people have, evidently, encountered hardware-level
stuff rather early on.  I haven't seen this Hope tutorial
(is it available on the net?) but it sounds like the kind
of thing that appeals to the mathematically inclined but
perhaps not the raised-on-hardware types?

>> But I'd be surprised if novices typically learned such things
>> at / near the start.

>The point where I learned about the CPU was when I had to.
>I was beginning to outgrow Basic, and the "Basic" model was
>certainly holding me back. It was difficult to imagine what
>the machine was doing.

Why did you care what the machine was doing?  I'm not saying
you shouldn't, just asking the reasons.

I'm glad I learned assembler for a couple of machines, but I
don't have much desire to do the same again, and I've never
cared very much about the hardware details (logic circuits
and the like).

>Curiously, I could do all of that in Cambridge Lisp, on my 3rd
>machine. Not only that, but the code performed as fast as if
>I'd written it in C. That Lisp had no trouble interfacing to
>the OS, but then, it had a decent FFI, while the Basic on my
>first machine didn't. 

Can you remember what the FFI looked like?  Could you manipulate
C structs in Lisp or what?

BTW, I find it amusing that "foreign" seems to mean "C"
these days (though perhaps not in _your_ usage).

-- jeff
From: Cyber Surfer
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <781778400snz@wildcard.demon.co.uk>
In article <··········@festival.ed.ac.uk>
           ····@festival.ed.ac.uk "J W Dalton" writes:

> I know all that.  But writing something that's exactly right
> in general, rather than describing a simple example, would take
> too long.  I assumed people could vary it to suit.

We're both being pedantic. ;-)
 
> Now, in fact, 32 bits in a longword does not imply 16-bit ints.
> Think of a VAX.  "Longword" is VAX terminology.  "Word" continued
> to mean 16 bits, as on a PDP-11.

That's my point. I try to make as few assumptions as possible.
When it's necessary, I prefer to document the assumptions.

> Well, what are the hardware models people keep talking about?
> I still await an explanation.

I try to avoid hardware models, as I prefer more abstract models
of machine behaviour to exact specifications of "word" lengths etc.

> The idea that hardware fits C seems rather odd to me.  For
> instance, Basic's "if <cond> then <line number>" is closer to
> what the hardware's like.

Yes, and now fewer and fewer Basics use line numbers. The model used
for the "Hope Machine" by Roger Bailey in his hope tutorial avoids
a lot of aspects of a CPU "model", and goes for something much more
abstract.

> Note too that the hardware's data is (usually) typeless.  Whether
> some bits are a float or a pointer depends on what instructions
> are used to operate on them.  This is similar to what happens
> in an "everything is a list" Lisp.  This analogy was frequently
> made in the past, but went out of favor as the various freedoms
> of assembly code (self-modifying, typeless) were increasingly
> seen as "bad".

This also depends on how close your model is to a model of a CPU.
A register might be typeless, but you could also think of it as
a bit-value, as in BCPL or Forth. Some languages allow a more
abstract model of objects. In a suitably strongly typed language,
you could still use types at the lowest level in the model.

> C code can't distinguish either.  All the type info is compiled
> away, as you no doubt know.  I don't really understand what you're
> getting at here.

I'm saying that the nature of the pointers, segmented or "flat",
is not a language feature. It's usually a CPU feature. That makes
it possible to write code in C that will work with either kind
of pointer, but it also allows you to write code that depends on
a single kind of pointer. Can the same thing be done in CLISP,
for example? XLISP? MIT Scheme? I don't know. I don't even know
if you'd need to, but I certainly don't want to.

> >Yes, but 3 MB is too big for me. What file format is it in, anyway?
> 
> LaTeX and .dvi.

Thanks. I'm not able to read either of those formats yet.

> Just so.  What I find interesting is that arrays were treated so
> differently from strings.

In the early Basics that I've used, there could be a _lot_ of
variation from one implementation to another. They were often
different dialects, as well as different compiler/interpreters.

> Lisp languages/implementations.  But for both Basic and Lisp,
> implementors seldom felt constrained to follow such descriptions.
> I find this interesting as well and would be interested in comments.

So would I. What pressure is there today to conform to a
standard (for whatever language you choose, but let's choose
Lisp for the moment)?

It seems that CL vendors are still adding features that aren't
in the ANSI CL standard, and some of those features, like the
LOOP extensions, have been added to the standard. At what point
does this stop, if ever? What happens to CL if vendors move on,
and the standard doesn't? I could also ask who is still using
Standard Lisp today, but Cambridge Lisp is the only implementation
I've seen so far.

> I suspect that the ease of implementing Basic and Lisp was an
> important factor.  That is, it was fairly easy to implement a
> Basic or a Lisp.  Why try to match another exactly when you
> could make your task easier -- or add your own neat features --
> by inventing a variant?

Exactly. It was similar for Forth. I used Hyper Forth+ for a
while, and it was _very_ non-standard, even a few years after
Forth-79 appeared. I showed it to a salesman who sold PolyForth,
and he looked a little sick. It must have been the multitasking
demo, which was rather slow on an 8 MHz 68K! I don't think he
was impressed.

Still, it was a fine development system, as long as you were
happy with all the differences from the "standard" dialects.
(Forth-79 was just the official one, but Fig-Forth was still
popular at that time.)

> But some people have, evidently, encountered hardware-level
> stuff rather early on.  I haven't seen this Hope tutorial
> (is it available on the net?) but it sounds like the kind
> of thing that appeals to the mathematically inclined but
> perhaps not the raised-on-hardware types?

Bailey has a tutorial on the WWW, but I haven't yet compared
it with his book. Here're URLs for the tutorial and for Hope:

ftp://santos.doc.ic.ac.uk/pub/papers/R.Paterson/hope/hope_tut/hope_tut.html
http://santos.doc.ic.ac.uk/pub/papers/R.Paterson/hope/index.html

> Why did you care what the machine was doing?  I'm not saying
> you shouldn't, just asking the reasons.

It was a 16K TRS-80. You had to learn that stuff to do anything
interesting, even in Basic. Even when I expanded the machine to
48K and added some floppy drives, it was still necessary to know
about the CPU and play with hex codes, just to do some tasks
that had little to do with programming.

> I'm glad I learned assembler for a couple of machines, but I
> don't have much desire to do the same again, and I've never
> cared very much about the hardware details (logic circuits
> and the like).

I know how you feel! Life is too short for that. I don't mind
reading about it, or even getting my hands dirty when there's
no other way of doing something, but it's so easy to avoid all
that these days. I suspect that the only reason I _still_ need
to get down to that level is because not everyone thinks that
programmer time can be used better.

For example, I might still find myself using a list of hex
addresses for C functions in order to discover when a program
crashed. I can't just use the symbolic name without also adding
the symbolic info to a binary, and then running it with a
debugger. I even used to have to read mangled C++ names,
until a certain linker was updated so it could do that itself.
It now gives me error msgs with unmangled names, which makes
life so much easier.

> Can you remember what the FFI looked like?  Could you manipulate
> C structs in Lisp or what?

I recall that it could do all that, perhaps using something
like PEEK and POKE. I know that that's how it was done in
Basic on my first machine, and the code was called with a
function called something like USR, with the entry address
as the argument. It looked a lot cleaner in Cambridge Lisp,
of course.

> BTW, I find it amusing that "foreign" seems to mean "C"
> these days (though perhaps not in _your_ usage).

As I'm currently using Windows, this is easy. It has a standard
linkage for PASCAL and C. Most of Windows uses the PASCAL linkage.
Actually linking addresses at runtime is done by Windows itself,
which makes it as simple as you could hope for it to be. The only
problems that I know of seem to be that Borland and Microsoft
can't agree on how to return objects like doubles from functions,
but there's a simple "fix" for that - just look at what the
machine does at the stack level, and alter the prototype for
your function appropriately. It's not pretty, but it's simple
and it works, apparently.
-- 
"Internet? What's that?" -- Simon "CompuServe" Bates
http://cyber.sfgate.com/examiner/people/surfer.html
From: Jeff Dalton
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <CwJ88z.K2E@cogsci.ed.ac.uk>
In article <··········@relay.tor.hookup.net> ·····@RedRock.com (Bob Hutchison) writes:
>>              My latest theory is that the answer lies in cognitive
>>effects arising from the conception and structure of the language.
>>People make up mental models of how things work, and interpret the
>>programs they write in terms of those models.  [...]  The
>>intellectual effort required to develop a good model for Lisp or Ada is
>>much greater than that required to develop one for C.  There are more
>>abstractions involved.  Thus, C is more easily comprehended by
>>inexperienced programmers.
>
>Interesting theory here, but I don't think I agree.  In my experience, novice
>programmers are so caught up in the details that they cannot make
>useful abstractions at all.  This is the kind of thing you would expect
>of anyone learning something (e.g. many sports, especially team sports).
>I think that you are right that C is easier for them to comprehend, but
>I don't think it is because it has easier abstractions.  I think it is because
>they can use the computer hardware itself as C's abstraction, that is,
>use a concrete thing as an abstraction -- what an illusion :-)  I guess
>that I think you are basically right that the novice has a difficult time
>forming a useful understanding of how the language works, but I think
>this is difficult for any language.  I think that in C's case the novice can
>cheat.

I thought your (Bob Hutchison's) article excellent, and I don't want
to give the opposite impression by disagreeing with part of it.  Also,
I like the idea of using a concrete thing as an abstraction, though I
suspect that many people have a somewhat abstract model of the hardware.

However:

  (a) Novices don't necessarily know all that much about the hardware;
  (b) Novices (e.g. children even back in the days before they grew up
      with video games) have found it fairly easy to learn languages
      that aren't so close to the hardware (e.g. LOGO, Basic).
  (c) There are reasonably simple hardware-based models that work
      for Lisp.

This makes me question whether C wins because novices can use
the hardware as a "cheat".

It's important to bear in mind that some Lisps -- e.g. Common Lisp,
InterLisp -- are large and full of rather complicated stuff while
other Lisps are very simple.  They're smaller and simpler than C; and
Lisp implementations tend to be interactive, which makes it easier to
try things out.  It's also easy to set up Lisp to use the "just run
it" Basic approach.

Nonetheless, I think that in practice Lisp *is* often hard to learn.
I'm not sure I can say whether it's easier or harder than C.  It
would depend, for one thing, on how much of C and how well it must
be understood, and on how much of which Lisp.

Anyway, in my view the following factors are responsible for much
of the difficulty:

  (1) The fully parenthesized prefix syntax.
  (2) Peculiar, unfamiliar names such as car, cdr, cond, and lambda.
  (3) Hard topics such as recursion that tend to be mixed in with
      learning Lisp.
  (4) Confusing presentations of eval, quote, and "evaluates its
      arguments" that make the question of what gets evaluated
      seem much harder than it is.  (The syntax also contributes
      to this, because it's so uniform.)  A short example follows
      this list.
  (5) Teaching that has a mathematical flavour and emphasises the
      functional side of Lisp.  This is great for some students but 
      makes Lisp much harder for others.  E.g. box-and-arrow diagrams
      are tied to the discussion of mutation, and hence aren't
      available when people are first trying to figure out what lists
      are.  (A number of odd models can result from this.)
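
To illustrate (4): the rule is simpler than the presentations make
it sound.  QUOTE suppresses evaluation, EVAL forces it, and that's
about all there is to it.  (A made-up interaction:)

  (+ 1 2)           ; => 3        -- arguments evaluated, + applied
  '(+ 1 2)          ; => (+ 1 2)  -- QUOTE returns the form unevaluated
  (eval '(+ 1 2))   ; => 3        -- EVAL evaluates the quoted list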

Some of these are already questions of how Lisp is taught.  Others,
such as the fully parenthesized syntax, require more care in
presentation than they often receive.  It will also be interesting
to see how much difference it makes to change the syntax (as in
Dylan).

-- jeff
From: Bob Hutchison
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <35urdq$31p@relay.tor.hookup.net>
In <··········@cogsci.ed.ac.uk>, ····@aiai.ed.ac.uk (Jeff Dalton) writes:
>In article <··········@relay.tor.hookup.net> ·····@RedRock.com (Bob Hutchison) writes:
>>>              My latest theory is that the answer lies in cognitive
>>>effects arising from the conception and structure of the language.
>>>People make up mental models of how things work, and interpret the
>>>programs they write in terms of those models.  [...]  The
>>>intellectual effort required to develop a good model for Lisp or Ada is
>>>much greater than that required to develop one for C.  There are more
>>>abstractions involved.  Thus, C is more easily comprehended by
>>>inexperienced programmers.
>>
>>Interesting theory here, but I don't think I agree.  In my experience, novice
>>programmers are so caught up in the details that they cannot make
>>useful abstractions at all.  This is the kind of thing you would expect
>>of anyone learning something (e.g. many sports, especially team sports).
>>I think that you are right that C is easier for them to comprehend, but
>>I don't think it is because it has easier abstractions.  I think it is because
>>they can use the computer hardware itself as C's abstraction, that is,
>>use a concrete thing as an abstraction -- what an illusion :-)  I guess
>>that I think you are basically right that the novice has a difficult time
>>forming a useful understanding of how the language works, but I think
>>this is difficult for any language.  I think that in C's case the novice can
>>cheat.
>
>I thought your (Bob Hutchison's) article excellent, and I don't want
>to give the opposite impression by disagreeing with part of it.  Also,
>I like the idea of using a concrete thing as an abstraction, though I
>suspect that many people have a somewhat abstract model of the hardware.

Thanks, and I don't mind being disagreed with, I am quite used to it :-)

>
>However:
>
>  (a) Novices don't necessarily know all that much about the hardware;
>  (b) Novices (e.g. children even back in the days before they grew up
>      with video games) have found it fairly easy to learn languages
>      that aren't so close to the hardware (e.g. LOGO, Basic).
>  (c) There are reasonably simple hardware-based models that work
>      for Lisp.

I don't know that it matters that their model of the hardware is abstract.
The model they have seems actually quite good at predicting what the
hardware will do (CPU here, not IO devices so much).  If they can then
translate that predictive model into a 'C' model, then they stand a
chance of predicting what a C program will do.

Children seem to be an exception to everything to do with learning
(do you have kids?).  They seem to be better at learning.  If we think
of adults learning, we see that they do it a bit differently than
children.  Adults try to relate new things to things they already know
or understand.  Children don't have that luxury if they are young,
and don't seem to need that technique so much.  If this is a valid
understanding of how many adults learn, then the aid of a simple
hardware model, translated into C, that can help them predict what
a program will do might well be found useful.

Use of a hardware model to aid in learning a programming language
would apply to most languages.  I don't doubt that there is a suitable
hardware model to explain lisp, but I don't think it is the same one.
Unfortunately the one available to a C programmer is the one taught,
at least where I went to school.

The other difficulty with languages like the lisps and other high-level
languages is that they provide a fair bit of support for the
development of 'software'.  I wonder what a hardware model of
a continuation in scheme or ml would look like, or a non-deterministic
program written using them?  What is the hardware model for an
abstract data type for that matter?  What is the hardware model
for a CL macro?  (this macro idea is one that seems to be something
that a C programmer has an awful time comprehending, possibly
it is just an 'I don't believe you' problem rather than a 'what would
I do with it?' problem)

>
>This makes me question whether C wins because novices can use
>the hardware as a "cheat".
>
>It's important to bear in mind that some Lisps -- e.g. Common Lisp,
>InterLisp -- are large and full of rather complicated stuff while
>other Lisps are very simple.  They're smaller and simpler than C; and
>Lisp implementations tend to be interactive, which makes it easier to
>try things out.  It's also easy to set up Lisp to use the "just run
>it" Basic approach.

Scheme is relatively new to me; I assume that it is one of the simpler
lisps you refer to.  While it is a nice simple clean language that I find
rather appealing, it supports a programming style that, in my opinion,
is fundamentally a software oriented style, not a hardware one.

Is there a simple model of a scheme 'machine' that would allow
someone to predict behaviour of the software?  I would have thought
that scheme is its own best model.  Wasn't that kind of the point of
scheme?

My first reaction to the 'just run it' approach to lisp was a bit negative.
But when you think about it, 'just running' lisp is probably not much
different than the kind of C programming we get.  It also holds the
promise that as the programmer gains experience the other aspects
of lisp become available.

>
>Nonetheless, I think that in practice Lisp *is* often hard to learn.
>I'm not sure I can say whether it's easier or harder than C.  It
>would depend, for one thing, on how much of C and how well it must
>be understood, and on how much of which Lisp.

I can tell you that my problem with CL was finding a subset of it
that I could do something with.  CLtL was not much use for that.
It wasn't until I came across Paul Graham's "On Lisp: ..." that things
'switched on' with CL.  Scheme was much easier.

>
>Anyway, in my view the following factors are responsible for much
>of the difficulty:
>
>  (1) The fully parenthesized prefix syntax.
>  (2) Peculiar, unfamiliar names such as car, cdr, cond, and lambda.
>  (3) Hard topics such as recursion that tend to be mixed in with
>      learning Lisp.
>  (4) Confusing presentations of eval, quote, and "evaluates its
>      arguments" that make the question of what gets evaluated
>      seem much harder than it is.  (The syntax also contributes
>      to this, because it's so uniform.)
>  (5) Teaching that has a mathematical flavour and emphasises the
>      functional side of Lisp.  This is great for some students but 
>      makes Lisp much harder for others.  E.g. box-and-arrow diagrams
>      are tied to the discussion of mutation, and hence aren't
>      available when people are first trying to figure out what lists
>      are.  (A number of odd models can result from this.)
>

Most of these points are illustrations of what I mean by support
for 'software'.  These are software ideas not hardware.  Though, I
should mention that in my case car, cdr, cond were no problem
at all (even in 1976) -- they were unique names for things so if
anything they reduced confusion.  Lambda still seems to imply
more than it does, something deep that isn't really there, and so
is a bit distracting :-)

>Some of these are already questions of how Lisp is taught.  Others,
>such as the fully parenthesized syntax, require more care in
>presentation than they often receive.  It will also be interesting
>to see how much difference it makes to change the syntax (as in
>Dylan).
>

This will be interesting. Though what about ML?  It has been around
for a while now; what is the experience with that?

I think that this discussion is interesting, and possibly even useful,
but the real issue with lisp is social and political.

--
Bob Hutchison, ·····@RedRock.com
RedRock, 135 Evans Avenue, Toronto, Ontario, Canada M6S 3V9
(416) 760-0565
From: Bailin Jeremy
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <CwLL9D.vB@cdf.toronto.edu>
I've noticed that the old paradigm of learning spoken languages
(English, French, Japanese, etc.) seems to be equally valid with
programming languages, ironically enough. Basically, learn as many
different ones as
possible while you're still young. <grin> People have difficulty moving from
one type of language paradigm (say C) to a very different one (say Lisp),
whereas both can be learned quite well early on.
On the other hand... Every now and then I look back at my old Basic code...
and shudder. :-)=


     _________________________________
   / "I took a drink of holy water     \   Jeremy Bailin
 / It tasted like the pipes were rusty   \ 
| I listened to the words of wise men     |   SVP111     BAILIN.92B
| It sounded like their words were dusty" |   ········@cdf.utoronto.ca
 \                - Cause & Effect       /  
   \___________________________________/   MCMXCIV
From: Cyber Surfer
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <781453181snz@wildcard.demon.co.uk>
In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk "Jeff Dalton" writes:

> In any case, someone learning Lisp doesn't have to learn everything
> at once, so I don't see why closures have to be an obstacle.

Agreed. In fact, the 2nd ed of Winston and Horn avoids some of the
CL semantics of closures. This was probably so that their code could
work with Golden CL, which I believe used dynamic scoping at the time.
Is that an example of what we were saying about closures, if GCL didn't
support them?
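
For anyone who hasn't met them, here's a minimal sketch of what
lexical closures buy you (MAKE-COUNTER is made up, not from W&H):

  (defun make-counter ()
    (let ((n 0))
      #'(lambda () (setq n (+ n 1)))))

  (setq c (make-counter))
  (funcall c)  ; => 1
  (funcall c)  ; => 2

Each counter keeps its own N. Under dynamic scoping, as GCL
reportedly used, the LAMBDA would instead see whatever N was
current at the time of the call.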

I wonder if AutoLisp supports closures? It wouldn't have to, even
if it would be handy if it did. I've seen an embedded language in
another CAD program that didn't support closures, and I can't even
recall if it used lists.

> I find this very odd.  Doing calculations on paper gives a useful
> model for C but not for Lisp?

I think it could still be done for Lisp. I'm reading a book about
Hope that shows each step in a calculation, and there's no reason
to say that it can be done in Hope but not in Lisp.

Since the languages I'm familiar with are all procedural at some
level, this can be done, but that may be because I'm thinking of
what the implementation does, not the language. There must, however,
be a point at which I won't know how the implementation works,
and I learned Lisp long before I ever used it, so the model I used
back then is likely to have been simpler than the one I use now.

> The C compiler manipulates programs.  Do people suppose it works
> in a mysterious way?  Don't they think "cat" and "grep" are programs?

I know that some of my friends have no idea how a compiler works.
In the same way, I have no idea how some other things work.

> When I was in high school, I wrote a Lisp interpreter in Basic.
> It was a couple of pages long.  I couldn't have written a C compiler,
> or at least I'd have found it much harder.  I'm surprised that you
> think a C compiler might be easier.

I wrote my first compiler in Basic. It dumped the screen to a
source file, along with the code to load and display the screen
image. The rest of the Basic code created a logo on the screen,
and then ran the compiler. The last step was to run the assembler.

Now I do similar things in Lisp, C, Forth etc. The main difference
is that Lisp and Forth make the implementation of a "compiler"
much easier, as it has a lot of leverage. All of Forth is part
of the Forth compiler, even if you don't exactly intend it to be.
You have to either tell Forth not to create a header for the word,
or remove it later. The reason I say that your word will be part
of the Forth compiler is because of the way the compiler and text
interpreter may be the same code in many Forths. There's no single
monolithic unit of code that can be identified as "the compiler".

In a similar way, in CL you could claim that some functions
and all macros perform the same function. Because of the limited
way they interact, there's a lot of information hiding. One macro
doesn't need to know what another macro does, or how it does it.
If it needs to look at the expanded code, it can call macroexpand.
That kind of decoupling makes it very easy to write "compilers".
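
A toy example of that decoupling (MY-UNLESS is made up):

  (defmacro my-unless (test &rest body)
    `(if ,test nil (progn ,@body)))

  (macroexpand '(my-unless done (print "working")))
  ; => (IF DONE NIL (PROGN (PRINT "working")))

MY-UNLESS needs to know nothing about the macros or functions in
BODY; if it cared about their expansions, it could call macroexpand
itself.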
 
> >Interactive environments like lisp and smalltalk are clearly winners over
> >the compile/link/debug cycle of C (or the compile/link/reboot cycle in
> >Windows -- yes, it is even more dramatic in windows).  However, some
> >of these C/C++ compilers on PCs are getting awfully fast.
> 
> I was thinking of the interactive rather than the speed.

So was I. ;-) For me, interactive _is_ speed. Perhaps a better
word to use is immediate, as you can see the result of an expr
immediately, rather than after compiling and then running the
code.

>   let a$ = b$
>   let a$ = "b$"
> 
> Would you now say quotation in Basic is "in support of manipulating
> software by programs"?

It could if your Basic has an Eval function, as some do, but
you'd still do all your "software manipulation" thru string
processing. That might be great in SNOBOL4, but I'd hate to try
it in Basic. Of course, I designed my first interpreter,
as a thought experiment, in Basic. I didn't write it then, as
I had no idea what an interpreter could be used for, but I
remember imagining how to use arrays to simulate the memory
for operators and operands. I later used that idea in a Lisp
compiler/interpreter, with the addition of a stack. I expect
I could have done that in Basic instead of C, but I didn't
have a good enough Basic at that time (it had too many bugs
and was hideously slow, even for a token-based Basic).

> >I think we will always have languages for programming hardware
> >and languages for programming software.
> 
> I think it's an interesting data point that I spend lots of time
> programming in Lisp, also program in C, and find that position odd.

Plus, it's possible to program for hardware in Lisp and Prolog
etc. In fact, you could say that the VIPER CPU was designed
using a declarative language interpreted by Prolog, and then
compiled by Prolog into silicon, via a CAD/CAM system. If that
can be done, however "badly" (it produced a 1 MHz CPU), then
I can believe anything is possible.

Years ago I heard someone say, "You can do anything in Forth.
All you have to do is implement it." I suspect that this is
also true of many other languages, including Lisp. Some people
are busy proving it!
From: Henry G. Baker
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <hbakerCxDu55.9HH@netcom.com>
In article <············@wildcard.demon.co.uk> ············@wildcard.demon.co.uk writes:
>I wonder if AutoLisp supports closures? It wouldn't have to, even
>if it would be handy if it did. I've seen an embedded language in
>another CAD program that didn't support closures, and I can't even
>recall if it used lists.

AutoLisp is similar to XLISP circa 1982, which ran in 32K bytes.
AutoLisp has conses, strings and file handles.  AutoLisp has dynamic
scoping.  AutoLisp had (as of Rel. 9) immutable strings and immutable
lists, but it does have SETQ, so it isn't a completely functional
language.  AutoLisp does _not_ have closures, at least in Rel. 9.

AutoLisp has a persistent database, which is the 'drawing file', but
it isn't exactly general purpose.

      Henry Baker
      Read ftp.netcom.com:/pub/hbaker/README for info on ftp-able papers.
From: Fernando Mato Mira
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <37ga57$9v8@disunms.epfl.ch>
In article <············@wildcard.demon.co.uk>, ············@wildcard.demon.co.uk (Cyber Surfer) writes:

> No, but AutoLisp makes a great embedded language. ;-)

Is this irony about "great"? I guess so.

[ There's _NO_ LET in it (you have to make your locals be `aux' variables).
  I have used it. It _stinks_. ]
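
[ From memory, so treat it as a sketch: the `aux' trick is to list
  your locals after a slash in the DEFUN argument list,

    (defun average (a b / sum)
      (setq sum (+ a b))
      (/ sum 2.0))

  which makes SUM local to AVERAGE (and nil on entry) -- a poor
  substitute for LET, but it works. ]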

-- 
F.D. Mato Mira           http://ligwww.epfl.ch/matomira.html                  
Computer Graphics Lab    ········@epfl.ch 
EPFL                     FAX: +41 (21) 693-5328
From: Jeff Dalton
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <CxML6s.9qG@cogsci.ed.ac.uk>
In article <·························@155.50.21.58> ········@keps.com (Geoffrey Clements) writes:
>Hi,
>
>I'm a C programmer. I'll admit it. I've been reading this thread and it
>has been interesting.
>
>I think I know why C is popular.
>
>I've programmed in: C, C++, Modula, PDP-11, 68000, and 80x86 assembly,
>FORTRAN, PASCAL, Lisp, Scheme, Smalltalk, and a few others that I forget at
>the moment. Granted some of the programs were no more complicated than
>"Hello, world!". But I've tried them.
>
>Every time I need to hack up a quick and dirty tool I usually turn to my
>trusty C compiler and dash something off. I think most people do the same
>thing because it is easy to start coding WITHOUT a design in C.
>
>When you code in C++, Lisp, Smalltalk, etc. you need some kind of design
>before you can start.

For Lisp I have to disagree, and I suspect many others will do so as
well.  For one thing, most Lisp implementations are interactive, which
helps make it easy to play around with things while you think about
them.  That declarations aren't required also makes this easy, because
you can write functions before you've figured out what types will
eventually be used.  For instance, you can write functions on lists
without knowing what they'll be lists of.
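
A trivial, made-up illustration:

  (defun firsts (pairs)
    (mapcar #'car pairs))

FIRSTS works no matter what the pairs contain, and you can call
it from the listener the moment it's defined.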

However, I agree that C is very often better for quick and dirty tools
of certain sorts.  For one thing, it's not clear how to get a Lisp
program to be a standalone tool.  It's almost completely implementation-
dependent.  There are also lots of different languages in the Lisp
family, which naturally makes things vary more than they do in a
single language such as C.

If there were a standard way to write a standalone program in some
Lisp-family language, the situation would be just as you describe it
in C so far as that aspect is concerned.  And the standard way 
could even be

(defun main (argc argv)
  ...)

>People use the tools that let them
>get their job done.

Sure, but this is a different reason from the one you started with.

-- jeff
From: Robert Sanders
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <37k2s1$f7m@henri.mindspring.com>
In article <··········@cogsci.ed.ac.uk>,
Jeff Dalton <····@aiai.ed.ac.uk> wrote:

>If there were a standard way to write a standalone program in some
>Lisp-family language, the situation would be just as you describe it
>in C so far as that aspect if concerned.  And the standard way 
>could even be
>
>(defun main (argc argv)
>  ...)

Two Scheme compilers that I know of let you write stand-alone
programs with this construct:

(define (main argv)
  ...)

I find Bigloo and Scheme->C excellent examples of how useful a
practical -- though perhaps imperfect -- relationship with C can 
be.  They link together, they can call each other, and the stand-
alone program conventions are similar.

  -- Robert
From: Mike Haertel
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <MIKE.94Oct13140351@pacifix.cs.uoregon.edu>
In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:
>And the standard way [to get a standalone Lisp program]
>could even be

>(defun main (argc argv)
>  ...)

Of course, in Lisp you wouldn't need argc, because Lisp arrays
record their own length... :-)

And wouldn't "argl", an argument LIST, be more lispy?
--
Mike Haertel <····@cs.uoregon.edu>
From: Devin Cook
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <37otug$j8g@news.u.washington.edu>
In article <·························@155.50.21.58>,
Geoffrey Clements <········@keps.com> wrote:

>
>Every time I need to hack up a quick and dirty tool I usually turn to my
>trusty C compiler and dash something off. I think most people do the same
>thing because it is easy to start coding WITHOUT a design in C.
>
>When you code in C++, Lisp, Smalltalk, etc. you need some kind of design
>before you can start.
>

This comment is very interesting because it is exactly the opposite of what
I have read about Lisp.  With Lisp, you work from the bottom up, so you start
to build some small utilities until you're ready to actually figure out how to
use them.

Here is my two cents worth:

Lisp is slow and comes with too much baggage.  The number of keywords you
need to get going is overwhelming.  Also, since Lisp is weak at I/O, it's
hard to write small programs to play with.  (I mean it's hard to get excited
about a language when you're limited to writing code that finds the next item
in a sequence when you start out!  Yawn....)

Debugging is also a real pain.  (Trace) just doesn't cut it in this day
and age.  The problem is that with properly written Lisp code (at least
this is what it says in the books!) there are NO intermediate results to
examine and tell you why some code doesn't work.  In C or Borland's Pascal
(my favorite language) I can break up an expression as small as I like
and watch the variable I'm interested in.

If the MC2 interface is as nice a Lisp IDE as the Lisp guys can come up with,
Lisp is dead in the "real world".  Of course, I'm not sure how many of
these limitations are due to the Macs these run on.  And as SLOW as
MC2 is, it's a shame it won't run on a Power Mac.





-- 
 ==========================================================================
 |  He's good and kind and only kills what he needs   |                   |
 |  to eats.  SNL                                     |   Devin Cook      |
 ==========================================================================
From: Simon Brooke
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <CxrB0E.zJ@rheged.dircon.co.uk>
Wee laddie, if you don't know what you're talking about, don't post.

In article <··········@news.u.washington.edu> ···@u.washington.edu (Devin Cook) writes:

   This comment is very interesting because it is exactly the opposite of what
   I have read about Lisp.  With Lisp, you work from the bottom up, so you start
   to build some small utilities until you're ready to actually figure out how to
   use them.

With LisP, you start wherever you like. The bottom up approach is
possible, and may be the best way to proceed when you really don't
know what you're doing, but many people work top down.

   Here is my two cents worth:

I wouldn't pay two cents for ignorance.

   Lisp is slow and comes with too much baggage.  The number of keywords you
   need to get going is overwhelming.  Also, since Lisp is weak at I/O, it's
   hard to write small programs to play with.  (I mean it's hard to get excited
   about a language when you're limited to writing code that finds the next item
   in a sequence when you start out!  Yawn....)

At the time of the launch of the SPARC machines, Sun claimed that
their Common LISP compiler generated faster code than their C
compiler. Generally, however, it seems to be agreed that well written
Common LISP programs will run a little slower than equally well
written C programs -- but the slowdown factor will be less than 1.75:1.
Common LISP, because of its size and complexity, is not likely to be a
fast LisP, although I haven't really looked at comparative benchmarking.

There are no keywords in LisP, so the number you need to get
started is nil. The core of LisP comprises about a dozen functions, with 
which everything else can be built.
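
To give the flavour of building on that core (a sketch -- MY-APPEND
is my own name, and accounts differ on the core's exact membership):

;; APPEND rebuilt from nothing but CONS, CAR, CDR, COND and EQ:
(defun my-append (a b)
  (cond ((eq a nil) b)
        (t (cons (car a) (my-append (cdr a) b)))))

;; (my-append '(1 2) '(3 4)) => (1 2 3 4)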

   Debugging is also a real pain.  (Trace) just doesn't cut it in this day
   and age.  The problem is that with properly written Lisp code (at least
   this is what it says in the books!) there are NO intermediate results to
   examine and tell you why some code doesn't work.  In C or Borland's Pascal
   (my favorite language) I can break up an expression as small as I like
   and watch the variable I'm interested in.

In any decent LisP you can watch the stack, where all intermediate
values are held. You can watch or break any arbitrarily small
expression, and edit it at source level within the break, with all its
runtime information available, to fix the problem. You can then
continue the execution with the corrected code. You can't do this in
any ALGOL family language.
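
For a taste of the usual tools (a sketch in Common Lisp; the exact
behaviour of each varies by implementation):

(defun average (xs)
  (/ (reduce #'+ xs) (length xs)))

(trace average)              ; report each call and its return value
(step (average '(1 2 3)))    ; walk the evaluation form by form
(break "checkpoint")         ; enter the debugger deliberately
;; Inside the resulting break loop you can inspect values, redefine
;; AVERAGE, and continue the computation.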

If you want a small, fast, efficient LisP in a DOS environment, you
might look at MuLISP.

-- ·····@rheged.dircon.co.uk

			-- mens vacua in medio vacuo --
From: Markku Laukkanen
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <37tqrfINNc17@dur-news.ctron.com>
In article <··········@news.u.washington.edu> ···@u.washington.edu (Devin Cook) writes:
>In article <·························@155.50.21.58>,
>Geoffrey Clements <········@keps.com> wrote:
>
>Here is my two cents worth:
>
>Lisp is slow and comes with too much baggage.  The number of keywords you
>need to get going is overwhelming.  Also, since Lisp is weak at I/O, it's
>hard to write small programs to play with.  (I mean it's hard to get excited
>about a language when you're limited to writing code that finds the next item
>in a sequence when you start out!  Yawn....)

Basic is slow at I/O. Some C++ compilers are slow with (io)strstream, etc., etc.

And whenever I have to write code that does some analysis of its input,
I use LISP instead of sed/awk, etc.


>
>Debugging is also a real pain.  (Trace) just doesn't cut it in this day
>and age.  The problem is that with properly written Lisp code (at least
>this is what it says in the books!) there are NO intermediate results to
>examine and tell you why some code doesn't work.  In C or Borland's Pascal
>(my favorite language) I can break up an expression as small as I like
>and watch the variable I'm interested in.

A stepper and an inspector exist.

I have been working on a Lisp-to-C translation environment for a
couple of years (as a hobby), and most of the difficulties arise in
the C code.

Once I spent 200 hours with dbx in assembler mode to catch one register
allocation bug inside the AIX/RS6000 C compiler.

I prefer using step/trace to an assembler debugger.

PKY
From: Jeff Dalton
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <Cy1DqA.3xH@cogsci.ed.ac.uk>
In article <··········@news.u.washington.edu> ···@u.washington.edu (Devin Cook) writes:

>Here is my two cents worth:
>
>Lisp is slow and comes with too much baggage.  The number of keywords you
>need to get going is overwhelming.

Perhaps you are thinking of Common Lisp rather than Lisp?

And for "slow", it's presumably particular implementations.

>  Also, since Lisp is weak at I/O, it's
>hard to write small programs to play with. 

So what does "Lisp" I/O lack?  (This is a real question, not a
rhetorical one.)

>Debugging is also a real pain.  (Trace) just doesn't cut it in this day
>and age.  The problem is that with properly written Lisp code (at least
>this is what it says in the books!) there are NO intermediate results to
>examine and tell you why some code doesn't work. 

What books have you been reading?  There are plenty of intermediate
results, and many of them will be the values of variables.

It's true that trace alone is somewhat limited (though a good
trace can do more than you might expect).  But trace is far from
the only debugging tool available.
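
Even plain trace shows the call tree and the intermediate results
flowing through it (a sketch; output format varies by implementation):

(defun fact (n)
  (if (zerop n) 1 (* n (fact (1- n)))))

(trace fact)
(fact 3)
;;   0: (FACT 3)
;;     1: (FACT 2)
;;       2: (FACT 1)
;;         3: (FACT 0)
;;         3: FACT returned 1
;;       2: FACT returned 1
;;     1: FACT returned 2
;;   0: FACT returned 6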

>If the MC2 interface is as nice a Lisp IDE as Lisp guys can come up with,
>Lisp is dead in the "real world".

So what would you suggest as a good IDE for comparison purposes?

-- jeff
From: Cyber Surfer
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <782420381snz@wildcard.demon.co.uk>
In article <··········@disunms.epfl.ch>
           ········@di.epfl.ch "Fernando Mato Mira" writes:

> Is this irony on "great"? I guess so.
> 
> [ There's _NO_ LET in it (you have to make your locals be `aux' variables).
>   I have used it. It _stinks_. ]

I didn't say it was perfect. Perhaps I should merely have said that I
like the idea of using a Lisp as an embedded language for a CAD app?
-- 
"Internet? What's that?" -- Simon "CompuServe" Bates
http://cyber.sfgate.com/examiner/people/surfer.html
From: Lawrence G. Mayka
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <LGM.94Oct19083020@polaris.ih.att.com>
In article <·························@155.50.21.58> ········@keps.com (Geoffrey Clements) writes:

   Every time I need to hack up a quick and dirty tool I usually turn to my
   trusty C compiler and dash something off. I think most people do the same
   thing because it is easy to start coding WITHOUT a design in C.

   When you code in C++, Lisp, Smalltalk, etc. you need some kind of design
   before you can start.

   Every C program starts out like:

   main (int argc, char *argv[]) {
   }

   Then you add:

   main (int argc, char *argv[]) {

      parse_arguments (argc, argv);
   }

   Then:

   main (int argc, char *argv[]) {

      parse_arguments (argc, argv);

      do_stuff ();
      clean_up ();
   }

   There we have a nice C program without having any design or even a concept
   of what the program is supposed to do.

If you're referring to the ability to generate standalone
applications, it is true that CLOS (ANSI Common Lisp) implementations
often make the packaging of a standalone application for delivery more
difficult than typical C/C++ implementations do, especially if the
application in question is a trivial "Hello world" program, which
C/C++ excels at.

On the other hand, if you're referring to program functionality
itself, clearly the "nice C program" you cite is simply useless
baggage that CLOS avoids entirely.  CLOS already knows how to accept
and parse arguments, invoke operations on them, and "clean up" (e.g.,
garbage collection, which CLOS does automatically, or environmental
cleanup, which CLOS does via WITH- macros).  This is why the "nice C
program" requires no design: because it doesn't actually do anything
useful!  It is true, however, that mindless busy-work such as the
"nice C program" gives novices the =illusion= that they are getting
useful work done.  This =illusion= of immediate productivity does
serve the purpose of retaining their interest and increasing their
self-esteem.
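
The WITH- style folds the whole open/use/clean-up pattern into one
form, for instance (a minimal sketch; "data.txt" is just an example
file name):

;; WITH-OPEN-FILE opens the file, hands you the stream, and
;; guarantees the close -- even on a non-local exit -- with no
;; clean_up() call to forget.
(with-open-file (in "data.txt" :direction :input)
  (loop for line = (read-line in nil)
        while line
        do (write-line line)))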

This "busy-work => self-esteem" effect I have observed even in highly
experienced programmers.  It appears that everyone has a psychological
need for repetitive ritual.  Some people satisfy that need with
religious or family-life rituals, which (I believe) actually serve
useful transcendental purposes.  Others, though, project their need
for ritual onto programming, not only wasting their and others' time
and money but also dragging down the entire industry by effectively
coercing everyone else (by reason of "popularity," or the misuse of
terms like "industry standard" and "open system") to do the same.
Very unfortunate.

   People have been talking about converting C programmers to Lisp by
   describing some of the goodies Lisp has that C doesn't. What are they? 

Essentially, CLOS offers various abstractions, and various kinds of
abstraction, which reduce development and maintenance time as well as
human-perceived complexity, thereby making software systems more
powerful, flexible, and evolvable.  The difficulty I've seen is that a
person who =likes= to repetitively program the same low-level
operations over and over doesn't really =want= such abstraction,
because abstraction reduces the busy-work that provides her/him with
self-esteem.

   Can you write a Photoshop plugin in Lisp? An INIT? A cdev? These are the
   things I normally do to make a living. People use the tools that let them
   get their job done. If I was writing a large simulation or testing
   programming language designs I'd probably use Smalltalk or Lisp.

If everyone had the freedom to use the best tool for the job, the
programming industry would be a much better place.
--
        Lawrence G. Mayka
        AT&T Bell Laboratories
        ···@ieain.att.com

Standard disclaimer.
From: Tony Tanzillo
Subject: Re: Why do people like C? (Was: Compa..
Date: 
Message-ID: <388t6p$6v5$4@mhadg.production.compuserve.com>
Your point about the relative difficulty of delivering stand-alone 
LISP-based applications (versus C/C++) is valid for _stand-alone_ 
applications.

Conversely, that is also what makes LISP one of the best choices 
for an embedded language (e.g., for CADD macro programming).  

The entire host application becomes a common denominator and 
application framework that makes the delivery of embedded LISP 
applications much simpler than C/C++.

-- 
Tony Tanzillo
Design Automation Consulting
AutoCAD programming/customization services
From: J W Dalton
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <CxBFyD.3CD@festival.ed.ac.uk>
············@wildcard.demon.co.uk (Cyber Surfer) writes:

>In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk "Jeff Dalton" writes:

>> Sure, its the "same thing" in a sense, especially if the Basic
>> is compiled.  But it's not the same in terms of what the user
>> has to do to get a program to run.  If what the user sees is
>> different, it's not solely an implementation issue.

>I agree. I was looking at it from the perspective of the compiler
>writer. [...]

>For example, if you compare an interactive Basic with a compiled
>Basic, there are usually big differences. If the interactive
>system uses an incremental compiler, then the differences may
>be smaller. The programmer might not even notice the compiler.

Just FYI:
So far as I know, Dartmouth Basic always used a non-interactive
compiler.  But it was a fast compiler, and the model was that you
entered a program using some editor (one -- using line numbers --
was built into the command interpreter, which was a multi-tasked
system rather than a lot of separate shells).  Then you typed "run".
Many interpreters followed the same model.  

>>   shell% liszt -r foo
>>   shell% foo
>> 
>> which are even closer to the C model.  

>Which supports my belief that there are no real problems with
>"Lisp". There are only differences between dialects and their
>implementations.

I agree.

>> Why do you think Dylan has an advantage as a language over other
>> Lisps?  The parsing task is more complex, so it ought to be slower
>> overall.

>Partly because Dylan can (or promises to) do some things that
>no Lisps available to me currently support. Sharing code at the
>class level is one feature that I'm particularly hoping will be
>used in Dylan implementations for Windows.

Why does CLOS not support this?

>I'm not concerned about the "parsing task", as I currently have
>to use C++ and compile with enormous headers defining large class
>hierarchies. An incremental compiler would help, but I know of
>no C++ compilers that support this. Even if Dylan's parser must
>be more complex than a C++ parser, that's nothing to make me
>worry. 

I meant compared to Lisp.  The parser should be simpler than one for C++.

>    I should add that most language systems are not available
>to me, and most of the "Lisps" that are available to me have not
>impressed me as much as the Smalltalk or Actor systems I've used.
>That may be because many of them don't use a GUI (yet), which I'd
>prefer, and that the ST and Actor systems I've used both had
>excellent GUI support.

Sometimes the GUI is part of the problem.  I'm all for having some
Lisps w/ fancy GUIs, but there should also be some alternatives.

>It may be worth looking at some books for C programmers about
>other languages, and see how they do it. When I've browsed thru
>such books they seem to adopt a model close to C, or whatever
>language they assume the programmer already uses.

>Another way could be to find some alternative model and to
>attempt to re-educate the C programmer. This is why I suspect
>that it's better to catch someone before they learn C.

People used to say things like this: you can learn Lisp in a
day, or three if you know FORTRAN.  (That's more or less a
direct quote from the Eclisp documentation -- anyone ever
encounter that beast?  It was for DG Eclipse machines.)

>This reminds me of the Hope book I'm currently reading, which
>assumes no previous programming experience. I'm frequently
>surprised by how much it explains - it goes into far greater
>detail as far as the model is concerned, but without going
>to a lower hardware level. It _explains_ more about what
>is being done. Most books I've read assume a great deal of
>prior knowledge, even when they try to avoid that.

What book is that?  Still in print?  I'll get one if so.

>Roger Bailey seems to have anticipated far more of the things
>that someone new to computers and programming may misunderstand.
>Using notes for teaching students may have helped, and the book's
>intro supports that. As experienced programmers, we might find
>it harder to think like a newcomer to programming, or like a
>newcomer to Lisp, or like a C programmer.

I *used* to understand what new programmers were like, but if
all this talk of hardware models is true they may have changed.
My encounters w/ new programmers were a number of years back.

-- jeff
From: Cyber Surfer
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <782420014snz@wildcard.demon.co.uk>
In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk "Jeff Dalton" writes:

> Now, why is sealing necessary for "sharing code at the class level",
> which is what I meant by "this" to refer to?

I didn't say it was necessary. I was merely implying that if Apple
are encouraging implementors to use this feature, then we might get
some Dylan implementations that support it. I don't know of any
language implementations that offer such a feature, but if you can
tell me of any, then I'd appreciate it.

> Surely it can.  For a slightly different case, consider freeze-defstruct
> in AKCL.

I don't know what that is. Could you please describe it? Thanks.

> If the system lets you ignore the GUI.

An implementation might still not provide a framework for the GUI.
For example, with some platforms that offer a GUI, it's easy to use
a window in a language without explicitly using the GUI in the source
code for your program. The implementation of the language you write
in might create a window, and then you could just write to it using
what appear to be I/O functions. The Windows version of XLISP works
like that.

Would that be what you'd call "ignoring the GUI"?

-- 
"Internet? What's that?" -- Simon "CompuServe" Bates
http://cyber.sfgate.com/examiner/people/surfer.html
From: Jeff Dalton
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <Cy1E6D.44v@cogsci.ed.ac.uk>
In article <············@wildcard.demon.co.uk> ············@wildcard.demon.co.uk writes:
>In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk "Jeff Dalton" writes:
>
>> Now, why is sealing necessary for "sharing code at the class level",
>> which is what I meant by "this" to refer to?
>
>I didn't say it was necessary. I was merely implying that if Apple
>are encouraging implementors to use this feature, then we might get
>some Dylan implementations that support it. I don't know of any
>language implementations that offer such a feature, but if you can
>tell me of any, then I'd appreciate it.

You wrote:

  Partly because Dylan can (or promises to) do some things that
  no Lisps available to me currently support. Sharing code at the
  class level is one feature that I'm particularly hoping will be
  used in Dylan implementations for Windows.

I asked why CLOS couldn't support sharing code at the class level
and you gave me something about sealing.  I'm still wondering 
what the connection between sealing and sharing is, but you still
don't explain it.  All you say is "I didn't say it was necessary".

>> Surely it can.  For a slightly different case, consider freeze-defstruct
>> in AKCL.
>
>I don't know what that is. Could you please describe it? Thanks.

It says there will be no more subclasses, and so some things (such as
the "-p" predicates defstruct defines) can take this into account.

>> If the system lets you ignore the GUI.
>
>An implementation might still not provide a framework for the GUI.
>For example, with some platforms that offer a GUI, it's easy to use
>a window in a language without explicitly using the GUI in the source
>code for your program. The implementation of the language you write
>in might create a window, and then you could just write to it using
>what appear to be I/O functions. The Windows version of XLISP works
>like that.
>
>Would that be what you'd call "ignoring the GUI"?

I don't fully understand what you're describing.

But if the system creates any windows, that's not letting me
ignore the GUI.

-- jeff
From: Cyber Surfer
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <783017512snz@wildcard.demon.co.uk>
In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk "Jeff Dalton" writes:

> I asked why CLOS couldn't support sharing code at the class level
> and you gave me something about sealing.  I'm still wondering 
> what the connection between sealing and sharing is, but you still
> don't explain it.  All you say is "I didn't say it was necessary".

That may be coz I don't recall the Dylan documentation giving any
specifics. Until we see a few implementations, I'd say this is a
little academic, but I'm willing to wait before judging it.

If you want a detailed explanation, I suggest asking Apple about
it. All I can do is read the stuff on UseNet and Internet. The
DIRM is available at cambridge.apple.com.

> It says no more subclasses and so some things (such as the
> "-p" predicates defstruct defines) can take this into account.

Thanks. My experience of Lisp is limited to the systems available
to me. AKCL isn't on the list (yet).

> I don't fully understand what you're describing.

I was trying to be general, and that may have made me a little
unclear. Sorry. It's just that every example I could think of
had at least one exception!

Some systems make it trivial, while others require you to get your
hands "dirty" and platform specific. A good framework might hide the
platform, but some don't. Some language systems don't even have a
framework, just an API strongly resembling the platform's API.

Is that any clearer?

> But if the system creates any windows, that's not letting me
> ignore the GUI.

It could be, if there's no assumption in the code that you're using
a window. You might simply think of it as a stream connected to an
"output", whatever that might be. Plus, the language system might
"know" about windows, but your code (using a subset of the system)
might not. It's hard to say more without giving detailed examples.

Are you familiar with Actor?

-- 
Please vote for moderation in comp.lang.visual
http://cyber.sfgate.com/examiner/people/surfer.html
From: J W Dalton
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <CxBIs3.8r9@festival.ed.ac.uk>
·····@RedRock.com (Bob Hutchison) writes:

>In <··········@cogsci.ed.ac.uk>, ····@aiai.ed.ac.uk (Jeff Dalton) writes:
>>In article <··········@relay.tor.hookup.net> ·····@RedRock.com (Bob Hutchison) writes:
>>>
>>>What I mean when I say 'hardware' is something with a very low level
>>>of abstraction (maybe?)  Perhaps high degree of concreteness?  What
>>>happens at this level is very well defined and is easily translated to
>>>common knowledge.  For example, most people understand what adding
>>>two integers together means and what to expect as a result.
>>
>>But why doesn't that work just as well for Lisp?  Indeed, you can
>>go further in many Lisps without running into issues of word size.

>We are starting to talk past each other I think.  Let me clarify a few
>things before I continue.

>First, I am not talking about any possible language that could be called
>lisp when I say 'lisp'.

Well, anything *could* be *called* "Lisp".  (Like Lincoln's: if
we call a tail a leg, how many legs does a dog have?  Four.  Calling
it a leg doesn't make it one.)

By "Lisp" I never mean "anything that could be called Lisp".  I mean
actual Lisps plus similar languages that I can reasonably expect to be
possible.  Still, I do project trends, including some from the past
that have not progressed much recently.

>  There are a few basic properties that are important
>to distinguish lisp from C.  One of the biggest is the closure and the
>ability to pass them around a program.  

Franz Lisp did not have closures.  (Ok, eventually there were
fclosures, a kind of closure over dynamic/special vars.)  That's
just one example.  There were many lisps in which (lambda ...)
or (function (lambda ...)) were treated as if separate (though
anonymous) top-level functions.

A discussion of "Lisp" cannot exclude so many Lisps, and they
weren't improperly called "Lisp" either.

>In fact, I think I would be at a
>loss to explain any advantage a lisp without closures would have
>over C.  If you could take that out of lisp then you could take anything
>out and then we would be having a meaningless discussion, I think.

I disagree, very strongly.  I have used such Lisps extensively in
preference to various alternatives, including C.  But other people
could just as reasonably prefer those other languages to the Lisps
I was using.  (I want to avoid turning this into a "religious" dispute.)

BTW, it's not too difficult to implement a form of lexical
closure in Franz.  If anyone wants, I can make the code available.

In any case, I don't see why the discussion becomes *meaningless*
if we include Lisps that lack closures.

However, I think many of these arguments would be better directed against
Scheme than against Lisp.  Scheme is too narrow, there being a number
of lexical closure Lisps that are not Scheme, but Lisp is too broad.

>I believe that to throw a novice into a lisp environment we will expect
>that programmer to learn how a closure works.  This novice will not,
>I contend, be prepared for this with only a C background. 

I assume that you've at least looked at some introductory Lisp books.
(For all I know, you've written some -- I'm not trying to slight your
experience.)

Closures are not the first topic covered, and in some books they
don't appear at all.  (In dynamically scoped Lisps, you can of course
go a fair way with ordinary lambda-exprs.)

In any case, I don't understand why you think closures are such
a problem.
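
The whole idea fits in a few lines (a minimal sketch):

;; MAKE-ADDER returns a function that remembers N.
(defun make-adder (n)
  #'(lambda (x) (+ x n)))

;; (funcall (make-adder 3) 4) => 7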

> Even an
>experienced C programmer would be a novice with lisp.

But experienced programmers who try to learn Lisp may well have very
different problems than people who are not experienced programmers.
If we're to talk about novices in general, we'll have to establish
what they have in common.

>>>For example, scheme allows you to manipulate functions as you would
>>>any other data (type?), it has continuations.  Some languages support
>>>concurrency in the language (Erlang, BETA) rather than as a library.
>>
>>In C, you can put functions (or ptrs to them -- one of C's complexities)
>>in structures, pass them as arguments, and return them as results.
>>C doesn't have closures over variables, but neither do some Lisps.

>I think this is a pale comparison. 

It's using functions as data.  And why *do* you think closures are
radically different?  Moreover, if it's closures that are the key
problem, why not say closures are difficult rather than that Lisp
is?  That will significantly reduce misunderstanding.

> Though it does raise a point.  Have
>you ever tried to explain a table of function pointers to a C programmer?

No.  But I'd be surprised if C programmers couldn't deal with structs
that contain fns these days.  If that were so difficult, people would
be finding C++ more difficult than they do.

>>In any case, someone learning Lisp doesn't have to learn everything
>>at once, so I don't see why closures have to be an obstacle.

>Only if that person has the luxury of learning lisp in a non-commercial
>environment.  Otherwise that person will be forced to read 'good' lisp,
>and expected to understand it (and perhaps even modify it :-)

You are surely not saying that in a commercial env everything is
learned at once.

Now, if you want to talk about real programs, there are just as
many difficulties in C and C++.  You think not?  Then propose a
metric.

>>>Furthermore, languages like lisp and others, are designed expecting
>>>programs to be hierarchies of this kind, and so support it with
>>>strange things like call-with-current-continuation.
>>
>>Most Lisps don't have call/cc.  What they have is similar to
>>setjmp and longjmp in C (but simpler).

>This is true.  But I've seen call/cc implemented with macros and
>closures in CL so the language is capable of providing the
>service (I *think* nothing unique to CL was used -- this isn't the
>old library trick, quite, I'm just trying to indicate how close it
>is; call/cc in C would be truly interesting to see).

Standard CL can't support full call/cc, because the continuations
have dynamic extent.  Here's a possible CL version:

(defun call/cc (fn)
  (block call/cc-block
    (funcall fn #'(lambda (value) (return-from call/cc-block value)))))
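
Used as an escape, it behaves like the real thing (a hedged example):

;; (call/cc #'(lambda (k) (+ 1 (funcall k 41)))) => 41
;; But K has only dynamic extent: invoking it after CALL/CC has
;; returned is an error, so the re-entrant uses of full call/cc
;; can't be expressed this way.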

>>>>>Use of a hardware model to aid in learning a programming language
>>>>>would apply to most languages.  I don't doubt that there is a suitable
>>>>>hardware model to explain lisp, but I don't think it is the same one.
>>>>>Unfortunatly the one availble to a C programmer is the one taught,
>>>>>at least where I went to school.
>>>>
>>>>Can you say something more about this?  I learned how to program
>>>>before I knew anything about how the hardware worked.
>>>
>>>The university I attended (still) presents basic CPU architecture
>>>very early (brief overview in first year, details first thing second year)
>>
>>Humm.  How common is it to teach about hardware before teaching
>>about programming?  I didn't encounter that order of doing things.

>I think very common, since I include other things as 'hardware' below :-)

>>
>>>I think you probably had a pretty good idea of some simple concepts
>>>from hardware.  The idea of a data store, arithmetic, updating values,
>>>precise sequential operation (probably this is the first hard thing to 
>>>really learn, it is understood early, but...).  Stuff like that.  You had
>>>used a calculator I would imagine.
>>
>>Actually, I wrote programs before I'd used (or even seen) a
>>calculator.  (This may show how old I am.)

>gee, and I thought I've been around for a while :-)  I played with one
>for the first time when I was 17.

It must have been about the same for me.  I ran into computers
fairly early for the time, and calculators weren't the ubiquitous
objects they are today.

>>> (If you had a good grounding in doing calculations on paper,
>>>would it ever occur to you that you could group some of those written
>>>lines of calculations, and perform calculations on them in turn?  
>>
>>Isn't that what one does in high school algebra?

>Not in my high school.  We dealt with composition of functions,
>like C does, but if we ever got into currying or functions that
>generate functions it blew right over me!

Algebra involves calculations on the formulae that appear in
calculations.  That isn't quite calculations on (lines of)
calculations (maybe proofs in high school geometry is closer),
but then Lisp programs don't perform calculations on calculations
either, so I think your reasoning is in similar trouble.

Anyway, I am surprised that you think currying and function values are
central to Lisp.  Pre-Scheme, most use of functions was as arguments
to mapping functions and the like.
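
E.g. (a trivial sketch):

;; The classic uses of a functional argument:
(mapcar #'(lambda (x) (* x x)) '(1 2 3))   ; => (1 4 9)
(remove-if #'evenp '(1 2 3 4 5))           ; => (1 3 5)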

>>>Not likely, and I think this is similar to a C programmer's difficulty in
>>>thinking of manipulating his programs -- you can't do it in C, so you
>>>don't think about it.)
>>
>>The C compiler manipulates programs.  Do people suppose it works
>>in a mysterious way?  Don't they think "cat" and "grep" are programs?

>They don't manipulate programs at runtime except by controlling
>relatively simple parameters (basic types, perhaps a function
>call).  They cannot generate closures, they cannot curry, they
>cannot generate new functions.

Lisp macros manipulate source code (though not programs), and 
can do so at run-time.  A compiler or cross-referencer might
manipulate source code in units that might be considered programs.
Some Lisp code might "modify itself", I suppose, but that's far
from central to Lisp.  Currying does not manipulate programs.
However, it is possible to construct source code as a list and
then call EVAL.  I have not noticed that EVAL was an especially
difficult concept.
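
And EVAL is about as exotic as string concatenation (a minimal sketch):

;; Build source code as an ordinary list, then evaluate it.
(let ((form (list '+ 1 2 3)))
  (eval form))   ; => 6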

Perhaps the above shows how different our understanding of
this must be?

>I think C programmer understand manipulating source code quite
>well.

So how is it that Lisp goes w/ software and C w/ hardware?

Anyway, it sounds like the only major problem is generating new functions.
I don't think even that is very difficult, though obviously some cases
are tricky.

>>>>>The other difficulty with languages like the lisps and other high level
>>>>>languages, is that they provide a fair bit of support for the
>>>>>development of 'software'.  I wonder what a hardware model of
>>>>>a continuation in scheme or ml would look like, or a non-deterministic
>>>>>program written using them?  
>>>>
>>>>What is the hardware model of a coroutine or a thread?  There are
>>>>such models, of course, and there are similar ones for continuations.
>>>
>>>Sure, but a novice doesn't know those models.  Have you seen many
>>>novice programmers that can handle even coroutines much less
>>>a thread?  Actually, I think my personal model of a thread is an
>>>abstract one, not concrete -- I know how it is implemented because
>>>I need to, but that knowledge isn't used when designing software.
>>
>>So why count things a novice won't learn against Lisp but not
>>against C?

>Ahh!  I'm not counting anything against lisp! 

Don't read too much into one word!  I see things like "The other
difficulty with languages like the lisps", and it doesn't seem to
me that "count against" is out of place.

>   I am counting it
>against C!  I think these novices are capable of leaning these things
>and that they should learn these things and that C doesn't let them
>much less make them.

Ok, that wasn't clear (to me) before.  You do seem to think Lisp
will be harder to learn because C goes better with the hardware,
or am I confusing you w/ someone else?  I can't check earlier
articles right now, because the machine I normally use for News
is down.

>>> This is both a good thing and a bad thing.
>>>There are any number of programmers intensely put off by this lack
>>>of 'control' over how their numbers will be represented.
>>
>>I feel that I can sufficiently control how numbers are represented
>>in the Common Lisps I use.  Anyway, you again seem to be moving
>>to experienced programmers rather than novices.

>What about control in the other lisps?

I guess I'm not sure just what the issue is here.  In C, I can't
control how my numbers are represented.  The compiler and the
hardware conspire to determine that.  I get to say that something
(a variable) is an int, or a float, or whatever, and in Lisp
I do that (for an object rather than a variable) by using the
right source notation.
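
For example (a sketch; the declaration is advice a compiler may use
to unbox the floats):

;; 1.0s0 is a short-float, 1.0d0 a double-float, 1/3 an exact ratio,
;; and integers grow into bignums rather than wrapping.
(defun dot (a b)
  (declare (type (simple-array double-float (*)) a b))
  (let ((s 0d0))
    (dotimes (i (length a) s)
      (incf s (* (aref a i) (aref b i))))))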

>I'm not sure that the experienced programmer is more annoyed
>than the inexperienced programmer with this numerical stuff.  The
>experienced programmer is more likely to feel a bit of relief with the
>situation.  The programmer implementing an algorithm from a book,
>or a paper, and not understanding clearly what is algorithm and what
>is to address numerical problems, is inexperienced and about to be
>frustrated by lisp :-)

Algorithm books don't match Lisp very well, that's for sure,
even when the algorithms would be far simpler in Lisp.  I'm not
sure what can be done about that short of writing different books
and hoping they catch on.

>>>I think we will always have languages for programming hardware
>>>and languages for programming software.
>>
>>I think it's an interesting data point that I spend lots of time
>>programming in Lisp, also program in C, and find that position odd.

>Which position?

"we will always have languages for programming hardware and languages
for programming software."

>       programming in lisp and C, or that we will always
>have languages that address software (lisp) and languages that
>address hardware (C)?

The latter.

-- jd
From: Cyber Surfer
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <781539380snz@wildcard.demon.co.uk>
In article <··········@relay.tor.hookup.net>
           ·····@RedRock.com "Bob Hutchison" writes:

> When was Brown's book written?

The copyright says 1979. That must have been about the time that
a more experienced programmer lent it to me.

> Forth is an interesting language, but by the time Forth became an option
> for me, C had already won.  The first papers I remember about threaded
> interpreters was around '83-'84, though certainly the technique was not
> new then (I had used it myself in a UI definition language I wrote).  These
> papers are what brought Forth into my consciousness.

Well, I was fortunate enough to get my hands on a fig-forth for my
first machine. I had to pay for my first C compiler, and even then,
it could only just run on the machine, it was so big. 48K is not a
lot of RAM, even for an enhanced Small C. Curiously, the first time
I read about symbolic math was in a Byte article that used a Lisp
running on the unexpanded 16K version of the machine I was using.
I'm still amazed at how much could be done in so little RAM!

> I cannot remember what the version of C was that I ported but
> small C is familiar... This compiler was in fact quite portable for
> the time.  Step 1, change the assembler output from the host machines
> to the target's; Step 2, recompile the compiler on the host machine
> using the new assembler output; Step 3, take the assembler to the
> target machine and assemble it, and... ported (there were a few iterations
> of course :-)

Yes. You can see a similar approach used by a BCPL compiler, with
its intcode assembler/interpreter, and I have a book on the P4
Pascal compiler that uses a similar technique, but with an early p-code.

I wonder if my model of how C works was based on the assembler
code from the compiler? My 2nd C compiler (for CP/M-68K) also
generated assembly language. I'm often tempted to write a Lisp
compiler (perhaps for a subset of Scheme) that generates an asm
file as its output. The point would simply be to demonstrate that
it can be done, and done as well as for Small C, and to show the
programmer what the code does at the machine level.

> The experienced programmers become the ones who place the ads for new
> programmers.

Yes, we can only answer the ads that we find. Very few of us seem
able to solicit work from employers, rather than the other way
around. We can also say that we can only buy the machines that
vendors sell, program with the language systems that are available,
etc. As far as the apps are concerned, if we're self employed,
then we're free to write what we want to, but ensuring that anyone
will buy our software is another matter.

At least that way we can choose the tools we use to develop the
software with. I do sometimes suspect that if I were to write
an app in Lisp, then I should perhaps not tell anyone that it's
_not_ written in C++. Let them guess, if they can!

It could be better than telling them, which might just give them
a reason to say, "Well, of course it's slow, it's written in Lisp",
which would be ignoring all the apps written in C++ that are big
and slow. I know people who see things like that, and it saddens
me. They value machine time more than human time, but which is
more expensive these days?

Oops. Now this is exactly the kind of comment that I always dread
in discussions like this, probably coz it's just what I'm thinking,
and I'd like to find that it isn't true! ;-) Well, regardless of
how we'd like it to be, or how some of us find it is, that's the
way many others find it.

Perhaps when someone asks, "Why isn't Lisp used as much as C?",
they're asking, "Why are there more jobs for C programmers than
Lisp programmers?" If that's true, how do we answer that? One way
has been to look at the languages, as we have been doing, but
another way would be to look at the people offering the jobs
themselves. As you said, they're likely to be experienced
programmers, and probably old enough to remember a time when
"Lisp" wasn't as fast as it can be today.

Software development is changing fast, as it always has been,
and new technologies appear. GUIs, OOPLs, visual graphs, dataflow,
etc. It's hard to keep up with these things, even if you have
the time. Also, many programmers are just "9 to 5"ers who don't
particularly care about new ways of doing things. It's all the
same, as long as they get paid the same. I get the feeling from
reading comp.lang.lisp (and some other language newsgroups) that
not many of us here are that kind of programmer. It's only natural
that we wonder why there are people who laugh or sneer at us, or
just ignore us.

On the other hand, those might not be typical experiences of most
Lisp programmers. I don't know, as most programmers I have contact
with either use C/C++ or Basic. I don't expect them to have a
positive attitude to a language they don't use. Some of them enjoy
a very negative attitude towards certain languages. I _hope_ that's
not typical.

> I've been following Dylan pretty closely for what seems like years
> now.  We'll see.  I think it will have to provide some pretty fancy
> tools and be used to implement some pretty significant applications
> before it gets taken seriously by the software development
> community (this has to include developers, managers, marketing,
> sales, press, educators).

Agreed. It'll be a few years before I'll be able to use it,
never mind commit to using it. Who knows when the Windows
version will be available, and for how much? I'm very keen
to see how it goes, tho.

> >So C/C++ will be around for a few decades after we all "know" that
> >it's dead? ;-)
> 
> How can you think otherwise?  :-)

Well, I'm already starting to use it as the output of other
compilers. That could be a good sign that I'm ready to stop
using a language. ;-)
-- 
"Internet? What's that?" -- Simon "CompuServe" Bates
http://cyber.sfgate.com/examiner/people/surfer.html
From: William D. Gooch
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <Pine.A32.3.90.941007145054.16592M-100000@swim5.eng.sematech.org>
On Fri, 7 Oct 1994, Cyber Surfer wrote:

> ....
> Perhaps when someone asks, "Why isn't Lisp used as much as C?",
> they're asking, "Why are there more jobs for C programmers than
> Lisp programmers?" ....

Well, how many C programmers does it take to screw in a light bulb?

(sorry, it's Friday)
From: Cyber Surfer
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <782074568snz@wildcard.demon.co.uk>
In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk "Jeff Dalton" writes:

> As well as ignoring lots of other evidence that slow is often
> acceptable.  Windows is pretty slow.  People run Macs w/ the
> cache off.  They use shells and other interpreters.

This is true. There are also people who can't understand why anyone
would use Windows. I encountered one of them a few weeks ago, and it
was a while before they finally accepted my justification for using
Windows, which was simply, "because I _can_". This seems to be a hard
concept for them to grasp.

I could also have said, "because I consider my time more important
than my machine's", but that's missing what I feel is the _real_
point, which is that I can do it if I choose to. It doesn't have
to be "optimal", esp since that word can be so hard to define in
this context.

Perhaps we should continue this by email, as I think we've reached
an old subject. ;-)
-- 
"Internet? What's that?" -- Simon "CompuServe" Bates
http://cyber.sfgate.com/examiner/people/surfer.html
From: Simon Brooke
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <CxMpD2.525@rheged.dircon.co.uk>
In article <··········@vertex.tor.hookup.net> ·····@RedRock.com (Bob Hutchison) writes:


   In <············@wildcard.demon.co.uk>, ············@wildcard.demon.co.uk (Cyber Surfer) writes:
   >In article <··········@relay.tor.hookup.net>
   >           ·····@RedRock.com "Bob Hutchison" writes:
   Well, I'm happy to say that for a number of years now the companies
   I've dealt with all understood very well that programmer's time was
   both valuable and limited.  Unfortunately they often didn't know what
                               ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   to do to increase the productivity of the programmer.  This, however,
   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   doesn't mean that these companies view programmer time as anything
   other than a resource to be managed.

Some time ago (about 1988, I think?) I remember that I used to quote
to everybody a study done by the US DoD into programmer productivity.
In this study, expert programmers in a number of different languages
(including Ada, LisP and ProLog, but I think several others) were
asked to code identical specifications in their preferred languages.
LisP came out first by a long lead. However (1) I've long since lost
the reference (anybody know it?); (2) This was just about the time
when X3J13 were driving their nails into the coffin of LisP, so modern
LisP programmers (if forced to use the aluminium book) would probably
be slower; (3) I doubt whether C++ would have been considered at the
time (too new).

The study might be worth doing again, if anybody can raise the funding.

-- 
·····@rheged.dircon.co.uk

			-- mens vacua in medio vacuo --
From: William D. Gooch
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <Pine.A32.3.91.941014091539.42306C-100000@swim5.eng.sematech.org>
On Thu, 13 Oct 1994, Simon Brooke wrote:

> .... This was just about the time
> when X3J13 were driving their nails into the coffin of LisP, ....

This seems to me to be extremely unfair to those who worked hard to put 
together a comprehensive and IMO high-quality standard for Common Lisp.  
Do you have any justification for this slam?  Did you offer your help?  

I don't think the X3J13 work in any way contributed to the slump in the 
Lisp market, which was well underway before the result of their efforts 
became widely available.
From: Neves
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <37m8r7$398@anaxagoras.ils.nwu.edu>
William D. Gooch (······@swim5.eng.sematech.org) wrote:
: On Thu, 13 Oct 1994, Simon Brooke wrote:

: > .... This was just about the time
: > when X3J13 were driving their nails into the coffin of LisP, ....

: This seems to me to be extremely unfair to those who worked hard to put 
: together a comprehensive and IMO high-quality standard for Common Lisp.  
: Do you have any justification for this slam?  Did you offer your help?  
I also think this is quite unfair.  Back then it was much worse, with the
different and incompatible dialects of Lisp like MacLisp, Zeta Lisp,
UCI Lisp, and Interlisp.  I don't believe Lisp is less popular now.
It is just that programming on microcomputers has exploded and Lisp
didn't participate in that explosion.

-David
From: Henry G. Baker
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <hbakerCxquDG.LEF@netcom.com>
In article <········································@swim5.eng.sematech.org> "William D. Gooch" <······@swim5.eng.sematech.org> writes:
>On Thu, 13 Oct 1994, Simon Brooke wrote:
>
>> .... This was just about the time
>> when X3J13 were driving their nails into the coffin of LisP, ....
>
>This seems to me to be extremely unfair to those who worked hard to put 
>together a comprehensive and IMO high-quality standard for Common Lisp.  
>Do you have any justification for this slam?  Did you offer your help?  

Any language revision which more than doubles the size of an already
bloated language is putting nails into its coffin.  A large language
is very expensive to write and maintain a compiler for, and nearly
impossible to properly tune for each architecture.  If much of the
tuning cannot be done via extremely general purpose optimizations
(e.g., deep 'inlining'), then the user is stuck with poor performance
of the built-in stuff, and if the language isn't reflective, then he
can't even 'go under the hood' and fix it himself.

Any language revision which adds more stuff which is impossible to
compile efficiently without a huge number of special-purpose ad hoc
hacks in the compiler puts nails into the coffins of its vendors, as
well as those of its users who must explain to their management why
Lisp is slow and getting slower instead of faster.  (Exactly the same
complaint is true of Ada.)

More details in the following papers:

"Critique of DIN Kernel Lisp Definition", Lisp & Symb. Comput. 4,4
(Mar 1992), 371-398.  (Really a critique of Common Lisp itself).  In
my ftp directory.

"CLOStrophobia: Its Etiology and Treatment".  ACM OOPS Messenger 2,4
(Oct 1991), 4-15.  In my ftp directory.

>I don't think the X3J13 work in any way contributed to the slump in the 
>Lisp market, which was well underway before the result of their efforts 
>became widely available.

The mere existence of a standardization committee which is known to be
making dramatic changes to a language provides enough of a 'FUD'
factor (fear, uncertainty, doubt) to damage the credibility of the
language.  Any dramatic changes in the language are guaranteed to
cause the marginal vendors to decide whether to 'fish or cut bait'.
In the Ada market, more than 50% of the vendors decided to bail out
rather than upgrade to Ada9X.

--------

This isn't to say that an 'object-oriented Lisp' isn't a good thing.
However, classes and methods grafted onto the back of a
non-class/method-oriented language end up looking like the hunchback
of Notre Dame.  Better that CLOS/Dylan should have been a totally new
language, without 450 pages of baggage to support.

      Henry Baker
      Read ftp.netcom.com:/pub/hbaker/README for info on ftp-able papers.
From: Simon Brooke
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <Cxxwx0.1nC@rheged.dircon.co.uk>
   In article
   <········································@swim5.eng.sematech.org>
   "William D. Gooch" <······@swim5.eng.sematech.org> writes:
   >On Thu, 13 Oct 1994, Simon Brooke wrote:
   >
   >> .... This was just about the time
   >> when X3J13 were driving their nails into the coffin of LisP, ....
   >
   >This seems to me to be extremely unfair to those who worked hard to put 
   >together a comprehensive and IMO high-quality standard for Common Lisp.  
   >Do you have any justification for this slam?  Did you offer your help?  

To deal with your questions in reverse order: 

(i) yes, I served on the British Standards Institution LisP committee
for a time (I'm not pretending my contribution was particularly
valuable).

(ii) There are a number of specific arguments I would advance (a) as
to the flawed nature of the Common LISP design, and (b) to defend the
claim that these flaws were a consequence of the commercial/political
axes being ground in X3J13.

(iia: Flaws in the language design)

(iia1) Prior to the definition of Common LISP, many LisP programmers
used an in-core development style. This style of development has
significant advantages: the development cycle is edit/test rather than
edit/load/test. More significantly, the code of a function actually on
the stack can be edited. By definition (CLtL 1, p347) Common LISP
comments are not read. Consequently, code edited in core loses its
documentation.

Richard Barbour's Procyon workaround, and the Envos Medley workaround,
are technically in breach of the standard definition, which
effectively prevents in-core development. A language definition which
prevents established practitioners working in their preferred manner
is broken.

(iia2) I don't think anyone any longer defends the decision to choose
a LISP2 -- that is, separate namespaces for functions and values. It
is especially true in LisP that code *is* data.

(iia3) Before the development of Common LISP a number of typographical
tricks were widespread in the LisP community which greatly assisted
the readability of code. As examples, it was common to capitalise
theInitialLettersOfEmbeddedWords in names. Similarly, we used to have
a convention that functions, methods etc had names Which Started With
a Capital Letter, whereas other variables had names which started in
lower case. INCOMMONLISPTHISISNOLONGERPOSSIBLE. Any language which
cannot tell the difference between an atom and AN ATOM is broken.

To put it another way, Portable Standard LisP (for example) is a
language for poets, but in Common LISP you can only shout.
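
For the record, this is the default reader behaviour being complained
about (a sketch):

;; The standard Common LISP reader upcases symbol names:
;; (eq 'atom 'ATOM)   => T
;; Case survives only behind escape bars:
;; (eq '|atom| 'ATOM) => NIL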

(iia4) The implementation of the sequence functions is a mess. It's a
shame, because one can admire the o'erleaping ambition, but such a
huge monolithic structure is required to make it work that any Common
LISP environment has to be unwieldy. If Common LISP had been
object-oriented from the bottom up, a la EuLisP, it would have worked;
but given that decision wasn't taken (and that really isn't X3J13's
fault -- O-O was too new at the time), it would have been better to
admit that lists and vectors are fundamentally different things.

(iia5) I remain unconvinced that keywords in lambda-lists are a good
idea. A number of points here: it is fundamental to the nature of LisP
that it is syntax-less and keyword-less -- that's a lot of what gives
it its elegance, and what allows a LisP interpreter to be so small
and simple. A Common LISP interpreter must include a parser to handle
lambda lists, and once again is neither small nor simple.

(iia6) I have complained often enough before about the abomination,
SETF. I rehearse my objections briefly. Destructively altering lists
may be dangerous, and should always be done consciously and with care.
If you use RPLAC, you know what you are doing. SETF makes
destructive change invisible to the naive user: it says 'take this bit
of memory, I don't know where it is, I don't know who owns it, I don't
know who else is holding pointers to it, and trample all over it'.
Its direct equivalent is the BASIC keyword, POKE. I *shudder*.
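
The complaint, concretely (a sketch):

(defvar *x* (list 1 2 3))
;; Both forms destructively overwrite the first cons of *X*:
(rplaca *x* 99)       ; explicitly destructive operator
(setf (car *x*) 99)   ; same effect, but reads like plain assignment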

Note that in critiquing the language definition, I have not
reiterated Henry Baker's objections (see message id
<················@netcom.com>) to the sheer size of the language as
defined, although I share them, and for very much the reasons he
states.

(iib) These flaws are consequences of political goals

At the time of the formation of X3J13, overwhelmingly the largest
company seriously involved in the LisP business was Xerox. Xerox's
InterLISP was big and bloated and inconsistent enough, God knows, but
it was nevertheless a wonderful programmers toolkit. Furthermore,
the Xerox D series machine, although expensive in itself, was very
substantially cheaper than competing LisP machines.

Given this circumstance, I am convinced by and happy to repeat
publicly the allegation that has been made frequently in the past that
the essential aim of a substantial number of the participants in X3J13
was to make Common LISP as different as possible from InterLISP, in
order to make it more difficult for Xerox to compete. 

This allegation explains both the comments system and the choice of
LISP2, two decisions each of which are otherwise inexplicable. I am
prepared to believe the claim that the case-insensitive reader was a
requirement of the United States Department of Defense.

In summary, I claim that in a number of instances, X3J13 deliberately
and knowingly chose the less good of technical choices, in order to
produce a language closer to that of the smaller vendors, and further
from that of Xerox.

You say:

   >I don't think the X3J13 work in any way contributed to the slump in the 
   >Lisp market, which was well underway before the result of their efforts 
   >became widely available.

I hope that you may be right, but do not myself agree. I believe, and
I guess that you do, that languages within the LisP family offer at
the very least a better programming idiom than C++. Ten years ago,
machines which could run a big LisP system were at last becoming
cheap, and all the major computer vendors were showing a considerable
interest in the language.

Five years later that interest had waned. It's no co-incidence, I
think, that this was contemporaneous with the introduction of a new,
'standard' version of the language so complex that compilers were
exceedingly difficult to develop, optimise and maintain, and so
different from existing, well established variants of the language
that experienced programmers were at a loss. 

Although, as I say, I believe that LisP is at least as good as the
languages in common use today as a language for the development of
complex software systems, I doubt whether its decline can now be
arrested.  I do not believe your claim that '...the slump in the LisP
market...'  was '...well underway...' in 1984, when the aluminium book
was published. Remember, the middle eighties were the period of the
Japanese '5th Generation Project', the British Alvey Programme, and a
number of similar initiatives throughout the world. The same period saw
the founding of Harlequin and the commercialisation of POPLOG. These
things taken together seem to me to indicate considerable strength in
the LisP market.

I believe that the slump (but am prepared to listen to contrary
evidence) began subsequent to the publication of the aluminium book.
An argument _post hoc ergo propter hoc_ is liable to be flawed, I
know. That doesn't make it false.
-- 
---------------
"There is no point in making further promises now about reducing
taxation: nobody believes us." Edward Heath, October 1994
From: Scott Fahlman
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <3854ul$r5r@cantaloupe.srv.cs.cmu.edu>
Simon,

This pile of conspiracy theories about Lisp (or "LisP" as you call it)
is truly amazing.  Actually, Common Lisp got screwed up because Howard
Hughes and Elvis, alive and hiding in Argentina, had a big investment
in AT&T and wanted C++ to succeed.  So they hired the Saucer People to
use their confusion rays on the brains of software managers worldwide,
making them choose C++ despite the screams and curses of their
programmers.

But seriously...  I won't try to refute all of the points in your
message that I disagree with -- just about everything -- but will
address a couple of them.

   At the time of the formation of X3J13, overwhelmingly the largest
   company seriously involved in the LisP business was Xerox. Xerox's
   InterLISP was big and bloated and inconsistent enough, God knows, but
   it was nevertheless a wonderful programmer's toolkit. Furthermore,
   the Xerox D series machine, although expensive in itself, was very
   substantially cheaper than competing LisP machines.

Well, the largest company doing anything with Lisp implementation was
probably IBM.  If you want to look just at the size of the Lisp
effort, I'm not sure that the Xerox effort was larger than others,
such as Symbolics, TI, and later Lucid.  And Xerox's Lisp machines
were killed more by Xerox's ambivalence about being in the computer
business than anything else, though their decision to stay apart from
the Common Lisp effort until very late in the game also hurt.

   Given this circumstance, I am convinced by and happy to repeat
   publicly the allegation that has been made frequently in the past that
   the essential aim of a substantial number of the participants in X3J13
   was to make Common LISP as different as possible from InterLISP, in
   order to make it more difficult for Xerox to compete. 

Horseshit.  At the time the Common Lisp effort started, there was a
deep split between the Interlisp and Maclisp/Zetalisp worlds (with
assorted minor Lisps mostly closer to Maclisp).  The Xerox people
declined to participate in the Common Lisp effort early on, so the
language naturally reflected the established tastes of the Maclisp
clan.

Interestingly, by the time X3J13 was formed, Xerox had decided to
embrace Common Lisp after all.  Larry Masinter took over from me as
chairman of the cleanup committee (which dealt with the kinds of
issues you point to), and Danny Bobrow and Gregor Kiczales played a
leading role in the definition of CLOS.  So the suggestion that X3J13
was a conspiracy to wound Xerox is ridiculous.

   This allegation explains both the comments system and the choice of
   LISP2, two decisions each of which are otherwise inexplicable. I am
   prepared to believe the claim that the case-insensitive reader was a
   requirement of the United States Department of Defense.

Any technical decision you disagree with is inexplicable, unless it's
part of a political conspiracy?  You must be lots of fun to work with.

The case-insensitive reader has nothing to do with DoD requirements.
It was the heritage of Maclisp and the general culture on the
PDP-6/10/20.  It was pretty much the norm throughout all of
computerdom until Unix and the Saucer People started rotting people's
brains.  :-)

Actually, the issue of case sensitivity had more to do with the
formation of X3J13 than any other technical/political issue.  Everyone
in the "gang of five" who were coordinating informal the Common Lisp
design process favored Common Lisp's current rules (case-insensitive,
sort of).  The Franz Inc. people, who were not represented in the gang
of five, vehemently disagreed with this decision and started talking
about lawsuits.  This caused us all to realize that we'd better start
doing things in a more formal way that would give us some legal cover.
If there ever was a conspiracy, X3j13 was the end of it, not the
beginning.

I do think that this transition to a formal standards effort did
considerable harm to the Common Lisp design.  It slowed down the
cleanup process tremendously, at a time when there were still many
loose ends.  If we had been able to push forward informally for
another year or so, the language would have been much improved.
However, I doubt that this would have made much difference to CL's
future.

   ... Ten years ago,
   machines which could run a big LisP system were at last becoming
   cheap, and all the major computer vendors were showing a considerable
   interest in the language.

   Five years later that interest had waned. It's no co-incidence, I
   think, that this was contemporaneous with the introduction of a new,
   'standard' version of the language so complex that compilers were
   exceedingly difficult to develop, optimise and maintain, and so
   different from existing, well established variants of the language
   that experienced programmers were at a loss. 

Your chronology is all screwed up.  The big "mainstream" vendors
became interested in Lisp only after Common Lisp was well on the way
to becoming a de facto standard, and (in my opinion) because of that
emerging standard.  The subsequent decline was due to many factors
including the following:

1. Common Lisp was not designed for small PC-class machines, and these
came to dominate the market in a way that none of us anticipated.

2. The AI boom ended.

3. C and C++ achieved enough dominance that positive feedback set in.
Everyone used them because they were the languages that everyone used.

   I do not believe your claim that '...the slump in the LisP
   market...'  was '...well underway...' in 1984, when the aluminium book
   was published.

The claim was that the slump in Lisp was well underway when X3J13 was
formed.  The publication of the first edition of CLtL was years
earlier, early in the AI boom.  Actually, I don't think the Lisp
market slumped until some years after X3J13 began its work.

-- Scott

===========================================================================
Scott E. Fahlman			Internet:  ····@cs.cmu.edu
Principal Research Scientist		Phone:     412 268-2575
School of Computer Science              Fax:       412 268-5576 (new!)
Carnegie Mellon University		Latitude:  40:26:46 N
5000 Forbes Avenue			Longitude: 79:56:55 W
Pittsburgh, PA 15213			Mood:      :-)
===========================================================================
From: Arun Welch
Subject: CL History (was Re: Why do people like C?)
Date: 
Message-ID: <WELCH.94Oct20153410@thor.oar.net>
Hokay, before things get too far out of hand on this discussion...
JonL wrote the following over 10 years ago (my how time flies!), and
it's been floating around ever since. It's been posted at least
once before, and I'm willing to make updates to a master copy if
people wish to add to it, and I'll send it on to Mark once I'm done for
inclusion in his archives. Scott, you're the only one of the "gang of
five" still following this, you might have some insights...



14-Sep-84  1636	·······@Xerox.ARPA 	Re: CL History      
Received: from XEROX.ARPA by SU-AI.ARPA with TCP; 14 Sep 84  16:35:17 PDT
Received: from Semillon.ms by ArpaGateway.ms ; 14 SEP 84 16:31:56 PDT
Date: 14 Sep 84 16:31 PDT
From: ·······@XEROX.ARPA
Subject: Re: CL History    
In-reply-to: Dick Gabriel <···@SU-AI.ARPA>'s message of 14 Sep 84 08:48
 PDT
To: ···@SU-AI.ARPA
cc: ······@TL-20A.ARPA, ·······@XEROX.ARPA

Below is a copy of a historical note which I sent to the group in August
1982, because the material was still fresh on my mind (and because it
was clear that Xerox was not then going to have a significant
involvement with Common Lisp, so that "history" would probably be my
last major contribution for some time to come).

Incidentally, the technical staff -- Larry, Bill, and myself -- have been
asked by Xerox management *not* to attend the meeting next week.  Beau
Sheil and Gary Moskovitz (the new head of the A.I. Systems Business
Unit) will represent Xerox.  Sigh.

------------------------------------------------------------------

Mail-from: Arpanet host MIT-MC rcvd at 24-AUG-82 1950-PDT
Date: 24 August 1982 22:44-EDT
From: Jon L White <JONL at MIT-MC>
Subject:  Roots of "Yu-Shiang Lisp"
To: JONL at MIT-MC, RPG at SU-AI, Guy.Steele at CMU-10A,
    Fahlman at CMU-10A
cc: MOON at MIT-MC, Shostak at SRI-CSL, Griss at UTAH-20, DLW at MIT-AI,
    RG at MIT-AI, GSB at MIT-ML, Brooks at CMU-20C,
    Scherliss at CMU-20C, Engelmore at USC-ISI, Balzer at USC-ISIB,
    Hedrick at RUTGERS


In a brief attempt to remember the roots of "Yu-Shiang Lisp", subsequently
named COMMON LISP, I searched my old mail files which are still on-line,
and found a few tidbits of history.  Mostly, my mail stuff got deleted,
but the "Call" for the conference at SRI on Apr 8, 1981, by Bob Engelmore
survived, along with an interchange, about a week after the "birth",
between Ed Feigenbaum and Scott Fahlman.  These I've packaged up in the
file at MIT-MC JONL;COMMON HIST along with Chuck Hedrick's overall summary
of the April 8 meeting.

I'd like to ask you all to jog your memory cells, and see if any of the 
uncertainties below can be filled in, and if additional significant
steps towards the CommonLisp can be identified.  Needless to say, this
listing is a view from where I was standing during those days.




Mar 12, 1981:  Bob Engelmore invites many Lisp implementors and users 
    from the ARPA community to a conference at SRI to clear up issues 
    surrounding the future of Lisp.  Since ARPA was diminishing its 
    support of Lisp development and maintenance, his "call" may have
had the seeds of CommonLisp in its second paragraph:
      " . . .   There are now several respectable Lisp dialects in
       use, and others under development.  The efficiency,
       transportability and programming environment varies significantly
       from one to the other.  Although this pluralism will probably
       continue indefinitely, perhaps we can identify a single
       "community standard" that can be maintained, documented and
       distributed in a professional way, as was done with Interlisp
       for many years. "

Apr 8, 1981:  Moby meeting at SRI.  InterLisp crowd appears to be unified;
Scott Fahlman characterises the post-MacLisp crowd as 4 horses going in 5
directions.  Nils Nilsson, over a glass of beer, asks Jonl to join SRI in
beginning a Lisp development and maintenance center; Jonl insists on RPG
being a principal of the effort.  The advantages of a Lisp with Arpa
support, which "... can be maintained, documented and distributed in a
professional way ...", appeared enormous.

Apr 9, 1981:  RPG, Jonl, and GLS ride in a cramped car to Livermore,
during which time the prospect of merging the Vax/NIL, S-1/NIL, and
Spice/Lisp projects is proposed to GLS.  Some technical obstacles are
worked out.  Later that night, at Brian Reed's house, SEF is apprised of
the prospects.  He too quickly realizes the advantages of a common dialect
when presenting plans to funding agencies; more technical details are
worked out, in particular the administrative plan of the CMU folks that
the manual will be written first before coding commences, and the manual
will be under the control of GLS.

Apr 10, 1981: Jonl and RPG meet with Nils Nilsson, Gary Hendrix, Karl
Leavitt, Jack Goldberg, and Rob Shostack; brief outline is made of what
SRI would contribute, what Lawrence-Livermore would contribute, and what
CMU would contribute.  Nils takes plans to Arpa to "get a reading".

Apr 13, 1981: More meetings between RPG, Jonl, and Goldberg, Leavitt,
Shostack.  SRI has a proposal for a "portable InterLisp" in the works, and
the NIL/Spice plan is to be merged with that project, under the CSL
section.  Details are worked out about how CMU will retain "ownership" of
the manual, but SRI will be a distribution center.

Later that week:  Nils reports mixed reception in Washington from Arpa.
SEF and GLS are already back at CMU.  Plans are made to meet at CMU
sometime "soon" since the S-1/NIL group will be re-locating to CMU for the
summer.

Next week:  Feigenbaum gives tacit approval to the plan, in an Arpa-Net
    letter to SEF.  Such support is received with joy.

May 1981: Jonl and Shostak prepare a written plan for SRI involvement,
   with a view to obtaining ARPA funding.

First week of June (Saturday):  Meeting at CMU to resolve particular
    language issues.  Attending were GLS, SEF, RPG, JONL, Bill
    Scherliss, and Rod Brooks.  A lot of time was spent on treatment
    of Multiple-values; NIL versus () remains unresolved.  Lunch is
    had at Ali Baba's, and the name Yu-Shiang Lisp is proposed to
    replace Spice Lisp; also proposed is to retain the generic name
    NIL, but to specialize between Spice/NIL, S-1/NIL, Vax/NIL etc.
    Importance is recognized of bringing in the other post-MacLisp
    groups, notably Symbolics and LMI.

July: Report from ARPA looks negative for any funding for proposal from
SRI.

Summer:  Symbolics greets the idea of a "standardizing" with much
    support.  Noftsker in particular deems it desirable to have a
    common dialect on the Vax through which potential LispMachine
    customers can be exposed to Lisp.  Moon pushes for a name, which
    by default seems to be heading for CommonLisp.  GLS produces the
    "Swiss Cheese" edition of the Spice Lisp manual.

Sept: Change in administration in ARPA casts new light on SRI hopes:
    A big "smile" is offered to the plan, it is met with approval, but
    but not with money.  Later on, it appears that hopes for an ARPA
    proposal are futile; word is around even that ARPA is pulling out
    of the InterLisp/VAX support.

Last week of November 1981: Meeting in Cambridge, at Symbolics, to resolve
many issues; excellent "footwork" done by GLS to get a written notebook to
each attendee of the various issues, along with a Ballot sheet.  First day
goes moderately; second day degenerates into much flaming.  Many hard
issues postponed.  Several other groups were now "aboard", in particular
the InterLisp community sent Bill vanMelle as an observer.

[At some point in time, RPG contacted the Utah people to get them
 interested.  Also, RPG dealt with Masinter as a representative of the
 InterLisp Community?  Bill Woods at BBN also expresses interest in
 the development, so that InterLisp can keep "up to date".]

Fall 1981:  Michael Smith, major sales representative of DEC, asks for
    advice on getting DEC into the lisp market.  Both outside customers
    and internal projects make it imperative that DEC do something soon.
    Internally, Chinnaswamy in Engineering at the Marlborough plant, and
    John Ulrich in the new "Knowledge Engineering" project at Tewksbury
    apply internal pressure for DEC to take action quickly.

Mid December, 1981:  Sam Fuller calls several people in to DEC for
consultation about what DEC can do to support Lisp.  Jonl makes a case for
DEC joining the CommonLisp bandwagon, rather than any of the other options
namely:  jump in wholeheartedly behind InterLisp/VAX, or behind Vax/NIL,
or (most likely) strike out afresh with their own DEC Lisp.  Chuck
Hedrick is given a contract by DEC's LCG (the TOPS-20 people) to do an
extended-addressing 20-Lisp, of whatever flavor is decided upon by the
VAX group.

Jan 1982: DEC gives CMU a short contract to develop a CommonLisp on the
VAX.

Spring 1982: Discussion continues via ARPA-net mails, culminating in a
    very productive day long session at CMU on Aug 21, 1981.

18-Dec-81  0918	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	information about Common Lisp implementation  
Date: 18 Dec 1981 1214-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: information about Common Lisp implementation
To: rpg at SU-AI, jonl at MIT-AI

We are about to sign a contract with DEC's LCG whereby they sponsor us
to produce an extended addressing Lisp.  We are still discussing whether
this should be Interlisp or Common Lisp.  I can see good arguments in
both directions, and do not have a strong preference, but I would
slightly prefer Common Lisp.  Do you know whether there are any
implementations of Common Lisp, or something reasonably close to it? I
am reconciled to producing my own "kernel", probably in assembly
language, though I have some other candidates in mind too. But I would
prefer not to have to do all of the Lisp code from scratch.

As you may know, DEC is probably going to support a Lisp for the VAX. My
guess is that we will be very likely to do the same dialect that  is
decided upon there.  The one exception would be if it looks like MIT (or
someone else) is going to do an extended implementation of Common Lisp.
If so, then we would probably do Interlisp, for completeness.

We have some experience in Lisp implementation now, since Elisp (the
extended implementation of Rutgers/UCI Lisp) is essentially finished.
(I.e. there are some extensions I want to put in, and some optimizations,
but it does allow any sane R/UCI Lisp code to run.) The interpreter now
runs faster than the original R/UCI lisp interpreter. Compiled code is
slightly slower, but we think this is due to the fact that we are not
yet compiling some things in line that should be. (Even CAR is not
always done in line!)  The compiler is Utah's portable compiler,
modified for the R/UCI Lisp dialect.  It does about what you would want
a Lisp compiler to do, except that it does not open code arithmetic
(though a later compiler has some abilities in that direction).  I
suspect that for a Common Lisp implementation we would try to use the
PDP-10 Maclisp compiler as a base, unless it is too crufty to understand
or modify.  Changing compilers to produce extended code turns out not to
be a very difficult job.
-------

21-Dec-81  0702	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: Extended-addressing Common Lisp 
Date: 21 Dec 1981 0957-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Re: Extended-addressing Common Lisp
To: JONL at MIT-XX
cc: rpg at SU-AI
In-Reply-To: Your message of 18-Dec-81 1835-EST

thanks.  At the moment the problem is that DEC is not sure whether they
are interested in Common Lisp or Interlisp.  We will probably
follow the decision they make for the VAX, which should be done
sometime within a month.  What surprised me about that was that, from what
I can hear, one of Interlisp's main advantages was supposed to be that the
project was further along on the VAX than the NIL project.  That sounds
odd to me.  I thought NIL had been released.  You might want to talk
with some of the folks at DEC.  The only one I know is Kalman Reti,
·········@DEC-MARLBORO.
-------

21-Dec-81  1512	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Common Lisp
Date: 21 Dec 1981 1806-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Common Lisp
To: rpg at SU-AI, griss at UTAH-20

I just had a conversation with JonL which I found to be somewhat
unsettling.  I had hoped that Common Lisp was a sign that the Maclisp
community was willing to start doing a common development effort. It
begins to look like this is not the case.  It sounds to me like the most
we can hope for is a bunch of Lisps that will behave quite differently,
have completely different user facilities, but will have a common subset
of language facilities which will allow knowledgeable users to write
transportable code, if they are careful.  I.e. it looks a lot like the
old Standard Lisp effort, wherein you tried to tweak existing
implementations to support the Standard Lisp primitives.  I thought more
or less everyone agreed that hadn't worked so well, which is why the
new efforts at Utah aim to do something really transportable.  I thought
everybody agreed that these days the way you did a Lisp was to write
some small kernel in an implementation language, and then have a lot of
Lisp code, and that the Lisp code would be shared.

Supposing that we and DEC do agree to proceed with Common Lisp, would
you be interested in starting a Common Lisp sub-conspiracy, i.e. a group
of people interested in a shared Common Lisp implementation?  While we
are going to have support from DEC, that support is going to be $70K
(including University overhead) which is going to be a drop in the
bucket if we have to do a whole system, rather than just a VM and some
tweaking.

-------

21-Dec-81  0717	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Re: Common Lisp      
Date: 21 Dec 1981 1012-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Re: Common Lisp   
To: RPG at SU-AI
In-Reply-To: Your message of 20-Dec-81 2304-EST

thanks.  Are you sure Utah is producing Common Lisp?  they have a thing
they call Standard Lisp, which is something completely different.  I have
never heard of a Common Lisp project there, and I work very closely with
their Lisp development people so I think I would have.
-------

I visited there the middle of last month for about 3 days and talked about
the technical side of Common Lisp being implemented in their style. Martin told
me that if we only insisted on a small virtual machine with most of the
rest in Lisp code from the Common Lisp people he'd like to do it.

I've been looking at their stuff pretty closely for the much behind schedule
Lisp evaluation thing and I'm pretty impressed with them. We discussed
grafting my S-1 Lisp compiler front end on top of their portable compiler.
			-rpg-
02-Jan-82  0908	Griss at UTAH-20 (Martin.Griss) 	Com L  
Date:  2 Jan 1982 1005-MST
From: Griss at UTAH-20 (Martin.Griss)
Subject: Com L
To: guy.steele at CMU-10A, rpg at SU-AI
cc: griss at UTAH-20

I have retrieved the revisions and decisions, will look them over.
I will try to set up arrangements to be at POPL Monday-Wednesday,
depends on flights,

What is Common LISP schedule, next meeting, etc? Will we be invited to
attend, or is this one of the topics for us to discuss, etc. at POPL.
What in fact are we to discuss, and what should I be thinking about.
As I explained, I hope to finish this round of PSL implementation
on DEC-20, VAX and maybe even first version on 68000 by then.
We then will fill in some missing features, and start bringing up REDUCE,
meta-compiler, BIGfloats, and PictureRLISP graphics. At that point I
have accomplished a significant amount of my NSF goals this year.

Next step is to significantly improve PSL, SYSLISP, merge with Mode Analysis
phase for improved LISP<->SYSLISP communications and efficiency.

At the same time, we will be looking over various LISP systems to see
what sort of good features can be adapted, and what sort of
compatibility packages are needed (e.g., UCI-LISP package, FranzLISP
package, etc.).

It's certainly in this phase that I could easily attempt to modify PSL
to provide a CommonLISP kernel, assuming that we have not already
adapted much of the code.
M
-------

15-Jan-82  0109	RPG   	Rutgers lisp development project 
 14-Jan-82  1625	HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility) 	Rutgers lisp development project    
Mail-from: ARPANET site RUTGERS rcvd at 13-Jan-82 2146-PST
Date: 14 Jan 1982 0044-EST
From: HEDRICK at RUTGERS (Mngr DEC-20's/Dir LCSR Comp Facility)
Subject: Rutgers lisp development project
To: bboard at RUTGERS, griss at UTAH-20, admin.mrc at SU-SCORE, jsol at RUTGERS
Remailed-date: 14 Jan 1982 1622-PST
Remailed-from: Mark Crispin
Remailed-to: Feigenbaum at SUMEX-AIM, REG at SU-AI

It now appears that we are going to do an implementation of Common Lisp
for the DEC-20.  This project is being funded by DEC.

		Why are we doing this project at all?

This project is being done because a number of our researchers are going
to want to be able to move their programs to other systems than the
DEC-20.  We are proposing to get personal machines over the next few
years.  SRI has already run into problems in trying to give AIMDS to
someone that only has a VAX.  Thus we think our users are going to want
to move to a dialect that is widely portable.

Also, newer dialects have some useful new features.  Although these
features can be put into Elisp, doing so will introduce
incompatibilities with old programs.  R/UCI Lisp already has too many
inconsistencies introduced by its long history.  It is probably better
to start with a dialect that has been designed in a coherent fashion.

			Why Common Lisp?

There are only three dialects of Lisp that are in wide use within the
U.S. on a variety of systems:  Interlisp, meta-Maclisp, and Standard
Lisp.  (By meta-Maclisp I mean a family of dialects that are all
related to Maclisp and generally share ideas.)  Of these, Standard Lisp
has a reputation of not being as "rich" a language, and in fact is not
taken seriously by most sites.  This is not entirely fair, but there is
probably nothing we can do about that fact at this stage. So we are left
with Interlisp and meta-Maclisp.  A number of implementors from the
Maclisp family have gotten together to define a common dialect that
combines the best features of their various dialects, while still being
reasonable in size.  A manual is being produced for it, and once
finished will remain reasonably stable.  (Can you believe it?
Documentation before coding!)  This dialect is now called Common Lisp.
The advantages of Common Lisp over Interlisp are:

  - outside of BBN and Xerox, the Lisp development efforts now going on
	all seem to be in the Maclisp family, and now are being
	redirected towards Common Lisp.  These efforts include 
	CMU, the Lisp Machine companies (Symbolics, LMI), LRL and MIT.

  - Interlisp has some features, particularly the spaghetti stack,
	that make it impossible to implement as efficiently and cleanly
	as Common Lisp.  (Note that it is possible to get as good
	efficiency out of compiled code if you do not use these features,
	and if you use special techniques when compiling.  However that
	doesn't help the interpreter, and is not as clean.)

  - Because of these complexities in Interlisp, implementation is a
	large and complex job.  ARPA funded a fairly large effort at
	ISI, and even that seems to be marginal.  This comment is based
	on the report on the ISI project produced by Larry Masinter,
	<lisp>interlisp-vax-rpt.txt.  Our only hope would be to take
	the ISI implementation and attempt to transport it to the 20.
	I am concerned that the result of this would be extremely slow.
	I am also concerned that we might turn out not to have the
	resources necessary to do a good job of it.

  - There seems to be a general feeling that Common Lisp will have a
	number of attractive features as a language.  (Notice that I am
	not talking about user facilities, which will no doubt take some
	time before they reach the level of Interlisp.)  Even people
	within Arpa are starting to talk about it as the language of the
	future.  I am not personally convinced that it is seriously
	superior to Interlisp, but it is as good (again, at the language
	level), and the general Maclisp community seems to have a number
	of ideas that are significantly in advance of what is likely to
	show up in Interlisp with the current support available for it.

There are two serious disadvantages of Common Lisp:

  - It does not exist yet.  As of this week, there now seem to be
	sufficient resources committed to it that we can be sure it will
	be implemented.  The following projects are now committed, at a
	level sufficient for success:  VAX (CMU), DEC-20 (Rutgers), PERQ
	and other related machines (CMU), Lisp Machine (Symbolics), S-1
	(LRL).  I believe this is sufficient to give the language a
	"critical mass".

  - It does not have user facilities defined for it.  CMU is heavily
	committed to the Spice (PERQ) implementation, and will produce
	the appropriate tools.  They appear to be funded sufficiently
	that this will happen.

		 Why is DEC funding it, and what will be
		 	our relationship with them?

LCG (the group within DEC that is responsible for the DEC-20) is
interested in increasing the software that will support the full 30-bit
address space possible in the DEC-20 architecture.  (Our current
processor will only use 23 bits of this, but this is still much better
than what was supported by the old software, which is 18 bits.)  They
are proceeding at a reasonable rate with the software that is supported
by DEC.  However they recognize that many important languages were
developed outside of DEC, and that it will not be practical for them
to develop large-address-space implementations of all of them in-house.
Thus DEC is attempting to find places that are working on the more
important of these languages, and they are funding efforts to develop
large address versions.  They are sponsoring us for Lisp, and Utah
for C.  Pascal is being done in a slightly complex fashion.  (In fact
some of our support from DEC is for Pascal.)

DEC does not expect to make money directly from these projects.  We will
maintain control over the software we develop, and could sell support
for it if we wanted to. We are, of course, expected to make the software
widely available. (Most likely we will submit it to DECUS but also
distribute it ourselves.)  What DEC gets out of it is that the large
address space DEC-20 will have a larger variety of software available
for it than otherwise.  I believe this will be an important point for
them in the long run, since no one is going to want to buy a machine for
which only the Fortran compiler can generate programs larger than 256K.
Thus they are facing the following facts:
  - they can't do things in house nearly as cheaply as universities
	can do them.
  - universities are no longer being as well funded to do language
	development, particularly not for the DEC-20.

			How will we go about it?

We have sufficient funding for one full-time person and one RA.  Both
DEC and Rutgers are very slow about paperwork.  But these people should
be in place sometime early this semester.  The implementation will
involve a small kernel, in assembly language, with the rest done in
Lisp.  We will get the Lisp code from CMU, and so will only have to do
the kernel.  This project seems to be the same size as the Elisp
project, which was done within a year using my spare time and a month or
so of Josh's time.  It seems clear that we have sufficient manpower. (If
you think maybe we have too much, I can only say that if we finish the
kernel sooner than planned, we will spend the time working on user
facilities, documentation, and helping users here convert to it.) CMU
plans to finish the VAX project in a year, with a preliminary version in
6 months and a polished release in a year.  Our target is similar.
-------

19-Jan-82  1448	Feigenbaum at SUMEX-AIM 	more on common lisp 
Scott:
	Here are some messages I received recently. I'm worried about
Hedrick and the Vax. I'm not too worried about Lisp Machine, you guys,
and us guys (S-1). I am also worried about Griss and Standard Lisp,
which wants to get on the bandwagon. I guess I'd like to settle kernel
stuff first, fluff later.

	I understand your worry about sequences etc. Maybe we could try
to split the effort of studying issues a little. I dunno. It was just
a spur of the moment thought.
			-rpg-

19-Jan-82  1448	Feigenbaum at SUMEX-AIM 	more on common lisp 
Date: 19 Jan 1982 1443-PST
From: Feigenbaum at SUMEX-AIM
Subject: more on common lisp
To:   gabriel at SU-AI

Mail-from: ARPANET host PARC-MAXC rcvd at 19-Jan-82 1331-PST
Date: 19 Jan 1982 13:12 PST
From: Masinter at PARC-MAXC
to: ··········@sumex-aim
Subject: Common Lisp- reply to Hedrick

It is a shame that such misinformation gets such rapid dissemination....

Date: 19 Jan 1982 12:57 PST
From: Masinter at PARC-MAXC
Subject: Re: CommonLisp at Rutgers
To: ·······@Rutgers
cc: Masinter

A copy of your message to "bboard at RUTGERS, griss at UTAH-20, admin.mrc at
SU-SCORE, jsol at RUTGERS" was forwarded to me. I would like to rebut some of
the points in it:

I think that Common Lisp has the potential for being a good lisp dialect which
will carry research forward in the future. I do not think, however, that people
should underestimate the amount of time before Common Lisp could possibly be a
reality.

The Common Lisp manual is nowhere near being complete. Given the current
rate of progress, the Common Lisp language definition would probably not be
resolved for two years--most of the hard issues have merely been deferred (e.g.,
T and NIL, multiple-values), and there are many parts of the manual which are
simply missing. Given the number of people who are joining into the discussion,
some drastic measures will have to be taken to resolve some of the more serious
problems within a reasonable timeframe (say a year).

Beyond that, the number of things which would have to be done to bring up a
new implementation of CommonLisp lead me to believe that the kernel for
another machine, such as the Dec-20, would take on the order of 5 man-years at
least. For many of the features in the manual, it is essential that they be built
into the kernel (most notably the arithmetic features and the multiple-value
mechanism) rather than in shared Lisp code. I believe that many of these may
make an implementation of Common Lisp more "difficult to implement efficiently
and cleanly" than Interlisp. 

I think that the Interlisp-VAX effort has been progressing quite well. They have
focused on the important problems before them, and are proceeding quite well. I
do not know for sure, but it is likely that they will deliver a useful system
complete with a programming environment long before the VAX/NIL project,
which has consumed much more resources. When you were interacting with the
group of Interlisp implementors at Xerox, BBN and ISI about implementing
Interlisp, we cautioned you about being optimistic about the amount of
manpower required. What seems to have happened is that you have come away
believing that Common Lisp would be easier to implement.  I don't think that is
the case by far.

Given your current manpower estimate (one full-time person and one RA) I do
not believe you have the critical mass to bring off a useful implementation of
Common Lisp. I would hate to see a replay of the previous situation with
Interlisp-VAX, where budgets were made and machines bought on the basis of a
hopeless software project. It is not that you are not competent to do a reasonable
job of implementation, it is just that creating a new implementation of an already
specified language is much much harder than merely creating a new
implementation of a language originally designed for another processor. 

I do think that an Interlisp-20 using extended virtual addressing might be
possible, given the amount of work that has gone into making Interlisp
transportable, the current number of compatible implementations (10, D, Jericho,
VAX) and the fact that Interlisp "grew up" in the Tenex/Tops-20 world, and that
some of the ordinarily more difficult problems, such as file names and operating
system conventions, are already tuned for that operating system. I think that a
year of your spare time and Josh for one month seems very thin.

Larry
-------

20-Jan-82  2132	Fahlman at CMU-20C 	Implementations
Date: 21 Jan 1982 0024-EST
From: Fahlman at CMU-20C
Subject: Implementations
To: rpg at SU-AI
cc: steele at CMU-20C, fahlman at CMU-20C

Dick,

I agree that, where a choice must be made, we should give first priority
to settling kernel-ish issues.  However, I think that the debate on
sequence functions is not detracting from more kernelish things, so I
see no reason not to go on with that.

Thanks for forwarding Masinter's note to me.  I found him to be awfully
pessimistic.  I believe that the white pages will be essentially complete
and in a form that just about all of us can agree on within two months.
Of course, the Vax NIL crowd (or anyone else, for that matter) could delay
ratification indefinitely, even if the rest of us have come together, but I
think we had best deal with that when the need arises.  We may have to
do something to force convergence if it does not occur naturally.  My
estimate may be a bit optimistic, but I don't see how anyone can look at
what has happened since last April and decide that the white pages will
not be done for two years.

Maybe Masinter's two years includes the time to develop all of the
yellow pages stuff -- editors, cross referencers, and so on.  If so, I
tend to agree with his estimate.  To an Interlisper, Common Lisp will
not offer all of the comforts of home until all this is done and stable,
and a couple of years is a fair estimate for all of this stuff, given
that we haven't really started thinking about this.  I certainly don't
expect the Interlisp folks to start flocking over until all this is
ready, but I think we will have the Perq and Vax implementations
together within 6 months or so and fairly stable within a year.

I had assumed that Guy had been keeping you informed of the negotiations
we have had with DEC on Common Lisp for VAX, but maybe he has not.  The
situation is this: DEC has been extremely eager to get a Common Lisp up
on Vax VMS, due to pressure from Schlumberger and some other customers,
plus their own internal plans for building some expert systems.  Vax NIL
is not officially abandoned, but looks more and more dubious to them,
and to the rest of us.  A couple of months ago, I proposed to DEC that
we could build them a fairly decent compiler just by adding a
post-processor to the Spice Lisp byte-code compiler.  This
post-processor would turn the simple byte codes into in-line Vax
instructions and the more complex ones into jumps off to hand-coded
functions.  Given this compiler, one could then get a Lisp system up
simply by using the Common Lisp in Common Lisp code that we have
developed for Spice.  The extra effort to do the Vax implementation
amounts to only a few man-months and, once it is done, the system will
be totally compatible with the Spice implementation and will track any
improvements.  With some additional optimizations and a bit of tuning,
the performance of this system should be comparable to any other Lisp on
the Vax, and probably better than Franz.
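
Schematically, the post-processor idea is this (a sketch only -- the
names, byte codes, and instruction choices here are invented for
illustration):

    ;; Simple byte codes expand into in-line VAX code; hairy ones
    ;; become a jump to a hand-coded support routine.
    (defun expand-byte-op (op)
      (case op
        (push-car  '("MOVL (R0),-(SP)"))             ; open-coded
        (push-cdr  '("MOVL 4(R0),-(SP)"))            ; open-coded
        (t (list (format nil "JSB HAIRY-~A" op)))))  ; out-of-line call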

DEC responded to this proposal with more enthusiasm than I expected.  It
is now nearly certain that they will be placing two DEC employees
(namely, ex-CMU grad students Dave McDonald and Walter van Roggen) here
in Pittsburgh to work on this, with consulting by Guy and me.  The goal
is to get a Common Lisp running on the Vax in six months, and to spend
the following 6 months tuning and polishing.  I feel confident that this
goal will be met.  The system will be done first for VMS, but I think we
have convinced DEC that they should invest the epsilon extra effort
needed to get a Unix version up as well.

So even if MIT totally drops the ball on VAX NIL, I think that it is a
pretty safe bet that a Common Lisp for Vax will be up within a year.  If
MIT wins, so much the better: the world will have a choice between a
hairy NIL and a basic Common Lisp implementation.

We are suggesting to Chuck Hedrick that he do essentially the same thing
to bring up a Common Lisp for the extended-address 20.  If he does, then
this implementation should be done in finite time as well, and should
end up being fully compatible with the other systems.  If he decides
instead to do a traditional brute-force implementation with lots of
assembly code, then I tend to agree with Masinter's view: it will take
forever.

I think we may have come up with an interesting kind of portability
here.  Anyway, I thought you would be interested in hearing all the
latest news on this.

-- Scott
-------

12-Sep-82  1623	RPG  	Vectors versus Arrays   
To:   common-lisp at SU-AI  

Watching the progress of the Common Lisp committee on the issue
of vectors over the past year I have come to the conclusion that
things are on the verge of being out of control. There isn't an
outstanding issue with regard to vectors versus arrays that
disturbs me especially as much as the trend of things - and almost
to the extent that I would consider removing S-1 Lisp from Common Lisp.

When we first started out there were vectors and arrays; strings and bit
vectors were vectors, and we had the situation where a useful data
structure - derivable from others, though it is - had a distinct name and
a set of facts about them that a novice user could understand without too
much trouble. At last November's meeting the Symbolics crowd convinced us
that changing things were too hard for them, so strings became
1-dimensional arrays. Now, after the most recent meeting, vectors have
been canned and we are left with `quick arrays' or `simple arrays' or
something (I guess they are 1-dimensional arrays, are named `simple
arrays', and are called `vectors'?).

Of course it is trivial to understand that `vectors' are a specialization
of n-dimensional arrays, but the other day McCarthy said something that
made me wonder about the idea of generalizing too far along these lines.
He said that mathematicians proceed by inventing a perfectly simple,
understandable object and then writing it up. Invariably someone comes
along a year later and says `you weren't thinking straight; your idea is
just a special case of x.' Things go on like this until we have things
like category theory that no one can really understand, but which have the
effect of being the most general generalization of everything.

There are two questions: one regarding where the generalization about vectors
and arrays should be, and one regarding how things have gone politically.

Perhaps in terms of pure programming language theory there is nothing
wrong with making vectors a special case of arrays, even to the extent of
making vector operations macros on array operations. However, imagine
explaining to a beginner, or a clear thinker, or your grandchildren, that
to get a `vector' you really make a `simple array' with all sorts of
bizarre options that simply inform the system that you want a streamlined
data structure. Imagine what you say when they ask you why you didn't just
include vectors to begin with.
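
Schematically (the spelling of the options is illustrative only):

    (make-array 5 :element-type t      ; the `simple array' route
                  :adjustable nil      ; to what is really just
                  :fill-pointer nil)   ; a vector

    (vector 1 2 3 4 5)                 ; what a primitive vector
                                       ; constructor looks like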

Well, you can then go on to explain the joys of generalizations, how
n-dimensional arrays are `the right thing,' and then imagine how you
answer the question:  `why, then, is the minimum maximum for n, 63?' I
guess that's 9 times easier to answer than if the minimum maximum were 7.

Clearly one can make this generalization and people can live with it. 
We could make the generalization that LIST can take some other options,
perhaps stating that we want a CDR-coded list, and it can define some
accessor functions, and some auxiliary storage, and make arrays a 
specialization of CONS cells, but that would be silly (wouldn't it??).

The point is that vectors are a useful enough concept to not need to suffer
being a specialization of something else.

The political point I will not make, but will leave to your imagination.

			-rpg-

12-Sep-82  1828	MOON at SCRC-TENEX 	Vectors versus Arrays    
Date: Sunday, 12 September 1982  21:23-EDT
From: MOON at SCRC-TENEX
To: Dick Gabriel <RPG at SU-AI>
Cc: common-lisp at SU-AI
Subject: Vectors versus Arrays   

I think the point here, which perhaps you don't agree with, is that
"vector" is not a useful concept to a user (why is a vector different from
a 1-dimensional array?)  It's only a useful concept to the implementor, who
thinks "vector = load the Lisp pointer into a base register and index off
of it", but "array = go call an interpretive subroutine to chase indirect
pointers", or the code-bummer, who thinks "vector = fast", "array = slow".
Removing the vector/array distinction from the guts of the language is in
much the same spirit as making the default arithmetic operators generic
across all types of numbers.

I don't think anyone from "the Symbolics crowd convinced us that changing
things were too hard for them"; our point was always that we thought it was
silly to put into a language designed in 1980 a feature that was only there
to save a few lines of code in the compiler for the VAX (and the S1), when
the language already requires declarations to achieve efficiency on those
machines.
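
(An illustrative declaration, spelling approximate:

    (defun first-three-sum (a)
      (declare (type (simple-array t (*)) a))
      (+ (aref a 0) (aref a 1) (aref a 2)))

Given the declaration, the compiler can open-code each AREF as a
base-register reference, exactly as it would a primitive vector.)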

If you have a reasonable rebuttal to this argument, I at least will listen.
It is important not to return to "four implementations going in four different
directions."

12-Sep-82  2131	Scott E. Fahlman <Fahlman at Cmu-20c> 	RPG on Vectors versus Arrays   
Date: Sunday, 12 September 1982  23:47-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>
To:   common-lisp at SU-AI
Subject: RPG on Vectors versus Arrays   


I'm sure each of us could design a better language than Common Lisp is
turning out to be, and that each of those languages would be different.
My taste is close to RPG's, I think: in general, I like primitives that
I can build with better than generalizations that I can specialize.
However, Common Lisp is politics, not art.  If we can come up with a
single language that we can all live with and use for real work, then we
will have accomplished a lot more than if we had individually gone off
and implemented N perfect Lisp systems.

When my grandchildren, if any, ask me why certain things turned out in
somewhat ugly ways, I will tell them that it is for the same reason that
slaves count as 3/5 of a person in the U.S. Constitution -- that is the
price you pay for keeping the South on board (or the North, depending).
A few such crocks are nothing to be ashamed of, as long as the language
is still something we all want to use.  Even with the recent spate of
ugly compromises, I think we're doing pretty well overall.

For the record, I too believe that Common Lisp would be a clearer and
more intuitive language if it provided a simple vector data type,
documented as such, and presented hairy multi-D arrays with fill
pointers and displacement as a kind of structure built out of these
vectors.  This is what we did in Spice Lisp, not to fit any particular
instruction set, but because it seemed obviously right, clear, and
easily maintainable.  I have always felt, and still feel, that the Lisp
Machine folks took a wrong turn very early when they decided to provide
a hairy array datatype as primary with simple vectors as a degenerate
case.

Well, we proposed that Common Lisp should uniformly do this our way,
with vectors as primary, and Symbolics refused to go along with this.  I
don't think this was an unreasonable refusal -- it would have required
an immense effort for them to convert, and most of them are now used to
their scheme and like it.  They have a big user community already,
unlike the rest of us.  So we have spent the last N months trying to
come up with a compromise whereby they could do things their way, we
could do things our way, and everything would still be portable and
non-confusing.

Unfortunately, these attempts to have it both ways led to all sorts of
confusing situations, and many of us gradually came to the conclusion
that, if we couldn't have things entirely our way, then doing things
pretty much the Lisp Machine way (with the addition of the simple-vector
hack) was the next best choice.  In my opinion, the current proposal is
slightly worse than making vectors primary, but not much worse, and it
is certainly something that I can live with.  The result in this case is
close to what Symbolics wanted all along, but I don't think this is the
result of any unreasonable political tactics on their part.  Of course,
if RPG is seriously unhappy with the current proposal, we will have to
try again.  There is always the possibility that the set of solutions
acceptable to RPG or to the S1 group does not intersect with the set
acceptable to Symbolics, and that a rift is inevitable, but let us hope
that it does not come down to that.

-- Scott

13-Sep-82  1133	RPG  	Reply to Moon on `Vectors versus Arrays'    
To:   common-lisp at SU-AI  
The difference to a user between a vector and an array is that an array is
a general object, with many features, and a vector is a commonly used
object with few features: in the array-is-king scheme one achieves a
vector via specialization.  An analogy can be made between arrays/vectors
and Swiss Army knives. A Swiss army knife is a fine piece of engineering;
and, having been at MIT for a while 10 years ago, I know that they are
well-loved there. However, though a keen chef might own a Swiss Army
knife, he uses his boning knife to de-bone - he could use his Swiss Army
knife via specialization. We all think of programs as programs, not as
categories with flow-of-control as mappings, and, though the latter
is correct, it is the cognitive overhead of it that makes us favor the
former over the latter.

To me the extra few lines of code in the compiler are meaningless (why
should a few extra lines bother the co-author of a 300-page compiler?); a
few extra lines of emitted code are not very relevant either if it comes
to that (it is, after all, an S-1).  Had I been concerned with saving `a
few lines of code in the compiler' you can trust that I would have spoken
up earlier about many other things.

The only point I am arguing is that the cognitive overhead of making
vectors a degenerate array *may* be too high.

			-rpg-

14-Sep-82  1823	JonL at PARC-MAXC 	Re: `Vectors versus Arrays',  and the original compromise    
Date: 14 Sep 1982 18:23 PDT
From: JonL at PARC-MAXC
Subject: Re: `Vectors versus Arrays',  and the original compromise
In-reply-to: RPG's message of 13 Sep 1982 1133-PDT
To: Dick Gabriel <RPG at SU-AI>, ····@mit-mc
cc: common-lisp at SU-AI

During the Nov 1981 CommonLisp meeting, the LispM folks (Symbolics, and 
RG, and RMS) were adamantly against having any datatype for "chunked" 
data other than arrays.  I thought, however, that some sort of compromise was
reached shortly afterwards, at least with the Symbolics folks, whereby VECTORs
and STRINGs would exist in CL pretty much the way they do in other lisps not
specifically intended for special purpose computers (e.g., StandardLisp, PSL,
Lisp/370, VAX/NIL etc).

It was admitted that the Lispm crowd could emulate these datatypes by some
trivial variations on their existing array mechanisms -- all that would be forced
on the Lispm crowd is some kind of type-integrity for vectors and strings, and
all that would be forced on the implementors of the other CLs would be the 
minimal amount for these two "primitive" datatypes.  Portable code ought to use
CHAR or equivalent rather than AREF on strings, but that wouldn't be required,
since all the generic operations would still work for vectors and strings.
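
That is, portable code would prefer the type-specific accessor, though
the generic one would remain legal:

    (char "common" 0)   ; => #\c  string-specific accessor
    (aref "common" 0)   ; => #\c  generic array accessor, also works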

So the questions to be asked are:
 1) How well have Lisps without fancy array facilities served their
    user community?  How well have they served the implementors
    of that lisp?   Franz and PDP10 MacLisp have only primitive
    array facilities, and most of the other mentioned lisps have nothing
    other than vectors and strings (and possibly bit vectors).   
 2) How much is the cost of requiring full-generality arrays to be
    part of the white pages?  For example, can it be assured that all
    memory management for them will be written in portable CL, and
    thus shared by all implementations?  How many different compilers
    will have to solve the "optimization" questions before the implementation
    dependent upon that compiler will run in real time?
 3) Could CL thrive with all the fancy stuff of arrays (leaders, fill pointers,
    and even multiple-dimensioning) in the yellow pages?  Could a CL
    system be reasonably built up from only the VECTOR- and STRING-
    specific operations (along with a primitive object-oriented thing, which for
    lack of a better name I'll call EXTENDs, as  in the NIL design)?  As one
    data point, I'll mention that VAX/NIL was so built, and clever things
    like Flavors were indeed built over the primitives provided.
I'd think that the carefully considered opinions of those doing implementations
on "stock" hardware should prevail, since the extra work engendered for the
special-purpose hardware folks has got to be truly trivial.

It turns out that I've moved from the "stock" camp into the "special-purpose"
camp, and thus in one sense favor the current LispM approach to index-
accessible data (one big uniform data frob, the ARRAY).   But this may
turn out to be relatively unimportant -- in talking with several sophisticated
Interlisp users, it seems that the more important issues for them are the ability 
to have arrays with user-tailorable accessing methods (I may have to remind 
you all that Interlisp doesn't even have multi-dimension arrays!), and the ability
to extend certain generic operators, like PLUS, to arrays (again, the reminder that
Interlisp currently has no standard for object-oriented programming, or for
procedural attachment).


04-Oct-82  2145	STEELE at CMU-20C 	/BALLOT/   
Date:  5 Oct 1982 0041-EDT
From: STEELE at CMU-20C
Subject: /BALLOT/
To: common-lisp at SU-AI
cc: b.steele at CMU-10A

?????????????????????????????????????????????????????????????????????????????
?  %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%  ?
?  %  =================================================================  %  ?
?  %  =  $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$  =  %  ?
?  %  =  $  +++++++++++++++++++++++++++++++++++++++++++++++++++++  $  =  %  ?
?  %  =  $  +  ###############################################  +  $  =  %  ?
?  %  =  $  +  #  /////////////////////////////////////////  #  +  $  =  %  ?
?  %  =  $  +  #  /  The October 1982 Common LISP Ballot  /  #  +  $  =  %  ?
?  %  =  $  +  #  /////////////////////////////////////////  #  +  $  =  %  ?
?  %  =  $  +  ###############################################  +  $  =  %  ?
?  %  =  $  +++++++++++++++++++++++++++++++++++++++++++++++++++++  $  =  %  ?
?  %  =  $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$  =  %  ?
?  %  =================================================================  %  ?
?  %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%  ?
?????????????????????????????????????????????????????????????????????????????

Here is what you have all been waiting for!  I need an indication of
consensus or lack thereof on the issues that have been discussed by
network mail since the August 1982 meeting, particularly on those issues
that were deferred for proposal for which proposals have now been made.

There are 28 questions, each requiring only a one-letter answer.  As always,
if you don't like any of the choices, answer "x".  To make my life easier
by permitting mechanical collation of responses, please respond as follows:
	(a) send a reply message to Guy.Steele @ CMU-10A.
	(b) *PLEASE* be sure the string "/BALLOT/" is in the subject line,
	    as it is in this message (the double quotes, not the slashes,
	    are metasyntactic!).
	(c) The very first non-blank line of your message should have
	    exactly 29 non-blank characters on it.  The first should be a
	    tilde ("~") and the rest should be your votes.
	    You may put spaces between the letters to improve readability.
	(d) Following the first non-blank line, place any remarks about
	    issues on which you voted "x".
Thank you for your help.  I would appreciate response by Friday, October 8.
--Guy

1.  How shall the case for a floating-point exponent specifier
output by PRINT and FORMAT be determined?
	(a) upper case, for example 3.5E6
	(b) lower case, for example 3.5e6
	(c) a switch
	(d) implementation-dependent

2.  Shall we change the name SETF to be SET?   (y) yes   (n) no

3.  Shall there be a type specifier QUOTE, such that (QUOTE x) = (MEMBER x)?
Then MEMBER can be eliminated; (MEMBER x y z) = (OR 'x 'y 'z).  Also one can
write such things as (OR INTEGER 'FOO) instead of (OR INTEGER (MEMBER FOO)).
	(y) yes   (n) no
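
[For illustration: (TYPEP X '(MEMBER RED GREEN)) tests whether X is one of
the symbols RED or GREEN; under this proposal it could equally be written
(TYPEP X '(OR 'RED 'GREEN)), since (QUOTE RED) = (MEMBER RED).]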

4.  Shall MOON's proposal for LOAD keywords, revised as shown below, be used?
	(y) yes   (n) no
----------------------------------------------------------------
Date: Wednesday, 25 August 1982, 14:01-EDT
From: David A. Moon <Moon at SCRC-TENEX at MIT-MC>
	[slightly revised]
Here is a revised proposal:

Keyword		Default		Meaning

:PACKAGE	NIL		NIL means use file's native package, non-NIL
				is a package or name of package to load into.

:VERBOSE	*LOAD-VERBOSE*	T means print a message saying what file is
				being loaded into which package.

:PRINT		NIL		T means print values of forms as they are evaluated.

:ERROR		T		T means handle errors normally; NIL means that
				a file-not-found error should return NIL
				rather than signalling an error.  LOAD returns
				the pathname (or truename??) of the file it
				loaded otherwise.

:SET-DEFAULT-PATHNAME	*LOAD-SET-DEFAULT-PATHNAME*
				T means update the pathname default
				for LOAD from the argument, NIL means don't.

:STREAM		NIL		Non-NIL means this is an open stream to be
				loaded from.  (In the Lisp machine, the
				:CHARACTERS message to the stream is used to
				determine whether it contains text or binary.)
				The pathname argument is presumed to be associated
				with the stream, in systems where that information
				is needed.

The global variables' default values are implementation dependent, according
to local conventions, and may be set by particular users according to their
personal taste.

I left out keywords to allow using a different set of defaults from the normal
one and to allow explicit control over whether a text file or a binary file
is being loaded, since these don't really seem necessary.  If we put them in,
the consistent names would be :DEFAULT-PATHNAME, :CHARACTERS, and :BINARY.
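
[For concreteness, a call mixing these keywords might look like

	(LOAD "INIT" :VERBOSE T :ERROR NIL)

which announces what is being loaded and, if the file cannot be found,
returns NIL rather than signalling an error.]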
----------------------------------------------------------------

5.  Shall closures over dynamic variables be removed from Common LISP?
	(y) yes   (n) no

6.  Shall LOOP, as summarized below, be included in Common LISP?
	(y) yes   (n) no
----------------------------------------------------------------
Date: 26 August 1982 18:51-EDT
From: David A. Moon <MOON at MIT-MC>

Here is an extremely brief summary of the proposed new LOOP design, which
has not yet been finalized.  Consult the writeup on LOOP in the Lisp
Machine manual or MIT LCS TM-169 for background information.  Constructive
comments are very welcome, but please reply to BUG-LOOP at MIT-ML, not to
me personally.

(LOOP form form...) repeatedly evaluates the forms.

In general the body of a loop consists of a series of clauses.  Each
clause is either: a series of one or more lists, which are forms to be
evaluated for effect, delimited by a symbol or the end of the loop; or
a clause-introducing symbol followed by idiosyncratic syntax for that
kind of clause.  Symbols are compared with SAMEPNAMEP.  Atoms other than
symbols are in error, except where a clause's idiosyncratic syntax permits.

1. Primary clauses

1.1 Iteration driving clauses

These clauses run a local variable through a series of values and/or
generate a test for when the iteration is complete.

REPEAT <count>
FOR/AS <var> ...
CYCLE <var> ...

  I won't go into the full syntax here.  Features include: setting
  to values before starting/on the first iteration/on iterations after
  the first; iterating through list elements/conses; iterating through
  sequence elements, forwards or backwards, with or without sequence-type
  declaration; iterating through arithmetic progressions.  CYCLE reverts
  to the beginning of the series when it runs out instead of terminating
  the iteration.

  It is also possible to control whether or not an end-test is generated
  and whether there is a special epilogue only evaluated when an individual
  end-test is triggered.

1.2 Prologue and Epilogue

INITIALLY form form...		forms to be evaluated before starting, but
				after binding local variables.
FINALLY form form...		forms to be evaluated after finishing.

1.3 Delimiter

DO	a sort of semicolon needed in odd situations to terminate a clause,
	for example between an INITIALLY clause and body forms when no named
	clause (e.g. an iteration-driving clause) intervenes.
	We prefer this over parenthesization of clauses because of the
	general philosophy that it is more important to make the simple cases
	as readable as possible than to make micro-improvements in the
	complicated cases.

1.4 Blockname

NAMED name		Gives the block generated by LOOP a name so that
			RETURN-FROM may be used.

This will be changed to conform with whatever is put into Common Lisp
for named PROGs and DOs, if necessary.

2. Relevant special forms

The following special forms are useful inside the body of a LOOP.  Note
that they need not appear at top level, but may be nested inside other
Lisp forms, most usefully bindings and conditionals.

(COLLECT <value> [USING <collection-mode>] [INTO <var>] [BACKWARDS]
		[FROM <initial-value>] [IF-NONE <expr>] [[TYPE] <type>])
This special form signals an error if not used lexically inside a LOOP.
Each time it is evaluated, <value> is evaluated and accumulated in a way
controlled by <collection-mode>; the default is to form an ordered list.
The accumulated values are returned from the LOOP if it is finished
normally, unless INTO is used to put them into a variable (which gets
bound locally to the LOOP).  Certain accumulation modes (boolean AND and
OR) cause immediate termination of the LOOP as soon as the result is known,
when not collecting into a variable.

Collection modes are extensible by the user.  A brief summary of predefined
ones includes aggregated boolean tests; lists (both element-by-element and
segment-by-segment); commutative/associative arithmetic operators (plus,
times, max, min, gcd, lcm, count); sets (union, intersection, adjoin);
forming a sequence (array, string).

Multiple COLLECT forms may appear in a single loop; they are checked for
compatibility (the return value cannot both be a list of values and a
sum of numbers, for example).

(RETURN value) returns immediately from a LOOP, as from any other block.
RETURN-FROM works too, of course.

(LOOP-FINISH) terminates the LOOP, executing the epilogue and returning
any value defined by a COLLECT special form.

[Should RESTART be interfaced to LOOP, or only be legal for plain blocks?]

3. Secondary clauses

These clauses are useful abbreviations for things that can also be done
using the primary clauses and Lisp special forms.  They exist to make
simple cases more readable.  As a matter of style, their use is strongly
discouraged in complex cases, especially those involving complex or
nested conditionals.

3.1 End tests

WHILE <expr>		(IF (NOT <expr>) (LOOP-FINISH))
UNTIL <expr>		(IF <expr> (LOOP-FINISH))

3.2 Conditionals

WHEN <expr> <clause>	The clause is performed conditionally.
IF <expr> <clause>	synonymous with WHEN
UNLESS <expr> <clause>	opposite of WHEN

AND <clause>		May be suffixed to a conditional.  These two
ELSE <clause>		might be flushed as over-complex.

3.3 Bindings

WITH <var> ...		Equivalent to wrapping LET around the LOOP.
			This exists to promote readability by decreasing
			indentation.

3.4 Return values

RETURN <expr>		synonymous with (RETURN <expr>)

COLLECT ...		synonymous with (COLLECT ...)
NCONC ...		synonymous with (COLLECT ... USING NCONC)
APPEND, SUM, COUNT, MINIMIZE, etc. are analogous
ALWAYS, NEVER, THEREIS	abbreviations for boolean collection

4. Extensibility

There are ways for users to define new iteration driving clauses which
I will not go into here.  The syntax is more flexible than the existing
path mechanism.

There are also ways to define new kinds of collection.

5. Compatibility

The second generation LOOP will accept most first-generation LOOP forms
and execute them in the same way, although this was not a primary goal.
Some complex (and unreadable!) forms will not execute the same way or
will be errors.

6. Documentation

We intend to come up with much better examples.  Examples are very
important for developing a sense of style, which is really what LOOP
is all about.
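
[A sketch, going by the summary above (the design is not final):

	(LOOP FOR X IN L
	      UNTIL (> X 10)
	      (COLLECT (* X X)))

runs X through the elements of L, finishes when the UNTIL test fires, and
returns the ordered list of squares accumulated by the COLLECT special
form.]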
----------------------------------------------------------------

7.  Regardless of the outcome of the previous question, shall CYCLE
be retained and be renamed LOOP, with the understanding that statements
of the construct must be non-atomic, and atoms as "statements" are
reserved for extensions, and any such extensions must be compatible
with the basic meaning as a pure iteration construct?
	(y) yes   (n) no

8.  Shall ARRAY-DIMENSION be changed by exchanging its arguments,
to have the array first and the axis number second, to parallel
other indexing operations?
	(y) yes   (n) no

9.  Shall MACROEXPAND, as described below, replace the current definition?
	(y) yes   (n) no
----------------------------------------------------------------
Date: Sunday, 29 August 1982, 21:26-EDT
From: David A. Moon <Moon at SCRC-TENEX at MIT-MC>

Here is my promised proposal, with some help from Alan.

MACRO-P becomes a predicate rather than a pseudo-predicate.
Everything on pages 92-93 (29July82) is flushed.

Everything, including the compiler, expands macros by calling MACROEXPAND
or MACROEXPAND-1.  A variable, *MACROEXPAND-HOOK*, is provided to allow
implementation of displacing, memoization, etc.

The easiest way to show the details of the proposal is as code.  I'll try to
make it exemplary.

(DEFVAR *MACROEXPAND-HOOK* 'FUNCALL)

(DEFUN MACROEXPAND (FORM &AUX CHANGED)
  "Keep expanding the form until it is not a macro-invocation"
  (LOOP (MULTIPLE-VALUE (FORM CHANGED) (MACROEXPAND-1 FORM))
	(IF (NOT CHANGED) (RETURN FORM))))

(DEFUN MACROEXPAND-1 (FORM)
  "If the form is a macro-invocation, return the expanded form and T.
  This is the only function that is allowed to call macro expander functions.
  *MACROEXPAND-HOOK* is used to allow memoization."
  (DECLARE (VALUES FORM CHANGED-FLAG))

  (COND ((AND (PAIRP FORM) (SYMBOLP (CAR FORM)) (MACRO-P (CAR FORM)))
	 (LET ((EXPANDER (---get expander function--- (CAR FORM))))
	   ---check for wrong number of arguments---
	   (VALUES (FUNCALL *MACROEXPAND-HOOK* EXPANDER FORM) T)))
	(T FORM)))

;You can set *MACROEXPAND-HOOK* to this to get traditional displacing
(DEFUN DISPLACING-MACROEXPAND-HOOK (EXPANDER FORM)
  (LET ((NEW-FORM (FUNCALL EXPANDER FORM)))
    (IF (ATOM NEW-FORM)
	(SETQ NEW-FORM `(PROGN ,NEW-FORM)))
    (RPLACA FORM (CAR NEW-FORM))
    (RPLACD FORM (CDR NEW-FORM))
    FORM))
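
[Thus a program wanting traditional displacing would simply say
(SETQ *MACROEXPAND-HOOK* 'DISPLACING-MACROEXPAND-HOOK).]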

The above definition of MACROEXPAND-1 is oversimplified, since it can
also expand other things, including lambda-macros (the subject of a separate
proposal that has not been sent yet) and possibly implementation-dependent
things (substs in the Lisp machine, for example).

The important point here is the division of labor.  MACROEXPAND-1 takes care
of checking the length of the macro-invocation to make sure it has the right
number of arguments [actually, the implementation is free to choose how much
of this is done by MACROEXPAND-1 and how much is done by code inserted into
the expander function by DEFMACRO].  The hook takes care of memoization.  The
macro expander function is only concerned with translating one form into
another, not with bookkeeping.  It is reasonable for certain kinds of
program-manipulation programs to bind the hook variable.

I introduced a second value from MACROEXPAND-1 instead of making MACROEXPAND
use the traditional EQ test.  Otherwise a subtle change would have been
required to DISPLACING-MACROEXPAND-HOOK, and some writers of hooks might get
it wrong occasionally, and their code would still work 90% of the time.


Other issues:

On page 93 it says that MACROEXPAND ignores local macros established by
MACROLET.  This is clearly incorrect; MACROEXPAND has to get called with an
appropriate lexical context available to it in the same way that EVAL does.
They are both parts of the interpreter.  I don't have anything to propose
about this now; I just want to point out that there is an issue.  I don't
think we need to deal with the issue immediately.

A related issue that must be brought up is whether the Common Lisp subset
should include primitives for accessing and storing macro-expansion
functions.  Currently there is only a special form (MACRO) to set a
macro-expander, and no corresponding function.  The Lisp machine expedient of
using the normal function-definition primitive (FDEFINE) with an argument of
(MACRO . expander) doesn't work in Common Lisp.  Currently there is a gross
way to get the macro expander function, but no reasonable way.  I don't have
a clear feeling whether there are programs that would otherwise be portable
except that they need these operations.
----------------------------------------------------------------

10.  Shall all global system-defined variables have names beginning
and ending with "*", for example *PRINLEVEL* instead of PRINLEVEL
and *READ-DEFAULT-FLOAT-FORMAT* instead of READ-DEFAULT-FLOAT-FORMAT?
	(y) yes   (n) no

11.  Same question for named constants (other than T and NIL), such as
*PI* for PI and *MOST-POSITIVE-FIXNUM* for MOST-POSITIVE-FIXNUM.
	(y) yes   (n) no   (o) yes, but use a character other than "*"

12.  Shall a checking form CHECK-TYPE be introduced as described below?
	(y) yes   (n) no
----------------------------------------------------------------
Date: Thursday, 26 August 1982, 03:04-EDT
From: David A. Moon <Moon at SCRC-TENEX at MIT-MC>

See p.275 of the 29 July Common Lisp manual and p.275 of the revision
handed out at the Lisp conference.

I suggest that we include CHECK-ARG-TYPE in the language.  Although
CHECK-ARG, CHECK-ARG-TYPE, and ASSERT have partially-overlapping
functionality, each has its own valuable uses and I think all three
ought to be in the language.

Note that CHECK-ARG and CHECK-ARG-TYPE are used when you want explicit
run-time checking, including but not limited to writing the interpreter
(which of course is written in Lisp, not machine language!).

The details:
CHECK-ARG-TYPE arg-name type &OPTIONAL type-string	[macro]

If (TYPEP arg-name 'type) is false, signal an error.  The error message
includes arg-name and a "pretty" English-language form of type, which
can be overridden by specifying type-string (this override is rarely
used).  Proceeding from the error sets arg-name to a new value and
makes the test again.

Currently arg-name must be a variable, but it should be generalized to
any SETF'able place.

type and type-string are not evaluated.

This isn't always used for checking arguments, since the value of any
variable can be checked, but it is usually used for arguments and there
isn't an alternate name that more clearly describes what it does.
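
[Illustrative use, where DOUBLE is a hypothetical function:

	(DEFUN DOUBLE (N)
	  (CHECK-ARG-TYPE N INTEGER)
	  (+ N N))

Calling DOUBLE with a non-integer signals an error naming N; proceeding
sets N to a new value and makes the test again.]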

Date: 2 Sep 1982 12:30 PDT
From: JonL at PARC-MAXC

PDP10 MacLisp and VAX/NIL have had the name CHECK-TYPE for several 
years for essentially this functionality (unless someone has recently renamed
it).   Since it is used to certify the type of any variable's value,  it did not
include the "-ARG" part.  The motivation was to have a "checker" which was
more succinct than CHECK-ARGS, but which would generally open-code the
type test (and hence introduce no delay to the non-error case).  

I rather prefer the semantics you suggested, namely that the second argument 
to CHECK-TYPE be a type name (given the CommonLisp treatment of type
hierarchy).  At some level, I'd think a "promise" of fast type checking should
be guaranteed (in compiled code) so that persons will prefer to use this
standardized facility;  without some indication of performance, one would
be tempted to write his own in order not to slow down the common case.
----------------------------------------------------------------

13.  Shall a checking form CHECK-SUBSEQUENCE be introduced as described below?
	(y) yes   (n) no
----------------------------------------------------------------
Date: 2 Sep 1982 12:30 PDT
From: JonL at PARC-MAXC

If the general sequence functions continue to thrive in CommonLisp, I'd
like to suggest that the corresponding CHECK-SUBSEQUENCE macro (or
whatever renaming of it should occur) be included in CommonLisp.  

  (CHECK-SUBSEQUENCE (<var> <start-index> <count>) &optional <typename>)

provides a way to certify that <var> holds a sequence datum of the type
<typename>, or of any suitable sequence type (e.g., LIST, or STRING or 
VECTOR etc) if <typename> is null; and that the indicated subsequence
in it is within the size limits.

[GLS: probably <end> is more appropriate than <count> for Common LISP.]
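
[Illustrative use, with the <end> amendment and hypothetical names:
(CHECK-SUBSEQUENCE (BUF START END) STRING) certifies that BUF holds a
string and that the subsequence from START to END is within its size
limits.]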
----------------------------------------------------------------

14.  Shall the functions LINE-OUT and STRING-OUT, eliminated in November,
be reinstated?
	(y) yes   (n) no

15.  Shall the REDUCE function be added as described below?
	(y) yes   (n) no
----------------------------------------------------------------
Date:  3 September 1982 1756-EDT (Friday)
From: Guy.Steele at CMU-10A

I would like to mildly re-propose the REDUCE function for Common
LISP, now that adding it would require only one new function, not ten
or fifteen:

REDUCE function sequence &KEY :START :END :FROM-END :INITIAL-VALUE
    The specified subsequence of "sequence" is reduced, using the "function"
    of two arguments.  The reduction is left-associative, unless
    :FROM-END is not false, in which case it is right-associative.
    If an :INITIAL-VALUE is given, it is logically placed before the
    "sequence" (after it if :FROM-END is true) and included in the
    reduction operation.  If no :INITIAL-VALUE is given, then the "sequence"
    must not be empty.  (An alternative specification: if no :INITIAL-VALUE
    is given, and "sequence" is empty, then "function" is called with
    zero arguments and the result returned.  How about that?  This idea
    courtesy of Dave Touretzky.)

    (REDUCE #'+ '(1 2 3 4)) => 10
    (REDUCE #'- '(1 2 3 4)) => -8
    (REDUCE #'- '(1 2 3 4) :FROM-END T) => -2   ;APL-style
    (REDUCE #'LIST '(1 2 3 4)) => (((1 2) 3) 4)
    (REDUCE #'LIST '(1 2 3 4) :FROM-END T) => (1 (2 (3 4)))
    (REDUCE #'LIST '(1 2 3 4) :INITIAL-VALUE 'FOO) => ((((FOO 1) 2) 3) 4)
    (REDUCE #'LIST '(1 2 3 4) :FROM-END T :INITIAL-VALUE 'FOO)
				 => (1 (2 (3 (4 FOO))))
----------------------------------------------------------------

16.  Shall the Bawden/Moon solution to the "invisible block" problem
be accepted?  The solution is to define (RETURN x) to mean precisely
(RETURN-FROM NIL x), and to specify that essentially all standard
iterators produce blocks named NIL.  A block with a name other than
NIL cannot capture a RETURN, only a RETURN-FROM with a matching name.
	(y) yes   (n) no
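
[Example: in

	(BLOCK FOO
	  (DO ((I 0 (1+ I)))
	      (NIL)
	    (IF (> (* I I) 100) (RETURN I))))

the RETURN exits the DO, whose implicit block is named NIL, not the block
FOO; exiting FOO from inside the DO requires (RETURN-FROM FOO ...).]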

17.  Shall the TAGBODY construct be incorporated?  This expresses just
the behavior of the GO aspect of a PROG.  Any atoms in the body
are not evaluated, but serve as tags that may be specified to GO.
Tags have lexical scope and dynamic extent.  TAGBODY always returns NIL.
	(y) yes   (n) no
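
[For example, assuming N is already bound:

	(TAGBODY
	 TOP (SETQ N (- N 1))
	     (IF (PLUSP N) (GO TOP)))

TOP is a tag rather than a form to be evaluated, and the TAGBODY returns
NIL when control falls off the end.]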

18.  What shall be done about RESTART?  The following alternatives seem to
be the most popular:
	(a) Have no RESTART form.
	(b) RESTART takes the name of a block.  What happens when you say
	    (RESTART NIL) must be clarified for most iteration constructs.
	(c) There is a new binding form called, say, RESTARTABLE.
	    Within (RESTARTABLE FOO . body), (RESTART FOO) acts as a jump
	    to the top of the body of the enclosing, matching RESTARTABLE form.
	    RESTART tags have lexical scope and dynamic extent.

19.  Shall there be a built-in identity function, and if so, what shall it
be called?
	(c) CR   (i) IDENTITY   (n) no such function

20.  Shall the #*... bit-string syntax replace #"..."?  That is, shall what
was before written #"10010" now be written #*10010 ?
	(y) yes   (n) no

21.  Which of the two outstanding array proposals (below) shall be adopted?
	(s) the "simple" proposal
	(r) the "RPG memorial" proposal
	(m) the "simple" proposal as amended by Moon
----------------------------------------------------------------
*********** "Simple" proposal **********
Date: Thursday, 16 September 1982  23:27-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>

Here is a revision of my array proposal, fixed up in response to some of
the feedback I've received.  See if you like it any better than the
original.  In particular, I have explicitly indicated that certain
redundant forms such as MAKE-VECTOR should be retained, and I have
removed the :PRINT keyword, since I now believe that it causes more
trouble than it is worth.  A revised printing proposal appears at the
end of the document.


Arrays can be 1-D or multi-D.  All arrays can be created by MAKE-ARRAY
and can be accessed with AREF.  Storage is done via SETF of an AREF.
The term VECTOR refers to any array of exactly one dimension.
Vectors are special, in that they are also sequences, and can be
referenced by ELT.  Also, only vectors can have fill pointers.

Vectors can be specialized along several distinct axes.  The first is by
the type of the elements, as specified by the :ELEMENT-TYPE keyword to
MAKE-ARRAY.  A vector whose element-type is STRING-CHAR is referred to
as a STRING.  Strings, when they print, use the "..." syntax; they also
are the legal inputs to a family of string-functions, as defined in the
manual.  A vector whose element-type is BIT (alias (MOD 2)), is a
BIT-VECTOR.  These are special because they form the set of legal inputs
to the boolean bit-vector functions.  (We might also want to print them
in a strange way -- see below.)

Some implementations may provide a special, highly efficient
representation for simple vectors.  A simple vector is (of course) 1-D,
cannot have a fill pointer, cannot be displaced, and cannot be altered
in size after its creation.  To get a simple vector, you use the :SIMPLE
keyword to MAKE-ARRAY with a non-null value.  If there are any
conflicting options specified, an error is signalled.  If an
implementation does not support simple vectors, this keyword/value is
ignored except that the error is still signalled on inconsistent cases.

We need a new set of type specifiers for simple things: SIMPLE-VECTOR,
SIMPLE-STRING, and SIMPLE-BIT-VECTOR, with the corresponding
type-predicate functions.  Simple vectors are referenced by AREF in the
usual way, but the user may use THE or DECLARE to indicate at
compile-time that the argument is simple, with a corresponding increase
in efficiency.  Implementations that do not support simple vectors
ignore the "simple" part of these declarations.

Strings (simple or non-simple) self-eval; all other arrays cause an
error when passed to EVAL.  EQUAL descends into strings, but not
into any other arrays.  EQUALP descends into arrays of all kinds,
comparing the corresponding elements with EQUALP.  EQUALP is false
if the array dimensions are not the same, but it is not sensitive to
the element-type of the array, whether it is simple, etc.  In comparing
the dimensions of vectors, EQUALP uses the length from 0 to the fill
pointer; it does not look at any elements beyond the fill pointer.

The set of type-specifiers required for all of this is ARRAY, VECTOR,
STRING, BIT-VECTOR, SIMPLE-VECTOR, SIMPLE-STRING, SIMPLE-BIT-VECTOR.
Each of these has a corresponding type-P predicate, and each can be
specified in list form, along with the element-type and dimension(s).

MAKE-ARRAY takes the following keywords: :ELEMENT-TYPE, :INITIAL-VALUE,
:INITIAL-CONTENTS, :FILL-POINTER, and :SIMPLE.  There is still some
discussion as to whether we should retain array displacement, which
requires :DISPLACED-TO and :DISPLACED-INDEX-OFFSET.

The following functions are redundant, but should be retained for
clarity and emphasis in code: MAKE-VECTOR, MAKE-STRING, MAKE-BIT-VECTOR.
MAKE-VECTOR takes the same keywords as MAKE-ARRAY, but can only take a
single integer as the dimension argument.  MAKE-STRING and
MAKE-BIT-VECTOR are like MAKE-VECTOR, but do not take the :ELEMENT-TYPE
keyword, since the element-type is implicit.  Similarly, we should
retain the forms VREF, CHAR, and BIT, which are identical in operation
to AREF, but which declare their array argument to be VECTOR, STRING, or
BIT-VECTOR, respectively.

If the :SIMPLE keyword is not specified to MAKE-ARRAY or related forms,
the default is NIL.  However, vectors produced by random forms such as
CONCATENATE are simple, and vectors created when the reader sees #(...)
or "..." are also simple.

As a general rule, arrays are printed in a simple format that, upon
being read back in, produces a form that is EQUALP to the original.
However, some information may be lost in the printing process:
element-type restrictions, whether a vector is simple, whether it has a
fill pointer, whether it is displaced, and the identity of any element
that lies beyond the fill pointer.  This choice was made to favor ease
of interactive use; if the user really wants to preserve in printed form
some complex data structure containing non-simple arrays, he will have
to develop his own printer.

A switch, SUPPRESS-ARRAY-PRINTING, is provided for users who have lots
of large arrays around and don't want to see them trying to print.  If
non-null, this switch causes all arrays except strings to print in a
short, non-readable form that does not include the elements:
#<array-...>.  In addition, the printing of arrays and vectors (but not
of strings) is subject to PRINLEVEL and PRINLENGTH.

Strings, simple or otherwise, print using the "..."  syntax.  Upon
read-in, the "..." syntax creates a simple string.

Bit-vectors, simple or otherwise, print using the #"101010..." syntax.
Upon read-in, this format produces a simple bit-vector.  Bit vectors do
observe SUPPRESS-ARRAY-PRINTING.

All other vectors print out using the #(...) syntax, observing
PRINLEVEL, PRINLENGTH, and SUPPRESS-ARRAY-PRINTING.  This format reads
in as a simple vector of element-type T.

All other arrays print out using the syntax #nA(...), where n is the
number of dimensions and the list is a nest of sublists n levels deep,
with the array elements at the deepest level.  This form observes
PRINLEVEL, PRINLENGTH, and SUPPRESS-ARRAY-PRINTING.  This format reads
in as an array of element-type T.

Query: I am still a bit uneasy about the funny string-like syntax for
bit vectors.  Clearly we need some way to read these in that does not
turn into a type-T vector.  An alternative might be to allow #(...) to
be a vector of element-type T, as it is now, but to take the #n(...)
syntax to mean a vector of element-type (MOD n).  A bit-vector would
then be #2(1 0 1 0...) and we would have a parallel notation available
for byte vectors, 32-bit word vectors, etc.  The use of the #n(...)
syntax to indicate the length of the vector always struck me as a bit
useless anyway.  One flaw in this scheme is that it does not extend to
multi-D arrays.  Before someone suggests it, let me say that I don't
like #nAm(...), where n is the rank and m is the element-type -- it
would be too hard to remember which number was which.  But even with
this flaw, the #n(...) syntax might be useful.

********** "RPG memorial" proposal **********
Date: Thursday, 23 September 1982  00:38-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>

Several people have stated that they dislike my earlier proposal because
it uses the good names (VECTOR, STRING, BIT-VECTOR, VREF, CHAR, BIT) on
general 1-D arrays, and makes the user say "simple" when he wants one of
the more specialized high-efficiency versions.  This makes extra work
for users, who will want simple vectors at least 95% of the time.  In
addition, there is the argument that simple vectors should be thought of
as a first-class data-type (in implementations that provide them) and
not as a mere degenerate form of array.

Just to see what it looks like, I have re-worked the earlier proposal to
give the good names to the simple forms.  This does not really eliminate
any of the classes in the earlier proposal, since each of those classes
had some attributes or operations that distinguished it from the others.

Since there are getting to be a lot of proposals around, we need some
nomenclature for future discussions.  My first attempt, with the
user-settable :PRINT option should be called the "print-switch"
proposal; the next one, with the heavy use of the :SIMPLE switch should
be the "simple-switch" proposal; this one can be called the "RPG
memorial" proposal.  Let me know what you think about this vs. the
simple-switch version -- I can live with either, but I really would like
to nail this down pretty soon so that we can get on with the
implementation.

Arrays can be 1-D or multi-D.  All arrays can be created by MAKE-ARRAY
and can be accessed with AREF.  Storage is done via SETF of an AREF.
1-D arrays are special, in that they are also of type SEQUENCE, and can
be referenced by ELT.  Also, only 1-D arrays can have fill pointers.

Some implementations may provide a special, highly efficient
representation for simple 1-D arrays, which will be of type VECTOR.  A
vector is 1-dimensional, cannot have a fill pointer, cannot be
displaced, and cannot be altered in size after its creation.  To get a
vector, you use the :VECTOR keyword to MAKE-ARRAY with a non-null value.
If there are any conflicting options specified, an error is signalled.
The MAKE-VECTOR form is equivalent to MAKE-ARRAY with :VECTOR T.

A STRING is a VECTOR whose element-type (specified by the :ELEMENT-TYPE
keyword) is STRING-CHAR.  Strings are special in that they print using
the "..." syntax, and they are legal inputs to a class of "string
functions".  Actually, these functions accept any 1-D array whose
element type is STRING-CHAR.  This more general class is called a
CHAR-SEQUENCE. 

A BIT-VECTOR is a VECTOR whose element-type is BIT, alias (MOD 2).
Bit-vectors are special in that they print using the #*... syntax, and
they are legal inputs to a class of boolean bit-vector functions.
Actually, these functions accept any 1-D array whose element-type is
BIT.  This more general class is called a BIT-SEQUENCE.

All arrays can be referenced via AREF, but in some implementations
additional efficiency can be obtained by declaring certain objects to be
vectors, strings, or bit-vectors.  This can be done by normal
type-declarations or by special accessing forms.  The form (VREF v n) is
equivalent to (AREF (THE VECTOR v) n).  The form (CHAR s n) is
equivalent to (AREF (THE STRING s) n).  The form (BIT b n) is equivalent
to (AREF (THE BIT-VECTOR b) n).

If an implementation does not support vectors, the :VECTOR keyword is
ignored except that the error is still signalled on inconsistent cases;
the additional restrictions on vectors are not enforced.  MAKE-VECTOR is
treated just like the equivalent make-array.  VECTORP is true of every
1-D array, STRINGP of every CHAR-SEQUENCE, and BIT-VECTORP of every
BIT-SEQUENCE.

CHAR-SEQUENCEs, including strings, self-eval; all other arrays cause an
error when passed to EVAL.  EQUAL descends into CHAR-SEQUENCEs, but not into
any other arrays.  EQUALP descends into arrays of all kinds, comparing
the corresponding elements with EQUALP.  EQUALP is false if the array
dimensions are not the same, but it is not sensitive to the element-type
of the array, whether it is a vector, etc.  In comparing the dimensions of
vectors, EQUALP uses the length from 0 to the fill pointer; it does not
look at any elements beyond the fill pointer.

The set of type-specifiers required for all of this is ARRAY, VECTOR,
STRING, BIT-VECTOR, SEQUENCE, CHAR-SEQUENCE, and BIT-SEQUENCE.
Each of these has a corresponding type-P predicate, and each can be
specified in list form, along with the element-type and dimension(s).

MAKE-ARRAY takes the following keywords: :ELEMENT-TYPE, :INITIAL-VALUE,
:INITIAL-CONTENTS, :FILL-POINTER, :DISPLACED-TO, :DISPLACED-INDEX-OFFSET,
and :VECTOR.

The following functions are redundant, but should be retained for
clarity and emphasis in code: MAKE-VECTOR, MAKE-STRING, MAKE-BIT-VECTOR.
MAKE-VECTOR takes a single length argument, along with :ELEMENT-TYPE,
:INITIAL-VALUE, and :INITIAL-CONTENTS.  MAKE-STRING and MAKE-BIT-VECTOR
are like MAKE-VECTOR, but do not take the :ELEMENT-TYPE keyword, since
the element-type is implicit.

If the :VECTOR keyword is not specified to MAKE-ARRAY or related forms,
the default is NIL.  However, sequences produced by random forms such as
CONCATENATE are vectors.

Strings always are printed using the "..." syntax.  Bit-vectors always
are printed using the #*... syntax.  Other vectors always print using
the #(...) syntax.  Note that in the latter case, any element-type
restriction is lost upon readin, since this form always produces a
vector of type T when it is read.  However, the new vector will be
EQUALP to the old one.  The #(...) syntax observes PRINLEVEL,
PRINLENGTH, and SUPPRESS-ARRAY-PRINTING.  The latter switch, if non-NIL,
causes the array to print in a non-readable form: #<ARRAY...>.

CHAR-SEQUENCEs print out as though they were strings, using the "..."
syntax.  BIT-SEQUENCES print out as BIT-STRINGS, using the #*... syntax.
All other arrays print out using the #nA(...) syntax, where n is the
number of dimensions and the list is actually a list of lists of lists,
nested n levels deep.  The array elements appear at the lowest level.
The #A syntax also observes PRINLEVEL, PRINLENGTH, and
SUPPRESS-ARRAY-PRINTING.  The #A format reads in as a non-displaced
array of element-type T.

Note that when an array is printed and read back in, the new version is
EQUALP to the original, but some information about the original is lost:
whether the original was a vector or not, element type restrictions,
whether the array was displaced, whether there was a fill pointer, and
the identity of any elements beyond the fill-pointer.  This choice was
made to favor ease of interactive use; if the user really wants to
preserve in printed form some complex data structure containing more
complex arrays, he will have to develop his own print format and printer.

********** Moon revision of "simple" proposal **********
Date: Thursday, 30 September 1982  01:59-EDT
From: MOON at SCRC-TENEX

I prefer the "simple switch" to the "RPG memorial" proposal, with one
modification to be found below.  The reason for this preference is that
it makes the "good" name, STRING for example, refer to the general class
of objects, relegating the efficiency decision to a modifier ("simple").
The alternative makes the efficiency issue too visible to the casual user,
in my opinion.  You have to always be thinking "do I only want this to
work for efficient strings, which are called strings, or should it work
for all kinds of strings, which are called arrays of characters?".
Better to say, "well this works for strings, and hmm, is it worth
restricting it to simple-strings to squeeze out maximal efficiency"?

Lest this seem like I am trying to sabotage the efficiency of Lisp
implementations that are stuck with "stock" hardware, consider the
following:

In the simple switch proposal, how is (MAKE-ARRAY 100) different from
(MAKE-ARRAY 100 :SIMPLE T)?  In fact, there is only one difference--it is
an error to use ADJUST-ARRAY-SIZE on the latter array, but not on the
former.  Except for this, simpleness consists, simply, of the absence of
options.  This suggests to me that the :SIMPLE option be flushed, and
instead a :ADJUSTABLE-SIZE option be added (see, I pronounce the colons).
Even on the Lisp machine, where :ADJUSTABLE-SIZE makes no difference, I
think it would be an improvement, merely for documentation purposes.  Now
everything makes sense: if you don't ask for any special features in your
arrays, you get simple ones, which is consistent with the behavior of the
sequence functions returning simple arrays always.  And if some
implementation decides they need the sequence functions to return
non-simple arrays, they can always add additional keywords to them to so
specify.  The only time you need to know about the word "simple" at all is
if you are making type declarations for efficiency, in which case you have
to decide whether to declare something to be a STRING or a SIMPLE-STRING.
And it makes sense that the more restrictive declaration be a longer word.
This also meets RPG's objection, which I think boils down to the fact
that he thought it was stupid to have :SIMPLE T all over his programs.
He was right.

I'm fairly sure that I don't understand the portability issues that KMP
brought up (I don't have a whole lot of time to devote to this).  But I
think that in my proposal STRINGP and SIMPLE-STRINGP are never the same
in any implementation; for instance, in the Lisp machine STRINGP is true
of all strings, while SIMPLE-STRINGP is only true of those that do not
have fill-pointers.  If we want to legislate that the :ADJUSTABLE-SIZE
option is guaranteed to turn off SIMPLE-STRINGP, I expect I can dig up
a bit somewhere to remember the value of the option.  This would in fact
mean that simple-ness is a completely implementation-independent concept,
and the only implementation-dependence is how much (if any) efficiency
you gain by using it, and how much of that efficiency you get for free
and how much you get only if you make declarations.

Perhaps the last sentence isn't obvious to everyone.  On the LM-2 Lisp
machine, a simple string is faster than a non-simple string for many
operations.  This speed-up happens regardless of declarations; it is a
result of a run-time dispatch to either fast microcode or slow microcode.
On the VAX with a dumb compiler and no tuning, a simple string is only
faster if you make declarations.  On the VAX with a dumb compiler but some
obvious tuning of sequence and string primitives to move type checks out of
inner loops (making multiple copies of the inner loop), simple strings are
faster for these operations, but still slow for AREF unless you make a type
declaration.  On the VAX with a medium-smart compiler that does the same
sort of tuning on user functions, simple strings are faster for user
functions, too, if you only declare (OPTIMIZE SPEED) [assuming that the
compiler prefers space over speed by default, which is the right choice in
most implementations], and save space as well as time if you go whole hog
and make a type declaration.  On the 3600 Lisp machine, you have sort of a
combination of the first case and the last case.

I also support the #* syntax for bit vectors, rather than the #" syntax.
It's probably mere temporal accident that the simple switch proposal
uses #" while the RPG memorial proposal uses #*.

To sum up:

A vector is a 1-dimensional array.  It prints as #(foo bar) or #<array...>
depending on the value of a switch.

A string is a vector of characters.  It always prints as "foo".  Unlike
all other arrays, strings self-evaluate and are compared by EQUAL.

A bit-vector is a vector of bits.  It always prints as #*101.  Since as
far as I can tell these are redundant with integers, perhaps like integers
they should self-evaluate and be compared by EQUAL.  I don't care.

A simple-vector, simple-string, or simple-bit-vector is one of the above
with none of the following MAKE-ARRAY (or MAKE-STRING) options specified:

	:FILL-POINTER
	:ADJUSTABLE-SIZE
	:DISPLACED-TO
	:LEADER-LENGTH, :LEADER-LIST (in implementations that offer them)

There are type names and predicates for the three simple array types.  In
some implementations using the type declaration gets you more efficient
code that only works for that simple type, which is why these are in the
language at all.  There are no user-visible distinctions associated with
simpleness other than those implied by the absence of the above MAKE-ARRAY
options.
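
[For illustration: under this revision (MAKE-ARRAY 100) yields a simple
vector, while (MAKE-ARRAY 100 :ADJUSTABLE-SIZE T) yields one that is not
simple and to which ADJUST-ARRAY-SIZE may legally be applied.]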
----------------------------------------------------------------

22.  Shall the following proposal for the OPTIMIZE declaration be adopted?
	(y) yes   (n) no
----------------------------------------------------------------
Date: Wednesday, 15 September 1982  20:51-EDT
From: Scott E. Fahlman <Fahlman at Cmu-20c>

At the meeting I volunteered to produce a new proposal for the OPTIMIZE
declaration.  Actually, I sent out such a proposal a couple of weeks
ago, but somehow it got lost before reaching SU-AI -- both that machine
and CMUC have been pretty flaky lately.  I did not realize that the rest
of you had not seen this proposal until a couple of days ago.
Naturally, this is the one thing I did not keep a copy of, so here is my
reconstruction.  I should say that this proposal is pretty ugly, but it
is the best that I've been able to come up with.  If anyone out there
can do better, feel free.

Guy originally proposed a format like (DECLARE (OPTIMIZE q1 q2 q3)),
where each of the q's is a quality from the set {SIZE, SPEED, SAFETY}.
(He later suggested to me that COMPILATION-SPEED would be a useful
fourth quality.)  The ordering of the qualities tells the system which
to optimize for.  The obvious problem is that you sometimes want to go
for, say, SPEED above all else, but usually you want some level of
compromise.  There is no way in this scheme to specify how strongly the
system should favor one quality over another.  We don't need a lot of
gradations for most compilers, but the simple ordering is not expressive
enough.

One possibility is to simply reserve the OPTIMIZE declaration for the
various implementations, but not to specify what is done with it.  Then
the implementor could specify in the red pages whatever declaration
scheme his compiler wants to follow.  Unfortunately, this means that
such declarations would be of no use when the code is ported to another
Common Lisp, and users would have no portable way to flag that some
function is an inner loop and should be super-fast, or whatever.  The
proposal below tries to provide a crude but adequate optimization
declaration for portable code, while still making it possible for users
to fine-tune the compiler's actions for particular implementations.

What I propose is (DECLARE (OPTIMIZE (qual1 value1) (qual2 value2) ...)),
where the qualities are the four mentioned above and each is paired with
a value from 0 to 3 inclusive.  The ordering of the clauses doesn't
matter, and any quality not specified gets a default value of 1.  The
intent is that {1, 1, 1, 1} would be the compiler's normal default --
whatever set of compromises the implementor believes is appropriate for
his user community.  A setting of 0 for some value is an indication that
the associated quality is unimportant in this context and may be
discriminated against freely.  A setting of 2 indicates that the quality
should be favored more than normal, and a setting of 3 means to go all
out to favor that quality.  Only one quality should be raised above 1 at
any one time.
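
[Under this scheme a portable inner loop might carry
(DECLARE (OPTIMIZE (SPEED 3))), "go all out for speed", while
(DECLARE (OPTIMIZE (SPEED 2) (COMPILATION-SPEED 0))) means "favor speed
more than normal, and compilation time is unimportant".]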

The above specification scheme is crude, but sufficiently expressive for
most needs in portable code.  A compiler implementor will have specific
decisions to make -- whether to suppress inline expansions, whether to
type-check the arguments to CAR and CDR, whether to check for overflow
on arithmetic declared to be FIXNUM, whether to run the peephole
optimizer, etc. -- and it is up to him to decide how to tie these
decisions to the above values so as to match the user's expressed wishes.
These decision criteria should be spelled out in that implementation's red
pages.  For example, it might be the case that the peephole optimizer is
not run if COMPILATION-SPEED > 1, that type checking for the argument to
CAR and CDR is suppressed if SPEED > SAFETY+1, etc.
----------------------------------------------------------------

23.  Shall it be permitted for macro calls to expand into DECLARE forms
and then be recognized as valid declarations?  For example:
(DEFMACRO CUBOIDS (&REST VARS)
  `(DECLARE (TYPE (ARRAY SHORT-FLONUM 3) ,@VARS)
	    (SPECIAL ,@VARS)
	    (OPTIMIZE SPEED)
	    (INLINE HACK-CUBOIDS)))
(DEFUN CUBOID-EXPERT (A B C D)
  (CUBOIDS A C)
  ...)
This would not allow macro calls *within* a DECLARE form; it would only
allow macros to expand into a DECLARE form.
	(y) yes   (n) no

24.  Shall there be printer control variables ARRAY-PRINLEVEL and
ARRAY-PRINLENGTH to control printing of arrays?  These would not
limit the printing of strings.
	(y) yes   (n) no

25.  Shall lambda macros, as described below, be incorporated into
the language, and if so, shall they occupy the function name space
or a separate name space?
	(f) function name space   (s) separate name space   (n) no lambda macros
----------------------------------------------------------------
Date: Wednesday, 22 September 1982, 02:27-EDT
From: Howard I. Cannon <HIC at SCRC-TENEX at MIT-MC>

This is the documentation I wrote for lambda-macros as I implemented
them on the Lisp Machine.  Please consider this a proposed definition.

Lambda macros may appear in functions where LAMBDA would have previously
appeared.  When the compiler or interpreter detects a function whose CAR
is a lambda macro, it "expands" the macro in much the same way that
ordinary Lisp macros are expanded -- the lambda macro is called with the
function as its argument, and is expected to return another function as
its value.  Lambda macros may be accessed with the (:lambda-macro name)
function specifier.

lambda-macro function-spec lambda-list &body body		[special form]
Analogously with macro, this defines a lambda macro to be called
function-spec.  lambda-list should consist of one variable, which
will be the function that caused the lambda macro to be called.  The
lambda macro must return a function.  For example:

(lambda-macro ilisp (x)
  `(lambda (&optional ,@(second x) &rest ignore) . ,(cddr x)))

would define a lambda macro called ilisp which would cause the
function to accept arguments like a standard Interlisp function -- all
arguments are optional, and extra arguments are ignored.  A typical call
would be:

(fun-with-functional-arg #'(ilisp (x y z) (list x y z)))
Then, any calls to the functional argument that
fun-with-functional-arg executes will pass arguments as if the
number of arguments did not matter.

deflambda-macro		[special form]
deflambda-macro is like defmacro, but defines a lambda macro
instead of a normal macro.

deflambda-macro-displace		[special form]
deflambda-macro-displace is like defmacro-displace, but defines
a lambda macro instead of a normal macro.

deffunction function-spec lambda-macro-name lambda-list &body body	[special form]
deffunction defines a function with an arbitrary lambda macro
instead of lambda.  It takes arguments like defun, except that
the argument immediately following the function specifier is the name of
the lambda macro to be used.  deffunction expands the lambda macro
immediately, so the lambda macro must have been previously defined.

For example:

(deffunction some-interlisp-like-function ilisp (x y z)
  (list x y z))

would define a function called some-interlisp-like-function, which
would use the lambda macro called ilisp.  Thus, the function would
do no argument-count checking.
----------------------------------------------------------------

26.  Shall the floating-point manipulations described below be adopted?
	(y) as described by MOON
	(a) as amended (FLOAT-SIGN changed) by GLS
	(n) do not adopt them
----------------------------------------------------------------
Date: Thursday, 30 September 1982  05:55-EDT
From: MOON at SCRC-TENEX

I am not completely happy with the FLOAT-FRACTION, FLOAT-EXPONENT, and
SCALE-FLOAT functions in the Colander edition.  At the meeting in August I
was assigned to make a proposal.  I am slow.

A minor issue is that the range of FLOAT-FRACTION fails to include zero (of
course it has to), and is inclusive at both ends, which means that there
are two possible return values for some numbers.  I guess that this ugliness
has to stay because some implementations require this freedom for hardware
reasons, and it doesn't make a big difference from a numerical analysis point
of view.  My proposal is to include zero in the range and to add a note about
two possible values for numbers that are an exact power of the base.

A more major issue is that some applications that break down a flonum into
a fraction and an exponent, or assemble a flonum from a fraction and an
exponent, are best served by representing the fraction as a flonum, while
others are best served by representing it as an integer.  An example of
the former is a numerical routine that scales its argument into a certain
range.  An example of the latter is a printing routine that must do exact
integer arithmetic on the fraction.

In the agenda for the August meeting it was also proposed that there be
a function to return the precision of the representation of a given flonum
(presumably in bits); this would be in addition to the "epsilon" constants
described on page 143 of the Colander.

A goal of all this is to make it possible to write portable numeric functions,
such as the trigonometric functions and my debugged version of Steele's
totally accurate floating-point number printer.  These would be portable
to all implementations but perhaps not as efficient as hand-crafted routines
that avoided bignum arithmetic, used special machine instructions, avoided
computing to more precision than the machine really has, etc.

Proposal:

SCALE-FLOAT x e -> y

  y = (* x (expt 2.0 e)) and is a float of the same type as x.
  SCALE-FLOAT is more efficient than exponentiating and multiplying, and
  also cannot overflow or underflow unless the final result (y) cannot
  be represented.

  x is also allowed to be a rational, in which case y is of the default
  type (same as the FLOAT function).

  [x being allowed to be a rational can be removed if anyone objects.  But
   note that this function has to be generic across the different float types
   in any case, so it might as well be generic across all number types.]

UNSCALE-FLOAT y -> x e
  The first value, x, is a float of the same type as y.  The second value, e,
  is an integer such that (= y (* x (expt 2.0 e))).

  The magnitude of x is zero or between 1/b and 1 inclusive, where b is the
  radix of the representation: 2 on most machines, but examples of 8 and
  16, and I think 4, exist.  x has the same sign as y.

  It is an error if y is a rational rather than a float, or if y is an
  infinity.  (Leave infinity out of the Common Lisp manual, though).
  It is not an error if y is zero.

FLOAT-MANTISSA x -> f
FLOAT-EXPONENT x -> e
FLOAT-SIGN x -> s
FLOAT-PRECISION x -> p
  f is a non-negative integer, e is an integer, s is 1 or 0.
  (= x (* (SCALE-FLOAT (FLOAT f x) e) (IF (ZEROP S) 1 -1))) is true.
  It is up to the implementation whether f is the smallest possible integer
  (zeros on the right are removed and e is increased), or f is an integer with
  as many bits as the precision of the representation of x, or perhaps a "few"
  more.  The only thing guaranteed about f is that it is non-negative and
  the above equality is true.

  f is non-negative to avoid problems with minus zero.  s is 1 for minus zero
  even though MINUSP is not true of minus zero (otherwise the FLOAT-SIGN function
  would be redundant).

  p is an integer, the number of bits of precision in x.  This is a constant
  for each flonum representation type (except perhaps for variable-precision
  "bigfloats").

  [I am amenable to converting these four functions into one function that
  returns four values if anyone can come up with a name.  EXPLODE-FLOAT is
  the best so far, and it's not very good, especially since the traditional
  EXPLODE function has been flushed from Common Lisp.  Perhaps DECODE-FLOAT.]

  [I am amenable to adding a function that takes f, e, and s as arguments
   and returns x.  It might be called ENCODE-FLOAT or MAKE-FLOAT.  It ought to
   take either a type argument or an optional fourth argument, the way FLOAT
   takes an optional second argument, which is an example of the type to return.]

FTRUNC x -> fp ip
  The FTRUNC function as it is already defined provides the fraction-part and
  integer-part operations.

These functions exist now in the Lisp machines, with different names and slightly
different semantics in some cases.  They are very easy to write.
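
[Worked example, assuming radix 2: (SCALE-FLOAT 1.5 3) => 12.0, and
(UNSCALE-FLOAT 12.0) => 0.75 and 4, since (= 12.0 (* 0.75 (EXPT 2.0 4)))
and the magnitude 0.75 lies between 1/2 and 1.]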

Comments?  Suggestions for names?

Date:  4 October 1982 2355-EDT (Monday)
From: Guy.Steele at CMU-10A

I support Moon's proposal, but would like to suggest that FLOAT-SIGN
be modified to
	(FLOAT-SIGN x &optional (y (float 1 x)))
	returns z such that x and z have same sign and (= (abs y) (abs z)).
In this way (FLOAT-SIGN x) returns 1.0 or -1.0 of the same format as x,
and FLOAT-SIGN of two arguments is what the IEEE proposal calls COPYSIGN,
a useful function indeed in numerical code.
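
[Thus (FLOAT-SIGN -3.5) => -1.0, and (FLOAT-SIGN -3.5 2.0) => -2.0, which
is what COPYSIGN would return.]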
--Guy
----------------------------------------------------------------

27.  Shall DEFMACRO, DEFSTRUCT, and other defining forms also be
allowed to take documentation strings as possible and appropriate?
	(y) yes   (n) no

28.  Shall the following proposed revision of OPEN keywords be accepted?
	(y) yes   (n) no
----------------------------------------------------------------
Date: Monday, 4 October 1982, 17:08-EDT
From: Daniel L. Weinreb <dlw at SCRC-TENEX at MIT-MC>

OPEN takes a filename as its first argument.  The rest of its arguments
are keyword/value pairs.

WITH-OPEN-STREAM's first subform is a list of a variable (to be bound to
a stream), a filename, and the rest of the elements are keyword/value
pairs.

The keywords are as follows, with their possible values and defaults:

:DIRECTION	:INPUT (the default), :OUTPUT, :APPEND, :OVERWRITE, :PROBE
	:INPUT - The file is expected to exist.  Output operations are not allowed.
	:OUTPUT - The file is expected to not exist.  A new file is created.  Input
			operations are not allowed.
	:APPEND - The file is expected to exist.  Input operations are not allowed.
			New characters are appended to the end of the existing file.
	:OVERWRITE - The file is expected to exist.  All operations are allowed.
			The "file pointer" starts at the beginning of the file.
	:PROBE - The file may or may not exist.  Neither input nor output operations
			are allowed.  Furthermore, it is not necessary to close the stream.

:CHARACTERS	T (the default), NIL, :DEFAULT
	T - Open the file for reading/writing of characters.
	NIL - Open the file for reading/writing of bytes (non-negative integers).
	:DEFAULT - Let the file system decide, based on the file it finds.

:BYTE-SIZE	a fixnum or :DEFAULT (the default)
	a fixnum - Use this byte size.
	:DEFAULT - Let the file system decide, based on the file it finds.

:IF-EXISTS	:ERROR (the default), :NEW-VERSION, :RENAME,
		:RENAME-AND-DELETE, :OVERWRITE, :APPEND, :REPLACE
	Ignored if direction is not :OUTPUT.  This tells what to do if the file
	that you're trying to create already exists.
	:ERROR - Signal an error.
	:NEW-VERSION - Create a file with the same filename except with "latest" version.
	:RENAME - Rename the existing file to something else and proceed.
	:RENAME-AND-DELETE - Rename the existing file, delete it (but don't
		expunge it, if your system has undeletion), and proceed.
	:OVERWRITE - Open for :OVERWRITE instead.  (If your file system doesn't have
		this, use :RENAME-AND-DELETE if you have undeletion and :RENAME otherwise.)
	:APPEND - Open for :APPEND instead.
	:REPLACE - Replace the existing file, deleting it when the stream is closed.

:IF-DOES-NOT-EXIST	:ERROR (the default), :CREATE
	Ignored if direction is neither :APPEND nor :OVERWRITE.
	:ERROR - Signal an error.
	:CREATE - Create the file and proceed.


Notes:

I renamed :READ-ALTER to :OVERWRITE; :READ-WRITE might also be good.

The :DEFAULT values are very useful, although some systems cannot figure
out this information.  :CHARACTERS :DEFAULT is especially useful for
LOAD.  Having the byte size come from the file only when the option is
missing, as the latest Common Lisp manual says, is undesirable because
it makes things harder for programs that are passing the value of that
keyword argument as computed from an expression.

Example of OPEN:
     (OPEN "f:>dlw>lispm.init" :DIRECTION :OUTPUT)

Example of WITH-OPEN-FILE:
     (WITH-OPEN-FILE (STREAM "f:>dlw>lispm.init" :DIRECTION :OUTPUT) ...)

OPEN can be kept Maclisp compatible by recognizing whether the second
argument is a list or not.  Lisp Machine Lisp does this for the benefit
of old programs.  The new syntax cannot be mistaken for the old one.

I removed :ECHO because we got rid of MAKE-ECHO-STREAM at the last
meeting.

Other options that the Lisp Machine will probably have, and which might
be candidates for Common Lisp, are: :INHIBIT-LINKS, :DELETED,
:PRESERVE-DATES, and :ESTIMATED-SIZE.
----------------------------------------------------------------
-------

13-Oct-82  1309	STEELE at CMU-20C 	Ballot results  
Date: 13 Oct 1982 1608-EDT
From: STEELE at CMU-20C
Subject: Ballot results
To: common-lisp at SU-AI

?????????????????????????????????????????????????????????????????????????????
?  %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%  ?
?  %  =================================================================  %  ?
?  %  =  $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$  =  %  ?
?  %  =  $  +++++++++++++++++++++++++++++++++++++++++++++++++++++  $  =  %  ?
?  %  =  $  +  ###############################################  +  $  =  %  ?
?  %  =  $  +  #  /////////////////////////////////////////  #  +  $  =  %  ?
?  %  =  $  +  #  /  The October 1982 Common LISP Ballot  /  #  +  $  =  %  ?
?  %  =  $  +  #  /                RESULTS                /  #  +  $  =  %  ?
?  %  =  $  +  #  /////////////////////////////////////////  #  +  $  =  %  ?
?  %  =  $  +  ###############################################  +  $  =  %  ?
?  %  =  $  +++++++++++++++++++++++++++++++++++++++++++++++++++++  $  =  %  ?
?  %  =  $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$  =  %  ?
?  %  =================================================================  %  ?
?  %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%  ?
?????????????????????????????????????????????????????????????????????????????

Here are the tabulated votes on the October 1982 Common LISP Ballot.  For
each issue the summary vote shown between "***" is what I take to be a
consensus, with a "?" added if I am a bit uncertain.  I will edit the
manual according to these guidelines unless someone screams loudly and
soon over some issue.  A few of the issues had a very mixed response;
these I have labeled "Controversial" and will take no immediate action on.
--Guy

1.  How shall the case for a floating-point exponent specifier
output by PRINT and FORMAT be determined?
	(a) upper case, for example 3.5E6
	(b) lower case, for example 3.5e6
	(c) a switch
	(d) implementation-dependent

Issue 1: *** B? ***
Hedrick: B	Wholey: -	Fahlman: B	Weinreb: B	Killian: B
Zubkoff: C	Moon: B		van Roggen: D	Masinter: A	RMS: B
Dyer: B		Bawden: -	Feinberg: B	Ginder: B	Burke et al.: B
Brooks: -	Gabriel: A	DECLISP: B	Steele: C	Dill: D
Scherlis: -	Pitman: B	Anderson: B

2.  Shall we change the name SETF to be SET?   (y) yes   (n) no

Issue 2: *** N ***
Hedrick: N	Wholey: N	Fahlman: N	Weinreb: N	Killian: X
Zubkoff: Y	Moon: N		van Roggen: Y	Masinter: N	RMS: N
Dyer: N		Bawden: N	Feinberg: N	Ginder: N	Burke et al.: N
Brooks: N	Gabriel: N	DECLISP: N	Steele: N	Dill: N
Scherlis: Y	Pitman: N	Anderson: N

Killian: I have been convinced that renaming SETF to SET would be
wrong because it would require changing lots of old code.  But,
you seem to have ignored the rest of my suggestion in your
ballot, namely eliminating those horrid "F"s at the end of
several function names (INCF, GETF etc.).  If you don't do this,
then you're being inconsistent by not naming PUSH PUSHF, etc.
The "F" at the end of "SETF" would then be there purely for
compatibility, and could be renamed when another Lisp dialect
is designed, years hence.

Pitman: I think we should do this, but not at this time.

RMS: I very strongly do not want to have to change uses of the
traditional function SET in the Lisp machine system.

Feinberg: A better name than SETF (or SET) should be found.

3.  Shall there be a type specifier QUOTE, such that (QUOTE x) = (MEMBER x)?
Then MEMBER can be eliminated; (MEMBER x y z) = (OR 'x 'y 'z).  Also one can
write such things as (OR INTEGER 'FOO) instead of (OR INTEGER (MEMBER FOO)).
	(y) yes   (n) no

Issue 3: *** Y? ***
Hedrick: X	Wholey: Y	Fahlman: N	Weinreb: Y	Killian: Y
Zubkoff: Y	Moon: Y		van Roggen: N	Masinter: Y	RMS: -
Dyer: X		Bawden: Y	Feinberg: Y	Ginder: -	Burke et al.: Y
Brooks: Y	Gabriel: Y	DECLISP: N	Steele: Y	Dill: Y
Scherlis: Y	Pitman: Y	Anderson: -
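
For illustration (a sketch: the first form is valid in the language as
finally adopted, which kept MEMBER; the comment shows the QUOTE
shorthand this item proposes):

     (TYPEP 'FOO '(OR INTEGER (MEMBER FOO)))   ; => T
     ;; Proposed equivalent spelling under this ballot item:
     ;;   (TYPEP 'FOO '(OR INTEGER 'FOO))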

4.  Shall MOON's proposal for LOAD keywords, revised as shown below, be used?
	(y) yes   (n) no

Issue 4: *** Y ***
Hedrick: Y	Wholey: Y	Fahlman: Y	Weinreb: Y	Killian: Y
Zubkoff: Y	Moon: Y		van Roggen: Y	Masinter: X	RMS: -
Dyer: Y		Bawden: Y	Feinberg: Y	Ginder: Y	Burke et al.: Y
Brooks: Y	Gabriel: Y	DECLISP: Y	Steele: Y	Dill: X
Scherlis: Y	Pitman: X	Anderson: -

Moon: I thought we agreed to make LOAD take a stream as its first argument,
instead of a pathname, and flush the :STREAM keyword.  :ERROR should
control only "file does not exist" errors, not "host down", "directory
does not exist", "illegal character in file name", "no access to file",
and "file cannot be read because of disk error".  Nor should it affect
errors due to evaluation of forms in the file.  So I think it needs a
better name; how about :NONEXISTENT-OK?

Masinter: :ERROR, :SET-DEFAULT-PATHNAME options to LOAD should be
rationalized with OPEN; the handling here of search paths should
logically be handled by passing on some of the options from LOAD to OPEN
rather than having LOAD do special path-name processing. This is because
users who manipulate files want to do similar hacking, and the mechanism
should be common.

Pitman: I would vote YES except: As suggested by someone when it was
proposed, any mention of packages should be stricken pending the release
of a package system specification.

Dill: :PACKAGE & :VERBOSE should be flushed, since they are package system
dependent.

5.  Shall closures over dynamic variables be removed from Common LISP?
	(y) yes   (n) no

Issue 5: *** Y? ***
Hedrick: Y	Wholey: Y	Fahlman: Y	Weinreb: N	Killian: -
Zubkoff: -	Moon: -		van Roggen: Y	Masinter: -	RMS: -
Dyer: X		Bawden: -	Feinberg: Y	Ginder: Y	Burke et al.: -
Brooks: -	Gabriel: N	DECLISP: Y	Steele: Y	Dill: Y
Scherlis: Y	Pitman: N	Anderson: -

6.  Shall LOOP, as summarized below, be included in Common LISP?
	(y) yes   (n) no

Issue 6: Controversial
Hedrick: N	Wholey: N	Fahlman: N	Weinreb: Y	Killian: Y
Zubkoff: X	Moon: -		van Roggen: N	Masinter: X	RMS: N
Dyer: Y		Bawden: Y	Feinberg: N	Ginder: N	Burke et al.: Y
Brooks: N	Gabriel: X	DECLISP: Y	Steele: N	Dill: N
Scherlis: N	Pitman: N	Anderson: N

Fahlman: I am in favor of adding the LOOP package as described (once it is
completed) to the language as a portable yellow pages module.  I feel
rather strongly that it is premature to add LOOP to the white pages.

Zubkoff: The LOOP macro should be kept in the yellow pages until we've
had a chance to use it for a while and determine whether or not it is the
"right" thing.

Masinter: I feel strongly that the white pages SHOULD include a LOOP construct.
I care less about which one, but I like most of Moon's proposal better than DO
and what I saw of LetS. I'd get rid of AND and ELSE. I don't understand
if the "COLLECT" lexical scoping includes scoping under macro expansion.

Pitman: As a yellow-pages extension is ok by me. I strongly oppose its
placement in the white pages.

Feinberg: We should carefully examine all iteration construct proposals
before committing to any particular one.  I feel strongly about 
this.  I would very much like to see complete documentation
on Loop and any other loop construct which might be included 
in Common Lisp, especially before we decide to incorporate them
into the white pages.

Gabriel: I believe that a LOOP construct of some sort is needed: I am
constantly bumping into the limitations of MacLisp-style DO.  The
Symbolics people claim that LOOP, as defined in the current proposal, is
well thought-out and indispensable. Not having used it particularly, I
cannot pass judgement on this. I advocate putting LOOP into the hottest
regions of the Yellow Pages, meaning that people should use it immediately
so that any improvements to clarity can be made rapidly. The best possible
LOOP should then be moved to the White Pages.
My prejudice is that LOOP code is very difficult to understand. On the
other hand, closures are difficult for many people to understand, and
perhaps the difficulty is due to unfamiliarity in the LOOP case as it is
in the closure case.
In my current programming I do not define my own iteration construct
(though I have in the past) because I've found that other people (such as
myself at a later date) do not readily understand my code when it contains
idiosyncratic control structures.  If we do not standardize on a LOOP
construct soon we will be faced with the fact of many people defining
their own difficult-to-understand control structures.

7.  Regardless of the outcome of the previous question, shall CYCLE
be retained and be renamed LOOP, with the understanding that statements
of the construct must be non-atomic, and atoms as "statements" are
reserved for extensions, and any such extensions must be compatible
with the basic meaning as a pure iteration construct?
	(y) yes   (n) no

Issue 7: *** Y? ***
Hedrick: Y	Wholey: -	Fahlman: Y	Weinreb: Y	Killian: Y
Zubkoff: -	Moon: Y		van Roggen: N	Masinter: Y	RMS: -
Dyer: Y		Bawden: Y	Feinberg: N	Ginder: -	Burke et al.: Y
Brooks: Y	Gabriel: Y	DECLISP: N	Steele: Y	Dill: X
Scherlis: Y	Pitman: Y	Anderson: N

Feinberg: I don't think we should make any commitment at all, even to this
extent.  Loop is too nice a word to give up before we even agree about
installing it into the language.
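
A sketch of the construct as described here, which is how the simple
form of LOOP survives in the final language (all statements must be
non-atomic forms):

     (LET ((I 0))
       (LOOP
         (WHEN (>= I 3) (RETURN))       ; RETURN exits the implicit block
         (PRINT I)
         (INCF I)))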

8.  Shall ARRAY-DIMENSION be changed by exchanging its arguments,
to have the array first and the axis number second, to parallel
other indexing operations?
	(y) yes   (n) no

Issue 8: *** Y ***
Hedrick: Y	Wholey: Y	Fahlman: Y	Weinreb: Y	Killian: Y
Zubkoff: Y	Moon: Y		van Roggen: Y	Masinter: -	RMS: Y
Dyer: Y		Bawden: -	Feinberg: Y	Ginder: Y	Burke et al.: Y
Brooks: Y	Gabriel: Y	DECLISP: Y	Steele: Y	Dill: X
Scherlis: -	Pitman: Y	Anderson: Y

9.  Shall MACROEXPAND, as described below, replace the current definition?
	(y) yes   (n) no

Issue 9: *** Y ***
Hedrick: Y	Wholey: -	Fahlman: Y	Weinreb: Y	Killian: Y
Zubkoff: -	Moon: Y		van Roggen: Y	Masinter: Y	RMS: -
Dyer: Y		Bawden: Y	Feinberg: -	Ginder: -	Burke et al.: Y
Brooks: Y	Gabriel: Y	DECLISP: Y	Steele: Y	Dill: X
Scherlis: Y	Pitman: X	Anderson: -

Killian:  This is ok as far as it goes, but I intend to suggest
additions when I find the time.

Masinter: This seems right but not quite fully specified, e.g. LAMBDA-MACRO.

Pitman: I would vote YES except:
I am uncomfortable with saying that a form returns two
values and then returning only one (letting the rest default to NIL).
Does Common-Lisp specify anything on this? In any case, I would amend
the (cond ((and (pairp ...) ...) (values (...) t))
	  (t form))
to (cond (...) (t (values form nil))) to make it clear that two values
are always returned. If this modification is made, I am happy with this
proposal.
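
A sketch of the two-value convention Pitman asks for, which is the
behavior that ended up in the standard:

     (DEFMACRO SQUARE (X) `(* ,X ,X))

     (MULTIPLE-VALUE-BIND (EXPANSION EXPANDED-P)
         (MACROEXPAND-1 '(SQUARE 5))
       (LIST EXPANSION EXPANDED-P))     ; => ((* 5 5) T)

     (MULTIPLE-VALUE-BIND (EXPANSION EXPANDED-P)
         (MACROEXPAND-1 '(+ 1 2))
       (LIST EXPANSION EXPANDED-P))     ; => ((+ 1 2) NIL)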

10.  Shall all global system-defined variables have names beginning
and ending with "*", for example *PRINLEVEL* instead of PRINLEVEL
and *READ-DEFAULT-FLOAT-FORMAT* instead of READ-DEFAULT-FLOAT-FORMAT?
	(y) yes   (n) no

Issue 10: *** Y ***
Hedrick: Y	Wholey: Y	Fahlman: Y	Weinreb: Y	Killian: N
Zubkoff: Y	Moon: Y		van Roggen: Y	Masinter: Y	RMS: X
Dyer: N		Bawden: N	Feinberg: Y	Ginder: Y	Burke et al.: X
Brooks: Y	Gabriel: Y	DECLISP: Y	Steele: Y	Dill: X
Scherlis: Y	Pitman: Y	Anderson: Y

RMS: I would prefer a character other than *, such as "-".
It is easier to type, and easier to type correctly.

Bawden: I am in favor of variables named *FOO* over variables named FOO only
when that doesn't introduce an incompatibility with existing Lisps.  That is
why I voted NO on 10, because it involved an incompatible change to variables
like PRINLEVEL.  I voted YES for 11 because currently we have no named
constants as far as I know so there is no incompatibility.

Burke et al.: I really like only ONE "*" at the beginning of the name.  I got
tired of shifting two years ago, but conversely couldn't stand not to
have the specialness of the variable be obvious.

11.  Same question for named constants (other than T and NIL), such as
*PI* for PI and *MOST-POSITIVE-FIXNUM* for MOST-POSITIVE-FIXNUM.
	(y) yes   (n) no   (o) yes, but use a character other than "*"

Issue 11: Controversial
Hedrick: Y	Wholey: N	Fahlman: Y	Weinreb: Y	Killian: N
Zubkoff: Y	Moon: N		van Roggen: Y	Masinter: Y	RMS: X
Dyer: N		Bawden: Y	Feinberg: Y	Ginder: Y	Burke et al.: X
Brooks: O	Gabriel: Y	DECLISP: Y	Steele: N	Dill: X
Scherlis: -	Pitman: Y	Anderson: Y

Fahlman: Whatever is done about global vars, global constants should be the
same.  I oppose option 3 or any plan to make them look syntactically
different.

Moon: I like to use the stars to mean "someone may bind this" rather than
"don't use this as a local variable name", which is why I voted no on
putting stars around constants.  However, others might disagree with me
and I will defer to the majority.  I do think stars around variable names
are important.

12.  Shall a checking form CHECK-TYPE be introduced as described below?
	(y) yes   (n) no

Issue 12: *** Y ***
Hedrick: Y	Wholey: Y	Fahlman: Y	Weinreb: Y	Killian: Y
Zubkoff: Y	Moon: Y		van Roggen: Y	Masinter: Y	RMS: -
Dyer: Y		Bawden: Y	Feinberg: -	Ginder: Y	Burke et al.: Y
Brooks: Y	Gabriel: Y	DECLISP: Y	Steele: Y	Dill: Y
Scherlis: Y	Pitman: Y	Anderson: Y
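
CHECK-TYPE as it eventually appeared (a sketch; it signals an error
unless the place holds a value of the given type):

     (DEFUN SAFE-SQRT (X)
       (CHECK-TYPE X (REAL 0) "a non-negative real number")
       (SQRT X))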

13.  Shall a checking form CHECK-SUBSEQUENCE be introduced as described below?
	(y) yes   (n) no

Issue 13: Controversial
Hedrick: N	Wholey: -	Fahlman: N	Weinreb: -	Killian: Y
Zubkoff: Y	Moon: Y		van Roggen: Y	Masinter: -	RMS: -
Dyer: N		Bawden: -	Feinberg: N	Ginder: Y	Burke et al.: N
Brooks: -	Gabriel: Y	DECLISP: Y	Steele: Y	Dill: N
Scherlis: Y	Pitman: Y	Anderson: Y

Feinberg: It seems like we're taking this type checking stuff a little 
too far.  Let the user write his own type checking code, or
make a yellow pages package called Carefully (or Lint) or
something. 

Dill: There should be a succinct way of talking about the contents
of sequences, but this particular one doesn't have the right functionality.
I prefer a regular-expression notation of some form, but don't have it
well enough worked out to propose one.  Let's leave it out of the language
until someone figures out how to do it well.

14.  Shall the functions LINE-OUT and STRING-OUT, eliminated in November,
be reinstated?
	(y) yes   (n) no

Issue 14: *** Y ***
Hedrick: N	Wholey: Y	Fahlman: Y	Weinreb: Y	Killian: Y
Zubkoff: -	Moon: -		van Roggen: Y	Masinter: -	RMS: -
Dyer: X		Bawden: -	Feinberg: Y	Ginder: -	Burke et al.: Y
Brooks: -	Gabriel: -	DECLISP: Y	Steele: Y	Dill: X
Scherlis: -	Pitman: Y	Anderson: -

15.  Shall the REDUCE function be added as described below?
	(y) yes   (n) no

Issue 15: *** Y ***
Hedrick: N	Wholey: -	Fahlman: Y	Weinreb: Y	Killian: Y
Zubkoff: Y	Moon: Y		van Roggen: Y	Masinter: Y	RMS: -
Dyer: Y		Bawden: -	Feinberg: Y	Ginder: Y	Burke et al.: Y
Brooks: Y	Gabriel: Y	DECLISP: Y	Steele: Y	Dill: Y
Scherlis: Y	Pitman: -	Anderson: N

Moon: Should the name be REDUCE, or something else?  Hearn aside, the name
doesn't instantly convey to me what it does.  I haven't come up with an
alternative suggestion, however.

Pitman: I have no use for this but have no strong objection.
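
A sketch of REDUCE as it was eventually standardized:

     (REDUCE #'+ '(1 2 3 4))              ; => 10
     (REDUCE #'+ '() :INITIAL-VALUE 0)    ; => 0
     (REDUCE #'LIST '(A B C D))           ; => (((A B) C) D), left-associated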

16.  Shall the Bawden/Moon solution to the "invisible block" problem
be accepted?  The solution is to define (RETURN x) to mean precisely
(RETURN-FROM NIL x), and to specify that essentially all standard
iterators produce blocks named NIL.  A block with a name other than
NIL cannot capture a RETURN, only a RETURN-FROM with a matching name.
	(y) yes   (n) no

Issue 16: *** Y ***
Hedrick: Y	Wholey: -	Fahlman: Y	Weinreb: Y	Killian: Y
Zubkoff: Y	Moon: Y		van Roggen: Y	Masinter: -	RMS: N
Dyer: Y		Bawden: Y	Feinberg: Y	Ginder: Y	Burke et al.: Y
Brooks: Y	Gabriel: Y	DECLISP: Y	Steele: Y	Dill: X
Scherlis: Y	Pitman: Y	Anderson: -

RMS: I am strongly opposed to anything that would require me to find
all the named PROGs in the Lisp machine system which have simple
RETURNs that return from them.  This would make a lot of extra work
for me.  Please don't impose this on me.

Dill: It seems to me that it ought to be possible to exploit lexical
scoping to solve problems like this in a more general way.  If this is
possible, then this proposal is redundant.
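
A sketch of the adopted behavior: standard iterators such as DOLIST
produce a block named NIL, so a simple RETURN exits them.

     (DEFUN FIRST-NEGATIVE (LIST)
       (DOLIST (X LIST)                   ; DOLIST makes a block named NIL
         (WHEN (MINUSP X)
           (RETURN X))))                  ; == (RETURN-FROM NIL X)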

17.  Shall the TAGBODY construct be incorporated?  This expresses just
the behavior of the GO aspect of a PROG.  Any atoms in the body
are not evaluated, but serve as tags that may be specified to GO.
Tags have lexical scope and dynamic extent.  TAGBODY always returns NIL.
	(y) yes   (n) no

Issue 17: *** Y ***
Hedrick: N	Wholey: -	Fahlman: Y	Weinreb: Y	Killian: Y
Zubkoff: Y	Moon: Y		van Roggen: Y	Masinter: Y	RMS: X
Dyer: Y		Bawden: Y	Feinberg: Y	Ginder: Y	Burke et al.: Y
Brooks: Y	Gabriel: Y	DECLISP: Y	Steele: Y	Dill: N
Scherlis: -	Pitman: Y	Anderson: Y

RMS: Why must GOBODY [sic] always return NIL just because PROG does?
It is just as easy to make GOBODY return the value of the last form in
it.  We can consider a PROG to expand into a GOBODY followed by a NIL.

Feinberg: A better name than TAGBODY should be found.  
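
A sketch of TAGBODY as described: tags are unevaluated atoms with
lexical scope, GO jumps to them, and the whole form returns NIL.

     (LET ((I 0))
       (TAGBODY
        TOP
          (WHEN (< I 3)
            (PRINT I)
            (INCF I)
            (GO TOP))))                   ; => NIL, after printing 0 1 2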

18.  What shall be done about RESTART?  The following alternatives seem to
be the most popular:
	(a) Have no RESTART form.
	(b) RESTART takes the name of a block.  What happens when you say
	    (RESTART NIL) must be clarified for most iteration constructs.
	(c) There is a new binding form called, say, RESTARTABLE.
	    Within (RESTARTABLE FOO . body), (RESTART FOO) acts as a jump
	    to the top of the body of the enclosing, matching RESTARTABLE form.
	    RESTART tags have lexical scope and dynamic extent.

Issue 18: *** A ***
Hedrick: A	Wholey: A	Fahlman: A	Weinreb: A	Killian: A
Zubkoff: A	Moon: A		van Roggen: A	Masinter: A	RMS: C
Dyer: A		Bawden: A	Feinberg: A	Ginder: A	Burke et al.: A
Brooks: A	Gabriel: B	DECLISP: A	Steele: C	Dill: X
Scherlis: -	Pitman: C	Anderson: A

Fahlman: I now believe that RESTART is more trouble than it is worth.  I am
strongly opposed to any plan, such as option 3, that would add a RESTART
form but make it impossible to use this with the implicit block around a
DEFUN.  If you have to introduce a RESTARTABLE block, you may as
well use PROG/GO.

19.  Shall there be a built-in identity function, and if so, what shall it
be called?
	(c) CR   (i) IDENTITY   (n) no such function

Issue 19: *** I ***
Hedrick: I	Wholey: I	Fahlman: I	Weinreb: I	Killian: -
Zubkoff: I	Moon: I		van Roggen: I	Masinter: I	RMS: I
Dyer: X		Bawden: I	Feinberg: I	Ginder: I	Burke et al.: I
Brooks: I	Gabriel: -	DECLISP: I	Steele: I	Dill: X
Scherlis: I	Pitman: I	Anderson: -

RMS: The canonical identity function is now called PROG1, but the name
IDENTITY is ok by me.

20.  Shall the #*... bit-string syntax replace #"..."?  That is, shall what
was before written #"10010" now be written #*10010 ?
	(y) yes   (n) no

Issue 20: *** Y ***
Hedrick: Y	Wholey: -	Fahlman: Y	Weinreb: Y	Killian: Y
Zubkoff: Y	Moon: Y		van Roggen: Y	Masinter: Y	RMS: -
Dyer: X		Bawden: Y	Feinberg: Y	Ginder: Y	Burke et al.: Y
Brooks: Y	Gabriel: N	DECLISP: Y	Steele: Y	Dill: Y
Scherlis: Y	Pitman: Y	Anderson: Y
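
The new syntax denotes a bit vector; for illustration:

     (EQUAL #*10010
            (MAKE-ARRAY 5 :ELEMENT-TYPE 'BIT
                          :INITIAL-CONTENTS '(1 0 0 1 0)))   ; => T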

21.  Which of the two outstanding array proposals (below) shall be adopted?
	(s) the "simple" proposal
	(r) the "RPG memorial" proposal
	(m) the "simple" proposal as amended by Moon

Issue 21: *** M? ***
Hedrick: M	Wholey: -	Fahlman: M	Weinreb: M	Killian: M
Zubkoff: M	Moon: M		van Roggen: M	Masinter: -	RMS: M
Dyer: -		Bawden: M	Feinberg: M	Ginder: M	Burke et al.: M
Brooks: R	Gabriel: X	DECLISP: M	Steele: M	Dill: M
Scherlis: M	Pitman: M	Anderson: M

Brooks: if not "r" then I prefer "m".

Gabriel: I prefer the "RPG memorial", but I do not feel so strongly
about this that I would sink the Common Lisp effort over it.

22.  Shall the following proposal for the OPTIMIZE declaration be adopted?
	(y) yes   (n) no

Issue 22: *** Y ***
Hedrick: Y	Wholey: -	Fahlman: Y	Weinreb: Y	Killian: N
Zubkoff: Y	Moon: Y		van Roggen: Y	Masinter: N	RMS: -
Dyer: Y		Bawden: -	Feinberg: N	Ginder: Y	Burke et al.: Y
Brooks: -	Gabriel: Y	DECLISP: Y	Steele: -	Dill: X
Scherlis: N	Pitman: X	Anderson: X

Pitman: I would vote YES except:
The use of numbers instead of keywords bothers me. The section saying
which numbers can be which values and how those values will be interpreted
seems too FORTRANesque to me. I think these values should be just keywords
or the tight restrictions on their values should be lifted. The only use
for numbers would be to allow users a fluid range of possibilities.

Feinberg: Keywords instead of numbers would be nicer.  How about
:dont-care, :low, :medium, :high?

Dill: I don't think that we need an optimize declaration in common lisp.
It's not necessary for portability, and intensely dependent on compiler
implementations.  If we must have one, I strongly favor the Fahlman proposal
over proposals that would have symbolic specifications.
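
For reference, a sketch of the numeric style under discussion, as it
survives in the final language (quality values range from 0 to 3):

     (DEFUN SUM-VECTOR (V)
       (DECLARE (OPTIMIZE (SPEED 3) (SAFETY 1))
                (TYPE SIMPLE-VECTOR V))
       (LET ((SUM 0))
         (DOTIMES (I (LENGTH V) SUM)
           (INCF SUM (SVREF V I)))))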

23.  Shall it be permitted for macro calls to expand into DECLARE forms
and then be recognized as valid declarations?
This would not allow macro calls *within* a DECLARE form, only allow
macros to expand into a DECLARE form.
	(y) yes   (n) no

Issue 23: *** Y ***
Hedrick: Y	Wholey: Y	Fahlman: Y	Weinreb: Y	Killian: Y
Zubkoff: Y	Moon: Y		van Roggen: Y	Masinter: Y	RMS: -
Dyer: Y		Bawden: Y	Feinberg: Y	Ginder: -	Burke et al.: Y
Brooks: Y	Gabriel: Y	DECLISP: Y	Steele: Y	Dill: X
Scherlis: Y	Pitman: Y	Anderson: Y

Pitman: I also support allowing multiple declare forms at the top of
a bind form, i.e.,
   (LAMBDA (X Y) (DECLARE (SPECIAL X)) (DECLARE (SPECIAL Y)) ...)
for ease in macros. Steele's proposed evaluator did this and it wasn't
notably expensive.
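
Completing Pitman's fragment into a full, runnable form (a sketch;
both DECLARE forms are honored, and this style did become legal in
the final language):

     ((LAMBDA (X Y)
        (DECLARE (SPECIAL X))
        (DECLARE (SPECIAL Y))
        (LIST X Y))
      1 2)                                ; => (1 2)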

24.  Shall there be printer control variables ARRAY-PRINLEVEL and
ARRAY-PRINLENGTH to control printing of arrays?  These would not
limit the printing of strings.
	(y) yes   (n) no

Issue 24:  Controversial
Hedrick: N	Wholey: Y	Fahlman: N	Weinreb: Y	Killian: Y
Zubkoff: Y	Moon: Y		van Roggen: Y	Masinter: N	RMS: -
Dyer: Y		Bawden: Y	Feinberg: Y	Ginder: Y	Burke et al.: Y
Brooks: -	Gabriel: N	DECLISP: Y	Steele: Y	Dill: X
Scherlis: Y	Pitman: N	Anderson: Y

25.  Shall lambda macros, as described below, be incorporated into
the language, and if so, shall they occupy the function name space
or a separate name space?
	(f) function name space   (s) separate name space   (n) no lambda macros

Issue 25: Controversial
Hedrick: N	Wholey: -	Fahlman: N	Weinreb: Y	Killian: F
Zubkoff: -	Moon: S		van Roggen: S	Masinter: D	RMS: S
Dyer: S		Bawden: S	Feinberg: N	Ginder: -	Burke et al.: S
Brooks: N	Gabriel: F	DECLISP: S	Steele: N	Dill: N
Scherlis: -	Pitman: S	Anderson: N

Fahlman: I seem to be unable to locate any explanation of why Lambda macros
are useful enough to be worth the bother.  Looks like needless hair to
me, but I seem to dimly recall some arguments for why they were needed.
I'm not passionately opposed, but every page full of hairy stuff in the
manual hurts us.

Masinter: Spec here not consistent with MACROEXPAND proposal.

Feinberg: Once again, hair that I don't think needs to be standardized on.
I think most users would never need this, and perhaps a better 
way to do this can be thought of.

26.  Shall the floating-point manipulations described below be adopted?
	(y) as described by MOON
	(a) as amended (FLOAT-SIGN changed) by GLS
	(n) do not adopt them

Issue 26: *** A ***
Hedrick: A	Wholey: A	Fahlman: A	Weinreb: A	Killian: A
Zubkoff: A	Moon: Y		van Roggen: A	Masinter: -	RMS: -
Dyer: -		Bawden: -	Feinberg: -	Ginder: A	Burke et al.: -
Brooks: -	Gabriel: A	DECLISP: A	Steele: A	Dill: X
Scherlis: -	Pitman: -	Anderson: Y

Killian: Since TRUNC was renamed TRUNCATE at the last meeting, the
FTRUNC in this proposal would have to become FTRUNCATE.

27.  Shall DEFMACRO, DEFSTRUCT, and other defining forms also be
allowed to take documentation strings as possible and appropriate?
	(y) yes   (n) no

Issue 27: *** Y ***
Hedrick: Y	Wholey: Y	Fahlman: Y	Weinreb: Y	Killian: Y
Zubkoff: Y	Moon: Y		van Roggen: Y	Masinter: Y	RMS: Y
Dyer: Y		Bawden: Y	Feinberg: Y	Ginder: Y	Burke et al.: Y
Brooks: Y	Gabriel: Y	DECLISP: Y	Steele: Y	Dill: X
Scherlis: Y	Pitman: Y	Anderson: Y
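
For illustration, a documentation string in a defining form, as adopted:

     (DEFMACRO TWICE (X)
       "Expand into a form computing twice X."
       `(* 2 ,X))

     (DOCUMENTATION 'TWICE 'FUNCTION)
     ;; => "Expand into a form computing twice X."
     ;; (implementations are permitted to discard it and return NIL)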

28.  Shall the following proposed revision of OPEN keywords be accepted?
	(y) yes   (n) no

Issue 28: *** Y ***
Hedrick: Y	Wholey: Y	Fahlman: Y	Weinreb: Y	Killian: Y
Zubkoff: Y	Moon: Y		van Roggen: Y	Masinter: Y	RMS: -
Dyer: Y		Bawden: Y	Feinberg: Y	Ginder: Y	Burke et al.: Y
Brooks: Y	Gabriel: Y	DECLISP: Y	Steele: Y	Dill: X
Scherlis: -	Pitman: X	Anderson: Y

DECLISP: Either READ-ALTER, READ-WRITE, or UPDATE should replace the :OVERWRITE
keyword for :DIRECTION.  Overwrite suggests that an existing file will be
destroyed by having new data written into the same space.
-------

Then, any calls to the functional argument that
fun-with-functional-arg executes will pass arguments as if the
number of arguments did not matter.

deflambda-macro
deflambda-macro is like defmacro, but defines a lambda macro
instead of a normal macro.

deflambda-macro-displace
deflambda-macro-displace is like defmacro-displace, but defines
a lambda macro instead of a normal macro.

deffunction function-spec lambda-macro-name lambda-list &body body
deffunction defines a function with an arbitrary lambda macro
instead of lambda.  It takes arguments like defun, except that
the argument immediately following the function specifier is the name of
the lambda macro to be used.  deffunction expands the lambda macro
immediately, so the lambda macro must have been previously defined.

For example:

(deffunction some-interlisp-like-function ilisp (x y z)
  (list x y z))

would define a function called some-interlisp-like-function, that
would use the lambda macro called ilisp.  Thus, the function would
do no number-of-arguments checking.
----------------------------------------------------------------

26.  Shall the floating-point manipulations described below be adopted?
	(y) as described by MOON
	(a) as amended (FLOAT-SIGN changed) by GLS
	(n) do not adopt them
----------------------------------------------------------------
Date: Thursday, 30 September 1982  05:55-EDT
From: MOON at SCRC-TENEX

I am not completely happy with the FLOAT-FRACTION, FLOAT-EXPONENT, and
SCALE-FLOAT functions in the Colander edition.  At the meeting in August I
was assigned to make a proposal.  I am slow.

A minor issue is that the range of FLOAT-FRACTION fails to include zero (of
course it has to), and is inclusive at both ends, which means that there
are two possible return values for some numbers.  I guess that this ugliness
has to stay because some implementations require this freedom for hardware
reasons, and it doesn't make a big difference from a numerical analysis point
of view.  My proposal is to include zero in the range and to add a note about
two possible values for numbers that are an exact power of the base.

A more major issue is that some applications that break down a flonum into
a fraction and an exponent, or assemble a flonum from a fraction and an
exponent, are best served by representing the fraction as a flonum, while
others are best served by representing it as an integer.  An example of
the former is a numerical routine that scales its argument into a certain
range.  An example of the latter is a printing routine that must do exact
integer arithmetic on the fraction.

In the agenda for the August meeting it was also proposed that there be
a function to return the precision of the representation of a given flonum
(presumably in bits); this would be in addition to the "epsilon" constants
described on page 143 of the Colander.

A goal of all this is to make it possible to write portable numeric functions,
such as the trigonometric functions and my debugged version of Steele's
totally accurate floating-point number printer.  These would be portable
to all implementations but perhaps not as efficient as hand-crafted routines
that avoided bignum arithmetic, used special machine instructions, avoided
computing to more precision than the machine really has, etc.

Proposal:

SCALE-FLOAT x e -> y

  y = (* x (expt 2.0 e)) and is a float of the same type as x.
  SCALE-FLOAT is more efficient than exponentiating and multiplying, and
  also cannot overflow or underflow unless the final result (y) cannot
  be represented.

  x is also allowed to be a rational, in which case y is of the default
  type (same as the FLOAT function).

  [x being allowed to be a rational can be removed if anyone objects.  But
   note that this function has to be generic across the different float types
   in any case, so it might as well be generic across all number types.]

UNSCALE-FLOAT y -> x e
  The first value, x, is a float of the same type as y.  The second value, e,
  is an integer such that (= y (* x (expt 2.0 e))).

  The magnitude of x is zero or between 1/b and 1 inclusive, where b is the
  radix of the representation: 2 on most machines, but examples of 8 and
  16, and I think 4, exist.  x has the same sign as y.

  It is an error if y is a rational rather than a float, or if y is an
  infinity.  (Leave infinity out of the Common Lisp manual, though).
  It is not an error if y is zero.

FLOAT-MANTISSA x -> f
FLOAT-EXPONENT x -> e
FLOAT-SIGN x -> s
FLOAT-PRECISION x -> p
  f is a non-negative integer, e is an integer, s is 1 or 0.
  (= x (* (SCALE-FLOAT (FLOAT f x) e) (IF (ZEROP S) 1 -1))) is true.
  It is up to the implementation whether f is the smallest possible integer
  (zeros on the right are removed and e is increased), or f is an integer with
  as many bits as the precision of the representation of x, or perhaps a "few"
  more.  The only thing guaranteed about f is that it is non-negative and
  the above equality is true.

  f is non-negative to avoid problems with minus zero.  s is 1 for minus zero
  even though MINUSP is not true of minus zero (otherwise the FLOAT-SIGN function
  would be redundant).

  p is an integer, the number of bits of precision in x.  This is a constant
  for each flonum representation type (except perhaps for variable-precision
  "bigfloats").

  [I am amenable to converting these four functions into one function that
  returns four values if anyone can come up with a name.  EXPLODE-FLOAT is
  the best so far, and it's not very good, especially since the traditional
  EXPLODE function has been flushed from Common Lisp.  Perhaps DECODE-FLOAT.]

  [I am amenable to adding a function that takes f, e, and s as arguments
   and returns x.  It might be called ENCODE-FLOAT or MAKE-FLOAT.  It ought to
   take either a type argument or an optional fourth argument, the way FLOAT
   takes an optional second argument, which is an example of the type to return.]

FTRUNC x -> fp ip
  The FTRUNC function as it is already defined provides the fraction-part and
  integer-part operations.

These functions exist now in the Lisp machines, with different names and slightly
different semantics in some cases.  They are very easy to write.

Comments?  Suggestions for names?
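
For comparison, a sketch using the names these operations eventually
received in the standard (UNSCALE-FLOAT became DECODE-FLOAT, and the
f/e/s decomposition became INTEGER-DECODE-FLOAT):

     (MULTIPLE-VALUE-BIND (FRACTION EXPONENT SIGN)
         (DECODE-FLOAT 6.5)
       ;; FRACTION = 0.8125, EXPONENT = 3, SIGN = 1.0 (radix 2)
       (= 6.5 (* SIGN (SCALE-FLOAT FRACTION EXPONENT))))   ; => T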

Date:  4 October 1982 2355-EDT (Monday)
From: Guy.Steele at CMU-10A

I support Moon's proposal, but would like to suggest that FLOAT-SIGN
be modified to
	(FLOAT-SIGN x &optional (y (float 1 x)))
	returns z such that x and z have same sign and (= (abs y) (abs z)).
In this way (FLOAT-SIGN x) returns 1.0 or -1.0 of the same format as x,
and FLOAT-SIGN of two arguments is what the IEEE proposal calls COPYSIGN,
a useful function indeed in numerical code.
--Guy
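
A sketch of the amended behavior, which is what the final language
adopted:

     (FLOAT-SIGN -2.0)          ; => -1.0, in the same format as the argument
     (FLOAT-SIGN -3.0 5.0)      ; => -5.0: the IEEE COPYSIGN operation
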
----------------------------------------------------------------

27.  Shall DEFMACRO, DEFSTRUCT, and other defining forms also be
allowed to take documentation strings as possible and appropriate?
	(y) yes   (n) no

28.  Shall the following proposed revision of OPEN keywords be accepted?
	(y) yes   (n) no
-------

14-Aug-83  1216	·······@CMU-CS-C.ARPA 	Things to do
Received: from CMU-CS-C by SU-AI with TCP/SMTP; 14 Aug 83  12:16:28 PDT
Received: ID <·······@CMU-CS-C.ARPA>; Sun 14 Aug 83 15:16:47-EDT
Date: Sun, 14 Aug 1983  15:16 EDT
From: Scott E. Fahlman <·······@CMU-CS-C.ARPA>
To:   common-lisp @ SU-AI.ARPA
Subject: Things to do


A bunch of things were put off without decisions or were patched over in
the effort to get agreement on the first edition.  Most of the people
who have been intensively involved in the language design will be tied
up for another couple of months getting their implementations up to spec
and tweaking them for performance.  However, it is perhaps not too soon
to begin thinking about what major additions/changes we want to get into
the second edition, so that those who want to make proposals can begin
preparing them and so that people can make their plans in light of what
is likely to be coming.

Here's a list of the major things that I see on the agenda for the next
year or so.  Some are yellow-pages packages, some have deep roots
and require white-pages support, and some are so pervasive that they
will probably migrate into the white pages after a probationary period
in yellow-land.  I'm sure I'm forgetting a few things that have already
been suggested.  I'm also sure that people will have some additional
proposals to make.  I am not including very minor and trivial changes
that we might want to make in the language as we gain some experience
with it.

1. Someone needs to implement the transcendental functions for complex
numbers in a portable way so that we can all use these.  The functions
should be parameterized so that they will work for all the various
floating-point precisions that implementations might offer.  The design
should be uncontroversial, since it is already specified in the manual.
I don't think we have any volunteers to do this at present.

2. We need to re-think the issue of function specs, and agree on what
should go into the white pages next time around.  Moon's earlier
proposal, or some subset of it, is probably what we want to go with.

3. At one point HIC offered to propose a minimal set of white-pages
support for efficient implementation of a portable flavor system, and to
supply the portable part.  The white-pages support would also be usable
by other object-oriented paradigms with different inheritance schemes
(that's the controversial part).  After a brief exchange of messages,
HIC got super-busy on other matters and we haven't heard much since
then.  Either HIC or someone else needs to finish this proposal, so that
we can put in the low-level support and begin playing with the portable
implementation of flavors.  Only after more Common Lisp users have had
some opportunity to play with flavors will it make sense to consider
including them (or some variation) in the white pages.  There is a lot
of interest in this out in user-land.

4. We need some sort of iteration facility more powerful than DO.  The
existing proposals are some extensively cleaned-up revision of LOOP and
Dick Waters' LETS package.  There may be some other ideas out there as
well.  Probably the best way to proceed here is for the proponents of
each style to implement their package portably for the yellow pages and
let the customers decide what they like.  If a clear favorite emerges,
it will probably be absorbed into the white pages, though this would not
preclude personal use of the other style.  None of these things requires
white-pages support -- it is just a matter of what we want to encourage
users to use, and how strongly.

5. A good, portable, user-modifiable pretty printer is needed, and if it
were done well enough I see no reason not to put the user-visible
interface in the white pages next time around.  Waters' GPRINT is one
candidate, and is being adopted as an interim pretty-printer by DEC.
The last time I looked, the code for that package was impenetrable and
the interface to it was excessively hairy, but I've heard that it has
been simplified.  Maybe this is what we want to go with.  Other options?

6. We need to work out the business of taxonomic error-handling.  Moon
has a proposal in mind, I believe.  A possible problem is that this
wants to be white-pages, so if it depends on flavors it gets tied up
with the issue of making flavors white-pages as well.

7. The Hemlock editor, a public-domain Emacs-clone written in portable
Common Lisp, is now running on the Perq and Vax implementations.  We
have a lot of additional commands and modes to implement and some tuning
to do, but that should happen fairly rapidly over the next few months.
Of course, this cannot just be moved verbatim to a new implementation
and run there, since it interacts closely with screen-management and
with the file system.  Once Hemlock is polished, it will provide a
reasonable minimum editing/top-level environment for any Common Lisp
implementation that takes the trouble to adapt it to the local system.
This should eliminate the need for hairy rubout-handlers, interlispy
top-levels, S-expression editors, and some other "environment" packages.
We plan to add some version of "info mode" at some point and to get the
Common Lisp Manual and yellow pages documents set up for tree-structured
access by this package, but that won't happen right away.

8. Someone ought to put together a reasonable package of macro-writer's
aids: functions that know which things can be evaluated multiple times
without producing side-effects, type-analysis hacks, and other such
goodies.

If you have items to add to this list, let me know.

-- Scott

18-Aug-83  1006	@·············@SCRC-TENEX 	What to do next   
Received: from MIT-MC by SU-AI with TCP/SMTP; 18 Aug 83  10:06:04 PDT
Date: Thursday, 18 August 1983  11:54-EDT
From: dlw at SCRC-TENEX, benson at SCRC-TENEX
Subject: What to do next
To:   fahlman at cmuc
Cc:   common-lisp at su-ai


Scott, I appreciated your summary of pending issues in Common Lisp, and
I certainly think we should proceed to work on these things.  However, I
think that the "next things to do", after we get out the initial real
Common Lisp manual, are:

(1) Create a Common Lisp Virtual Machine specification, and gather a
body of public domain Lisp code which, when loaded into a proto-Lisp
that meets the spec, produces a complete Common Lisp interpreter that
meets the full language spec.  (This doesn't address the portable
compiler problem.)

(2) Establish an official Common Lisp subset, suitable for
implementation on 16-bit microcomputers such as the 68000 and the 8088.
I understand that Gabriel is interested in 68000 implementations, and I
am trying to interest Bob Rorscharch (who implemented IQLISP, which is
an IBM PC implementation of a UCILISP subset) in converting his product
into a Common Lisp implementation.

There are a lot of problems with subsetting.  You can't leave out
multiple values, because several primitives return multiple values and
you don't want to omit all of these primitives (and you don't want to
discourage the addition of new primitives that return multiple values,
in future versions of Common Lisp).  You can't leave out packages, at
least not entirely, because keywords are essential to many functions.
And many things, if removed, would have to be replaced by something
significantly less clean.  We'd ideally like to remove things that (a)
can be removed without creating the need for an unclean simpler
substitute, and (b) aren't used by the rest of the system.  In other
words, we have to find modular chunks to break off.  And, of course,
any program that works in the subset has to work and do exactly the
same thing in full Common Lisp, unless the program has some error
(in the "it is an error" sense).  The decision as to what goes
in and what goes out should be made in light of the fact that
an implementation might be heavily into "autoloading".

Complex numbers can easily be omitted.

The requirement for all the floating point precisions can be
omitted.  Of course, Common Lisp is flexible in this regard anyway.

Rational numbers could be left out.  They aren't hard, per se, but
they're just another thing to do.  The "/" function on two integers
would have to signal an error.

Packages could be trimmed down to only be a feature that supplies
keywords; most of the package system might be removable.

Lexical scoping might possibly be removable.  You could remove support
for LABELS, FLET, and MACROLET.  You can't remove internal functions
entirely (i.e. MAPCAR of a lambda-expression can't be removed) but they
might have some restrictions on them.

Adjustable arrays could be removed.  Fill pointers could go too,
although it's not clear that it's worth it.  In the extreme, you could
only have simple arrays.  You could even remove multi-D arrays
entirely, or only 1-D and 2-D.

Several functions look like they might be big, and aren't really
required.  Some candidates: COERCE, TYPE-OF, the hard version
of DEFSETF (whatever you call it), LOOP, 

TYPEP and SUBTYPEP are hard to do, but it's hard to see how
to get rid of the typing system!  SUBTYPEP itself might go.

Multiple values would be a great thing to get rid of in the subset, but
there are the Common Lisp primitives that use multiple values.  Perhaps
we should add new primitives that return these second values only, for
the benefit of the subset, or something.

Catch, throw, and unwind-protect could be removed, although they're
sure hard to live without.

Lots of numeric stuff is non-critical:  GCD, LCM, CONJUGATE, the
exponentials and transcendentals, rationalize, byte manipulation, random
numbers.

The sequence functions are a lot of work and take a lot of room in your
machine.  It would be nice to do something about this.  Unfortunately,
simply omitting all the sequence functions takes away valuable basic
functionality such as MEMQ.  Perhaps the subset could leave out some of
the keywords, like :test and :test-not and :from-end.
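
For example (a sketch: MEMQ is the MacLisp name; Common Lisp spells it
with an explicit test, and the keywords are what the subset would drop):

     (MEMBER 'B '(A B C))                 ; default :TEST is EQL
     (MEMBER 'B '(A B C) :TEST #'EQ)      ; the MEMQ of older Lisps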

Hash tables are not strictly necessary, although the system itself
is likely to want to use some kind of hash tables somewhere,
maybe not the user-visible ones.

Maybe some of the defstruct options could be omitted, though I don't
think that getting rid of defstruct entirely would be acceptable.

Some of the make-xxx-stream functions are unnecessary.

Some of the hairy reader syntax is not strictly necessary.  The circular
structure stuff and load-time evaluation are the main candidates.

The stuff to allow manipulation of readtables is not strictly necessary,
or could be partially restricted.

Some of the hairy format options could be omitted.  I won't go into
detail on this.

Some of the hairy OPEN options could go, although I'd hate to be the one
to decide which options are the non-critical ones.  Also some of the
file operations (rename, delete, attribute manipulation) could go.

The debugging tools might be optional although probably they just
get autoloaded anyway.

23-Mar-84  2248	····@CMU-CS-A 	Common Lisp Reference Manual  
Received: from CMU-CS-A.ARPA by SU-AI.ARPA with TCP; 23 Mar 84  22:48:17 PST
Date: 24 Mar 84 0130 EST (Saturday)
From: ··········@CMU-CS-A
To: ···········@SU-AI
Subject: Common Lisp Reference Manual

The publisher of the Common Lisp Reference Manual is Digital Press.  I
understand that they may be negotiating with manufacturers to allow
them to reprint the manual in various ways as part of their product
documentation.  I am leaving the business and legal aspects of this up
to Digital Press, and referring all corporate inquiries to them.  My
goal is primarily to ensure that (a) no one publishes a manual that
claims to be about Common Lisp when it doesn't satisfy the Common Lisp
specifications, and (b) that everyone involved in the
Common Lisp effort is properly credited.  I certainly do not want to
block anyone from implementing or using Common Lisp, or even a subset,
superset, or side-set of Common Lisp, as long as any differences are
clearly and correctly stated in the relevant documentation, and as long
as the Common Lisp Reference Manual is recognized and credited as the
definitive document on Common Lisp.  This requires a certain balance
between free permission and tight control.  This is why I am letting
the publisher handle it; they almost certainly have more experience
with such things than I do.

I have asked the editor at Digital Press to arrange for complimentary
copies to be sent to everyone who has contributed substantially to the
Common Lisp effort.  This will include most of the people on this
mailing list, I imagine.  The set of people I have in mind is listed in
the acknowledgements of the book--seventy or eighty persons
altogether--so if you see a copy of the book and find your name in that
list, you might want to wait a bit for your complimentary copy to show
up before buying one.  (Because of the large number of copies involved,
they aren't really complimentary, which is to say the publisher isn't
footing the cost:  the cost of them will be paid out of the royalties.
I estimate that the royalties from the entire first print run will just
about cover these free copies.  It seems only fair to me that everyone
who contributed to the language design should get a copy of the final
version!)

The nominal schedule calls for the typesetter to take about five weeks
to produce camera-ready copy from the files I sent to them on magnetic
tape.  The process of printing, binding, and distribution will then take
another four to five weeks.  So at this point we're talking availability
at about the end of May.  This is a tight and optimistic schedule; don't
blame Digital Press if it slides.  (I'm certainly not about to cast any
stones!)  Unless you're an implementor wanting to order a thousand
copies to distribute with your system, please don't bother the folks at
Digital Press until then; they've got enough problems.  I'll send more
information to this mailing list as the date approaches.

One last note.  The book is about 400 pages of 8.5" by 11" Dover output.
Apparently the publisher and typesetter decided that this made the lines
too wide for easy reading, so they will use a 6" by 9" format.  This
will make the shape of the book approximately cubical.  Now, there are
26 chapters counting the index, and a Rubik's cube has 26 exterior cubies.
I'll let you individually extrapolate and fantasize from there.
--Guy

20-Jun-84  2152	····@CMU-CS-A.ARPA 	"ANSI Lisp" rumor   
Received: from CMU-CS-A.ARPA by SU-AI.ARPA with TCP; 20 Jun 84  21:52:02 PDT
Date: 21 Jun 84 0050 EDT (Thursday)
From: ··········@CMU-CS-A.ARPA
To: ···········@XEROX.ARPA
Subject: "ANSI Lisp" rumor
CC: ···········@SU-AI.ARPA
In-Reply-To: ············@XEROX.ARPA's message of 2 Jun 84 20:55-EST

I do not know of any official effort within ANSI to do anything at all
about LISP.  Here is what I do know:  I have been told that a group
in China has suggested that perhaps an ISO standard for LISP should
be promulgated.  I know nothing more about it than that.  However,
at the request of David Wise and J.A.N. Lee, I have sent a copy of
the Common LISP Manual to J.A.N. Lee, who has been involved with
standards of various kinds at ACM.  (David Wise is a member of the SIGPLAN
council, or whatever it is called, and is the LISP expert within that
body.)  The idea is that if either an ISO or ANSI standards effort
were to be undertaken, Wise and Lee feel that such an effort should
certainly take the work done on Common LISP into account, and they want
people in the standards organizations to be aware of the Common LISP
work.  I sent a copy of the Table of Contents to Lee several months
ago, and it was my understanding that he might choose to circulate
copies of it to, for example, members of the SIGPLAN council.
That's where it stands.  I repeat, I know of no effort actually to
start a standards effort; Wise and Lee merely want certain people to
be prepared by already having information about Common LISP if any of
a number of possible developments should unfold.
--Guy


--
---------------------------------------------------------------------------
Arun Welch					2455 Northstar Rd
Network Engineer				Columbus, OH 43221
OARnet						·····@oar.net
From: Scott Fahlman
Subject: Re: CL History (was Re: Why do people like C?)
Date: 
Message-ID: <386vm7$b80@cantaloupe.srv.cs.cmu.edu>
In article <···················@thor.oar.net> ·····@thor.oar.net (Arun Welch) writes:

    Scott, you're the only one of the "gang of
    five" still following this, you might have some insights...

I didn't spot anything in Jonl's old message that is factually
incorrect.  It is, of course, a view of those parts of the Common Lisp
startup that Jonl was most involved in.  My account would emphasize
other things, but I don't have time to write a Common Lisp history
(and have probably forgotten too much to do this without digging
through a lot of old documents).  I'm busy with Dylan now, plus
several other projects.

I think this old account does show that there were several overtures
to get the Xerox/Interlisp community aboard the Common Lisp bandwagon
early on, though these were unsuccessful -- neither side was willing
to give up certain well-loved features of their preferred Lisp
environments.

Note that all of this occurred LONG before the X3J13 group was set up,
which was the subject of the earlier message.

-- Scott

===========================================================================
Scott E. Fahlman			Internet:  ····@cs.cmu.edu
Principal Research Scientist		Phone:     412 268-2575
School of Computer Science              Fax:       412 268-5576 (new!)
Carnegie Mellon University		Latitude:  40:26:46 N
5000 Forbes Avenue			Longitude: 79:56:55 W
Pittsburgh, PA 15213			Mood:      :-)
===========================================================================
From: Jeff Dalton
Subject: Re: CL History (was Re: Why do people like C?)
Date: 
Message-ID: <Cy1IEM.63I@cogsci.ed.ac.uk>
In article <··········@cantaloupe.srv.cs.cmu.edu> ···@CS.CMU.EDU (Scott Fahlman) writes:
>
>In article <···················@thor.oar.net> ·····@thor.oar.net (Arun Welch) writes:
>
>    Scott, you're the only one of the "gang of
>    five" still following this, you might have some insights...
>
>I didn't spot anything in Jonl's old message that is factually
>incorrect.  It is, of course, a view of those parts of the Common Lisp
>startup that Jonl was most involved in. 

Could someone send me this message of Jonl's?  I haven't seen it.

There are BTW substantial on-line archives of the e-mail discussion
that took place while CL was being designed.  I'm not sure exactly
where they are these days...

Anyone interested in the history of Lisp around CL and in how
Cl relates to other Lisps should look at MacLisp, Franz Lisp,
Lisp Machine Lisp (various editions) and VAX NIL.  Common Lisp
did not appear out of the blue with some wild and crazy
semantics designed to wipe out entire Lispish forms of life.

Moreover, Common Lisp is in many ways a substantial cleanup
and improvement over earlier Lisps.  Consider, for instance,
unwind-protect vs the several mechanisms of InterLisp.
Consider full lexical scoping vs the partly lexical and
different-in-interpreter-and-compiler rules of MacLisp
and related Lisps such as LM Lisp and Franz and the
variety of options in InterLisp.

There's no doubt that people would do things differently these
days, which is why Dylan and EuLisp and ISLisp have some things
in common with Common and not others.  But if you look at the
other Lisps around at the time and at where Lisp seemed to be
headed, Common Lisp does make sense and it was in many ways better
than the alternatives.

-- jeff
From: Jeff Dalton
Subject: Re: Bias in X3J13? (long and possibly boring)
Date: 
Message-ID: <Cy6yuG.9n3@cogsci.ed.ac.uk>
In article <··········@rheged.dircon.co.uk> ·····@rheged.dircon.co.uk (Simon Brooke) writes:
>In article <··········@cantaloupe.srv.cs.cmu.edu>,
>Scott Fahlman <···@CS.CMU.EDU> wrote:
>>
>>Simon,
>>
>>This pile of conspiracy theories about Lisp (or "LisP" as you call it)
>>is truly amazing.  
>
>Scott
>
>You were very close to the centre of these events, whereas I was the
>other side of the Atlantic ocean and *very* much a bit player -- not
>involved at all until after the gunsmoke had started to clear.  [...]

>If you say that Symbolics, TI and Lucid all had a larger share of the
>LisP market than Xerox, I am sure that you must be right. Certainly,
>you have better information on this point than I. In this country,
>during the middle eighties, I was aware of probably 70 Xerox machines
>in active use in academia and industry, two Symbolics, and no
>Explorers (I always wanted a Symbolics, but could never pursuade
>anyone to buy me one). Were the ratios greatly different in the States?

We had a micro-Explorer in Edinburgh, as well as a Symbolics machine.
There were more Xerox machines chiefly because of the influence of
Henry Thompson.  If you subtract the Edinburgh D-machines from the
UK total, there's a noticeable drop.

>I had not remembered, or had not appreciated, the fact that Xerox had
>been invited early, and had declined. It is certainly true that Danny
>Bobrow and Mark Stefik were involved in the CLOS specification
>process. The impression we got this side of the pond (and very far
>from the protagonists) was that this was not entirely a happy
>business.

Please: the impression *you* got on this side of the pond.
Maybe others had the same impression, but not everyone.

>>   This allegation explains both the comments system and the choice of
>>   LISP2, two decisions each of which are otherwise inexplicable. 

They are easily otherwise explicable, as I explained in an earlier
article.

>That isn't really fair. There are lots of technical points I disagree
>with in Common LISP. These two are particularly peculiar. Can you name
>one person currently writing on the design of functional languages who
>would defend the double name space? See for example Gabriel & Pitman
>in L&SC 1,1, in whose acknowledgements you are yourself credited.

You will note that points in favor of Lisp-2 are given in that
paper.

>Similarly, a comment system which effectively prevents in-core working
>*must* have been deliberately provocative. So many other potential
>solutions would have allowed both in-core and file-based development
>styles to co-exist.  This one did not. 

I think this is completely off the wall!  I don't believe it prevents
in-core working, but even if it does I don't believe this claim of
deliberate provocation.  Can it not be nailed once and for all?

Now, if Common Lisp's treatment of comments were a major problem,
it could have been raised at many points.  I don't recall it ever
being a significant issue in X3J13, and it certainly could have
been changed by X3J13 just as case-insensitivity was.

>>The case-insensitive reader has nothing to do with DoD requirments.
>
>If you are sure of this, you must be right.

If the DoD required case-insensitivity, how come we were able to
add case-sensitivity in X3J13?

-- jeff
From: Richard A. O'Keefe
Subject: Re: Bias in X3J13? (long and possibly boring)
Date: 
Message-ID: <38idqo$1ls@goanna.cs.rmit.oz.au>
····@aiai.ed.ac.uk (Jeff Dalton) writes:
>>Similarly, a comment system which effectively prevents in-core working
>>*must* have been deliberately provocative. So many other potential
>>solutions would have allowed both in-core and file-based development
>>styles to co-exist.  This one did not. 

>I think this is completely off the wall!  I don't believe it prevents
>in-core working, but even if it does I don't believe this claim of
>deliberate provocation.  Can it not be nailed once and for all?

Yes, it can.  Interlisp-D replaced DEDIT with SEDIT.  SEDIT used
Common Lisp syntax for comments, yet it *did* allow in-core working.
When I was working on Xerox Quintus Prolog, I was quite impressed by
the way that SEDIT gave you "the best of both worlds", but not quite
impressed enough to stop doing all my serious editing in Emacs.

I am puzzled that anyone flaming the Common Lisp developers for spiting
Xerox would be so ignorant of Xerox's accomplishments in implementing a
very decent CL1 environment.

-- 
"The complex-type shall be a simple-type."  ISO 10206:1991 (Extended Pascal)
Richard A. O'Keefe; http://www.cs.rmit.edu.au/~ok; RMIT Comp.Sci.
From: Jason Trenouth
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <JASON.94Oct20132954@wratting.harlqn.co.uk>
On Thu, 13 Oct 1994, Simon Brooke wrote:

Simon> .... This was just about the time when X3J13 were driving their
Simon> nails into the coffin of LisP, ....

In article
<········································@swim5.eng.sematech.org>
"William D. Gooch" <······@swim5.eng.sematech.org> writes:

William> This seems to me to be extremely unfair to those who worked
William> hard to put together a comprehensive and IMO high-quality
William> standard for Common Lisp.  Do you have any justification for
William> this slam?  Did you offer your help?

>>>>> "Simon" == Simon Brooke <·····@rheged.dircon.co.uk> writes:

Simon>  To deal with your questions in reverse order:
Simon> 
Simon> (i) yes, I served on the British Standards Institution LisP
Simon> committee for a time (I'm not pretending my contribution was
Simon> particularly valuable).
Simon> 
Simon> (ii) There are a number of specific arguments I would advance
Simon> (a) as to the flawed nature of the Common LISP design, and (b)
Simon> to defend the claim that these flaws were a consequence of the
Simon> commercial/political axes being ground in X3J13.
Simon> 
Simon> (iia: Flaws in the language design)

Hi Simon,

I just thought I'd criticise your list of flaws in the language:

Simon> (iia1) Prior to the definition of Common LISP, many LisP
Simon> programmers used an in-core development style. This style of
Simon> development has significant advantages: the development cycle
Simon> is edit/test rather than edit/load/test. More significantly,
Simon> the code of a function actually on the stack can be edited. By
Simon> definition (CLtL 1, p347) Common LISP comments are not
Simon> read. Consequently, code edited in core loses its
Simon> documentation.

Many Lisp programmers did use in-core editing, but probably most did
not. The split reflected the InterLisp vs. MacLisp-family divide.

In-core editing has disadvantages too. An entertaining snapshot of the
arguments between the two communities is captured in [1] and repeated
in [2].

Simon> Richard Barbour's Procyon workaround, and the Envos Medley
Simon> workaround, are technically in breach of the standard
Simon> definition, which effectively prevents in-core development. A
Simon> language definition which prevents established practitioners
Simon> working in their preferred manner is broken.

What are their workarounds and why do they breach the standard?

Simon> (iia2) I don't think anyone any longer defends the decision to
Simon> choose a LISP2. It is especially true in LisP that code *is*
Simon> data.

I don't think that is true. There are LISP2 advocates and their
reasoning centers around ease of macrology. I believe KMP (Kent
Pitman) has co-authored a paper on the topic.

Simon> (iia3) Before the development of Common LISP a number of
Simon> typographical tricks were widespread in the LisP community
Simon> which greatly assisted the readability of code. As examples, it
Simon> was common to capitalise theInitialLettersOfEmbeddedWords in
Simon> names. Similarly, we used to have a convention that functions,
Simon> methods etc had names Which Started With a Capital Letter,
Simon> whereas other variables had names which started in lower
Simon> case. INCOMMONLISPTHISISNOLONGERPOSSIBLE. Any language which
Simon> cannot tell the difference between an atom and AN ATOM is
Simon> broken.

I think the case conventions were only widespread in the InterLisp
community.  

BTW, case conventions for multi-word tokens are more often seen in
infix languages because it can be confusing (for the human reader) to
allow hyphens in tokens.

Personally, I find:

	lots-of-words-in-a-token

much much easier to read than:

	lotsOfWordsInAToken

If you really want to develop code in this style then CL provides
portable functions and parameters you can use: e.g. READTABLE-CASE and
*PRINT-CASE*.
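
For instance, with only standard CL (the caveat in the comments is
the usual gotcha):

	;; READTABLE-CASE controls what READ does to the case of
	;; unescaped symbol names; the default is :UPCASE.
	(setf (readtable-case *readtable*) :preserve)
	;; Caveat: with :PRESERVE you must now type standard names in
	;; upper case, e.g. (DEFUN F (X) X), because the standard
	;; symbols' names are internally upper case.

	;; *PRINT-CASE* controls the case in which upper-case
	;; characters in symbol names are printed; :DOWNCASE stops
	;; the "shouting".
	(setf *print-case* :downcase)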

Simon> To put it another way, Portable Standard LisP (for example) is
Simon> a language for poets, but in Common LISP you can only shout.

Not true. By default, CL ignores the case that developers type and
shouts back at you.  Users' source code is generally written in _lower
case_. Uncontrolled output is generally in upper case. But as I said,
case I/O is user-definable in CL.

Simon> (iia4) The implementation of the sequence functions is a
Simon> mess. It's a shame, because one can admire the o'erleaping
Simon> ambition, but such a huge monolithic structure is required to
Simon> make it work that any Common LISP environment has to be
Simon> unwieldy. If Common LISP had been object-oriented from the
Simon> bottom up, a la EuLisP, it would have worked; but given that
Simon> decision wasn't taken (and that really isn't X3J13's fault --
Simon> O-O was too new at the time), it would have been better to
Simon> admit that lists and vectors are fundamentally different
Simon> things.

It is true that sequence functions should probably be generic but that
is more of an implementation detail. CL vendors could use GFs under
the hood to do the dispatching.
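
A sketch of the idea (MY-ELT is an invented name; real
implementations differ in the details):

	;; A sequence accessor as a generic function.  An implementation
	;; could dispatch like this internally instead of with one
	;; monolithic TYPECASE.
	(defgeneric my-elt (sequence index))

	(defmethod my-elt ((sequence list) index)
	  (nth index sequence))

	(defmethod my-elt ((sequence vector) index)
	  (aref sequence index))

	;; (my-elt '(a b c) 1)  => B
	;; (my-elt #(a b c) 1)  => B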

In what way do sequence functions require Common Lisp to be large and
unwieldy? I've not heard this argument before.

Simon> (iia5) I remain unconvinced that keywords in lambda-lists are a
Simon> good idea. A number of points here: it is fundamental to the
Simon> nature of LisP that it is syntax-less and keyword-less --
Simon> that's a lot of what gives it its elegance, and what allows a
Simon> LisP interpreter to be so small and simple. A Common LISP
Simon> interpreter must include a parser to handle lambda lists, and
Simon> once again is neither small nor simple.

Keyword arguments can make code that is evolving quickly much more
maintainable. CL is a great language for doing research and
development.
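
For example, a keyword interface can grow without breaking existing
call sites (FETCH and its keywords are invented here):

	(defun fetch (url &key (timeout 30) (retries 3))
	  ;; :RETRIES was added later; callers that never heard of it
	  ;; are unaffected, unlike with positional arguments.
	  (declare (ignorable timeout retries))
	  url)                            ; stub body

	;; (fetch "some-url")
	;; (fetch "some-url" :timeout 60)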

Simon> (iia6) I have complained often enough before about the
Simon> abomination, SETF. I rehearse my objections
Simon> briefly. Destructively altering lists may be dangerous, and
Simon> should always be done consciously and with care.  If you use
Simon> RPLAC, you know what you are doing. SETF makes destructive
Simon> change invisible to the naive user: it says 'take this bit of
Simon> memory, I don't know where it is, I don't know who owns it, I
Simon> don't know who else is holding pointers to it, and trample all
Simon> over it'.  Its direct equivalent is the BASIC keyword, POKE. I
Simon> *shudder*.

SETF is not only used for modifying list structure. It introduces an
abstract notion of "places". E.g. SETF is also used for setting CLOS
object slots.

The idea that potentially dangerous operations (like destructively
modifying list structure) should be protected through obfuscation is
itself dangerous.

SETF is not the direct equivalent of POKE. POKE is at the byte level
and doesn't respect HLL data structures.
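
To make the "places" point concrete (all standard CL):

	(let ((cell (list 'old))
	      (v    (make-array 3))
	      (tab  (make-hash-table)))
	  (setf (car cell) 'new)           ; list structure, like RPLACA
	  (setf (aref v 0) 42)             ; array element
	  (setf (gethash 'key tab) 'val)   ; hash-table entry
	  ;; and likewise (setf (slot-value obj 'slot) ...) for CLOS
	  (values cell v (gethash 'key tab)))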

Simon> Note that in critiquing the language definition, I have not
Simon> reiterated Henry Baker's objections (see message id
Simon> <················@netcom.com>) to the sheer size of the
Simon> language as defined, although I share them, and for very much
Simon> the reasons he states.

Common LISP is indeed a large language with some spurious stuff in it.

Simon> (iib) These flaws are consequences of political goals
Simon> 
Simon> At the time of the formation of X3J13, overwhelmingly the
Simon> largest company seriously involved in the LisP business was
Simon> Xerox. Xerox's InterLISP was big and bloated and inconsistent
Simon> enough, God knows, but it was nevertheless a wonderful
Simon> programmer's toolkit. Furthermore, the Xerox D series machine,
Simon> although expensive in itself, was very substantially cheaper
Simon> than competing LisP machines.

The largest body of Lisp users was probably the MacLisp crowd.

Simon> Given this circumstance, I am convinced by and happy to repeat
Simon> publicly the allegation that has been made frequently in the
Simon> past that the essential aim of a substantial number of the
Simon> participants in X3J13 was to make Common LISP as different as
Simon> possible from InterLISP, in order to make it more difficult for
Simon> Xerox to compete.

There was certainly some rivalry, but I think conscious sabotage is
stretching things a little.

Simon> This allegation explains both the comments system and the
Simon> choice of LISP2, two decisions each of which are otherwise
Simon> inexplicable. I am prepared to believe the claim that the
Simon> case-insensitive reader was a requirement of the United States
Simon> Department of Defense.

I don't think either decision is inexplicable. NB: case insensitivity
is just the default.

Simon> In summary, I claim that in a number of instances, X3J13
Simon> deliberately and knowingly chose the less good of technical
Simon> choices, in order to produce a language closer to that of the
Simon> smaller vendors, and further from that of Xerox.

The MacLisp family had a large number of users.

William> I don't think the X3J13 work in any way contributed to the
William> slump in the Lisp market, which was well underway before the
William> result of their efforts became widely available.

Simon> I hope that you may be right, but do not myself agree. I
Simon> believe, and I guess that you do, that languages within the
Simon> LisP family offer at the very least a better programming idiom
Simon> than C++. Ten years ago, machines which could run a big LisP
Simon> system were at last becoming cheap, and all the major computer
Simon> vendors were showing a considerable interest in the language.
Simon> 
Simon> Five years later that interest had waned. It's no co-incidence,
Simon> I think, that this was contemporaneous with the introduction of
Simon> a new, 'standard' version of the language so complex that
Simon> compilers were exceedingly difficult to develop, optimise and
Simon> maintain, and so different from existing, well established
Simon> variants of the language that experienced programmers were at a
Simon> loss.

I think this last bit is untrue. Common Lisp is very much in the
tradition of the MacLisp family.

Simon> Although, as I say, I believe that LisP is at least as good as
Simon> the languages in common use today as a language for the
Simon> development of complex software systems, I doubt whether its
Simon> decline can now be arrested.  I do not believe your claim that
Simon> '...the slump in the LisP market...'  was '...well underway...'
Simon> in 1984, when the aluminium book was published. Remember, the
Simon> middle eighties were the period of the Japanese '5th Generation
Simon> Project', the British Alvey Programme, and a number of similar
Simon> initiatives throughout the world. The same period saw the
Simon> founding of Harlequin and the commercialisation of
Simon> POPLOG. These things taken together seem  to me to indicate
Simon> considerable strength in the LisP market.

And Harlequin has been growing strongly since its foundation!

	Regards -- Jason

[1] R. M. Stallman and E. Sandewall, (1978), "Structured editors with
    a LISP", ACM Computing Surveys, vol 10, no. 4, December, pages
    505-508.

[2] Barstow, D. R. Shrobe, H. E. and Sandewall, E., (eds) (1984),
    "Interactive Programming Environments", McGraw Hill.
--
_____________________________________________________________________________
| Jason Trenouth,                        | EMAIL: ·····@uk.co.harlequin     |
| Harlequin Ltd, Barrington Hall,        | TEL:   (0223) 872522             |
| Barrington, Cambridge CB2 5RG, UK      | FAX:   (0223) 872519             |
From: Lou Steinberg
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <LOU.94Oct20144446@atanasoff.rutgers.edu>
In article <··········@rheged.dircon.co.uk> ·····@rheged.dircon.co.uk (Simon Brooke) writes:


   Given this circumstance, I am convinced by and happy to repeat
   publicly the allegation that has been made frequently in the past that
   the essential aim of a substantial number of the participants in X3J13
   was to make Common LISP as different as possible from InterLISP, in
   order to make it more difficult for Xerox to compete. 

I was an active Xerox D-machine user and AI researcher during the time
period you are talking about, although I was not part of any "insider"
group on any side.  I saw absolutely no evidence of the kind of
motivation you claim.  Just the opposite - it appeared to be Xerox
that at first was hanging back from participation in the Common Lisp
effort.  I remember only one issue in which it was clear that the
Xerox folks really were pushing for one side of an issue: they (or at
least Larry Masinter) felt strongly that CLOS should take a
generic-function approach, rather than a message-sending approach.  If
I remember correctly they had already developed a language (Portable
Common Loops??) which took the generic function approach.  Guess what
the decision for Common Lisp was?  Generic functions.
From: Henry G. Baker
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <hbakerCy17CC.vx@netcom.com>
In article <··········@rheged.dircon.co.uk> ·····@rheged.dircon.co.uk (Simon Brooke) writes:
>   In article
>   <········································@swim5.eng.sematech.org>
>   "William D. Gooch" <······@swim5.eng.sematech.org> writes:
>   >On Thu, 13 Oct 1994, Simon Brooke wrote:
>   >
>   >> .... This was just about the time
>   >> when X3J13 were driving their nails into the coffin of LisP, ....
>   >
>   >This seems to me to be extremely unfair to those who worked hard to put 
>   >together a comprehensive and IMO high-quality standard for Common Lisp.  
>   >Do you have any justification for this slam?  Did you offer your help?  
>
>(iia: Flaws in the language design)
>
>(iia1) Prior to the definition of Common LISP, many LisP programmers
>used an in-core development style. This style of development has
>significant advantages: the development cycle is edit/test rather than
>edit/load/test. More significantly, the code of a function actually on
>the stack can be edited. By definition (CLtL 1, p347) Common LISP
>comments are not read. Consequently, code edited in core loses its
>documentation.

As I have said elsewhere, the substitution of the Maclisp model of
Lisp program as a "character string" instead of the Interlisp model of
Lisp program as an S-expression made up of cons cells was a major step
backwards.  I apologize to the extent that Symbolics helped to push
things in this direction.

Many people were apparently turned off by the lack of sophistication
of the in-core Interlisp editors, and their inability to deal with
multiple fonts, better looking comments, programmer hints for pretty
printing, etc.  However, I think that this was due primarily to
address space limitations on the PDP-10/20 and not to any lack of
interest on the part of the Interlisp people.

Stallman's Emacs became so popular, and had enough Lisp-ish features,
that it seemed silly at the time not to take advantage of it.

The proper step forwards would have been to make S-expressions
_persistent_, instead of bowing to the Fortran/C/Ada model of programs
as character strings.  It's a real shame that Symbolics spent so much
time developing a character-based file system, instead of going
directly to a Statice-like persistent object system.  (Hindsight is
20/20.)

>(iia4) The implementation of the sequence functions is a mess. It's a
>shame, because one can admire the o'erleaping ambition, but such a
>huge monolithic structure is required to make it work that any Common
>LISP environment has to be unwieldy. If Common LISP had been
>object-oriented from the bottom up, a la EuLisP, it would have worked;
>but given that decision wasn't taken (and that really isn't X3J13's
>fault -- O-O was too new at the time), it would have been better to
>admit that lists and vectors are fundamentally different things.

I agree with this wholeheartedly, and have said so elsewhere.

>(iia5) I remain unconvinced that keywords in lambda-lists are a good
>idea. A number of points here: it is fundamental to the nature of LisP
>that it is syntax-less and keyword-less -- that's a lot of what gives
>it its elegance, and what allows a LisP interpreter to be so small
>and simple. A Common LISP interpreter must include a parser to handle
>lambda lists, and once again is neither small nor simple.

Two words: PL/I envy.

>(iia6) I have complained often enough before about the abomination,
>SETF. I rehearse my objections briefly. Destructively altering lists
>may be dangerous, and should always be done consciously and with care.
>If you use RPLAC, you know what you are doing. SETF makes
>destructive change invisible to the naive user: it says 'take this bit
>of memory, I don't know where it is, I don't know who owns it, I don't
>know who else is holding pointers to it, and trample all over it'.
>Its direct equivalent is the BASIC keyword, POKE. I *shudder*.

I agree that SETF is a real kludge, because it is trying to make up
for the lack of 1st class 'references'.  The Lisp machine provided for
invisible pointers, but they don't take care of the case of 1 bit
within a word.  A run-time implementation would have to cons a 'dope
vector' or equivalent to achieve the same effect.

Today, if someone wanted a truly _clean_ implementation of SETF-like
constructs, I would advise a more object-like approach using closures
like crazy.  By using traditional closures, _standard_ inlining
optimizations could then achieve the same optimized code that most
SETF macros achieve.  Furthermore, the non-optimized cases would
provide for the equivalent of runtime SETF's which are sometimes
sorely needed.
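
Something like this rough sketch (MAKE-REF, DEREF and CAR-REF are
invented names, not anyone's shipping API):

	;; A "reference" is just a reader closure and a writer closure.
	(defstruct ref reader writer)

	(defun deref (r)
	  (funcall (ref-reader r)))

	(defun (setf deref) (new r)
	  (funcall (ref-writer r) new))

	;; A first-class reference to the CAR of a cons:
	(defun car-ref (cell)
	  (make-ref :reader (lambda () (car cell))
	            :writer (lambda (new) (rplaca cell new) new)))

	;; (defparameter *c* (list 1 2))
	;; (setf (deref (car-ref *c*)) 99)   ; *C* is now (99 2)
	;; With DEREF inlined, this can compile to what a SETF macro
	;; would have produced; un-inlined, it is a run-time SETF.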

>Given this circumstance, I am convinced by and happy to repeat
>publicly the allegation that has been made frequently in the past that
>the essential aim of a substantial number of the participants in X3J13
>was to make Common LISP as different as possible from InterLISP, in
>order to make it more difficult for Xerox to compete. 

If you would change this to simply say "as much like the MIT Lisp
Machine as possible", I might agree.  I don't think that knocking
Interlisp was as much an issue as lessening the change for Lisp
Machine developers and users.  (After all, the Lisp Machine developers
thought that they had already done the 'right thing', so that any
changes would make the language worse in their minds.)

>I am
>prepared to believe the claim that the case-insensitive reader was a
>requirement of the United States Department of Defense.

It certainly was for Ada, but I seriously doubt it in the case of
Lisp.  A Common Lisp reader _could_ have preserved the case of symbol
spellings, but hashed and compared them on the basis of upper-case
only, as do some file systems.  The printer would then look to see
what the preferred spelling(s) were to print something.
Unfortunately, this would lead to other problems.

-------

I don't think that the standardization committee was trying to keep
different vendors from making improvements to their versions of the
language, so much as making it possible for a 3rd party SW vendor to
run their code on multiple platforms without a huge maintenance
problem.

In retrospect, however, that attitude was naive, since the sheer
volume of work caused by the mass of other changes precluded any
effort on vendor-specific 'improvements'.

The Lisp community also ran head-on into the 'the minimum is the
maximum' problem which is characteristic of all government edicts.
Whenever a standard of behavior is set, and there are a number of
competitors, the weakest competitor is now in a position to hold back
progress of everyone else in the name of 'compatibility'.  Any
competitor which attempts to move out in front runs the very real risk
of the other competitors ganging up on him in the standards committee
to ensure that his 'improvements' will have to be retracted, or at the
very least, undergo significant and expensive changes.  In such a
situation, the pioneer is quickly identified by the number of arrows
in his back.

Standards committees should _follow_, not lead.  The best standards
are _de facto_ standards, which have already developed as a result of
consensus.

Languages which don't change are dead.  Latin has been standardized
for years.  (Doesn't it bother anyone else that people who are really
good at Latin tend to gravitate to standards committees?)  Lisp grew
and prospered _because_ it was able to quickly change and adopt good
ideas from other languages.  The whole point of standards committees
is to _freeze_ a language at a certain point in time -- e.g.,
Fortran-66, Fortran-77, etc.  This guarantees that all new ideas will
now have to come from _outside_ that community -- e.g., all of
Fortran's ideas are now stolen from C, Lisp, Ada, FP, etc.  Lisp was,
and should remain, a _leader_ in exploring new ideas, and traditional
language standards are incompatible with this goal.

In my mind, probably the biggest _disservice_ that (D)ARPA did to the
programming language community was to try to force-feed 'standards'.
It is now impossible to get research funds to do programming
_language_ research, unless you give the language an entirely new
name, and hide it in new syntax.  There's plenty of money for
compiling old languages, or for 'application-specific' languages (when
else am I going to get to use that neat lex/yacc stuff that they
taught me in CS301?), but not for new ideas in existing 'standard'
languages.

After nearly 50 years of software, we have obviously already found
_all_ of the important techniques, C++ and Smalltalk are the solution
for all computing problems, and the only thing remaining is a bit of
mopping up.  Dijkstra/Hoare/Goldberg/Stroustrup have found all there is
to find, and the rest of us need look no farther.

This was also approximately the prevailing attitude in physics circa
1890, and I hope that this ARPA attitude is proven just as
spectacularly wrong as that physics attitude.

      Henry Baker
      Read ftp.netcom.com:/pub/hbaker/README for info on ftp-able papers.
From: Simon Brooke
Subject: SETF (was Re: Why do people like C? (Was: Comparison: Beta - Lisp))
Date: 
Message-ID: <Cy3E67.42K@rheged.dircon.co.uk>
In article <···············@netcom.com>,
Henry G. Baker <······@netcom.com> wrote:
>In article <··········@rheged.dircon.co.uk> ·····@rheged.dircon.co.uk (Simon Brooke) writes:
>
>>(iia6) I have complained often enough before about the abomination,
>>SETF. I rehearse my objections briefly. Destructively altering lists
>>may be dangerous, and should always be done consciously and with care.
>>If you use RPLAC, you know what you are doing. SETF makes
>>destructive change invisible to the naive user: it says 'take this bit
>>of memory, I don't know where it is, I don't know who owns it, I don't
>>know who else is holding pointers to it, and trample all over it'.
>>Its direct equivalent is the BASIC keyword, POKE. I *shudder*.
>
>I agree that SETF is a real kludge, because it is trying to make up
>for the lack of 1st class 'references'.  The Lisp machine provided for
>invisible pointers, but they don't take care of the case of 1 bit
>within a word.  A run-time implementation would have to cons a 'dope
>vector' or equivalent to achieve the same effect.
>
>Today, if someone wanted a truly _clean_ implementation of SETF-like
>constructs, I would advise a more object-like approach using closures
>like crazy.  By using traditional closures, _standard_ inlining
>optimizations could then achieve the same optimized code that most
>SETF macros achieve.  Furthermore, the non-optimized cases would
>provide for the equivalent of runtime SETF's which are sometimes
>sorely needed.
>

I'm sorry, I don't think this addresses the real problem. There's a
fundamental difference between a top-level object which exists in the
name space, and an anonymous cons cell. It makes a great deal of sense
to have a homogeneous way of changing:

	* the value of a variable;
	* the value of a property of some name;
	* the value of an instance variable of some object;
	* the value of a class variable of some class.

All these things are in some sense first class. If anything is hanging
on to a pointer to the value of a variable, it presumably *wants* to
know when that variable changes. Cons cells in arbitrary list
structures are a very different matter. One of the (few) space
efficiencies of LisP is that you don't have to copy until you want to
make changes, and even then you may only have to make a partial copy
of what you're pointing at.

Consequently, in real LisP programs, it's perfectly normal and
legitimate for different objects to hold pointers to the same piece of
structure, in the belief that each 'owns' it. When they want to make a
change, they can make non-destructive changes by CONSing. Only what
is actually changed need be copied. 

If they want to communicate that change, *and under the design
protocols of the system this is legitimate*, they can make destructive
change by RPLAC.
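
Concretely (plain CL, purely to illustrate the sharing):

	(defparameter *a* (list 1 2 3 4 5))

	;; Non-destructive "change": one new cons, the tail is shared.
	(defparameter *b* (cons 99 (cdr *a*)))
	;; *A* is still (1 2 3 4 5); *B* is (99 2 3 4 5);
	;; (eq (cdr *a*) (cdr *b*)) => T

	;; Destructive change: every holder of that cell sees it.
	;; (rplaca *a* 0)   ; now *A* is (0 2 3 4 5)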

My objection to SETF is that the way it alters list structure is
destructive. As it has become/was intended to become the default way
of changing values (aluminium book p94), LisP beginners will no longer
learn the fundamental software engineering principle outlined above,
so that the perceived power and expressiveness of the language is
impoverished.


-- 
··············@rheged.dircon.co.uk
From: Jeff Dalton
Subject: Re: SETF (was Re: Why do people like C? (Was: Comparison: Beta - Lisp))
Date: 
Message-ID: <Cy5Bv7.JDy@cogsci.ed.ac.uk>
In article <··········@rheged.dircon.co.uk> ·····@rheged.dircon.co.uk (Simon Brooke) writes:
>I'm sorry, I don't think this addresses the real problem. There's a
>fundamental difference between a top-level object which exists in the
>name space, and an anonymous cons cell. It makes a great deal of sense
>to have a homogeneous way of changing:
>
>	* the value of a variable;
>	* the value of a property of some name;
>	* the value of an instance variable of some object;
>	* the value of a class variable of some class.

How about a slot in a modifiable object?  That's a paradigm
case, and yet it's not clear that you list it at all.

In any case, the things you list are not all "top-level object[s]
which exist in the name space".  For instance, instance vars are
local to the object.

>My objection to SETF is that the way it alters list structure is
>destructive. As it has become/was intended to become the default way
>of changing values (aluminium book p94), LisP beginners will no longer
>learn the fundamental software engineering principle outlined above,
>so that the perceived power and expressiveness of the language is
>impoverished.

Bull.
From: Jeff Dalton
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <Cy5C00.JHF@cogsci.ed.ac.uk>
In article <···············@netcom.com> ······@netcom.com (Henry G. Baker) writes:
>As I have said elsewhere, the substitution of the Maclisp model of
>Lisp program as a "character string" instead of the Interlisp model of
>Lisp program as an S-expression made up of cons cells was a major step
>backwards. 

This is a religious position, pure and simple.
From: Harley Davis
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <DAVIS.94Oct24112349@passy.ilog.fr>
In article <··············@netcom.com> ····@netcom.com (Christopher J. Vogt) writes:

   I disagree that standardizing Lisp will inhibit its usefulness for exploring
   new ideas.  In terms of syntactic issues, Lisp macros make it easy
   to continue to explore, and present readily portable new ideas.  I think
   the bigger risk is that no standard is forthcoming, and the language dies
   due to lack of portability.  If all you want to do is research hacking it's
   fine to do without standardization, but if you want to develop and deliver
   applications it is an entirely different matter.

Actually, an ISO ISLisp standard is forthcoming.  There is currently a
Committee Draft for ISO ISLisp (CD 13816 ISLisp), and work is
proceeding steadily toward a real international standard for Lisp.
It's slow because of all the politics, but it is progressing.  And
there will be commercial implementations.  (Indeed, Ilog Talk is
nearly there already and we will be 100% when the standard arrives.)

You can perhaps find out more by writing to the committee convenor,
Christian Queinnec (··················@inria.fr).

-- Harley Davis
-- 

------------------------------------------------------------------------------
motto: Use an integrated object-oriented dynamic language today.
       Write to ····@ilog.com and ask about Ilog Talk.
------------------------------------------------------------------------------
Harley Davis                            net: ·····@ilog.fr
ILOG S.A.                               tel: +33 1 46 63 66 66
2 Avenue Galliéni, BP 85                fax: +33 1 46 63 15 82
94253 Gentilly Cedex, France            url: http://www.ilog.fr/
From: Henry G. Baker
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <hbakerCy6H13.Bv@netcom.com>
In article <·················@athos.rutgers.edu> ···@cs.rutgers.edu (Lou Steinberg) writes:
>In article <···············@netcom.com> ······@netcom.com (Henry G. Baker) writes:
>   As I have said elsewhere, the substitution of the Maclisp model of
>   Lisp program as a "character string" instead of the Interlisp model of
>   Lisp program as an S-expression made up of cons cells was a major step
>   backwards.  [...]
>   The proper step forwards would have been to make S-expressions
>   _persistent_, instead of bowing to the Fortran/C/Ada model of programs
>   as character strings
>
>In other words, a good editor for lisp will act at times like an
>s-expr editor and at times like a character string editor, and you can
>build such an editor on what is "really" either a character editor or
>an s-expr editor.  And either way the editor can be closely tied to the
>running Lisp core image - this is obvious for "residential" systems,
>but the interface between Allegro and Emacs shows that it can be done
>for non-residential editors as well.  (E.g., esc-A shows the arguments
>of a function, based on a query to a running Lisp.)
>
>Furthermore, programmers edit things besides programs and data -
>things like reports, email, newsgroup postings, etc.  The overhead of
>having two different editors, one for lisp and one for C, English,
>etc., is something you really do have to take into account.

I guess the problem with WYSIWYG editors is that computer people have
worked with character strings so long that they have forgotten that
they are not the only things humans ever deal with.  Non-alphabetic
languages like Chinese are actually much easier for most people to
deal with, but are harder for typesetters and computers.

So, 'what you SEE is what you get' depends a lot on what you SEE, or
want to SEE.

A student of mine did a WYSIWYG BCPL PROGRAM editor for the Xerox Alto
called 'Flash' that stored things internally in a way very similar to
S-expressions, although you didn't see parentheses on the screen.  His
major reason for doing it was that this internal form saved 50-60% of
the space in main memory, which was a critical issue on this machine.
So his editor is proof that one can do decent WYSIWYG editing on more
symbolic representations.

      Henry Baker
      Read ftp.netcom.com:/pub/hbaker/README for info on ftp-able papers.
From: Jeff Dalton
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <Cy1H5H.5I8@cogsci.ed.ac.uk>
In article <··········@rheged.dircon.co.uk> ·····@rheged.dircon.co.uk (Simon Brooke) writes:

>(iia: Flaws in the language design)
>
>(iia1) Prior to the definition of Common LISP, many LisP programmers
>used an in-core development style. This style of development has
>significant advantages: the development cycle is edit/test rather than
>edit/load/test.

This "load" is something like two characters (C-z e) in the Emacs
mode I use.  I'd much rather have that than DEDIT!

I don't think the differences between these two styles are all
that great.  Obviously, some people disagree with this ...

> More significantly, the code of a function actually on
>the stack can be edited. By definition (CLtL 1, p347) Common LISP
>comments are not read. Consequently, code edited in core loses its
>documentation.
>
>Richard Barbour's Procyon workaround, and the Envos Medley workaround,
>are technically in breach of the standard definition, which
>effectively prevents in-core development.

As far as I can tell, there's nothing on page 347 or anywhere else
that prevents a Common Lisp system from retaining comments in various
ways.  Perhaps the particular workarounds you mention violate
something, but I don't know what it is.  (But then I don't know
what those workarounds do.)

>  A language definition which
>prevents established practitioners working in their preferred manner
>is broken.

They were not established practitioners of Common Lisp in any case.  :-)

The idea that Lisp is a single language has done -- and is doing --
tremendous damage.  Common Lisp is just one Lisp, not all Lisps.


>(iia2) I don't think anyone any longer defends the decision to choose
>a LISP2. It is especially true in LisP that code *is* data.

What does that have to do with Lisp-1 vs Lisp-2?


>(iia3) Before the development of Common LISP a number of typographical
>tricks were widespread in the LisP community which greatly assisted
>the readability of code. As examples, it was common to capitalise
>theInitialLettersOfEmbeddedWords in names. 

Good riddance!  :-)

>  Similarly, we used to have
>a convention that functions, methods etc had names Which Started With
>a Capital Letter, whereas other variables had names which started in
>lower case. INCOMMONLISPTHISISNOLONGERPOSSIBLE.

False.

It's in part because of these wild attacks on Common Lisp in the UK
that I decided to do something about case-sensitivity, BTW.  Hence
readtable-case and the end of this particular target of opportunity.


>(iia4) The implementation of the sequence functions is a mess. It's a
>shame, because one can admire the o'erleaping ambition, but such a
>huge monolithic structure is required to make it work that any Common
>LISP environment has to be unwieldy. 

Just how "huge" is this "monolithic" structure, anyway?


>(iia5) I remain unconvinced that keywords in lambda-lists are a good
>idea. A number of points here: it is fundamental to the nature of LisP
>that it is syntax-less and keyword-less -- that's a lot of what gives
>it its elegance, and what allows a LisP interpreter to be so small
>and simple. A Common LISP interpreter must include a parser to handle
>lambda lists, and once again is neither small nor simple.

Franz Lisp handled all that (except &KEY) in a not-very-large macro.
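
In the same spirit, a toy sketch (DEFUN* is an invented name, and
this is not Franz's actual macro) of compiling optionals away on top
of a bare rest-argument primitive:

	(defmacro defun* (name required optionals rest-var &body body)
	  "Each element of OPTIONALS is a (var default) pair."
	  (let ((args (gensym "ARGS")))
	    `(defun ,name (,@required &rest ,args)
	       (let* (,@(mapcar (lambda (opt)
	                          `(,(first opt)
	                            (if ,args (pop ,args) ,(second opt))))
	                        optionals)
	              (,rest-var ,args))
	         ,@body))))

	;; (defun* greet (name) ((greeting 'hello)) extras
	;;   (list greeting name extras))
	;; (greet 'world)          => (HELLO WORLD NIL)
	;; (greet 'world 'hi 1 2)  => (HI WORLD (1 2))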


>(iia6) I have complained often enough before about the abomination,
>SETF. I rehearse my objections briefly. Destructively altering lists
>may be dangerous, and should always be done consciously and with care.
>If you use RPLAC, you know what you are doing. SETF makes
>destructive change invisible to the naive user: it says 'take this bit
>of memory, I don't know where it is, I don't know who owns it, I don't
>know who else is holding pointers to it, and trample all over it'.
>Its direct equivalent is the BASIC keyword, POKE. I *shudder*.

It is not any equivalent of POKE.  It is very like assignment in
most languages.  You can assign to variables and to slots of
structs and to array elements.  If conses were the one exception,
people would complain about non-orthogonality.

This kind of exaggerated criticism substantially weakens your
case, in my opinion.


>(iib) These flaws are consequences of political goals


>Given this circumstance, I am convinced by and happy to repeat
>publicly the allegation that has been made frequently in the past that
>the essential aim of a substantial number of the participants in X3J13
>was to make Common LISP as different as possible from InterLISP, in
>order to make it more difficult for Xerox to compete. 

But they didn't make it very different from MacLisp.

>This allegation explains both the comments system and the choice of
>LISP2, two decisions each of which are otherwise inexplicable. 

They are *easily* explained in other ways.  Just like MacLisp,
for instance.  A number of other Lisps are also Lisp-2s.

Of course, there may have been several reasons to want to follow
MacLisp rather than InterLisp.  Perhaps making things more difficult
for Xerox was one of them.  Preventing Interlisp from being imposed
as a standard (ARPA was wanting a single Lisp, I seem to recall)
was very likely a factor.

But there's nothing especially evil about basing a Lisp on
MacLisp.  I'd prefer that to many alternatives.


>I am prepared to believe the claim that the case-insensitive reader was a
>requirement of the United States Department of Defense.

See above.  There was no opposition to readtable-case that had
any DoD component I could detect.


>In summary, I claim that in a number of instances, X3J13 deliberately
>and knowingly chose the less good of technical choices, in order to
>produce a language closer to that of the smaller vendors, and further
>from that of Xerox.

That may be, but your evidence is not very convincing.


>I hope that you may be right, but do not myself agree. I believe, and
>I guess that you do, that languages within the LisP family offer at
>the very least a better programming idiom than C++. Ten years ago,
>machines which could run a big LisP system were at last becoming
>cheap, and all the major computer vendors were showing a considerable
>interest in the language.
>
>Five years later that interest had waned. It's no co-incidence, I
>think, that this was contemporaneous with the introduction of a new,
>'standard' version of the language so complex that compilers were
>exceedingly difficult to develop, optimise and maintain, and so
>different from existing, well established variants of the language
>that experienced programmers were at a loss. 

I had no trouble adapting, but I was using Franz Lisp which is
very like MacLisp.

-- jeff
From: Arun Welch
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <welch.17.001648E1@anzus.com>
In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:

>In article <··········@rheged.dircon.co.uk> ·····@rheged.dircon.co.uk (Simon Brooke) writes:

>>(iia1) Prior to the definition of Common LISP, many LisP programmers
>>used an in-core development style

>This "load" is something like two characters (C-z e) in the Emacs
>mode I use.  I'd much rather have that than DEDIT!

Geez, DEDIT hasn't been part of the main system for at least 5 releases; it
was replaced almost 8 years ago (the only reason it's still around as a
library entry is because there might be someone somewhere who really feels 
compelled to use it). Comparing emacs to it is as silly as comparing emacs to 
ed(1)... 


>As far as I can tell, there's nothing on page 347 or anywhere else
>that prevents a Common Lisp system from retaining comments in various
>ways.

There is, however, something on page 526: "The semicolon and all characters up 
to and including the next newline are ignored".

>  Perhaps the particular  workarounds you mention violate
>something, but I don't know what it is.  (But  then I don't know
>what those workarounds do.)

What those workarounds do is fail to ignore the semicolon and all characters 
up to and including the next newline.

...arun 
From: Jeff Dalton
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <Cy6wIH.8DD@cogsci.ed.ac.uk>
In article <·················@anzus.com> ·····@anzus.com (Arun Welch) writes:
>In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:
>
>>In article <··········@rheged.dircon.co.uk> ·····@rheged.dircon.co.uk (Simon Brooke) writes:
>
>>>(iia1) Prior to the definition of Common LISP, many LisP programmers
>>>used an in-core development style
>
>>This "load" is something like two characters (C-z e) in the Emacs
>>mode I use.  I'd much rather have that than DEDIT!
>
>Geez, DEDIT hasn't been part of the main system for at least 5 releases; it
>was replaced almost 8 years ago (the only reason it's still around as a
>library entry is because there might be someone somewhere who really feels 
>compelled to use it). Comparing emacs to it is as silly as comparing emacs to 
>ed(1)... 

I was aware that it was no longer the usual editor.

But the people who flame about representing programs as text should
consider that more than one factor is involved.  If you accept that
Emacs is better than DEDIT or that any text-based editor could be
better than DEDIT, then you agree with me on this.

However, a number of people don't.  They think the text-based
approach is inherently wrong.  The very same "arguments" were
being made when DEDIT was the usual editor, you know.

In short, I mention DEDIT to heighten contradictions.

BTW, in my view, both approaches can be sufficiently good that what
remains is just a matter of personal preference.  

>>As far as I can tell, there's nothing on page 347 or anywhere else
>>that prevents a Common Lisp system from retaining comments in various
>>ways.
>
>There is, however, something on page 526: "The semicolon and all
>characters up to and including the next newline are ignored".

I knew of that and I disagree with your interpretation.

Sure, if I call READ nothing can appear in the result as a product
of the comment.  But programming environments are free to retain
whatever information they want.  Moreover, the information can be
made accessible through language extensions.
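
For instance, here is a sketch of one such extension (the names are
invented; only the readtable machinery is standard):

	(defvar *comment-log* '())

	(defun logging-semicolon-reader (stream char)
	  (declare (ignore char))
	  ;; Record the comment text instead of discarding it ...
	  (push (read-line stream nil "") *comment-log*)
	  ;; ... and return no values, so READ still "ignores" it.
	  (values))

	(defvar *logging-readtable* (copy-readtable nil))
	(set-macro-character #\; #'logging-semicolon-reader
	                     nil *logging-readtable*)

	;; Binding *READTABLE* to *LOGGING-READTABLE* around READ gives
	;; the environment the comment text while the program itself
	;; never sees it -- which is all that page 526 requires.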

>>  Perhaps the particular  workarounds you mention violate
>>something, but I don't know what it is.  (But  then I don't know
>>what those workarounds do.)
>
>What those workarounds do is fail to ignore the semicolon and all characters 
>up to and including the next newline.

How so?

-- jeff
From: Mike Haertel
Subject: Lisp: A tower of babble?
Date: 
Message-ID: <MIKE.94Oct23195425@majestix.cs.uoregon.edu>
In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:
>The idea that Lisp is a single language has done -- and is doing --
>tremendous damage.

However, one could also argue that the absence of a single "Lisp"
has done a tremendous amount of damage.

But then, of course, the argument rages: Whose lisp?
From: Erik Naggum
Subject: Re: Lisp: A tower of babble?
Date: 
Message-ID: <19941024T035011Z.enag@naggum.no>
[Jeff Dalton]

|   The idea that Lisp is a single language has done -- and is doing --
|   tremendous damage.

[Mike Haertel]

|   However, one could also argue that the absence of a single "Lisp" has
|   done a tremendous amount of damage.
|   
|   But then, of course, the argument rages: Whose lisp?

Jeff has repeatedly commented that Common LISP is not the only LISP, and he
argues very strongly that one should not transfer arguments on Common LISP
to "LISP" in general.  I'm a little unclear about the characteristics of
these non-Common LISPs.

Jeff, it would be enlightening if you could list the alternatives you have
in mind.  If you could make two sub-groups, one for general-purpose,
portable and widely supported LISPs, and one for special-purpose or
embedded LISPs, I think that would be very helpful.

#<Erik>
--
Microsoft is not the answer.  Microsoft is the question.  NO is the answer.
From: Cyber Surfer
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <783016568snz@wildcard.demon.co.uk>
In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk "Jeff Dalton" writes:

> It is not any equivalent of POKE.  It is very like assignment in
> most languages.  You can assign to variables and to slots of
> structs and to array elements.  If conses were the one exception,
> people would complain about non-orthogonality.

Agreed. SETF allows lvalues in CL to resemble rvalues, and adds a
few bits of "syntactic sugar", like a left-to-right evaluation
order and evaluating subexpressions only once. It's a lot more
effort to do this without generalised variables. Plus, you have
to remember extra function names, like ASTORE.
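
A small illustration of the once-only point:

	(let ((v (vector 'a 'b 'c))
	      (i 0))
	  (push 'x (aref v (incf i)))  ; (INCF I) is evaluated once
	  (values v i))                ; => #(A (X . B) C), 1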
 
> This kind of exaggerated criticism substantially weakens your
> case, in my opinion.

It falls apart! POKE was an untyped memory "assignment". It was
a very dangerous primitive to use, unless you knew what you were
doing. I only ever saw it used for very hairy code, like stuffing
machine code into comments, or accessing memory mapped devices.
SETF should be perfectly safe, unless you consider all assignments
to be unsafe, in which case you wouldn't be using most available
languages. I have great trouble avoiding assignments in C and
Basic, and even Lisp. That could just be because of the dialects
that I use...

Martin Rodgers
-- 
Please vote for moderation in comp.lang.visual
http://cyber.sfgate.com/examiner/people/surfer.html
From: Richard A. O'Keefe
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <38icti$132@goanna.cs.rmit.oz.au>
····@aiai.ed.ac.uk (Jeff Dalton) writes:
>As far as I can tell, there's nothing on page 347 or anywhere else
>that prevents a Common Lisp system from retaining comments in various
>ways.  Perhaps the particular workarounds you mention violate
>something, but I don't know what it is.  (But then I don't know
>what those workarounds do.)

In support of Jeff Dalton's point, I note that ``Edinburgh'' Prologs
have also taken the attitude that comments textually embedded in the
source representation of a term are not part of the term itself, so
	p(X, /*equality by another name*/ X).
	p(Y, % another version
	     Y).
	p(Z, Z).
are all processed by read/1 as (alphabetic variants of) the "same" term
p(_1,_1).  This is usually what you want, but it is not so good for a
symbolic debugger, nor is it wonderful for a source-to-source transformer.
Bearing in mind that comments *must* not have any effect during computing,
you don't want to burden the normal read/1 with them.  One solution that
has been adopted is to have another input routine
	read_with_comments(-Term, -Dictionary, -CommentMap)
where the CommentMap is a set of (comment text, comment location in Term)
pairs.  The only problem I can see with doing this in Common Lisp is
read macros, but on the other hand they are also the means by which one
can do this.

To be frank, when I hack Scheme on the 1Mb Mac Plus at home,
I'm _glad_ the comments stay in the source files.

>>(iia3) Before the development of Common LISP a number of typographical
>>tricks were widespread in the LisP community which greatly assisted
>>the readability of code. As examples, it was common to capitalise
>>theInitialLettersOfEmbeddedWords in names. 

>Good riddance!  :-)

I used to think that I was a member of the Lisp community.  Perhaps it's
because I was never a member of the "LisP" community that I never saw
these styles.  The Lisp systems I used before I went to Edinburgh didn't
_have_ lower case.  The Lisp systems I used on the DEC-10 at EdAI ignored
case.  Kaisler's book on Interlisp says, on p255, "I have found that it
helps me read my programs by breaking up the literal atom names with
periods (.) or dashes (-).  The latter works most of the time, but CLISP
does have a tendency to interpret such names as the subtraction of two
variables.  ... Consider some of the following atom names: ...
	sentence.scanner
	number.of.characters
It is unfortunate that most of the atom and function names used in Interlisp
do not follow this philosophy."  I think Kaisler is a member of the Lisp
(but not "LisP") community in good standing.  I know that in Interlisp I
tended to use dots or underscores.  I for one regard
	a-name-with-several-parts
	a.name.with.several.parts
	a_name_with_several_parts
as far more readable than
	aNameWithSeveralParts

In any case, what an absurd stick to beat Common Lisp with, seeing that
a CLtL2 reader _can_ preserve alphabetic case!

>>(iia5) I remain unconvinced that keywords in lambda-lists are a good
>>idea. A number of points here: it is fundamental to the nature of LisP
>>that it is syntax-less and keyword-less -- that's a lot of what gives
>>it its elegance, and what allows a LisP interpreter to be so small
>>and simple. A Common LISP interpreter must include a parser to handle
>>lambda lists, and once again is neither small nor simple.

>Franz Lisp handled all that (except &KEY) in a not-very-large macro.

I myself have reservations about keyword arguments, and think that in
a language with (let (--) --) there is no excuse for &aux.  Optional and
Rest are nothing new.  I was an Interlisp enthusiast.  Interlisp handled
- Optional by allowing a function to be called with any number of
  arguments:  excess arguments being evaluated and then ignored,
  missing arguments defaulting to NIL.
- Rest by means of NOSPREAD functions, defined by NLAMBDA.
I think one can argue with a clear conscience that &optional and &rest
are no harder to implement, and allow one to express one's intentions
more clearly.
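
For example, the intent is visible in the lambda list itself:

	;; "At most one extra argument, with a sensible default":
	(defun substring (string start &optional (end (length string)))
	  (subseq string start end))

	;; "Any number of arguments":
	(defun average (&rest numbers)
	  (/ (reduce #'+ numbers) (length numbers)))

	;; (substring "common lisp" 7)  => "lisp"
	;; (average 1 2 3)              => 2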

>>This allegation explains both the comments system and the choice of
>>LISP2, two decisions each of which are otherwise inexplicable. 

>They are *easily* explained in other ways.  Just like MacLisp,
>for instance.  A number of other Lisps are also Lisp-2s.

Can someone explain what the references to Lisp-2 are about?
I have seen some of the old Lisp-2 documents, and don't see any
interesting resemblance between Common Lisp and Lisp-2.  Can it be
a reference to the separate namespaces for functions and variables,
one of the things I like least about CL?  But in that respect CL resembles
Interlisp, and _doesn't_ resemble that other MIT product, Scheme.

-- 
"The complex-type shall be a simple-type."  ISO 10206:1991 (Extended Pascal)
Richard A. O'Keefe; http://www.cs.rmit.edu.au/~ok; RMIT Comp.Sci.
From: Blake McBride
Subject: Common Lisp's dual name space
Date: 
Message-ID: <38ofd8$set@edge.ercnet.com>
> In article <··········@goanna.cs.rmit.oz.au>,
> Richard A. O'Keefe <··@goanna.cs.rmit.oz.au> wrote:
> (see papers by eg Gabriel) as shorthand to distinguish those LisPs
> (e.g. PSL, Interlisp, Scheme, EuLisp) which have a single namespace
> for code and data, and those (e.g. Common LISP) which have separate
> function and value namespaces. 

This dual name space aspect of Common Lisp is my no. 1 complaint
about the language.  Why the heck would someone do such a thing?

One of the main things which makes Lisp attractive is its ability
to allow data and programs to look alike.  The dual name space bit
makes taking advantage of this fact very kludgy (syntactically)!
From: Barry Margolin
Subject: Re: Common Lisp's dual name space
Date: 
Message-ID: <38otf8$7tj@tools.near.net>
In article <··········@edge.ercnet.com> Blake McBride <·····@edge.ercnet.com> writes:
>This dual name space aspect of Common Lisp is my no. 1 complaint
>of Common Lisp.  Why the heck would someone do such a thing?

Because it's common to give different meanings to a word when it's used as
a noun versus a verb.  For instance, as a noun, "list" means a collection,
but as a verb it means "to enumerate".  From the little I've read about the
brain, this noun/verb distinction is inherent in how we process language
(e.g. there are separate parts of the brain that handle each), so it makes
sense to reflect it in new languages that we define.

If a function takes a list as an argument, I'm inclined to name the
variable LIST, and I have no intent to redefine the homologous function.

>One of the main things which makes Lisp attractive is its ability
>to allow data and programs to look alike.  The dual name space bit
>makes taking advantage of this fact very kludgy (syntactically)!

Funny, but the archetype one-namespace Lisp dialect is Scheme, and it
doesn't even provide a way to convert lisp structure into the corresponding
procedure; i.e. EVAL isn't a standard part of the language.  The only
place in which Scheme programs and data look alike is in printed
representation -- to take advantage of this, you have to write a data
structure out to a file and then LOAD it.  That seems much more kludgey
than Common Lisp's FUNCALL and FUNCTION.

Returning to my above analogy with natural language, I'd say that #'<name>
is analogous to adding the "-er" suffix to a verb to turn it into a noun
(e.g. "lister").  Again, this is something that comes naturally to us as a
result of the structure of the language centers of the brain.
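
In code, the noun and the verb coexist without fuss:

	(defun enumerate (list)           ; LIST the noun (a variable)
	  (funcall #'list 'items list))   ; #'LIST the verb (the function)

	;; (enumerate '(1 2 3)) => (ITEMS (1 2 3))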
-- 

Barry Margolin
BBN Internet Services Corp.
······@near.net
From: Jeff Dalton
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <CyCCEE.4DK@cogsci.ed.ac.uk>
In article <···················@passy.ilog.fr> ·····@ilog.fr (Harley Davis) writes:
>
>In article <··········@rheged.dircon.co.uk> ·····@rheged.dircon.co.uk (Simon Brooke) writes:
>
>   In article <··········@goanna.cs.rmit.oz.au>,
>   Richard A. O'Keefe <··@goanna.cs.rmit.oz.au> wrote:
>   >····@aiai.ed.ac.uk (Jeff Dalton) writes:

But I did not write any of the quoted text.

>   >Can someone explain what the references to Lisp-2 are about?
>   >I have seen some of the old Lisp-2 documents, and don't see any
>   >interesting resemblance between Common Lisp and Lisp-2.  Can it be
>   >a reference to the separate namespaces for functions and variables,
>   >one of the things I like least about CL?  But in that CL resembles
>   >Interlisp, and _doesn't_ resemble that other MIT product, Scheme.
>   >
>
>   I've sworn off writing anything in the least controversial for a
>   fortnight, but I thought I'd answer this point. LISP1 and LISP2 (with
>   the numbers normally subscripted, but that is hard to do in a
>   plain-text mail message) are terms which have been fairly widely used
>   (see papers by eg Gabriel) as shorthand to distinguish those LisPs
>   (e.g. PSL, Interlisp, Scheme, EuLisp) which have a single namespace
>   for code and data, and those (e.g. Common LISP) which have separate
>   function and value namespaces. 
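
A minimal illustration of the distinction (FOO is a hypothetical name):

    ;; Common Lisp (a Lisp-2): a variable binding of FOO does not
    ;; shadow the function FOO.
    (defun foo (x) (* x 10))
    (let ((foo 5))
      (foo foo))   ; => 50; head position looks in the function namespace

    ;; In a Lisp-1 such as Scheme, the single binding of foo -- the
    ;; number 5 -- would shadow the procedure, and (foo foo) would fail.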

-- jeff
From: Thomas M. Breuel
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <TMB.94Oct23212650@arolla.idiap.ch>
In article <··········@rheged.dircon.co.uk> ·····@rheged.dircon.co.uk (Simon Brooke) writes:
|Note that in critiquing the language definition, I have not
|reiterated Henry Baker's objections (see message id
|<················@netcom.com>) to the sheer size of the language as
|defined, although I share them, and for very much the reasons he
|states.

I used to think that CommonLisp was big, that its library was
non-orthogonal, and that in several areas the designers got carried
away with providing too much unnecessary functionality (CLOS,
sequences).

However, compared to modern languages like C++ or Ada 9x, CommonLisp
is downright simple.  Most of the bulk is in the standard datatypes,
which would be considered part of the library in other languages.

I think the most important problems with CommonLisp are still

 (1) the lack of widely-accepted standards for how to add declarations
     so that code is guaranteed to run fast (a sketch of the current
     declaration style follows this list),

 (2) the lack of a standard C and Fortran interface, and

 (3) the lack of user control over storage use and layout in data
     structures.
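
On point (1): the declaration machinery itself is standard, but what
any given compiler does with it is not.  A minimal sketch of the usual
style (DOT-PRODUCT is just an illustrative example):

    (defun dot-product (a b)
      (declare (type (simple-array double-float (*)) a b)
               (optimize (speed 3) (safety 0)))
      (let ((sum 0.0d0))
        (declare (type double-float sum))
        (dotimes (i (length a) sum)
          (incf sum (* (aref a i) (aref b i))))))

    ;; Some compilers turn this into tight machine code; others ignore
    ;; the declarations entirely -- which is exactly the problem.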

Well, maybe these will get fixed sooner or later.  Of course, once
there is only one CommonLisp vendor left, at least (1) and (2) will
have been solved...

				Thomas.
From: Cyber Surfer
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <782421305snz@wildcard.demon.co.uk>
In article <··········@rheged.dircon.co.uk>
           ·····@rheged.dircon.co.uk "Simon Brooke" writes:

> the reference (anybody know it?); (2) This was just about the time
> when X3J13 were driving their nails into the coffin of LisP, so modern
> LisP programmers (if forced to use the aluminium book) would probably
> be slower; (3) I doubt whether C++ would have been considered at the
> time (too new).

Are you saying that some programmers try to learn CL by reading CLtL?
When was this, by the way?  I read about CL in the mid-80s, but it
was a few more years before I learned of C++.

Thanks,
Martin Rodgers
-- 
"Internet? What's that?" -- Simon "CompuServe" Bates
http://cyber.sfgate.com/examiner/people/surfer.html
From: Peter Ward
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <782459931snz@mondas.demon.co.uk>
(This thread will probably never end, but I can't prove it.)

I program professionally in C++ and, yes, I think it a terrible
language. It is hard to believe that it was _designed_.

Having said that, I can knock out (for example) simple file filters
in no time. At this level it seems fine -- don't even need to
open files, just stream in a few bytes, do something and stream
'em out again.

Recently I was playing with some crossword-like logic puzzles in
the Sunday paper and wanted to work out a reasonable algorithm.
I started to sketch out the data structures in C++ but the language
overhead was swamping my thinking.

I had recently ftp'd CLISP and thought I'd give it a try. I had
a quick and dirty solution coded in an afternoon because I could
focus on the problem. Sure, I had to fit my data into lists, but
then I had powerful manipulation routines at my disposal.

Not having written any Lisp since college some 15 years ago, I found
my program was not valid Common Lisp.  Fixing this took about 30
minutes.  My C++ version never got further than the first typedef.

The program runs like sh*t off a blanket.  I can optimise it, and
would do so by looking at the _algorithm_ first, then the
implementation.  Actually it runs OK if compiled, but I just can't
leave things alone!
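
For what it's worth, compiling in CLISP is a one-liner.  SOLVE and
puzzle.lisp here are hypothetical names, not from the post:

    (compile 'solve)              ; compile one function in the running image
    (compile-file "puzzle.lisp")  ; or compile a whole file...
    (load "puzzle.fas")           ; ...and load it (.fas is CLISP's
                                  ; compiled-file extension)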

I have studied or worked with Cobol, Fortran, Algol 60+68,
BCPL, Lisp, C, C++, Actor, Smalltalk, Basic, Assemblers, Miranda. 
I find the more exposure I have to different approaches, the
more flexible my thought processes become. I like Lisp because
it is simple. Maybe I am too. Fine.
-- 

Pete Ward                   I know it's irrational but at times
Mondas IT Ltd               of stress I take great comfort from
                            the belief that somewhere out there
                            there really is a free lunch.
From: Tim Bradshaw
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <TFB.94Oct19091202@burns.cogsci.ed.ac.uk>
* Cyber Surfer wrote:
> ·····@rheged.dircon.co.uk "Simon Brooke" writes:
>> the reference (anybody know it?); (2) This was just about the time
>> when X3J13 were driving their nails into the coffin of LisP, so modern
>> LisP programmers (if forced to use the aluminium book) would probably
>> be slower; (3) I doubt whether C++ would have been considered at the
>> time (too new).

> Are you saying that some programmers try to learn CL by reading CLtL?
> When was this, by the way? 

I learnt CL from CLtL.  Until I got the ANSI document I don't think
I'd read anything else that describes CL.  I had programmed in other
Lisps (Cambridge Lisp, some DOS Lisps) before but I wasn't really
fluent -- for instance I had lots of trouble with scope &c.  I didn't
find it too hard to read, but I often learn things from reference
books.  I did have access to an implementation, in fact to at least 2.

--tim
From: Cyber Surfer
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <783016934snz@wildcard.demon.co.uk>
In article <·················@burns.cogsci.ed.ac.uk>
           ···@cogsci.ed.ac.uk "Tim Bradshaw" writes:

> I learnt CL from CLtL.  Until I got the ANSI document I don't think
> I'd read anything else that describes CL.  I had programmed in other
> Lisps (Cambridge Lisp, some DOS Lisps) before but I wasn't really
> fluent -- for instance I had lots of trouble with scope &c.  I didn't
> find it too hard to read, but I often learn things from reference
> books.  I did have access to an implementation, in fact to at least 2.

You have my sympathy, then! I wouldn't recommend that anyone learn
a language from a reference book. Mind you, I learned Basic from the
manual for the first machine I used. So I "learned" the language,
but knew little (practically nothing!) about programming. That was
well over 10 years ago.  Today, you can still find books on programming
in bookshops, despite the number of "I Hate XXX" type books.

Have you ever tried learning a language from a tutorial in a magazine?
I have, and I've tried implementing some, with disastrous results!
I can only blame myself for that.
-- 
Please vote for moderation in comp.lang.visual
http://cyber.sfgate.com/examiner/people/surfer.html
From: Jeff Dalton
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <Cy1E9K.47F@cogsci.ed.ac.uk>
In article <············@wildcard.demon.co.uk> ············@wildcard.demon.co.uk writes:
>In article <··········@rheged.dircon.co.uk>
>           ·····@rheged.dircon.co.uk "Simon Brooke" writes:
>
>> the reference (anybody know it?); (2) This was just about the time
>> when X3J13 were driving their nails into the coffin of LisP, so modern
>> LisP programmers (if forced to use the aluminium book) would probably
>> be slower; (3) I doubt whether C++ would have been considered at the
>> time (too new).
>
>Are you saying that some programmers try to learn CL by reading CLtL?

That's how *I* learned Common Lisp.
From: Cyber Surfer
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Date: 
Message-ID: <783017794snz@wildcard.demon.co.uk>
In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk "Jeff Dalton" writes:

> That's how *I* learned Common Lisp.

Ouch. ;-)  I'm glad I had an easier route.  The bookshop that sold
me CLtL was surprised that I was buying it as an individual, as it
was usually ordered by institutions.

Of course, it depends on how familiar you are with Lisp already: with
some previous experience of Lisp and Lisp concepts, you might find it
much easier to get to grips with CL.  I still wouldn't recommend the
book to a programmer new to CL, simply coz I know of books that do a
better job of _teaching_ the language.  If you want the _full_
language (as I did) after learning CL from another source, then I'd
recommend it without reservation.
-- 
Please vote for moderation in comp.lang.visual
http://cyber.sfgate.com/examiner/people/surfer.html