From: Pascal Bourguignon
Subject: Lisp-2 or Lisp-1
Date: 
Message-ID: <87of23ll6n.fsf@thalassa.informatimago.com>
I was dubious about Lisp-2 at  first, but finally I've noticed that in
human languages,  there are  a lot of  instances that show  that we're
wired for a Lisp-2 rather than a Lisp-1:

    The fly flies.          (FLIES FLY)
    The flies fly.          (FLY FLIES)


;-)
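
The pun maps directly onto Common Lisp, where the head of a form is
looked up in the function namespace and argument positions in the
variable namespace.  A minimal sketch (the FLY definitions here are
mine, invented for illustration):

```lisp
;; FLY names both a variable (its value cell) and a function (its
;; function cell).  The head of a form reads the function cell;
;; argument positions read the value cell.
(defvar fly 'the-insect)                  ; the noun: a value
(defun fly (thing) (list thing 'flies))   ; the verb: a function

(fly fly)   ; => (THE-INSECT FLIES)
```

In a Lisp-1 such as Scheme, the second definition would simply clobber
the first.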

-- 
__Pascal_Bourguignon__                   http://www.informatimago.com/
----------------------------------------------------------------------
Do not adjust your mind, there is a fault in reality.

From: Franz Kafka
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <EMXwa.7304$Ky3.2740@news01.roc.ny.frontiernet.net>
Scheme has a few things that are nicer than Common Lisp:
continuations (useful for implementing language features) and
cleaner semantics (easier to write functional code, and
more beautiful too).


Common Lisp has a few things that are nicer than Scheme:
CLOS (a built-in object system),
defmacro/defstruct (missing from standard Scheme
for way too long),
more tools for building large systems (more built-in functions), and
2 namespaces (no chance a symbol's variable name will conflict with
its function name).

Here's an idea: why don't we take the benefits of Scheme (Lisp-1)
and Common Lisp (Lisp-2) and build a Lisp-3 (not named yet)
that combines all the benefits of Scheme and Common Lisp.

Lisp-3 = Continuations + (Clean Function Calling Syntax) + Backtracking &
Unification (from Prolog) + APL (array stuff) + Common Lisp + (support for
introspection/reflection, code-walking)

&& make a kick ass Lisp system.

Cheers for Paul Graham's ARC.
From: Kent M Pitman
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sfwptmjbo74.fsf@shell01.TheWorld.com>
[ replying to comp.lang.lisp only
  http://www.nhplace.com/kent/PFAQ/cross-posting.html ]

"Franz Kafka" <Symbolics _ XL1201 _ Sebek _ Budo _ Kafka @ hotmail . com> writes:

> Here's an idea: why don't we take the benefits of Scheme (Lisp-1)
> and Common Lisp (Lisp-2) and build a Lisp-3 (not named yet)
> that combines all the benefits of Scheme and Common Lisp.
> 
> Lisp-3 = Continuations + (Clean Function Calling Syntax) + Backtracking &
> Unification (from Prolog) + APL (array stuff) + Common Lisp + (support for
> introspection/reflection, code-walking)
> 
> && make a kick ass Lisp system.
> 
> Cheers for Paul Graham's ARC.

First, the names Lisp1 and Lisp2 are from the paper RPG and I
wrote for X3J13 [1] when considering the namespace issue.  The debate was
originally over Scheme-style or CL-style, and I felt I was losing the
debate because Scheme has too much "affection" going for it.  I wanted
it to be clear that the only part of Scheme we were talking about was
the namespace part, so I concocted a family of language dialects
called Lisp1 which have a single namespace (and which include Scheme),
and another family that has dual namespaces (and which presumably
included CL).  The idea was that people should be able to conceive of
Scheme with 2 namespaces and a CL with 1 namespace, and so by talking
about Lisp1 and Lisp2 rather than Scheme and CL, we were being neutral
as to what other language features the two languages under discussion
had.  This brought balance back to the discussion.  Anyway, so the
digit counts namespaces, and it was an error in the paper not to call
CL a Lisp4, since there are also tagbody/go and block/return
namespaces.  You are, therefore, incorrect in assuming that Lisp3 has
no designation.  To the extent that Lisp1 and Lisp2 have any meaning,
Lisp3 means the family of languages with 3 namespaces.
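
The tagbody/go and block/return namespaces mentioned above can be made
concrete; here is a sketch, with X as an arbitrary name of my choosing:

```lisp
;; One name, X, used in four namespaces at once: as a function,
;; a variable, a BLOCK name, and a TAGBODY go-tag.
(defun x (x)                 ; X the function; X the variable
  (block x                   ; X the block name
    (tagbody
     x                       ; X the go-tag
       (when (plusp x)
         (decf x)
         (go x))             ; jumps to the tag, not the variable
       (return-from x x))))  ; exits the block, returning the variable

(x 3)   ; => 0
```

Each of the four uses of X resolves through a different namespace, so
none of them shadows the others.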

But ok, so we know what you meant.  Here are my thoughts for what they
are worth:

Everyone wants something different in a Lisp.  So the more people
you involve, the more what you make will look like a big pile of things...
kind of like CL already does. :)

Nothing keeps you from making your own Lisp dialect, just as nothing
keeps you from starting your own political party [2], except the fact that
it's a lot of work and initially quite lonely.  It can either succeed
or fail spectacularly.

Personally, I think there are enough dialects about and that it's better
to just use one and extend it.  If you don't like CL or Scheme, then try
ISLISP or, as you say, work with PG on ARC before you just start your own
completely from scratch.  It will not only be less lonely, but it will
also mean that resourcewise you are adding to the energies of others rather
than dividing things up still further.

[1] "Technical Issues of Separation in Function Cells and Value Cells"
    http://www.nhplace.com/kent/Papers/Technical-Issues.html

[2] Parenthetically Speaking with Kent M. Pitman:
    "More Than Just Words: Lambda The Ultimate Political Party"
    http://www.nhplace.com/kent/PS/Lambda.html
From: Pascal Costanza
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <ba2aoa$13vi$1@f1node01.rhrz.uni-bonn.de>
Franz Kafka wrote:

> Here's an idea: why don't we take the benefits of Scheme (Lisp-1)
> and Common Lisp (Lisp-2) and build a Lisp-3 (not named yet)
> that combines all the benefits of Scheme and Common Lisp.
> 
> Lisp-3 = Continuations + (Clean Function Calling Syntax) + Backtracking &
> Unification (from Prolog) + APL (array stuff) + Common Lisp + (support for
> introspection/reflection, code-walking)
> 
> && make a kick ass Lisp system.

Because it's extremely hard to do. In general, language features don't 
combine very well.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Jim Bender
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <cGYwa.1730$dE.536635489@newssvr12.news.prodigy.com>
At last I understand what all those yellow-colored "Dummy's Guide to
[whatever]" books are really about.  The only thing I am puzzled about is
whether this is from the "Dummy's Guide to Lisp and Scheme" or from the
"Dummy's Guide to Linguistics" ;)

"Pascal Bourguignon" <····@thalassa.informatimago.com> wrote in message
···················@thalassa.informatimago.com...
>
> I was dubious about Lisp-2 at  first, but finally I've noticed that in
> human languages,  there are  a lot of  instances that show  that we're
> wired for a Lisp-2 rather than a Lisp-1:
>
>     The fly flies.          (FLIES FLY)
>     The flies fly.          (FLY FLIES)
>
>
> ;-)
>
> --
> __Pascal_Bourguignon__                   http://www.informatimago.com/
> ----------------------------------------------------------------------
> Do not adjust your mind, there is a fault in reality.
From: Franz Kafka
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <uV5xa.7443$4t.6008@news01.roc.ny.frontiernet.net>
"Jim Bender" <···@benderweb.net> wrote in message
····························@newssvr12.news.prodigy.com...
> At last I understand what all those yellow-colored "Dummy's Guide to
> [whatever]" books are really about.  The only thing I am puzzled about
> is whether this is from the "Dummy's Guide to Lisp and Scheme"

David T's Common Lisp: A Gentile Introduction to Symbolic Computation avail.
for free on line.

But don't expect it to teach you all of Lisp in 21 days.  It'll prob.
take three to six months.

Then read Sonya E. Keene's Object-Oriented Programming in Common Lisp: A
Programmer's Guide to CLOS.  Not avail. online.

After that you can glance at Peter Norvig's Lisp bible, Paradigms of
Artificial Intelligence Programming: Case Studies in Common Lisp, and
understand some of it.

In about six months, or less if you are a fast reader, you should
understand Common Lisp.

Reading The Schemer's Guide, from schemers.com, should teach
people who are in high school or in gifted programs how to use
Scheme.

That will take a month or two to get through iff you really want to
understand it.

& The Little Lisper/The Little Schemer and The Seasoned Schemer
should help people who are not Comp. Sci. majors understand Lisp/Scheme.
From: Kent M Pitman
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sfwd6ij56dn.fsf@shell01.TheWorld.com>
[ replying to comp.lang.lisp only
  http://www.nhplace.com/kent/PFAQ/cross-posting.html ]

"Franz Kafka" <Symbolics _ XL1201 _ Sebek _ Budo _ Kafka @ hotmail . com> writes:

> David T's Common Lisp: A Gentile Introduction to Symbolic Computation avail.
> for free on line.

Repeat after me...

  * Common Lisp, unlike _some_ languages, accepts and fosters _multiple_
    programming philosophies.

  * You know those s-expressions in Common Lisp? They're _secular_ expressions.

  * Touretzky's book is a "Gentle" introduction, not a "Gentile" introduction.

Thank you for your attention.
From: Bruce Lewis
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <nm9u1bvqako.fsf@buzzword-bingo.mit.edu>
Pascal Bourguignon <····@thalassa.informatimago.com> writes:

> I was dubious about Lisp-2 at  first, but finally I've noticed that in
> human languages,  there are  a lot of  instances that show  that we're
> wired for a Lisp-2 rather than a Lisp-1:
> 
>     The fly flies.          (FLIES FLY)
>     The flies fly.          (FLY FLIES)

Here are a couple of hints if you want to start a CL/Scheme flame war:

1) Timing.  There was just an extended discussion on lisp1 vs lisp2 in
   c.l.l, so weariness of the topic will likely cause the flame war to
   die out sooner than you intended.

2) Timing.  Flame wars that start on Fridays tend to die out over the
   weekend.  Do your incendiary crosspost early in the week for best
   results.
From: Joe Marshall
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <4r3v2cr7.fsf@ccs.neu.edu>
Bruce Lewis <·······@yahoo.com> writes:

> Here are a couple of hints if you want to start a CL/Scheme flame war:
> 
> 1) Timing.  There was just an extended discussion on lisp1 vs lisp2 in
>    c.l.l, so weariness of the topic will likely cause the flame war to
>    die out sooner than you intended.
> 
> 2) Timing.  Flame wars that start on Fridays tend to die out over the
>    weekend.  Do your incendiary crosspost early in the week for best
>    results.

  3) Attitude.  Assume that Common Lisp users are unaware of Scheme
     and that they would prefer it if they were not so obviously
     ignorant.

  4) Attitude.  Assume that use of Scheme is prima facie evidence of
     superior reasoning power.  Note that you yourself use Scheme.
     Use fallacious modus tollens to draw conclusions about CL users.

  5) Ad hominem reasoning can be used to extend the thread.  Remember
     that we are all unfriendly savages.  Nazis too.
From: Paul Wallich
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <pw-B1C252.09421516052003@reader1.panix.com>
In article <··············@thalassa.informatimago.com>,
 Pascal Bourguignon <····@thalassa.informatimago.com> wrote:

> I was dubious about Lisp-2 at  first, but finally I've noticed that in
> human languages,  there are  a lot of  instances that show  that we're
> wired for a Lisp-2 rather than a Lisp-1:
> 
>     The fly flies.          (FLIES FLY)
>     The flies fly.          (FLY FLIES)
> 
> 
> ;-)

More realistically, we're wired for a Lisp-N, where N is the number of 
part-of-speech roles that can be used in a single sentence. For example 
(excuse the bad attempt at urban slang): "Fly flies fly fly." It's the 
parser technology combined with a desire for elegant syntax that makes 2 
a reasonable limit.

paul
From: Matthias Blume
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <m24r3uj51f.fsf@localhost.localdomain>
Pascal Bourguignon <····@thalassa.informatimago.com> writes:

> I was dubious about Lisp-2 at  first, but finally I've noticed that in
> human languages,  there are  a lot of  instances that show  that we're
> wired for a Lisp-2 rather than a Lisp-1:
> 
>     The fly flies.          (FLIES FLY)
>     The flies fly.          (FLY FLIES)

So what?  We are also "wired" for all sorts of misunderstandings,
ambiguities, cross-talk, etc.  And these are just the difficulties that
*humans* have with natural language; computers are much, much worse
still.  In other words, programming languages should *not*(!!)  be
like natural languages.

(Note that this is not really an argument which directly applies to
the Lisp-1 vs. Lisp-2 debate.  All I'm saying is that anyone who
defends a particular programming language design because of how it
resembles natural language is seriously confused.)

Matthias
From: Eli Barzilay
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <skwugq6fas.fsf@mojave.cs.cornell.edu>
Matthias Blume <····@me.else.where.org> writes:

> So what?  We are also "wired" for all sorts of misunderstandings,
> ambiguities, cross-talk, etc.  And these are just the difficulties
> that *humans* have with natural language; computers are much, much
> worse still.  In other words, programming languages should *not*(!!)
> be like natural languages.
> 
> (Note that this is not really an argument which directly applies to
> the Lisp-1 vs. Lisp-2 debate.  All I'm saying is that anyone who
> defends a particular programming language design because of how it
> resembles natural language is seriously confused.)

Sorry for the AOL-reply but... *Exactly*!  Any kind of such
comparison between formal and natural languages leads to confusion and
problems.  If anyone really wants to get better unification between
these two extremes (making them "wired" in the same way) they better
be prepared to go the whole way...  For example, you'd use statistical
parsers to understand your code, resulting in programs with
probabilistic outcomes ("I wrote a program that solves your problem,
but it has a few bugs which make it unreliable in bad weather") and
ambiguities -- (let ((let 1)) let) might give you 1, or it might
complain about a syntax error, or just give up and produce a "code too
confusing" error.

Also, operational semantics will need to consider such things as the
local cultures, slang, and general current knowledge since they can
all change the way that a program runs.  And don't forget about many
other natural devices that will be available:

  loop with a-variable from 1 to the-length-of-that-array-we-just-read
    loop with a-different-variable from the-above-variable to the-same-length
      increment counter by the-multiplication-of-these-two-loop-variables

The results will definitely be interesting, but I think I'll stick
with formal languages for hacking.

-- 
          ((lambda (x) (x x)) (lambda (x) (x x)))          Eli Barzilay:
                  http://www.barzilay.org/                 Maze is Life!
From: Kent M Pitman
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sfwaddmq2ho.fsf@shell01.TheWorld.com>
[ replying to comp.lang.lisp only
  http://www.nhplace.com/kent/PFAQ/cross-posting.html ]

Eli Barzilay <···@barzilay.org> writes:

> Matthias Blume <····@me.else.where.org> writes:

(Not that these guys are reading on this newsgroup, but I'm not going
to cross-post anyway.  Someone can tell them a "continuation" is available
on comp.lang.lisp.  (And they thought we had no continuations...))

> > (Note that this is not really an argument which directly applies to
> > the Lisp-1 vs. Lisp-2 debate.  All I'm saying is that anyone who
> > defends a particular programming language design because of how it
> > resembles natural language is seriously confused.)

I disagree.
 
> Sorry for the AOL-reply but... *Exactly*!  Any kind of such

ANY kind?  That seems a bit broad, to the point of useless.  And if
narrowed, your remarks here seem to reduce to statements that are
largely false or irrelevant, depending on your point of view.

> comparison between formal and natural languages leads to confusion and
> problems.

I disagree.

Natural language is a cue to how we organize our brains.

I see no reason not to organize a language in a way that bears a shape
resemblance to how we think we think.

I CERTAINLY see no reason not to organize a computer language, which is
after all a means of communication, in the way people innately desire
to communicate.  Most or all human languages exploit context and namespace;
I see no reason for programming languages not to follow their lead
provided no ambiguity results.  People have said many things about multiple
namespaces, but they have never said it results in ambiguity.

The simplest and most obvious example of basing programming languages on
human languages is that we try to make the nouns, verbs, prepositions,
conjunctions, etc. mean something like what they mean in human language.

> For example, you'd use statistical
> parsers to understand your code,

This is a bogus argument.  You're offering one possible way of choosing 
badly as an argument for saying that there exists no possible way of 
choosing well.  No one is suggesting taking ALL features of natural 
language into programming languages.  To do that is simply to eliminate
programming languages.  But once one is talking about taking only some
features, I don't think you can credibly argue there are no features in
natural languages that ought to be deliberately mimicked by programming
languages.

> Also, operational semantics will need to consider such things as the
> local cultures, slang, and general current knowledge since they can
> all change the way that a program runs.

More of same.

> The results will definitely be interesting, but I think I'll stick
> with formal languages for hacking.

Use of multiple namespaces is not "unformal", even if it does come from
natural languages.
From: Eli Barzilay
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sksmre6cqj.fsf@mojave.cs.cornell.edu>
Kent M Pitman <······@world.std.com> writes:

> [ replying to comp.lang.lisp only
>   http://www.nhplace.com/kent/PFAQ/cross-posting.html ]
> 
> Eli Barzilay <···@barzilay.org> writes:
> 
> > Matthias Blume <····@me.else.where.org> writes:
> 
> (Not that these guys are reading on this newsgroup,

You'd be surprised...  *Posting* to *this* newsgroup is a different
issue, as well as choosing to reply to a cross-posted message to just
one of the groups (especially when you have that assumption...).


> > Sorry for the AOL-reply but... *Exactly*!  Any kind of such
> 
> ANY kind?  That seems a bit broad, to the point of useless.  And if
> narrowed, your remarks here seem to reduce to statements that are
> largely false or irrelevant, depending on your point of view.

Well, OK, "any" is too broad -- there are some shared concepts like
"communication" (but hey, I'm using natural language here so vagueness
is a tool I can use).


> > comparison between formal and natural languages leads to confusion
> > and problems.
> 
> I disagree.
> 
> Natural language is a cue to how we organize our brains.

I agree with that, but I still object to this fact making natural
languages a role model for formal languages.  Ambiguity is the most
obvious example of a tool that we explicitly want to have in an NL, yet
in an FL, even if you want to talk about ambiguity, you should do it in
unambiguous terms.


> [...]  Most or all human languages exploit context and namespace; I
> see no reason for programming languages not to follow their lead
> provided no ambiguity results.

So does the fact that such things as the standard example of "time flies
like an arrow" are part of natural languages mean that you would
like to have such features in a programming language?


> People have said many things about multiple namespaces, but they
> have never said it results in ambiguity.

Ambiguity was just an example of my argument -- which is *not* in
favor of or against the double namespace.  My argument is against using NL
analogies in the discussion (and I try to stay on this meta level,
since the argument itself leads to very well-known results in this
context).


> The simplest and most obvious example of basing programming
> languages on human languages is that we try to make the nouns,
> verbs, prepositions, conjunctions, etc. mean something like what
> they mean in human language.

I'm sorry, but I don't see it that way.  I can certainly see that some
objects in computer languages might resemble objects in natural ones,
but this all comes down to describing what a machine should do.  If I
see a computer language that allows me to get such descriptions
better while not having such analogies of verbs and nouns, I would not
have problems using it.  (But obviously that would not happen -- it is
the domain of making computers do things which have verb/noun/etc-like
concepts.)


> [...] No one is suggesting taking ALL features of natural language
> into programming languages.  [...]

Right -- which is what makes it a bad example to borrow from.  I have
seen many arguments go by:

* Start with PL-feature X,
* Observe that feature X has an analogy in (unrelated) domain D,
* Observe that domain D has feature Y,
* Translate feature Y back into PL.

This, and some variants, are the sort of arguments which I object to
and want to avoid.  The Lisp1/2 and verbs/nouns thing just happens to
be a popular instantiation.  (Or, the fact that I might live in a
place with no theaters should not make me choose sides in the
continuation thread.)

BTW, there are natural languages with separate constructs for verbs and
nouns.  If I use such a language, should I prefer Lisp1?  Should I be
utterly confused by any arguments based on that feature of English?
Should I derail arguments by sticking my language in and leading
the discussion into a language discussion (i.e., cll and Latin)?


> > The results will definitely be interesting, but I think I'll stick
> > with formal languages for hacking.
> 
> Use of multiple namespaces is not "unformal", even if it does come
> from natural languages.

It's the arguments used which are unformal.


[I will try to not followup.]

-- 
          ((lambda (x) (x x)) (lambda (x) (x x)))          Eli Barzilay:
                  http://www.barzilay.org/                 Maze is Life!
From: Kent M Pitman
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sfwissa6bqp.fsf@shell01.TheWorld.com>
Eli Barzilay <···@barzilay.org> writes:

> It's the arguments used which are unformal.

In my personal experience, for whatever that's worth, I've found that
more often than not, requiring formal arguments is used as a means of
excluding people who are not prepared to offer them.  It's often no
different than having a public servant who you try to ask a simple 
question and they won't deal with it unless you fill out a standardized
form in triplicate.

There's a difference between requiring a sound argument and requiring
a formal one.

It's also the case that people's intuitions about what they want are
often well-founded even when they are unable to articulate a coherent
argument.

I personally do accept arguments that are unformal, both because it makes
me feel less like I dismiss people out of hand and also because I sometimes
learn things about the world that dismissing things on the basis of form
would not allow me to learn.

You're welcome to do otherwise.

My intuition is that the reason that people from the Scheme community feel
(incorrectly) like there's some ambiguity in a Lisp2 is that they do not
see it following the rules they are familiar with and so they feel it must
not be following rules at all.  The problem is then compounded because they
are intent on believing that it's not a natural way to think, and so they
refuse to learn the rule, and then in their mind I think they start to 
believe that their unwillingness to learn the rule is a proof that people 
can't learn the rule.  I certainly am willing to believe that people who are
determined not to learn something will not learn it.  That's a general truth
about the world.  Beyond that, though, there's overwhelming evidence that
people can disambiguate even ACTUAL ambiguities.  I see no reason to believe
they will have any trouble "disambiguating" (if that's what they like to 
call it) the unambiguous notation offered by Common Lisp.  And given that
they can do this, I have no shame about having been among those championing
the continued inclusion of this natural and useful feature (multiple 
namespaces) in the language.

Incidentally, Java has this feature and no one makes noise about it at all.
Not only may the same name have multiple casifications, and this 
apparently causes no confusion, but in fact they have a separation of 
function and value namespaces, which also causes no confusion.  I think
the reason it causes confusion in our community is that a few people have
elected themselves to teach people the confusion, just as racism persists
because people elect themselves to teach hatred and fear rather than 
tolerance.  If those in the Scheme community taught simply that there was
a choice between a single namespace and multiple namespaces, and that Scheme
has made the choice one way for sound reasons and that CL has made the choice
the other way for equally sound reasons, the issue would die away.  But 
because many in the Scheme community insist on not only observing the 
difference (which surely objectively exists) but also claiming it is a Wrong
decision (in some absolutist sense, as if given by some canonically
designated God) (which is surely a mere subjective judgment), the problem
persists.  The problem is not a technical one, but a social one.
And it will not be fixed by technical means, since nothing is broken.
It will be fixed by social means, that of tolerance.

I can live with Scheme people programming somewhere in a single namespace
without going out of my way to criticize them.  I raise the criticisms I
do ONLY in the context of defending myself from someone's active attack
that claims I am using a multiple namespace language out of ignorance,
poverty, wrongheadedness, or some other such thing.  I am not.  I am using
it out of informed preference.
From: pentaside asleep
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <7d98828b.0305161451.e1a422b@posting.google.com>
Kent M Pitman <······@world.std.com> wrote in message news:<···············@shell01.TheWorld.com>...
> Incidentally, Java has this feature and no one makes noise about it at all.
> Not only may the same name have multiple casifications

Whatever the merits of separate namespaces, I'm not a fan of this
argument, because people don't expect that much from Java's syntax,
other than its not being too shocking.  (Anyway, it's a static language,
and so the criticism until 1.5 was the need for casts.)
From: Bruce Lewis
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <nm9ptmir5fu.fsf@buzzword-bingo.mit.edu>
Kent M Pitman <······@world.std.com> writes:

> If those in the Scheme community taught simply that there was a choice
> between a single namespace and multiple namespaces, and that Scheme
> has made the choice one way for sound reasons and that CL has made the
> choice the other way for equally sound reasons, the issue would die
> away.  But because many in the Scheme community insist on not only
> observing the difference (which surely objectively exists) but also
> claiming it is a Wrong decision (in some absolutist sense, as if given
> by some canonically designated God) (which is surely a mere subjective
> judgment), the problem persists.

I only started following c.l.l again recently, so I was unaware that
folks here had quelled the voices claiming Lisp2 to be superior in some
absolutist sense.  If the Scheme community has fallen behind in terms of
making sure everyone understands the valid reasons on both sides, then
I'm certainly happy to do my part bringing Schemers up to the level of
maturity you've achieved here.  I certainly wouldn't want us to be the
sole cause of the problem.
From: Pascal Costanza
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <costanza-8B66EC.23100616052003@news.netcologne.de>
I have thought about this whole issue a little, and at the moment I 
think it could probably be possible to integrate Lisp-2 and Lisp-1 into 
one system.

The idea would be to integrate this in the package system such that you 
can choose between Lisp-2 and Lisp-1 semantics when defining a new 
package.

Problems should only occur when interfacing between Lisp-2 and Lisp-1 
packages.

- When a Lisp-2 package imports a symbol from a Lisp-1 package, it would 
see the same definition for both value cell and function cell. (If the 
value cell doesn't contain a closure, the function cell could be 
(constantly (symbol-value sym)), or something along these lines.)

- When a Lisp-1 package imports a symbol from a Lisp-2 package, either 
the value cell gets priority over the function cell, or vice versa. This 
could perhaps be configured at package definition time.

- When a Lisp-2 package modifies either the value cell or the function 
cell of a Lisp-1 symbol, the other cell should be modified accordingly, 
or this signals a warning/error, or?

Would this be a reasonable approach, or am I missing something very 
important?
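
For what it's worth, the first rule above (a Lisp-2 package seeing a
Lisp-1 symbol's single definition through both cells) might look
something like this.  SYNC-CELLS is a hypothetical helper, not part of
any existing package system:

```lisp
;; Hypothetical helper: make a symbol's function cell mirror its
;; value cell, the way a Lisp-2 importer of a Lisp-1 symbol might
;; see it at import time.
(defun sync-cells (sym)
  (let ((val (symbol-value sym)))
    (setf (symbol-function sym)
          (if (functionp val)
              val                    ; a closure works in both cells
              (constantly val)))))   ; otherwise, a constant function

(defvar *answer* 42)
(sync-cells '*answer*)
(funcall '*answer*)   ; => 42
```

The hard part, as the third point suggests, is keeping the two cells
consistent across later assignments; this sketch only covers the
moment of import.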


Pascal
From: Kent M Pitman
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sfw7k8qpg20.fsf@shell01.TheWorld.com>
Pascal Costanza <········@web.de> writes:

> I have thought about this whole issue a little, and at the moment I 
> think it could probably be possible to integrate Lisp-2 and Lisp-1 into 
> one system.
> 
> The idea would be to integrate this in the package system such that you 
> can choose between Lisp-2 and Lisp-1 semantics when defining a new 
> package.

I've done serious work on this but ran out of time/resources in the
middle so my half-done project is languishing...  It's more
complicated than I originally thought.  That's not to say it's not
doable, but it's non-trivial.

> Problems should only occur when interfacing between Lisp-2 and Lisp-1 
> packages.

Lisp1 and Lisp2-ness should be an attribute of identifiers, not of 
symbols and packages.  And certainly it should not be aggregated.
... well, I dunno about should not, but certainly _I_ wouldn't...
I can see where you're going here but I'm doubting I'm going to like
the solution much.  You're basically just agreeing to disagree and solving
the problem at a package-to-package interface level, but that's in my mind
not that much different than what you can already do by linking a Lisp
and Scheme together and using an FFI to communicate. A real solution would
confront the problem of tighter integration rather than sweep that under
the rug.  If you're going to sweep it under the rug, there are already good
solutions to that, like the ones we use to insulate Lisp from C or Java.

> - When a Lisp-2 package imports a symbol from a Lisp-1 package, it would 
> see the same definition for both value cell and function cell. (If the 
> value cell doesn't contain a closure, the function cell could be 
> (constantly (symbol-value sym)), or something along these lines.)
>
> - When a Lisp-1 package imports a symbol from a Lisp-2 package, either 
> the value cell gets priority over the function cell, or vice versa. This 
> could perhaps be configured at package definiton time.
> 
> - When a Lisp-2 package modifies either the value cell or the function 
> cell of a Lisp-1 symbol, the other cell should be modified accordingly, 
> or this signals a warning/error, or?
> 
> Would this be a reasonable approach, or am I missing something very 
> important?

This is not how I'd do it, but I don't have the energy today to
explain how I'd do it differently.  The short form of the answer is,
though, that the words "symbol" and "import" would not occur anywhere
in my explanation of how to do it.  These are, IMO, at the wrong level
of abstraction.

I also think it's a dreadful mistake to limit the nature of any serious
solution to merely CL and Scheme or merely Lisp1/Lisp2...  I'd generalize
the result as much as possible once I was going to the trouble of doing it
at all.
From: Eli Barzilay
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <skk7cq5kvq.fsf@mojave.cs.cornell.edu>
Kent M Pitman <······@world.std.com> writes:

> [...]  I can live with Scheme people programming somewhere in a
> single namespace without going out of my way to criticize them.  I
> raise the criticisms I do ONLY in the context of defending myself
> from someone's active attack that claims I am using a multiple
> namespace language out of ignorance, poverty, wrongheadedness, or
> some other such thing.  [...]

For the record, there was *no* such context here or in the other
subthread (your reply to bear).  In both cases Scheme was only
mentioned by yourself, and no criticism of Lisp-n for any value of
n was given.

-- 
          ((lambda (x) (x x)) (lambda (x) (x x)))          Eli Barzilay:
                  http://www.barzilay.org/                 Maze is Life!
From: Bill Richter
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <fqr86yqvj2.fsf@artin.math.northwestern.edu>
   In other words, programming languages should *not*(!!)  be like
   natural languages.

Right, Matthias.  As Noam Chomsky says, natural languages are all
a zillion times more complicated than computer languages.
From: Kent M Pitman
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sfw3cjepf4c.fsf@shell01.TheWorld.com>
[ replying to comp.lang.lisp only
  http://www.nhplace.com/kent/PFAQ/cross-posting.html ]

Bill Richter <·······@artin.math.northwestern.edu> writes:

>    In other words, programming languages should *not*(!!)  be like
>    natural languages.
> 
> Right, Matthias.  As Noam Chomsky says, natural languages are all
> a zillion times more complicated than computer languages.

And as the late Prof. Bill Martin, a professor in the Lab for Computer
Science, some of whose many specialties were computational linguistics
and knowledge representation, said in a class I took from him [I'm
paraphrasing from a 20+ year old memory, but I think I've got the
basic sense of the remark right]: `People designed natural language in
order to be possible to learn.'

I took this statement of the seemingly obvious to be a form of
reassurance to those of us toying with getting computers to learn
language, like when I was working with Rubik's cube and it helped me
to see that at least _someone_ had screwed up a cube and then later
solved it ... so that I would know it was worth persisting to find a
solution because there _was_ a solution waiting to be had, and it was
known to be tractable. (After you've gotten it sufficiently mixed up,
the fact that you could just 'invert what you've done' seems about as
promising as thinking that 'inverting what you've done' would work to
reassemble a sandcastle you've just kicked.)

But in the context I mention it here, it has the same meaning, just a
different spin: If people can understand a language that is a
"zillion" times more complicated than computer languages, then stop
telling me that CL is so much more complicated than Scheme that no one
will ever understand it.  We have wetware that is field-tested for
much worse and is known to work just fine on that.

What _is_ a barrier to learning is putting fingers into your ears
and saying "I won't, I won't, I won't" or "I can't, I can't, I can't"
over and over.
From: Coby Beck
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <ba4lve$2euh$1@otis.netspace.net.au>
"Matthias Blume" <····@me.else.where.org> wrote in message
···················@localhost.localdomain...
> Pascal Bourguignon <····@thalassa.informatimago.com> writes:
>
> > I was dubious about Lisp-2 at  first, but finally I've noticed that in
> > human languages,  there are  a lot of  instances that show  that we're
> > wired for a Lisp-2 rather than a Lisp-1:
> >
> >     The fly flies.          (FLIES FLY)
> >     The flies fly.          (FLY FLIES)
>
> So what?  We are also "wired" for all sorts of misunderstandings,
> ambiguities, cross-talk, etc.  And these are just the difficulties that
> *humans* have with natural language; computers are much, much worse
> still.  In other words, programming languages should *not*(!!)  be
> like natural languages.

I don't think I agree with much of the above, really.  Firstly, computer
languages, despite the name, should be designed for human understanding, so I
don't think the fact that it is hard for a computer (read: compiler writer)
should be a major factor in programming language design.

To your first point, and more off-topic, I don't know what you mean that we
are wired for misunderstanding.  Don't we humans overcome these difficulties
for the most part?  You express yourself very well in English.  Some truly
great works of literature have been expressed in many different natural
languages.  Do you think you could design a better language?  I don't mean
just fix a few ambiguous words, I mean start from scratch.  What fundamental
things about natural language would you change?

I don't mean to suggest this is impossible, I'm really curious if you have
some concrete ideas for improvement.

> (Note that this is not really an argument which directly applies to
> the Lisp-1 vs. Lisp-2 debate.  All I'm saying is that anyone who
> defends a particular programming language design because of how it
> resembles natural language is seriously confused.)

I disagree because again, a computer language is for a human programmer.

-- 
Coby Beck
(remove #\Space "coby 101 @ bigpond . com")
From: Raffael Cavallaro
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <aeb7ff58.0305201429.4d159aab@posting.google.com>
"Coby Beck" <·····@mercury.bc.ca> wrote in message news:<·············@otis.netspace.net.au>...
> Some truly
> great works of literature have been expressed in many different natural
> languages.  Do you think you could design a better language?  I don't mean
> just fix a few ambiguous words, I mean start from scratch.  What fundamental
> things about natural language would you change?
> 
> I don't mean to suggest this is impossible, I'm really curious if you have
> some concrete ideas for improvement.

It's even been argued, by the biological anthropologist Terry Deacon,
that our cognitive abilities have forced the form of natural
languages. In other words, since neurobiology changes fairly slowly
compared to language, natural languages have been selected for easy
learning and comprehension by *all* human beings, not just the best
and the brightest. What has survived is a range of natural languages
that take the same basic range of syntactical forms, i.e., those that
people can learn easily. Deacon's point is that the selective
bottleneck is childhood language acquisition. Any syntactic feature
too difficult for children to master will not survive as part of that
language into the next generation.

This co-evolution of language and human neurobiology has led to a
range of syntax that is easy for humans to learn, and to understand
relatively unambiguously. Stray outside this range, and you start
having difficulty mastering the syntax, and much greater chances of
misunderstanding and error.

This would suggest that computer languages should hew to the common
patterns and elements of natural languages in order to assure easier,
and clearer, comprehension by human programmers.

Viewed in this light, Larry Wall's views on natural language features
in Perl seem somewhat less idiosyncratic. I suspect that when the
history of computer languages comes to be written in a century, the
surviving languages will follow the natural language view much more
closely than the lisp view that everything can and should be rendered
as an s-expression. Under the covers, maybe, but not the syntax meant
to be read by human programmers.

I've always agreed with the position Coby is taking here - let the
compiler writers worry about how to make the language work. Especially
with ever increasing hardware resources, there's little point in
languages forcing human users to wrap their minds around unnatural
syntactic constructs just to make compiler writers' lives, or academic
proofs of program correctness, easier.

Raf
From: Matthew Danish
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <20030520192720.E11522@mapcar.org>
On Tue, May 20, 2003 at 03:29:53PM -0700, Raffael Cavallaro wrote:
> It's even been argued, by the biological anthropologist Terry Deacon,
> that our cognitive abilities have forced the form of natural
> languages. In other words, since neurobiology changes fairly slowly
> compared to language, natural languages have been selected for easy
> learning and comprehension by *all* human beings, not just the best
> and the brightest.

That still leaves a gigantic range of possibilities.

> This co-evolution of language and human neurobiology has led to a
> range of syntax that is easy for humans to learn, and to understand
> relatively unambiguously. Stray outside this range, and you start
> having difficulty mastering the syntax, and much greater chances of
> misunderstanding and error.

References would be nice, please.

> This would suggest that computer languages should hew to the common
> patterns and elements of natural languages in order to assure easier,
> and clearer, comprehension by human programmers.

``Please close the window when it is raining.''

Computer checks: raining?  No.  Then it moves on.

5 minutes later it rains.  Is the window going to be closed?  No.

Would a human know to close the window?  Yes.

So should the `when' operator imply some kind of constant background loop?  I
am really unclear on this.

> Viewed in this light, Larry Wall's views on natural language features
> in Perl seem somewhat less idiosyncratic. I suspect that when the
> history of computer languages comes to be written in a century, the
> surviving languages will follow the natural language view much more
> closely than the lisp view that everything can and should be rendered
> as an s-expression. Under the covers, maybe, but not the syntax meant
> to be read by human programmers.

This assumes that natural languages are inherently more readable.  Natural
language is optimized for the task of communicating with other human beings,
who are aware of context and can remember points and communicate back.

Even something as simple as nestable block structure is not well supported by
any natural language I can think of.  Sure you can get by without that, but I
thought those languages were part of the past?

> I've always agreed with the position Coby is taking here - let the
> compiler writers worry about how to make the language work. Especially
> with ever increasing hardware resources, there's little point in
> languages forcing human users to wrap their minds around unnatural
> syntactic constructs just to make compiler writers' lives, or academic
> proofs of program correctness, easier.

Here's a sample piece of code inspired by a natural language:

``2 parameter and 2 parameter add number parameter and 4 parameter multiply, in
my opinion''

Silly me, (2 + 2) * 4 is so much more readable, right?

Well, mathematicians' notation is, IMHO, a hodge-podge of heavily
context-optimized notepad sketching thrown together into a giant pile and
slowly fed to new people in bits and pieces.  I know that many people consider
it to be an emblem of higher learning, but I think that it's really just a
confusion of the syntax with the underlying concepts.  Plus it can get to be
a real pain to edit when dealing with larger expressions.

So, arguments that (* (+ 2 2) 4) is not ``natural'' or ``intuitive'' (oh no!)
don't really fly with me, since I don't think that any notation is natural or
intuitive.  And people have used many other notations in the past, in other
places, even though it is tempting to think that the status quo is somehow the
best.

I don't know about you, but I like having easy-to-write macros, and
code-as-data.  And the Lisp syntax is merely a reflection of the truth of the
underlying nested data structure.  Any other syntax is just an attempt to hide
that, and for what cause?  A primitive need to ``look like'' natural language
while not really being close to it?

Do human logicians fall back to writing their theorems entirely in natural
language because it is somehow ``more readable'' that way?  As much as logic
books can be dense with symbols, they are incredibly more readable than the
equivalent written out in some natural language.

Note that the above discussion is orthogonal to the Lisp-n issue, which is a
semantic one.  The relation to natural language there is the sharing of the
ability to handle separate contexts; a necessity to understand natural
language.  I believe Pascal is arguing that humans are able to do it for
natural language and therefore also for other languages.  Certainly,
mathematics is full of context-sensitive notation, and would be a pain without
it.  (I can see it now: how many different alphabets would have to be exhausted
so that every theorem can have its own "greek" letter(s)?).

But it isn't that Lisp-n for n > 1 is better because it imitates natural
language.  There is, after all, no proof that imitating natural language
results in better computer language.  However, the fact that humans can
accommodate separate namespaces/contexts in natural languages does seem to
indicate that humans can accommodate separate namespaces/contexts in computer
languages too.
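Danish's namespace point is easy to see in the stock Common Lisp example (a
Lisp-2), where a variable and a function may share a name without conflict:

```lisp
;; In a Lisp-2, the variable LIST and the function LIST live in
;; separate namespaces, so this form is unambiguous:
(let ((list (list 1 2 3)))
  (list list list))
;; => ((1 2 3) (1 2 3))
```

In a Lisp-1 the inner `list' would shadow the function binding, and the call
would fail.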

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Daniel Barlow
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <87k7ckec0q.fsf@noetbook.telent.net>
Matthew Danish <·······@andrew.cmu.edu> writes:

> ``Please close the window when it is raining.''
>
> Computer checks: raining?  No.  Then it moves on.
>
> 5 minutes later it rains.  Is the window going to be closed?  No.

I don't think I've ever seen a window rain, but I'd imagine that once
the glass is hot enough to melt there's not a lot of window left to
close anyway


-dan

-- 

   http://www.cliki.net/ - Link farm for free CL-on-Unix resources 
From: Raffael Cavallaro
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <aeb7ff58.0305211436.2c47dc73@posting.google.com>
Matthew Danish <·······@andrew.cmu.edu> wrote in message news:<·····················@mapcar.org>...
> On Tue, May 20, 2003 at 03:29:53PM -0700, Raffael Cavallaro wrote:
> > It's even been argued, by the biological anthropologist Terry Deacon,
> > that our cognitive abilities have forced the form of natural
> > languages. In other words, since neurobiology changes fairly slowly
> > compared to language, natural languages have been selected for easy
> > learning and comprehension by *all* human beings, not just the best
> > and the brightest.
> 
> That still leaves a gigantic range of possibilities.

Not really. The range of human grammars is actually quite limited. The
differences are largely superficial - e.g., different bindings for
different concepts, as it were, but still the same basic structures.
This whole issue is the basis for the now universally accepted view
that we have built in neurological "wiring" for language acquisition.
This would not be possible if the range of grammars were not extremely
limited - the language "instinct" wouldn't work with a sufficiently
different grammar.


> References would be nice, please.

<http://www.amazon.com/exec/obidos/tg/detail/-/0393317544/104-1144123-1355147?vi=glance>

(if the above gets split it should be one line)

_The Symbolic Species: The Co-Evolution of Language and the Brain_ by
Terry Deacon.


> ``Please close the window when it is raining.''
> 
> Computer checks: raining?  No.  Then it moves on.
> 
> 5 minutes later it rains.  Is the window going to be closed?  No.
> 
> Would a human know to close the window?  Yes.
> 
> So should the `when' operator imply some kind of constant background loop?  I
> am really unclear on this.


In brief, yes. An "if" construct suggests a single conditional check.
A "when," or "whenever," construct suggests continuous background
polling. This is how GUIs are written. Whether or not the actual program
text uses the word "when," the semantics of the program are clear: "When
the user presses this button, execute this block of code." I am merely
suggesting that the program text should parallel the existing
structures in natural languages, i.e., the program text should read:
"when(button1.pressed?) exec(block1)" which would set up a continuous
background loop. The programmer could stop this loop with code like:
"stop-checking(button1.pressed?)"
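The polling construct described above can be sketched as a Common Lisp macro
that spawns a background loop. The names WHENEVER and STOP-CHECKING follow
the examples in the post, BUTTON-PRESSED-P and RUN-BLOCK1 are hypothetical,
the thread calls use the real (but non-ANSI) bordeaux-threads library, and
the polling interval is an arbitrary assumption:

```lisp
;; Sketch of a polling "whenever": run BODY each time TEST is true,
;; checking in a background thread until STOP-CHECKING is called.
(defvar *watchers* (make-hash-table :test #'equal))

(defmacro whenever (name test &body body)
  `(setf (gethash ,name *watchers*)
         (bt:make-thread
          (lambda ()
            (loop
              (when ,test ,@body)
              (sleep 0.1))))))         ; arbitrary polling interval

(defun stop-checking (name)
  (let ((thread (gethash name *watchers*)))
    (when thread
      (bt:destroy-thread thread)
      (remhash name *watchers*))))

;; Usage, paralleling the post's pseudocode:
;;   (whenever :button1 (button-pressed-p button1)
;;     (run-block1))
;;   (stop-checking :button1)
```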

In other words, we are continually forced to jump through unnatural
mental hoops to make our ideas take the form of a mathematical
algorithm (since this is what computer languages were originally
designed to execute). This may work well for scientific calculations
(hence the continued popularity of Fortran), but it really sucks for
most other types of information processing.

> This assumes that natural languages are inherently more readable.  Natural
> language is optimized for the task of communicating with other human beings,
> who are aware of context and can remember points and communicate back.

Then the compiler writers will need to include the ability to be aware of
relevant context (i.e., different compiler behavior with the same
program text depending on context), and remember points (a good deal
more introspection and maintenance of program state), and communicate
back (greatly improved, and *context sensitive* compiler warnings and
error reporting). This is going to happen. It's just a question of
when, and by whom, not whether.

Why? Because the "Software Crisis" will only be solved by enabling
power users to write their own applications. I'm convinced that the
real scarcity is not competent programmers, but domain expertise. Many
people can learn to code. Very few people have the domain expertise to
code the right thing. Acquiring domain expertise in many fields that
need a great deal of software is far more difficult than learning to
program competently. How many professional software developers have
the equivalent domain knowledge of a Ph.D. in molecular biology, or a
professional options trader, and so on?  Wouldn't it make more sense to
develop compilers that were easier to work with, than to have coders
acquire a half baked, partly broken level of domain expertise for each
new project they undertake?



> Even something as simple as nestable block structure is not well supported by
> any natural language I can think of.  Sure you can get by without that, but I
> thought those languages were part of the past?

If you think that nestable block structure is necessary for the
communication of complex ideas then you're thinking in assembler and
not in a natural language. The final compiler output may need to have
nested blocks, but that doesn't mean that the program text needs to be
expressed as nested blocks.

> Here's a sample piece of code inspired by a natural language:
> 
> ``2 parameter and 2 parameter add number parameter and 4 parameter multiply, in
> my opinion''

Or "(two plus two) times 4." Your suggestion above is laughably
contrived.


> Do human logicians fall back to writing their theorems entirely in natural
> language because it is somehow ``more readable'' that way?  As much as logic
> books can be dense with symbols, they are incredibly more readable than the
> equivalent written out in some natural language.


But most software is not needed by human logicians. It is needed by
human bankers, and human market traders, and human accountants, and
human molecular biologists, and they all communicate quite well in
natural language, modulo a sprinkling of domain specific notation.

> But it isn't that Lisp-n for n > 1 is better because it imitates natural
> language.  There is, after all, no proof that imitating natural language
> results in better computer language.

Better for whom? For ordinary people, there is ample proof that
computer languages that more closely resemble natural language are
"better" - they simply don't use languages that aren't sufficiently
like natural languages at all. But lay people do use more
natural-language-like languages such as AppleScript and Smalltalk.
 
>  However, the fact that humans can
> accommodate separate namespaces/contexts in natural languages does seem to
> indicate that humans can accommodate separate namespaces/contexts in computer
> languages too.

Yes. WRT the thread topic, a lisp-n for n>1 (in fact, for unbounded n,
since new contexts can always arise) would be closer to natural
language.
From: Alexander Schmolck
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <yfsy90z517i.fsf@black132.ex.ac.uk>
·······@mediaone.net (Raffael Cavallaro) writes:
> Not really. The range of human grammars is actually quite limited. The
> differences are largely superficial - e.g., different bindings for
> different concepts, as it were, but still the same basic structures.
> This whole issue is the basis for the now universally accepted view
> that we have built in neurological "wiring" for language acquisition.

Although this is rather OT here, outside Chomskian linguistics this view is
certainly not universally accepted.

'as
From: Raffael Cavallaro
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <aeb7ff58.0305220406.50caa88@posting.google.com>
Alexander Schmolck <··········@gmx.net> wrote in message news:<···············@black132.ex.ac.uk>...
> ·······@mediaone.net (Raffael Cavallaro) writes:
> > Not really. The range of human grammars is actually quite limited. The
> > differences are largely superficial - e.g., different bindings for
> > different concepts, as it were, but still the same basic structures.
> > This whole issue is the basis for the now universally accepted view
> > that we have built in neurological "wiring" for language acquisition.
> 
> Although this is rather OT here, outside Chomskian linguistics this view is
> certainly not universally accepted.
> 
> 'as

I think you misunderstand me. The Chomskian view is the extreme
position that *all* grammatical learning abilities are pre-wired. The
other extreme position is the classical "tabula rasa," or blank slate
position, i.e., that people are born with *no* cognitive instincts.

No linguist, indeed, almost no student of human cognition, now holds
the tabula rasa position, although it was widely held only a century
ago. This does not mean that all linguists hold the extreme Chomskian
position, and Deacon certainly does not.

Terry Deacon takes a more moderate position - specifically, that the
innate "language instinct" consists mostly of our inborn ability to
think symbolically, combined with some very well documented brain
specialization. Given the limited cognitive abilities of children
compared to adults, we get a range of human grammars limited by their
learnability by human children.

Raf
From: Tim Bradshaw
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <ey3ptmbuppm.fsf@cley.com>
* Raffael Cavallaro wrote:

> No linguist, indeed, almost no student of human cognition, now holds
> the tabula rasa position, although it was widely held only a century
> ago. This does not mean that all linguists hold the extreme Chomskian
> position, and Deacon certainly does not.

As I understand it there are some very good arguments against the
tabula rasa position.  In particular you can look at the amount of
data a general grammar-learner needs to learn a grammar, and you find
that people get a small fraction of this.  So either they have
some special wiring, or they do magic.

--tim
From: Burton Samograd
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <87u1bndqya.fsf@kruhft.vc.shawcable.net>
Tim Bradshaw <···@cley.com> writes:
> As I understand it there are some very good arguments against the
> tabula rasa position.  In particular you can look at the amount of
> data a general grammar-learner needs to learn a grammar, and you find
> that people get a small fraction of this.  So either they have
> some special wiring, or they do magic.

I just finished reading a very interesting book that covered this
subject called "Jungian Archetypes" (I forget the author's name
though).  It's a very interesting book for geek types and was written
by a mathematician and discusses the evolution of scientific and
mathematical thought over the centuries and how it led to clinical
psychology.  The idea of "tabula rasa" is replaced by ingrained
archetypes which are carried in ourselves and the stories we are
exposed to (which make up part of the collective unconsciousness).  It
also gives one of the best explanations of Gödel's incompleteness
theorem I've read anywhere. It's some very interesting reading and a
perfect geek psychology book.

-- 
burton samograd
······@kruhft.dyndns.org
http://kruhftwerk.dyndns.org
From: Raffael Cavallaro
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <aeb7ff58.0305221546.5538b53b@posting.google.com>
Tim Bradshaw <···@cley.com> wrote in message news:<···············@cley.com>...
 
> As I understand it there are some very good arguments against the
> tabula rasa position.  In particular you can look at the amount of
> data a general grammar-learner needs to learn a grammar, and you find
> that people get a small fraction of this.  So either they have
> some special wiring, or they do magic.

Indeed. This argument is known as "The Poverty of the Input," i.e.,
children are not exposed to enough examples to generate all of the
grammatical rules that they learn.

This is one of several reasons that no serious student of human
cognition holds the strong tabula rasa position any more.

Raf
From: Alexander Schmolck
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <yfsy90y3f6n.fsf@black132.ex.ac.uk>
Tim Bradshaw <···@cley.com> writes:

> * Raffael Cavallaro wrote:
> 
> > No linguist, indeed, almost no student of human cognition, now holds
> > the tabula rasa position, although it was widely held only a century
> > ago. This does not mean that all linguists hold the extreme Chomskian
> > position, and Deacon certainly does not.
> 
> As I understand it there are some very good arguments against the
> tabula rasa position.  In particular you can look at the amount of
> data a general grammar-learner needs to learn a grammar, and you find
> that people get a small fraction of this.  So either they have
> some special wiring, or they do magic.

This is the so-called "poverty of stimulus argument" and both Chomsky (1959)
[1] and particularly Gold (1967) are often cited as having formally
demonstrated that human languages are not learnable without, as you write
above, "some special wiring".

Alas, as is perhaps not too surprising, there is some prominent disagreement
about the validity of the assumptions of the underlying learning model that
Gold's learning-theoretic treatment rests upon.

For example, Quartz and Sejnowksi (1997) conclude [2]:

 "Hence, the negative learnability results do not indicate anything about the
  learnability of human language as much as they do about the insufficiency of
  the particular learning model."

Although Sejnowski, Elman, McClelland, Rumelhart and other connectionists (who
have been challenging the established nativist position on language
acquisition since the late '80s) might be dead wrong, they are certainly not
stupid or marginal. Indeed, many of them have significantly contributed to both
psychology and AI/pattern recognition, and barring some grave misunderstanding
on my part, none of them seems to be particularly committed to the "universally
accepted view that we have built in neurological "wiring" for language
acquisition".

'as 

[1] http://cogprints.ecs.soton.ac.uk/archive/00001148/00/chomsky.htm
[2] http://citeseer.nj.nec.com/quartz97neural.html



post scriptum for the linguistically inclined:

Apart from learning theory and neurological studies, another nativist line of
defense is the demonstration of so called "universals" that hold across all
languages, many of which are deemed to be functionally arbitrary (and hence
neutral alternatives in a non-nativist framework). Although again there is
quite a bit of controversy about the validity and interpretation of much of
the data, I find the intellectual appeal of many of these arguments and
observations quite undeniable.

This is maybe not the best example, but pay attention to the referents:

  Mary is eager to please.
 
  vs.

  Mary is easy  to please.

  John promises  Bill to wash him.
  John promises  Bill to wash himself.

  vs.

  John persuades Bill to wash him.
  John persuades Bill to wash himself.

A nativist would say that the contrasted sentences are structurally
equivalent, so how is the learner supposed to implicitly derive when 'himself'
refers to the subject and when to the object of the sentence? Children are
never taught explicitly and yet never seem to make certain kinds of mistakes
one would expect on the basis of similar such examples.
From: Matthew Danish
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <20030522221812.GB17564@lain.cheme.cmu.edu>
On Thu, May 22, 2003 at 09:21:52PM +0100, Alexander Schmolck wrote:
> This is maybe not the best example, but pay attention to the referents:
> 
>   Mary is eager to please.
>  
>   vs.
> 
>   Mary is easy  to please.

I find 'eager' to be a more 'active' term than 'easy', something that
Mary is actively doing rather than a passive description.

>   John promises  Bill to wash him.
>   John promises  Bill to wash himself.
> 
>   vs.
> 
>   John persuades Bill to wash him.
>   John persuades Bill to wash himself.
> 
> A nativist would say that the contrasted sentences are structurally
> equivalent, so how is the learner supposed to implicitly derive when 'himself'
> refers to the subject and when to the object of the sentence? Children are
> never taught explicitly and yet never seem to make certain kinds of mistakes
> one would expect on the basis of similar such examples.

Because the difference lies deeper than structural.  It lies in the
meaning of the verbs.

  John promises Bill to wash him.
  
meaning, at some future time
  
  John washes him.

But if it were to be "John washes John" then you would normally use
'himself', so the ambiguity is resolved by choosing the other person.

Similarly,

  John promises Bill to wash himself.

to

  John washes himself.

Because the verb "to promise" implies that John will do something.

Whereas the verb "to persuade" implies that Bill is going to do
something.

  John persuades Bill to wash him.

means that

  Bill washes him.

And 'him' is resolved similarly to before.

I'm sure that linguists have thought of this difference before, and
there are probably better examples.  I doubt the nativists are that
naive.

There's always the learn-by-example mode too.  I think that I learned a
lot of English just by being exposed to it in books.  I never knew
anything formal about grammar until I learned Spanish, and I figured out
how to form sentences oftentimes by picking out remembered phrases.
(This was called the "feels-right" school of grammar by one English
teacher).

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Alexander Schmolck
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <yfsllwy33hd.fsf@black132.ex.ac.uk>
I'm rather tired now, but I'll try to answer your points before I leave for a
couple of days. I make no claims to special expertise on the topic.

Matthew Danish <·······@andrew.cmu.edu> writes:
> >   John promises  Bill to wash him.
> >   John promises  Bill to wash himself.
> > 
> >   vs.
> > 
> >   John persuades Bill to wash him.
> >   John persuades Bill to wash himself.
> > 
> > A nativist would say that the contrasted sentences are structurally
> > equivalent, so how is the learner supposed to implicitly derive when 'himself'
> > refers to the subject and when to the object of the sentence? Children are
> > never taught explicitly and yet never seem to make certain kinds of mistakes
> > one would expect on the basis of similar such examples.
> 
> Because the difference lies deeper than structural.  It lies in the
> meaning of the verbs.

Yes -- but then it is precisely the meaning of the verb (promise/persuade)
that you are trying to learn, and this is, I hope we will agree, not made
easier by the fact that correct determination of referents is not possible
by just understanding the structure of the sentence or indeed all the other
words in it.

> 
>   John promises Bill to wash him.
>   
> meaning, at some future time
>   
>   John washes him.
> 
> But if it were to be "John washes John" then you would normally use
> 'himself', so the ambiguity is resolved by choosing the other person.
> 
> Similarly,
> 
>   John promises Bill to wash himself.
> 
> to
> 
>   John washes himself.
> 
> Because the verb "to promise" implies that John will do something.
> 
> Whereas the verb "to persuade" implies that Bill is going to do
> something.
> 
>   John persuades Bill to wash him.
> 
> means that
> 
>   Bill washes him.
> 
> And 'him' is resolved similarly to before.
> 
> I'm sure that linguists have thought of this difference before, and there
> are probably better examples. I doubt the nativists are that naive.

While I am willing to take the blame for my selection of the examples (and
any misrepresentation of their use in nativist arguments), I'd like to point
out that the first example, if I am not mistaken, originates from Chomsky
(the source of the citation is Ken Wexler's MIT Encyclopedia of Cognitive
Science entry on Poverty of Stimulus A.). Since both examples are not of my
own making and, with slight variations, occur not infrequently in the
literature, any naivety obvious from the examples alone is indeed shared by
prominent nativists.

 
> There's always the learn-by-example mode too. I think that I learned a lot
> of English just by being exposed to it in books. I never knew anything
> formal about grammar until I learned Spanish, and I figured out how to form
> sentences often times by picking out remembered phrases. (This was called
> the "feels-right" school of grammar by one English teacher).

I am not quite sure what to make of this paragraph. The nativist argument is
precisely that you can't learn *just* by example (as the training input you
receive alone is by far not rich enough to deduce the rules that generated
this input, whether these rules be explicit or not), so the claim that there
is always a learn-by-example mode, too, seems rather bizarre to me in my
tired condition.

From: Jeff Caldwell
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <J9hza.718$H84.322763@news1.news.adelphia.net>
Raffael Cavallaro wrote:
...

> In other words, we are continually forced to jump through unnatural
> mental hoops to make our ideas take the form of a mathematical
> algorithm (since this is what computer languages were originally
> designed to execute). This may work well for scientific calculations
> (hence the continued popularity of Fortran), but it really sucks for
> most other types of information processing.
...

It appears you agree that different languages are appropriate for 
different knowledge domains.

> ... the "Software Crisis" will only be solved by enabling
> power users to write their own applications. 

This is a re-hash of the rise of the spreadsheet and the appearance of 
PCs in the accounting department.  The statement is true, has been 
proven true, and the phenomenon will continue to evolve.  (Spreadsheet 
programs can be viewed as a different language with a different user 
interface, reinforcing the prior point.)

> I'm convinced that the
> real scarcity is not competent programmers, but domain expertise....
> How many professional software developers have
> the equivalent domain knowledge of a Ph.D. in molecular biology, or a
> professional options trader, etc. 

How many Ph.D.'s in molecular biology have the equivalent computer 
science domain knowledge of a Ph.D. in computer science?  I think you 
are saying more that as spreadsheets became available for accountants, 
something else will become available for molecular biologists and a 
broad group of other people, and saying less that everyone must learn to 
hold Ph.D.'s in computer science as well as in their own domain.

The work done with spreadsheets did in fact lessen the workload 
placed upon corporate IT departments. Those departments were overloaded 
far beyond their ability to respond when spreadsheets appeared, and 
spreadsheets and PCs were a good thing for the user departments. This 
also allowed the IT departments to begin focusing more upon projects 
affecting the larger enterprise rather than locally optimizing specific 
departments.

The real impact upon corporate IT departments came when standard 
packages, such as SAP, became widespread. IT became responsible more for 
implementation and less for development. To some extent, I think of 
SAP-like applications as a meta-super-spreadsheet language used by IT.

> Wouldn't it make more sense to
> develop compilers that were easier to work with, than to have coders
> acquire a half baked, partly broken level of domain expertise for each
> new project they undertake?

Yes, but spreadsheets can go only so far. I discuss this more below.

> If you think that nestable block structure is necessary for the
> communication of complex ideas then you're thinking in assember and
> not in a natural language.

Most books have a table of contents. Most books are structured at least 
into chapters. Many chapters are divided into sections and most chapters 
and sections are divided into paragraphs. Most paragraphs are divided 
into sentences. These structures define the boundaries of contextual 
structures.

> But most software is not needed by human logicians. It is needed by
> human bankers, and human market traders, and human accountants, and
> human molecular biologists, and they all communicate quite well in
> natural language, modulo a sprinkling of domain specific notation.
> 

A programmer, or more likely a team consisting of project managers, 
software engineers, quality assurance personnel, documentation 
specialists, and programmers, will work with those with domain expertise 
to design and develop a language appropriate to a range of domains. 
Consider this the invention of the spreadsheet for that domain range. 
Only then will the banker/trader/accountant/biologist be empowered to 
use their spreadsheet equivalent.

Yes, the efforts may gain leverage from each other. Yes the domain 
ranges may be large or may grow over time.

...
> Better for whom? For ordinary people, there is ample proof that
> computer languages that more closely resemble natural language are
> "better" - they simply don't use languages that aren't sufficiently
> like natural languages at all. But lay people do use more
> natural-language-like languages such as AppleScript, and Smalltalk.
>  

I disagree that true natural language will produce the results you seem 
to claim. The law is written in natural language, with domain-specific 
extensions and idioms. Look at the practice of law, the number of 
lawyers and judges, and the disputes over fine points of meaning. How 
many lines in a nation's constitution have been questioned and
reinterpreted, asking about the founders' intent and other factors? Is
there unambiguous meaning to complex law? Are ambiguous software 
specifications to be trusted in domains such as banking?

Beyond a certain simplistic level, anything stated by a banker about 
software desired by the banker quickly exceeds the banker's domain 
knowledge about systems and system behavior. Locking strategies, 
replication mechanisms, performance bottlenecks, concurrent behaviors... 
the banker knows nothing about these. The banker must rely upon the 
default behaviors provided by the underlying software.  When more than 
that is required, people with domain knowledge about locking strategies 
and replication mechanisms must become involved. These people always 
will be needed, although a trend of many years has these people 
concentrated more in system-domain companies and less in 
application-domain companies.

The brain's basic wiring for language enabled communication between
humans, providing a competitive survival edge in a given natural
environment. Saying that that mechanism is the ideal way to specify 
Ph.D.-level computer science thought about machine behavior does not 
seem like a strong claim to me.

To say that bankers can express application behaviors best through 
natural language is true only to the extent that what they express does 
not exceed their domain knowledge and to the extent that what they 
express in natural language is not subject to multiple interpretations 
such as we find even in the shortest legal documents such as 
constitutions.

A banker using natural language to express desired machine behaviors 
quickly will find repetitive expressions, begin to find them tedious, 
and begin to look for shorter methods of expressing common patterns. 
Over a period of years, if the compilers adapt to the banker's desire 
for these less-tedious means of expression, the banker will end up with 
a domain-specific language for expressing desired machine behaviors.

Ph.D. computer scientists, and others with machine and system domain 
knowledge, today can use languages such as Lisp to build domain-specific 
languages for expressing desired machine behaviors. These people build 
domain-specific languages up from languages built to express machine 
behaviors. Bankers ultimately may use natural language to build 
domain-specific languages for expressing desired machine behaviors. They 
will have built their domain-specific language down from natural language.

One idea is that when building large, complex financial systems, a 
banker can express all the proper machine behaviors in natural language 
without a systems person on the team. Another idea is that when building 
large, complex banking systems, a systems person can express all the 
proper financial behaviors in any language without a banker on the team. 
Both ideas seem equally untenable to me.  Sufficiently reduce the domain 
knowledge required, and a banker can write a spreadsheet application and 
a programmer can balance a checkbook.
From: Raffael Cavallaro
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <aeb7ff58.0305230428.6eca9adb@posting.google.com>
Jeff Caldwell <·····@yahoo.com> wrote in message news:<····················@news1.news.adelphia.net>...

> How many Ph.D.'s in molecular biology have the equivalent computer 
> science domain knowledge of a Ph.D. in computer science?

You miss my point. With better (read, more natural-language-like)
computer languages, Ph.D.s in molecular biology wouldn't *need* the
equivalent computer science domain knowledge of a Ph.D. in computer science.
Only compiler writers would need this level of knowledge. Everyone
else would leverage it by using a better designed computer language.

>   I think you 
> are saying more that as spreadsheets became available for accountants, 
> something else will become available for molecular biologists and a 
> broad group of other people, and saying less that everyone must learn to 
> hold Ph.D.'s in computer science as well as in their own domain.

Yup, now you're arguing my point. We need a natural-language-like
computer language that is the next step beyond spreadsheets, as it
were.


> Most books have a table of contents. Most books are structured at least 
> into chapters. Many chapters are divided into sections and most chapters 
> and sections are divided into paragraphs. Most paragraphs are divided 
> into sentences. These structures define the boundaries of contextual 
> structures.

Exactly. These are natural language structures that could be
*compiled* into nested block structures. But no ordinary person lays
out paragraphs as nested blocks. Their nesting (or lack of nesting, as
not all sequential paragraphs correspond to nested blocks) is
determined by such cue phrases as "alternatively" (read: here, now, I
present a different branch).

 
> A programmer, or more likely a team consisting of project managers, 
> software engineers, quality assurance personnel, documentation 
> specialists, and programmers, will work with those with domain expertise 
> to design and develop a language appropriate to a range of domains. 
> Consider this the invention of the spreadsheet for that domain range. 
> Only then will the banker/trader/accountant/biologist be empowered to 
> use their spreadsheet equivalent.

Or a more general purpose language will be developed that allows
people from different domains to write their own software. This will
be much more useful, affordable, and flexible than calling in a team
of software engineers, QA personnel, documentation specialists, and
programmers, for each new domain to receive its limited extent, domain
specific language. What's the failure rate of large, complex software
projects these days? And you expect domain experts to play those sorts
of odds just to get a limited use language?

Better for the people with the greatest computer science expertise to
write a compiler for a general purpose, natural-language-like computer
language.


> I disagree that true natural language will produce the results you seem 
> to claim. The law is written in natural language, with domain-specific 
> extensions and idioms. Look at the practice of law, the number of 
> lawyers and judges, and the disputes over fine points of meaning. How 
> many lines in a nation's consitution have been questioned and 
> reinterpreted, asking about the founder's intent and other factors? Is 
> there unambiguous meaning to complex law? Are ambiguous software 
> specifications to be trusted in domains such as banking?


Did you know that in Europe, most business that here in the US requires
gangs of lawyers and binders full of contracts is transacted with a
two-paragraph letter of intent and a handshake? The broken complexity
of the US legal system is not a necessary feature of legal systems, nor
of legal language. It is a product of a guild working to make its
services indispensable (remember, Congress is composed mostly of
lawyers, so all the laws are written by guild members). Rather like IT
people and programmers working to make computer use and computer
programming harder than it needs to be in order to maintain the IT
priesthood, to which all users must supplicate.

US law, and Common Law traditions in general, are *intentionally*
ambiguous, since they rely on precedent (i.e., previous case decisions
hold as much importance in how a judge will rule as what is actually
written in the legal code). I am not claiming that intentionally
ambiguous language can magically be made unambiguous. Merely that
ordinary domain experts can express themselves unambiguously when it
is needed, especially with the help of decent compiler warnings and
error messages.

> 
> Beyond a certain simplistic level, anything stated by a banker about 
> software desired by the banker quickly exceeds the banker's domain 
> knowledge about systems and system behavior. Locking strategies, 
> replication mechanisms, performance bottlenecks, concurrent behaviors... 
> the banker knows nothing about these. The banker must rely upon the 
> default behaviors provided by the underlying software.  When more than 
> that is required, people with domain knowledge about locking strategies 
> and replication mechanisms must become involved.

But at what level? I'm saying that they only need to be involved at
the compiler writing level. The banker simply specifies that he's
dealing with a transaction, and the compiler generates all the
necessary locking strategies, etc, from that context, namely, that of
a transaction.

Your only valid argument here is performance. But Moore's law will
take care of that for most cases (no premature optimization, please).
There will probably always exist domains where real programmers will
need to tune for performance, but this is much easier to do when the
specification is a *working program* written by the domain experts.

 
> The brain's basic wiring for language enabled communication between 
> humans providing a competitive survival edge in a given natural 
> environment. Saying that that mechanism is the ideal way to specify 
> Ph.D.-level computer science thought about machine behavior does not 
> seem like a strong claim to me.

It's not a claim I made. I claim that computer languages can be made
more like natural languages, and the result would broaden the range of
domain experts who could write functioning software systems for
themselves. Would these systems be optimized for best CPU/memory/mass
storage use? No, probably not. But in most cases, that wouldn't
matter. In those few cases where such performance issues did matter,
the domain expert could call in a real software engineer to tune the
already correct, but slow/memory pig/disk hog, program for better
performance.

> To say that bankers can express application behaviors best through 
> natural language is true only to the extent that what they express does 
> not exceed their domain knowledge and to the extent that what they 
> express in natural language is not subject to multiple interpretations 
> such as we find even in the shortest legal documents such as 
> constitutions.


This is a red herring. Bankers, and other non-computer-scientists, are
perfectly capable of specifying things unambiguously when they know
that it is necessary. This process would be aided by *useful* compiler
messages, specifying what is ambiguous, and possible interpretations,
allowing the user to specify a specific, unambiguous alternative, that
would then become the saved version.


> A banker using natural language to express desired machine behaviors 
> quickly will find repetitive expressions, begin to find them tedious, 
> and begin to look for shorter methods of expressing common patterns. 

People already do this with spreadsheet formulas. There's no reason to
believe that they wouldn't generalize this to methods, and modules of
functionality which would be regularly re-used.

> Over a period of years, if the compilers adapt to the banker's desire 
> for these less-tedious means of expression, the banker will end up with 
> a domain-specific language for expressing desired machine behaviors.

This would be nice, but it is a step beyond even what I am advocating.
Having an adaptive compiler would be nice, but let's get one that is
merely more natural-language-like first.

 
> One idea is that when building large, complex financial systems, a 
> banker can express all the proper machine behaviors in natural language 
> without a systems person on the team. Another idea is that when building 
> large, complex banking systems, a systems person can express all the 
> proper financial behaviors in any language without a banker on the team. 
> Both ideas seem equally untenable to me.  Sufficiently reduce the domain 
> knowledge required, and a banker can write a spreadsheet application and 
> a programmer can balance a checkbook.

But look at the historical trend that you yourself have acknowledged;
in the future, do you think that we'll be moving in the direction of
teams with fewer systems people (like a banker with a language that is
simpler, yet more flexible than current spreadsheets), or teams with
fewer domain experts (like a cube farm full of programmers trying to
implement a banking system with no bankers to guide them)?

I think it's clear that the former scenario is one that I'll see in my
lifetime, and that if I ever hear about the latter, I'll know it's time
to dump stock in that bank.
From: Karl A. Krueger
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <bal9pd$4s4$1@baldur.whoi.edu>
Raffael Cavallaro <·······@mediaone.net> wrote:
> Jeff Caldwell <·····@yahoo.com> wrote in message news:<····················@news1.news.adelphia.net>...
>> How many Ph.D.'s in molecular biology have the equivalent computer 
>> science domain knowledge of a Ph.D. in computer science?
> 
> You miss my point. With better (read, more natural-language-like)
> computer languages, Ph.D.s in molecular biology wouldn't *need* the
> equivalent science domain knowledge of a Ph.D. in computer science.

Pardon my cluelessness, but it doesn't seem to me that spreadsheets (the
example being used of a domain-specific "programming language") are any
more akin to a natural language than are ordinary programming languages.

Spreadsheets don't have to be like -natural- languages to be easier for
accountants.  They have to be more like the notation that evolved
specifically to handle accountancy, the domain-specific conlang as it
were:  ledger books.  And so they are.  They visually resemble printed
ledgers, and easily support operations that make sense in a ledger, like
"sum this column" or "let these values here be 106.5% of those values
over there".
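As a concrete sketch of the point (the figures and names here are made up
for illustration, not taken from the thread), the two ledger operations
Krueger names are trivial as computations; the spreadsheet's value lies in
presenting them on a ledger-shaped surface:

```python
# Illustrative only: the two ledger operations mentioned above,
# applied to a made-up column of figures.
receipts = [100.00, 250.00, 75.00]

column_total = sum(receipts)               # "sum this column"
adjusted = [x * 1.065 for x in receipts]   # "106.5% of those values over there"

print(column_total)  # 425.0
```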

The analogue of spreadsheets in a given domain would be a programmable
system with support for that domain's specific notation and operations
-- as, say, computer algebra systems offer for mathematics.  This would
only resemble natural language insofar as the domain lends itself to
same:  accountants' columns of figures do not look much like English to
me.

-- 
Karl A. Krueger <········@example.edu>
Woods Hole Oceanographic Institution
Email address is spamtrapped.  s/example/whoi/
"Outlook not so good." -- Magic 8-Ball Software Reviews
From: Raffael Cavallaro
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <aeb7ff58.0305231235.13ae47e5@posting.google.com>
"Karl A. Krueger" <········@example.edu> wrote in message news:<············@baldur.whoi.edu>...


> Pardon my cluelessness, but it doesn't seem to me that spreadsheets (the
> example being used of a domain-specific "programming language") are any
> more akin to a natural language than are ordinary programming languages.

Which is why they are a domain specific solution, not a general
purpose one.

This is the big picture problem. Software engineers keep crafting
either:

1. domain specific, user friendly solutions, like spreadsheets, or CAD
packages, or...
2. general purpose languages that only software engineers can really
use effectively.

What we need are general purpose computer languages that are also user
friendly. When it comes to languages, "user friendly" means
natural-language-like.

> 
> Spreadsheets don't have to be like -natural- languages to be easier for
> accountants.

But not everyone who needs software is an accountant, or an architect
(CAD packages), etc.

> The analogue of spreadsheets in a given domain would be a programmable
> system with support for that domain's specific notation and operations
> -- as, say, computer algebra systems offer for mathematics.  This would
> only resemble natural language insofar as the domain lends itself to
> same:  accountants' columns of figures do not look much like English to
> me.

You're thinking in a domain-specific-solution way. This is bound to
fail, because each new domain will require its own unique, mutually
incompatible, domain specific language. Unless your needs fall
precisely into that particular realm, and do not extend beyond it in
any way, you lose. Better to craft a general purpose,
natural-language-like computer language that all the specific domains
can use. As new application domains arise, a general purpose language
can be turned to those tasks, but domain specific solutions are
unlikely to be flexible enough to be useful.
From: Matthew Danish
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <20030523205407.GE17564@lain.cheme.cmu.edu>
On Fri, May 23, 2003 at 01:35:00PM -0700, Raffael Cavallaro wrote:
> When it comes to languages, "user friendly" means
> natural-language-like.

Where has this been proven?  I don't think that this is the case at all.
What will end up happening is this: Joe User will type out a sentence
expecting it to have X behavior.  Jill User will type out the same
sentence expecting it to have Y behavior.  If X is not Y, then one of
them will be very surprised.  And natural language leaves so much
ambiguity, normally, that this is bound to happen.  And eliminating
ambiguity from natural language will just give you a stiff, difficult
language which is more akin to an overly verbose formal language than
anything a human might use for day-to-day conversation.

A true natural language interface requires, in my opinion, artificial
intelligence in order to be usable.  Without that, it will be entirely
too frustrating for any user who thinks that they can pretend to be
speaking to another human being.  And if they don't think that way, then
what is the point of being natural-language-like?

Logicians went through this over a century ago, when Frege published
`Begriffsschrift'.  They arrived at precisely the opposite conclusion
to yours.

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Raffael Cavallaro
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <aeb7ff58.0305252015.384c1795@posting.google.com>
Matthew Danish <·······@andrew.cmu.edu> wrote in message news:<······················@lain.cheme.cmu.edu>...
> On Fri, May 23, 2003 at 01:35:00PM -0700, Raffael Cavallaro wrote:
> > When it comes to languages, "user friendly" means
> > natural-language-like.
> 
> Where has this been proven?  I don't think that this is the case at all.

Jochen Schmidt has your answer:

"Concepts like context allow humans to express programs with means
they
already understood in their wetware. We already paid the bill - the 
facility is already installed and gets used and trained on a daily
base..."

In other words, user friendly means leveraging facilities the user
already has. Human users already have an extraordinarily complex
natural language facility, evolved over tens of thousands (if not
hundreds of thousands) of years. It is foolish not to leverage it in
designing a general purpose computer language.
From: Karl A. Krueger
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <ban2ta$nsp$1@baldur.whoi.edu>
Raffael Cavallaro <·······@mediaone.net> wrote:
> You're thinking in a domain-specific-solution way. This is bound to
> fail, because each new domain will require its own unique, mutually
> incompatible, domain specific language. Unless your needs fall
> precisely into that particular realm, and do not extend beyond it in
> any way, you lose.

I am clearly confused.  It seems to me, though, that every program is a
specificity, a selection of function for some purpose.  It also seems to
me that programs that are to serve a particular domain must of necessity
incorporate domain-specific knowledge.  They would not be very useful if
they did not.

The example of a spreadsheet reminds me of the warlord of Wu and his
question:

	http://www.canonical.org/~kragen/tao-of-programming.html#book3

-- 
Karl A. Krueger <········@example.edu>
Woods Hole Oceanographic Institution
Email address is spamtrapped.  s/example/whoi/
"Outlook not so good." -- Magic 8-Ball Software Reviews
From: Tim Bradshaw
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <ey3smr3nt87.fsf@cley.com>
* Raffael Cavallaro wrote:

> What we need are general purpose computer languages that are also user
> friendly. When it comes to languages, "user friendly" means
> natural-language-like.

It's interesting that in domains where precision in language is
required, languages which are more-or-less artificial are used, even
when they are not interpreted by machines.  For instance air traffic
control, legal terminology and so on.

--tim
From: Kent M Pitman
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sfwu1bjrmct.fsf@shell01.TheWorld.com>
Tim Bradshaw <···@cley.com> writes:

> * Raffael Cavallaro wrote:
> 
> > What we need are general purpose computer languages that are also user
> > friendly. When it comes to languages, "user friendly" means
> > natural-language-like.
> 
> It's interesting that in domains where precision in language is
> required, languages which are more-or-less artificial are used, even
> when they are not interpreted by machines.  For instance air traffic
> control, legal terminology and so on.

It makes sense though.

Natural language is designed in distributed fashion, so it's hard
to make sure that it satisfies any particular requirement of
consistency or lack of ambiguity.  It is adapted dynamically by changing
bits and pieces in ways that suit individual speakers or sets of speakers,
without regard to the effect on parts of the language that those
speakers do not use.

It also seems unlikely that this phenomenon could lead to other than chaos 
if it were not governed at least to some extent by underlying rules that
limit the  "possible languages" or provide incentives for converging on
certain features that work well for human listeners.

For example, I recall a linguistics class in which someone reported on a
paper about "infixing" in English.  Almost all of English compounding uses
suffixes and prefixes, and words are almost never inserted in the middle of
another word ("infixing").  The only exceptions are "a whole 'nother", which
is probably the result of some internal confusion about whether "another"
is a single word or two words, and the infixing of foul language in words
(as in "in- f*cking -credible" or "fan- f*cking -tastic").  Curiously,
though, study of these shows that even there, in the peak of anger, with
no grammar teachers looking over, people don't just arbitrarily inject
these words into words.  They carefully place them before the stressed
syllable, because they have intuitive senses of what works best and what
does not.

So when people using natural languages go and create several or even
dozens of meanings for the same word (according to context), especially
without a resultant "mass [webster.com entry #4, definition 1b] public
[entry #1, def 1a] outcry [def 1b]", one is led to believe that this
is on the intuitively "approved list" of language building tools.

When people finally try to coordinate their actions, it makes sense that
the rules need to be tightened and _actual_ ambiguity goes out of the
system as much as possible.

The point, though, is that the potential for ambiguity can be removed
in any of several ways, but for some reason the first and only impulse
of certain people is to say that there is only one way.  A language
could have just one definition for each word (and so 3 or 10 times
as many words, if the language is as dense as English); one could have
separated namespaces, as human languages do; or one could do like the CLHS
and italicize/subscript words that are thought to be ambiguous, to require
special meaning clarification.  There are lots of ways of doing it.
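A minimal sketch (mine, not Pitman's) of the separated-namespaces option: a
Lisp-2 evaluator resolves a symbol against one of two environments depending
on whether it appears in operator or value position, which is how Common
Lisp lets `(let ((list '(1 2 3))) (list list list))` work.

```python
# Toy model (not from the thread): a Lisp-2 keeps two environments,
# so one symbol can safely name both a variable and a function.
variable_env = {"list": [1, 2, 3]}               # LIST as a variable
function_env = {"list": lambda *args: [*args]}   # LIST as a function

def lookup_value(symbol):
    # Symbol in value position: consult the variable namespace.
    return variable_env[symbol]

def lookup_function(symbol):
    # Symbol in operator position: consult the function namespace.
    return function_env[symbol]

# Modeling the Common Lisp form (list list list):
result = lookup_function("list")(lookup_value("list"), lookup_value("list"))
print(result)  # [[1, 2, 3], [1, 2, 3]]
```

In a Lisp-1 there would be a single environment, and binding the variable
`list` would shadow the function of the same name.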

I claim it's not a coincidence that many of the champions of a Lisp1
are also the ones claiming there's only one way to do things.  That
is, the overarching assumption seems to me to be that you can have a
single right way to do things, and that you can get away without
respecting or tolerating others, and Lisp1 seems to be only one of many
manifestations of this.  Lisp2 people not only tolerate a second
namespace, but also are not troubled that Lisp1 people are off happily
programming in Lisp1.  We don't say that it's unnatural for them to do
as they're doing, because we know there are multiple ways of doing things
and ours is just one way.  However, we routinely have to tolerate the
other camp nosing into our world and telling us that it's unnatural,
because the missing skill is, I think, not one of a specific linguistic
pattern but more generally one of just plain tolerance.
From: Erann Gat
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <gat-2505031457420001@192.168.1.51>
In article <···············@shell01.TheWorld.com>, Kent M Pitman
<······@world.std.com> wrote:

> we routinely have to tolerate the
> other camp [Lisp-1 advocates] nosing into our world
> and telling us that it's unnatural

I've been reading c.l.l. for a very long time, and my impression is that
the complaints about "the other camp nosing into our world" are much more
common than the actual event.  Since you say this is a routine occurrence,
would you mind citing a few recent examples?

E.
From: Kent M Pitman
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sfwu1bizbs1.fsf@shell01.TheWorld.com>
···@jpl.nasa.gov (Erann Gat) writes:

> I've been reading c.l.l. for a very long time, and my impression is that
> the complaints about "the other camp nosing into our world" are much more
> common than the actual event.  Since you say this is a routine occurrence,
> would you mind citing a few recent examples?

Fights over lisp1/lisp2 are common although this is a long-settled issue.
They usually come from people surprised that lisp2 is possible, and I 
perceive that such confusion must come from Scheme since a great many 
other common languages routinely tolerate multiple namespaces without 
comment.

By contrast, I've never been directed to a single message on comp.lang.lisp
agitating for it to be a lisp2.
From: Michael Park
Subject: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <ff20888b.0305252151.5456744e@posting.google.com>
Kent M Pitman <······@world.std.com> wrote:

> Fights over lisp1/lisp2 are common although this is a long-settled issue.


Saeed al-Shahhhaf <·····@information.gov.iq> wrote 

> We chased them... and they ran away...

I've been following this discussion very closely, but I haven't seen a
single valid argument in favor of Lisp2. Maybe I'm thick or something.
The only arguments I've seen were in the "it's not as bad as everyone
thinks", "you can get used to it if you really try" and "he made me do
it" categories.

P.S. This is a troll. Thanks for noticing.
From: Paolo Amoroso
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <6jnSPj2uvm56NDU3lPNT50yCc+6L@4ax.com>
[followup posted to comp.lang.lisp only]

On 25 May 2003 22:51:51 -0700, ···········@whoever.com (Michael Park)
wrote:

> Kent M Pitman <······@world.std.com> wrote:
> 
> > Fights over lisp1/lisp2 are common although this is a long-settled issue.
[...]
> I've been following this discussion very closely, but I haven't seen a
> single valid argument in favor of Lisp2. Maybe I'm thick or something.

In the early 1980s, a number of Lisp folks started gathering to design a
new Lisp dialect, which later became known as ANSI Common Lisp.

One of the most important things all those guys did was agree on a
voting-based process for deciding which features would be included in the
language and which would not.  Using that process, they decided that
the new language would be a Lisp-2 (more correctly, a Lisp-n) dialect.

That's it.  It's really that simple.

Everybody who doesn't like the outcome of this standardization process, or
who doesn't accept any kind of process for deciding which features should
go in a language, is welcome not to use ANSI Common Lisp, and maybe
design/use his favorite Lisp-1 dialect.


> P.S. This is a troll. Thanks for noticing.

You may want to listen to "New Trolls" music:

  http://www.aldebarannet.com


Paolo
-- 
Paolo Amoroso <·······@mclink.it>
From: pentaside asleep
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <7d98828b.0305262328.3c178701@posting.google.com>
Paolo Amoroso <·······@mclink.it> wrote in message news:<····························@4ax.com>...
> Everybody who doesn't like the outcome of this standardization process, or
> who doesn't accept any kind of process for deciding which features should
> go in a language, is welcome not to use ANSI Common Lisp, and maybe
> design/use his favorite Lisp-1 dialect.

To an outside observer, it appears that if flamewars concerning how
Lisp "should be" are unwanted, perhaps this newsgroup shouldn't be
named comp.lang.lisp.  No doubt this has come up before, but a quick
search turns up little.

This does not seem to be a matter of lisp-1 supporters staying on
c.l.scheme, since as I understand, Scheme was one of the first lisps
with lexical scoping, influencing CL.  Looks like there is a group of
people who are interested in discussing a lisp that builds on CL's
power, but with a modified syntax.
From: Kent M Pitman
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <sfwbrxofuc2.fsf@shell01.TheWorld.com>
················@yahoo.com (pentaside asleep) writes:

> Paolo Amoroso <·······@mclink.it> wrote in message news:<····························@4ax.com>...
> > Everybody who doesn't like the outcome of this standardization process, or
> > who doesn't accept any kind of process for deciding which features should
> > go in a language, is welcome not to use ANSI Common Lisp, and maybe
> > design/use his favorite Lisp-1 dialect.
> 
> To an outside observer, it appears that if flamewars concerning how
> Lisp "should be" are unwanted, perhaps this newsgroup shouldn't be
> named comp.lang.lisp.  No doubt this has come up before, but a quick
> search turns up little.
> 
> This does not seem to be a matter of lisp-1 supporters staying on
> c.l.scheme, since as I understand, Scheme was one of the first lisps
> with lexical scoping, influencing CL.  Looks like there is a group of
> people who are interested in discussing a lisp that builds on CL's
> power, but with a modified syntax.

The world moves ahead only by agreeing not to make holy wars out of the
arbitrary. 

Were it the case that this was the only minorly controversial area in CL,
it might be worth discussing.  However, it's not.  There are a truckload
of stupid little issues that come up from time to time that were decided 
in an arbitrary way that suit some people and not others.

We could spend time talking about why strings are not null-terminated;
about why the language syntax is case-translating or about why the
internal canonical case for symbols and pathnames is uppercase; about
why there is WHEN and UNLESS instead of just IF and COND or about why
there is WHEN and UNLESS and COND instead of just IF or about why
there is WHEN and UNLESS and IF instead of just COND; about why we
have GO or why we don't have CALL/CC; about why missing multiple values
default or about why additional multiple values are ignored; about why
we allow left to right order of evaluation instead of leaving it 
unspecified; about why CLOS subclassing produces a total order instead
of merely a partial order in underspecified situations; about why there
are hash tables for EQUALP but not for STRING-EQUAL nor =; about why there
are cartesian complexes but not polars; about why (COMPLEXP real) returns
true instead of false; about why we don't require IEEE arithmetic
or tail call elimination; about why SPECIAL variables are managed using
DECLARE instead of their own set of binding forms; and on and on.  Not
everything in CL is to everyone's liking.

But all we can really do is tell you why those things were decided, we 
cannot as a community go back and revisit each and every one of those
decisions.  To revisit only one is silly.  A language that was just like
CL but different in exactly one way would needlessly divide the community.
A language that was a lot different would just be another language, and
might appeal to people who liked a different design theory than CL has.

Telling someone to design their own language is NOT trying to put them off.
It is addressing the practical fact that every language has, at its core,
a political process [1] and a set of adherents who like its politics.  CL was
designed as an industrial strength language [2] for the purpose of people who
didn't want to see the language diddled to death, and its body of users is
best described (IMO) as people who care more about "having a way to do
things today" than "fighting forever about what the truly right way is" 
because they know in their hearts there never is any truly right way, and
that arguing is just a way of putting off an arbitrary decision anyway.
So matters are settled because it's more important to this community to
settle them than to bicker.

I cite in the real world the decisions to use 110V or 220V outlets as an
example of something relatively arbitrary where it was more important in a
given region to "just decide" and then build a lot of devices compatible
with the wall sockets than to have the "best" value and to hold off the
deployment of electricity everywhere until there was worldwide agreement.
I cite again the decision to go with VHS for video cassettes over the
(claimed to be technically superior) Betamax format.  What video store
owners REALLY wanted was to have a single inventory that would satisfy all
customers, not to have several unrelated stores for differing communities.

This is NOT to say that CL is the best language that could result from such
a process, but it is "good enough" and has done well.  The utility of having
the whole language turned upside down to satisfy some selfish (and that's
what it is) need of some person who wants to do one or two things differently
but otherwise likes the language is just not sufficient to justify doing it.

ON THE OTHER HAND, if that same person wants to go raise their own banner
and do their own work to make a language to whatever specs they like, I've
got no problem because then they are paying their dues by understanding and
taking on the true cost to the community of the major upheaval that results
from divergence.  Starting a new language is not easy, nor should it be.
Languages bring people together.  You can't have each person use a custom
language without either a common core and a system for organizing extensions
[in which case it's not really a different language] or else losing the
advantages of having brought people together.

Whether or not such discussion is welcome on comp.lang.lisp is another
matter.  As I understand Usenet guidelines, ARC is a new language that should
use this newsgroup until it gets enough traffic to need to diverge.  That is,
we shouldn't grow new newsgroups in anticipation of traffic that is not there.

But the spin on the message I see above is that we are being elitist in not
wanting to reconsider old decisions.  We are not.  We are simply (re)applying
the design guidelines that brought us the language at all.

CL resulted historically from people's (and DARPA's) lack of desire to see
myriad little almost-the-same dialects changing all the time because it kept
large-sized projects from getting built.

Incidentally, I want to add my feeling that this whole conversational
problem is a result of the natural but peculiar consequence of the
particular place we are in history.  We are only far enough out from
the early design of Lisp (and other things) that we can still keep in
our heads what would be involved in going back and doing things over
again, and so it "feels possible".  And we are confronted with people
who feel "left out" by not having gotten to do many of the things they
realistically feel they could have done.  People often feel similarly
when confronted with the history of other professions.  I mean, how
cool would it have been to get to be Newton (at least, the grade-school
version of him that you learn about before you realize he had to know
some math to do what he did) and to sit there and see an apple fall and say
"Wow. There must be gravity."  We feel like any of us could have done
the same, and we're annoyed that this person or that person got to do
it.  Once history has advanced far enough that there are hundreds of
years of computer science, especially given its rate of growth, I
suspect that the desire to start over will feel more like the choice
of an artist or a luddite than the choice of a serious engineer.  I
don't know if that's good or bad.  But I do know that we won't _get_
300 years out if we restart every decade because someone feels left
out and wants to tweak some matter of mere syntax.

Geez, I'd change almost everything about the Java language if I were
allowed to, and yet I think it's solid enough to be quite a stable
base for developing long-lived programs.  I see it as an assembly
language, not as a high-level language, but that's a detail that
doesn't affect its design, just how I would personally use it.  And
while I'm quite sure I could make some good suggestions about how to
change Java for the better, the true value of Java is not unlike the
value of CL -- its stability.  It would do the community no favors to
see my suggestions implemented at the expense of community stability.
If someone wanted to make something that was not Java but was like it,
I'd have fun helping, because that was a different language and an
appropriate place to play, absent the negative effects on a user base.
I'm suggesting no less to others in the CL community who want a place
to play: Either learn how to use CL's extensive facilities for
modifying your programming environment to your liking, or make your
own language... But don't ask everyone else to take on enormous
instability and risk just to see your own petty pet peeve soothed.

Lisp1/Lisp2 is not an issue the CL community simply failed to deal with.
We spawned a committee specifically to review the matter and we as a 
community voted we wanted to keep it as a Lisp2. [3]

Moreover, the ISO working group originally set out to change things in Lisp
it didn't like, and the Lisp1/Lisp2 issue again came up. Once again the 
issue resulted in Lisp2 semantics.

Scheme didn't start out by trying to change Lisp.  It made its own
community and attracted its own users.  But it didn't attract
everyone.  By culling the Lisp1 advocates, I think it's plain that
Scheme left a higher density of Lisp2 advocates in the CL camp.
You're welcome to make another dialect that does the same, and to hope
for a similar shift of user base.  But if you succeed wildly, don't be
surprised if a side-effect is that the CL camp will become even more
rigidly Lisp2. You're welcome to change your own users, but you shouldn't
expect to change someone else's users.  

And sometimes, you should notice that the language you're using isn't the
one you want to be using.

References:

 [1] "More Than Just Words / Lambda, the Ultimate Political Party" (1994)
     http://www.nhplace.com/kent/PS/Lambda.html

 [2] X3J13 Charter
     http://www.nhplace.com/kent/CL/x3j13-86-020.html

 [3] Technical Issues of Separation in Function Cells and Value Cells
     http://www.nhplace.com/kent/Papers/Technical-Issues.html
From: Michael Park
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <ff20888b.0305270852.677c58b9@posting.google.com>
I'm sorry, but I think your logic is faulty.  This is like saying "We
voted for Bush, now you cannot criticize him or his party.  We need to
stay united.  You are welcome to create your own country, however."  As
a matter of fact, it's worse.  Since Lisp <> ANSI CL, what you wrote is
logically akin to demanding that the GOP not be criticized on the
whole American continent.

P.S. Who voted for the guys who voted for Lisp2?

Kent M Pitman <······@world.std.com> wrote in message news:<···············@shell01.TheWorld.com>...
[...]
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-2705031116200001@k-137-79-50-101.jpl.nasa.gov>
It's actually even worse than that.  Kent opposes even people who "support
the administration" but think that the community needs to pull together
for any sort of collective change.  It's more like, "We wrote the
Constitution [the ANSI standard], and now you cannot pass any more laws. 
If you think something needs to be done (like, say, repair a bridge)
you're welcome to go do it yourself, but don't bother us with it."

E.

In article <····························@posting.google.com>,
···········@whoever.com (Michael Park) wrote:

> I'm sorry, but I think your logic is faulty. This is like saying "We
> voted for Bush, now you can not criticize him or his party. We need to
> stay united. You are welcome to create your own country, however". As
> a matter of fact, it's worse. Since Lisp <> ANSI CL, what you wrote is
> logically akin to demanding that GOP should not be criticized on the
> whole American continent.
> 
> P.S. Who voted for the guys who voted for Lisp2?
> 
> Kent M Pitman <······@world.std.com> wrote in message
news:<···············@shell01.TheWorld.com>...
> > ················@yahoo.com (pentaside asleep) writes:
> > 
> > > Paolo Amoroso <·······@mclink.it> wrote in message
news:<····························@4ax.com>...
> > > > Everybody who doesn't like the outcome of this standardization
process, or
> > > > who doesn't accept any kind of process for deciding which features
should
> > > > go in a language, is welcome not to use ANSI Common Lisp, and maybe
> > > > design/use his favorite Lisp-1 dialect.
> > > 
> > > To an outside observer, it appears that if flamewars concerning how
> > > Lisp "should be" are unwanted, perhaps this newsgroup shouldn't be
> > > named comp.lang.lisp.  No doubt this has come up before, but a quick
> > > search turns up little.
> > > 
> > > This does not seem to be a matter of lisp-1 supporters staying on
> > > c.l.scheme, since as I understand, Scheme was one of the first lisps
> > > with lexical scoping, influencing CL.  Looks like there is a group of
> > > people who are interested in discussing a lisp that builds on CL's
> > > power, but with a modified syntax.
> > 
> > The world moves ahead only by agreeing not to make holy wars out of the
> > arbitrary. 
> > 
> > Were it the case that this was the only minorly controversial area in CL,
> > it might be worth discussing.  However, it's not.  There are a truckload
> > of stupid little issues that come up from time to time that were decided 
> > in an arbitrary way that suit some people and not others.
> > 
> > We could spend time talking about why strings are not null-terminated;
> > about why the anguage syntax is case-translating or about why the
> > internal canonical case for symbols and pathnames is uppercase; about
> > why there is WHEN and UNLESS instead of just IF and COND or about why
> > there is WHEN and UNLESS and COND instead of just IF or about why
> > there is WHEN and UNLESS and IF instead of just COND; about why we
> > have GO or why we don't have CALL/CC; about why missing multiple values
> > default or about why additional multiple values are ignored; about why
> > we allow left to right order of evaluation instead of leaving it 
> > unspecified; about why CLOS subclassing produces a total order instead
> > of merely a partial order in underspecified situatios; about why there
> > are hash tables for EQUALP but not for STRING-EQUAL nor =; about why there
> > are cartesian complexes but not polars; about why (COMPLEXP real) returns
> > true instead of false; about why we don't require IEEE arithmetic
> > or tail call elimination; about why SPECIAL variables are managed using
> > DECLARE instead of their own set of binding forms; and on and on.  Not
> > everything in CL is to everyone's liking.
> > 
> > But all we can really do is tell you why those things were decided, we 
> > cannot as a community go back and revisit each and every one of those
> > decisions.  To revisit only one is silly.  A language that was just like
> > CL but different in exactly one way would needlessly divide the community.
> > A language that was a lot different would just be another language, and
> > might appeal to people who liked a different design theory than CL has.
> > 
> > Telling someone to design their own language is NOT trying to put them off.
> > It is addressing the practical fact that every language has, at its core,
> > a political process [1] and a set of adherents who like its politics. 
CL was
> > designed as an industrial strength language [2] for the purpose of
people who
> > didn't want to see the language diddled to death, and its body of users is
> > best described (IMO) as people who care more about "having a way to do
> > things today" than "fighting forever about what the truly right way is" 
> > because they know in their hearts there never is any truly right way, and
> > that arguing is just a way of putting off an aribtrary decision anyway.
> > So matters are settled becuase it's more important to this community to
> > settle them than to bicker.
> > 
> > I cite in the real world the decisions to use 110V or 220V outlets as an
> > example of something relatively arbitrary where it was more important in a
> > given region to "just decide" and then build a lot of devices compatible
> > with the wall sockets than to have the "best" value and to hold off the
> > deployment of electricity everywhere until there was worldwide agreement.
> > I cite again the decision to go with VHS for video cassettes over the
> > (claimed to be more technically superior) BetaMax format.  What video store
> > owners REALLY wanted was to have a single inventory that would satisfy all
> > customers, not to have several unrelated stores for differing communities.
> > 
> > This is NOT to say that CL is the best language that could result from such
> > a process, but it is "good enough" and has done well.  The utility of having
> > the whole language turned upside down to satisfy some selfish (and that's
> > what it is) need of some person who wants to do one or two things differently
> > but otherwise likes the language is just not sufficient to justify doing it.
> > 
> > ON THE OTHER HAND, if that same person wants to go raise their own banner
> > and do their own work to make a language to whatever specs they like, I've
> > got no problem because then they are paying their dues by understanding and
> > taking on the true cost to the community of the major upheaval that results
> > from divergence.  Starting a new language is not easy, nor should it be.
> > Languages bring people together.  You can't have each person have a custom
> > language without either a common core and a system for organizing extensions
> > [in which case it's not a different language] or  else without losing the
> > advantages of having brought people together.
> > 
> > Whether or not such discussion is welcome on comp.lang.lisp is another
> > matter.  As I understand Usenet guidelines, ARC's a new language that should
> > use this list until it gets enough traffic to need to diverge.  That is, 
> > we shouldn't grow new newsgroups in anticipation of traffic that is not there.
> > 
> > But the spin on the message I see above is that we are being elitist in not
> > wanting to reconsider old decisions.  We are not.  We are simply (re)applying
> > the design guidelines that brought us the language at all.
> > 
> > CL resulted historically from people's (and DARPA)'s lack of desire to see
> > myriad little almost-the-same dialects changing all the time because it kept
> > large-sized projects from getting built.
> > 
> > Incidentally, I want to add my feeling that this whole conversational
> > problem is a result of the natural but peculiar consequence of the
> > particular place we are in history.  We are only far enough out from
> > the early design of Lisp (and other things) that we can still keep it
> > all in our head what would be involved to go back and do things over
> > again, and so it "feels possible".  And we are confronted with people
> > who feel "left out" by not having gotten to do many of the things they
> > realistically feel they could have done.  People often feel similarly
> > when confronted with history of other professions.  I mean, how cool
> > would it have been to get to be Newton (at least, the grade school
> > version of him that you learn before you realize he had to know some
> > math to do what he did) and to sit there and see an apple fall and say
> > "Wow. There must be gravity."  We feel like any of us could have done
> > the same, and we're annoyed that this person or that person got to do
> > it.  Once history has advanced far enough that there are hundreds of
> > years of computer science, especially given its rate of growth, I
> > suspect that the desire to start over will feel more like the choice
> > of an artist or a luddite than the choice of a serious engineer.  I
> > don't know if that's good or bad.  But I do know that we won't _get_
> > 300 years out if we restart every decade because someone feels left
> > out and wants to tweak some matter of mere syntax.
> > 
> > Geez, I'd change almost everything about the Java language if I were
> > allowed to, and yet I think it's solid enough to be quite a stable
> > base for developing long-lived programs.  I see it as an assembly
> > language, not as a high-level language, but that's a detail that
> > doesn't affect its design, just how I would personally use it.  And
> > while I'm quite sure I could make some good suggestions about how to
> > change Java for the better, the true value of Java is not unlike the
> > value of CL -- its stability.  It would do the community no favors to
> > see my suggestions implemented at the expense of community stability.
> > If someone wanted to make something that was not Java but was like it,
> > I'd have fun helping, because that was a different language and an
> > appropriate place to play, absent the negative effects on a user base.
> > I'm suggesting no less to others in the CL community who want a place
> > to play: Either learn how to use CL's extensive facilities for
> > modifying your programming environment to your liking, or make your
> > own language... But don't ask everyone else to take on enormous
> > instability and risk just to see your own petty pet peeve soothed.
> > 
> > Lisp1/Lisp2 is not just an issue the CL community did not deal with.
> > We spawned a committee specifically to review the matter and we as a 
> > community voted we wanted to keep it as a Lisp2. [3]
> > 
> > Moreover, the ISO working group originally set out to change things in Lisp
> > it didn't like, and the Lisp1/Lisp2 issue again came up. Once again the 
> > issue resulted in Lisp2 semantics.
> > 
> > Scheme didn't start out by trying to change Lisp.  It made its own
> > community and attracted its own users.  But it didn't attract
> > everyone.  By culling the Lisp1 advocates, I think it's plain that
> > Scheme left a higher density of Lisp2 advocates in the CL camp.
> > You're welcome to make another dialect that does the same, and to hope
> > for a similar shift of user base.  But if you succeed wildly, don't be
> > surprised if a side-effect is that the CL camp will become even more
> > rigidly Lisp2. You're welcome to change your own users, but you shouldn't
> > expect to change someone else's users.  
> > 
> > And sometimes, you should notice that the language you're using isn't the
> > one you want to be using.
> > 
> > References:
> > 
> >  [1] "More Than Just Words / Lambda, the Ultimate Political Party" (1994)
> >      http://www.nhplace.com/kent/PS/Lambda.html
> > 
> >  [2] X3J13 Charter
> >      http://www.nhplace.com/kent/CL/x3j13-86-020.html
> > 
> >  [3] Technical Issues of Separation in Function Cells and Value Cells
> >      http://www.nhplace.com/kent/Papers/Technical-Issues.html
From: Kent M Pitman
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <sfw4r3gjibv.fsf@shell01.TheWorld.com>
[ replying to comp.lang.lisp only
  http://www.nhplace.com/kent/PFAQ/cross-posting.html ]

···@jpl.nasa.gov (Erann Gat) writes:

> It's actually even worse than that.  Kent opposes even [...]

Erann,

The Scheme community is entitled to its own sensibilities, and as a
rule I try not to engage public debates there.

Cross-posting selects the union of people who care about two topics,
not the intersection.  I won't debate it out of context there.

I would appreciate it if you would confine your (mis)characterizations 
of my position to a forum where I have elected to be a public persona.
I'm not going to go there to fix the mess you guys have made.  I guess
that means my name will remain undefended.  With luck, conscience will
strike and one or both of you will retract your remarks to that venue,
optionally also inviting anyone over there that cares about CL politics to 
come here to debate it.

As it is, I think a unified debate will just bring a lot of uninformed
people to the table and cause a lot of people from each community 
with no real stake in the other's community to whine uselessly with
little hope of positive outcome.
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-2705031240340001@k-137-79-50-101.jpl.nasa.gov>
In article <···············@shell01.TheWorld.com>, Kent M Pitman
<······@world.std.com> wrote:

> Cross-posting selects the union of people who care about two topics,
> not the intersection.  I won't debate it out of context there.

Sorry, I wasn't paying attention and so didn't notice that I was cross-posting.

> I would appreciate it if you would confine your (mis)characterizations
> of my position to a forum where I have elected to be a public persona.

I don't believe I was mischaracterizing your position, and if you like I
will go back and look up specific quotes from you to back up my view, but
this is an orthogonal issue to the cross-posting.  What would you like me
to do about that at this point?

E.
From: Kent M Pitman
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <sfwhe7gi27n.fsf@shell01.TheWorld.com>
···@jpl.nasa.gov (Erann Gat) writes:

> I don't believe I was mischaracterizing your position,

Well, you said the situation was "worse" than what the person before 
you said, who I'd already dismissed as just flamebait.

Why is it necessary to characterize my position at all given that I'm
already a participant in the discussion?

> I will go back and look up specific quotes from you to back up my view

That'd be helpful, if only so I can clarify if there's some misunderstanding.

> this is an orthogonal issue to the cross-posting.

Yes.

> What would you like me to do about that at this point?

Urge people not to cross-post replies but rather to just follow up to here
if they care, which I suspect most do not.
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-2705031335160001@k-137-79-50-101.jpl.nasa.gov>
In article <···············@shell01.TheWorld.com>, Kent M Pitman
<······@world.std.com> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > I don't believe I was mischaracterizing your position,
> 
> Well, you said the situation was "worse" than what the person before 
> you said, who I'd already dismissed as just flamebait.

I don't think Michael Park's comment was flamebait, or at least I didn't
think it was intended to be.  I thought it was a sober statement of a
defensible position, and one with which I happen to agree.

As for characterising the situation as "worse", I inferred that Michael
wrote what he did because he thought it was a bad situation, not a good
one.  Insofar as I was saying that IMO the true situation was even more
extreme than he described, it is indeed "worse".  But that is just a
statement of my opinion about the situation, not an attack on you.

BTW, I actually agree with most of what you wrote in the posting that
Michael was responding to, and that a lot of time gets wasted arguing over
settled issues.  However, I think a lot more time gets wasted trying to
quash those arguments when they arise, as they inevitably will regardless
of how much you try to put a stop to them.  I think a much more effective
strategy for making those sorts of things go away is to simply ignore
them, or write up a definitive FAQ about these things and just point
people to it.

> Why is it necessary to characterize my position at all given that I'm
> already a participant in the discussion.

As I said in another branch of this thread, this was not a statement about
your position, but rather a statement about your (past and ongoing)
actions.  The reason I brought it up is that 1) Michael didn't seem to be
aware of the true magnitude of the situation, and 2) I had a suspicion
that you might be unaware of how you were being perceived, and this
discussion just seemed like an opportune vehicle for pointing it out. 
(The fact that you reacted the way you did confirms my suspicions.)

> > I will go back and look up specific quotes from you to back up my view
> 
> That'd be helpful, if only so I can clarify if there's some misunderstanding.

See my other posting.

> > What would you like me to do about that at this point?
> 
> Urge people not to cross-post replies but rather to just follow up to here
> if they care, which I suspect most do not.

OK, everyone, you are hereby so urged.  And I hereby sentence myself to
writing C++ code for the rest of the day (Ooh! Harsh! ;-) for not paying
more attention to this myself.

E.
From: Kent M Pitman
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <sfwznl8i3j0.fsf@shell01.TheWorld.com>
[ replying to comp.lang.lisp only
  http://www.nhplace.com/kent/PFAQ/cross-posting.html ]

···@jpl.nasa.gov (Erann Gat) writes:

> It's actually even worse than that.  Kent opposes even people who "support
> the administration" but think that the community needs to pull together
> for any sort of collective change.  It's more like, "We wrote the
> Constitution [the ANSI standard], and now you cannot pass any more laws. 
> If you think something needs to be done (like, say, repair a bridge)
> you're welcome to go do it yourself, but don't bother us with it."

I'd prefer to say what I oppose and don't, I.

I do think, though, that it's improper to propose changes in the language
without one or more implementations having implemented it.  In general, that
was a standard feature of the X3J13 process.  We tried not to make untested
changes.  That's what's meant by "codifying existing practice" in the X3J13
Charter.  http://www.nhplace.com/kent/CL/x3j13-86-020.html

The idea was that standards are not places for experimentation.  
Experimentation can go wrong, and standards are too hard to back out of.
Experiments should be run in implementations.  When multiple implementations
have a way of doing something, then there's motivation for a standard so that
everyone doesn't use gratuitously different ways of doing the same thing.
THEN a standard is needed.

And even then, standards don't have to start over from scratch.  They can
be layered atop.  I have plans for doing precisely that which I am actually
actively working on this week ... when not defending my good name.

Sigh.
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-2705031318270001@k-137-79-50-101.jpl.nasa.gov>
In article <···············@shell01.TheWorld.com>, Kent M Pitman
<······@world.std.com> wrote:

> [ replying to comp.lang.lisp only
>   http://www.nhplace.com/kent/PFAQ/cross-posting.html ]
> 
> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > It's actually even worse than that.  Kent opposes even people who "support
> > the administration" but think that the community needs to pull together
> > for any sort of collective change.  It's more like, "We wrote the
> > Constitution [the ANSI standard], and now you cannot pass any more laws. 
> > If you think something needs to be done (like, say, repair a bridge)
> > you're welcome to go do it yourself, but don't bother us with it."
> 
> I'd prefer to say what I oppose and don't, I.

That was a statement about your actions, not about your intentions, and as
such is a matter of objective verifiable fact:  I "support the
administration" in  that I am and have been a long-time supporter of
Common Lisp.  However, I also believe that Common Lisp has some problems
that require collective change to fix.  When I have proposed such changes,
you have opposed me, and not just the specific changes I was proposing,
but rather the whole idea of change, even going so far as to say that
Common Lisp needs to "protect itself against people like you", which is to
say (one could reasonably infer) people advocating change.

> I do think, though, that it's improper to propose changes in the language
> without one or more implementations having implemented it.

You are entitled to your opinion.  I am entitled to mine.  My opinion is
that it is entirely proper to propose anything that one wishes to propose
at any time under any circumstances.  I would agree with you that it would
be unwise to _enact_ a proposal that had not been implemented, but that is
not what you said.

> The idea was that standards are not places for experimentation.  

Yes, I agree.  But merely making proposals, even experimental ones, poses
no threat to the standard.

> And even then, standards don't have to start over from scratch.  They can
> be layered atop.  I have plans for doing precisely that which I am actually
> actively working on this week ...

That's wonderful, but how did you expect me to know that?

> when not defending my good name.

I regret that you saw it as an attack on your good name.  That was not my
intention.  (BTW, I am truly puzzled why you perceived this as an attack. 
Your position is arrived at through sound reasoning, but starting with
premises with which I happen to disagree, to wit, that CL as it stands is
healthy.  If one accepts your premise then your actions are not at all
unreasonable, and I am baffled why you would perceive a simple statement
about those actions as an attack.)

E.
From: Mario S. Mommer
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <fzznl7bk2u.fsf@cupid.igpm.rwth-aachen.de>
···@jpl.nasa.gov (Erann Gat) writes:

> It's actually even worse than that.  Kent opposes even people who "support
> the administration" but think that the community needs to pull together
> for any sort of collective change.  It's more like, "We wrote the
> Constitution [the ANSI standard], and now you cannot pass any more laws. 
> If you think something needs to be done (like, say, repair a bridge)
> you're welcome to go do it yourself, but don't bother us with it."

Yes. We need to change the constitution so we can build a bridge.

Incredible.
From: Marc Spitzer
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <861xyjcusf.fsf@bogomips.optonline.net>
Mario S. Mommer <········@yahoo.com> writes:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > It's actually even worse than that.  Kent opposes even people who "support
> > the administration" but think that the community needs to pull together
> > for any sort of collective change.  It's more like, "We wrote the
> > Constitution [the ANSI standard], and now you cannot pass any more laws. 
> > If you think something needs to be done (like, say, repair a bridge)
> > you're welcome to go do it yourself, but don't bother us with it."
> 
> Yes. We need to change the constitution so we can build a bridge.
> 
> Incredible.

ok,

It seems that both of you do not understand how the Constitution works;
you should if you want to use it in an analogy.  The rules are pretty
simple:

1: the Constitution wins all disputes with laws
2: see 1

Now if you want to use the above analogy, then the ANSI standard
would be the Constitution.

The vendor extensions would be laws

And the proposed standards, when there are some, would be 
amendments in the process of being ratified.

Now the one condition on the layered standards that is not
on real amendments is that they do not conflict/replace the 
base standard.

Now on to the let's-build-a-bridge idea.  Whenever the
people on CLS and CLL get together sparks seem to fly.
So bringing the 2 camps closer does not seem like a smart
thing to do.  They are happy over there and we are happy 
over here, why do we have to be unhappy together?

Now on to lisp2 vs lisp1: if you want to do lisp1 in CL, knock
yourself out.  Call all temporary lists lst, and so on, so as not
to conflict with existing function names.  But getting rid of 
lisp2 as part of CL is not going to happen for 2 obvious reasons:
1: it would break a huge amount of code.
2: it would break a huge amount of code.  

This would tend to piss people way the fuck off and help
*KILL* the language.  
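
[Editor's note: the two-namespace behavior Marc is describing can be
sketched with a toy evaluator -- a hypothetical Python illustration,
not anything from this thread -- that keeps separate function and
value cells per symbol, which is why a name like LIST can be both a
function and a variable in a Lisp-2 without conflict:]

```python
# Toy environment with separate "cells" per symbol, as in a Lisp-2.
# A Lisp-1 would keep a single namespace for both roles, so binding
# a variable named "list" would shadow the function of the same name.

class Lisp2Env:
    def __init__(self):
        self.function_cell = {}  # name -> callable
        self.value_cell = {}     # name -> value

    def defun(self, name, fn):
        self.function_cell[name] = fn

    def setq(self, name, value):
        self.value_cell[name] = value

    def call(self, name, *args):
        # Head position of a form: consult the FUNCTION cell only.
        return self.function_cell[name](*args)

    def value(self, name):
        # Argument position: consult the VALUE cell only.
        return self.value_cell[name]

env = Lisp2Env()
env.defun("list", lambda *args: list(args))  # LIST the function
env.setq("list", [1, 2, 3])                  # LIST the variable

# (list list) in a Lisp-2: function cell for the head, value cell
# for the argument -- the two uses never collide.
result = env.call("list", env.value("list"))
print(result)  # [[1, 2, 3]]
```
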

Whether or not lisp2 is the best answer to the problem is beside
the point currently.  The point is that it is good enough and that
changing it in a non backward compatible manner would break a huge
amount of code.

Remember that "perfection" and "done" are a contradiction.  But
you can accomplish a lot of stuff with "good enough" or "this is
as good as we can make it in the time/budget available".  

marc
From: Mario S. Mommer
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <fzr86jb96u.fsf@cupid.igpm.rwth-aachen.de>
Marc Spitzer <········@optonline.net> writes:
> Mario S. Mommer <········@yahoo.com> writes:
> > ···@jpl.nasa.gov (Erann Gat) writes:
> > > It's actually even worse than that.  Kent opposes even people
> > > who "support the administration" but think that the community
> > > needs to pull together for any sort of collective change.  It's
> > > more like, "We wrote the Constitution [the ANSI standard], and
> > > now you cannot pass any more laws.  If you think something needs
> > > to be done (like, say, repair a bridge) you're welcome to go do
> > > it yourself, but don't bother us with it."
> > 
> > Yes. We need to change the constitution so we can build a bridge.
> > 
> > Incredible.
> 
> ok,
> 
> It seems that both of you do not understand how the Constitution works,
> you should if you want to use it in an analogy.  The rules are pretty
> simple:
[analysis snipped]

I was being sarcastic. Erann has been arguing with incredibly
faulty logic. He is not dumb, so I cannot but see ill will in his
actions.

He just wants to get his pet idea into the standard, even though he is
the only one who cares about the issues he is rising, and is ready to
spill FUD and misrepresent what other people have said if he doesn't
get it his way (which he won't, afaict).

> Remember that "perfection" and "done" are a contradiction.  But
> you can accomplish a lot of stuff with "good enough" or "this is
> as good as we can make it in the time/budget available".  

I agree with you.

Regards,
        Mario.
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-2805030821490001@192.168.1.51>
In article <··············@cupid.igpm.rwth-aachen.de>, Mario S. Mommer
<········@yahoo.com> wrote:

> I was being sarcastic. Erann has been arguing with an incredibly
> faulty logic. He is not dumb, so I cannot but see ill will in his
> actions.

Well, thanks for at least granting that I'm not dumb.

> He just wants to get his pet idea into the standard,

No, that is not true.  (And given that you are mistaken about this perhaps
you will extend some charity to my logic and my motivations.)

I want there to be a mechanism by which the community can act collectively
to effect change in the language.

I introduced a particular change as a test case mainly to take the issue
out of the abstract.  I don't care very much about that particular change
(and I have in fact stopped advocating it).  I do believe that some change
to how dynamic binding is done would be pedagogically useful, but that
issue is dominated by the fact that at the moment there seems to be no
mechanism for collectively managing change -- any change -- in the
language, except to resist it at all costs.

E.
From: Kent M Pitman
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <sfwznl6uapo.fsf@shell01.TheWorld.com>
···@jpl.nasa.gov (Erann Gat) writes:

> I want there to be a mechanism by which the community can act collectively
> to effect change in the language.

This is the core of the problem in this discussion.

I want a mechanism by which the community can act collectively to solve
its perceived problems.

Changing the language is a possible solution, not a possible problem
description.  Once you admit "I can't change the language to do x"
as a possible problem description, you are forced to change the language
to satisfy any such person.  Problem descriptions should not be worded this 
way.
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-2805031359120001@k-137-79-50-101.jpl.nasa.gov>
In article <···············@shell01.TheWorld.com>, Kent M Pitman
<······@world.std.com> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > I want there to be a mechanism by which the community can act collectively
> > to effect change in the language.
> 
> This is the core of the problem in this discussion.
> 
> I want a mechanism by which the community can act collectively to solve
> its perceived problems.

Yes, this is indeed the core.

Some people in the community are well served by stability, and others
(like me) are harmed by it.  The issue is what do we do when different
people perceive different problems that need solving.

> Changing the language is a possible solution, not a possible problem
> description.

My problem description is: I can't use Lisp because my management
perceives it as too far out of the mainstream.  (And I don't think my
situation is unique.)

Is that OK as a problem description?

Your solution is (as I understand it): simply wait.  Lisp is out of the
mainstream because it was ahead of its time (and because of AI winter),
and sooner or later the world will realize this and Lisp's popularity will
increase as part of the natural course of things.

My position is: we have already waited ten years.  I see some indication
that things are changing, but not much.  The world seems destined to
reinvent Lisp badly (cf. Java, XML, etc. etc.) rather than actually
convert to using Lisp.  I would rather try to win over the world by
reinventing Lisp well rather than pin my hopes on the rest of the world
coming around.

E.
From: Kent M Pitman
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <sfwwugaof6i.fsf@shell01.TheWorld.com>
···@jpl.nasa.gov (Erann Gat) writes:

> My problem description is: I can't use Lisp because my management
> perceives it as too far out of the mainstream.  (And I don't think my
> situation is unique.)
> 
> Is that OK as a problem description?

No.

Because it puts the standard at the whim of people who don't like
Common Lisp.  Language design is not a popularity contest.

Moreover, from conversations I've had with you, I'm doubtful that
anything other than a seriously major overhaul of everything would do
much of any good for your situation.  Yet you are conveying your
proposals as if they were modest.  What I think you're really doing is
getting a foot in the door, and then planning to drive that wedge
wide.  I therefore think you're being disingenuous by suggesting small
changes--it suggests that you're just asking for something modest and
that I'm standing in your way, when in fact I think you want something
bigger than we can offer, and I see no reason therefore to even begin
down that path.  No, I don't control things; this is just a statement
of personal assessment.

Also, it seems to me that you could be substantially helped in your work
by compiling down to something smaller than full CL.  This could be done
by careful subsetting AND a pre-processor.  The advantage of that is that
you _could_ change the surface language by writing your own macros, that
were permitted to expand into non-forms such as ((f ...) ...) and that
were corrected by the preprocessor after expansion of your own macros.
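
[Editor's note: the post-expansion fixup Kent describes can be
sketched roughly as follows -- a hypothetical Python toy over
s-expressions represented as nested lists, not Kent's actual tool.
It walks expanded code and rewrites non-forms like ((f ...) ...),
which standard CL rejects, into the legal (funcall (f ...) ...):]

```python
# Minimal sketch of a preprocessor pass that runs after macro
# expansion: any compound expression found in head position is
# wrapped with FUNCALL so the result is a valid CL form.

def fix_heads(form):
    if not isinstance(form, list) or not form:
        return form            # atoms and () pass through unchanged
    fixed = [fix_heads(sub) for sub in form]
    if isinstance(fixed[0], list):
        # ((f ...) args...) is not a legal CL form; splice in FUNCALL.
        return ["funcall"] + fixed
    return fixed

# ((compose "f" "g") x)  -->  (funcall (compose "f" "g") x)
expanded = [["compose", "f", "g"], "x"]
print(fix_heads(expanded))  # ['funcall', ['compose', 'f', 'g'], 'x']
```

The same walk applies recursively, so a compound head nested anywhere
in the expansion gets corrected, which is the point of doing the fixup
centrally in a preprocessor rather than in each user macro.
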

> Your solution is (as I understand it): simply wait.  Lisp is out of the
> mainstream because it was ahead of its time (and because of AI winter),

I have never suggested this.  I don't know where you got that.
I think Lisp was knocked out of the mainstream by AI winter.
I think Lisp has struggled to make it back.
I have never advocated a policy of "just wait".

> and sooner or later the world will realize this and Lisp's popularity will
> increase as part of the natural course of things.

No, I have not said that.

What I have said is that none of the things you're suggesting is likely
to help.

What holds Lisp back is not its syntax or even its semantics.  It is its
history.

I think for some organizations it is not possible at all to gain
acceptance, period, because some people are permanently opposed to the
name Lisp.  Changing the name might not be a terrible idea.  Dylan and
Scheme did this with moderate success and were allowed a new lease on
life during which they have lived or died on their own merits rather than
the legacy of another language.

I wouldn't be surprised if NASA is in this class, in that it has
successfully used Lisp, yet the fact that it has done so does not seem
to guarantee Lisp's presence. I have to conclude, therefore, that the
problems are not technical but political.  Political problems will not be
fixed by making diddly changes to language semantics.  People who hate
Lisp will say it is still Lisp.

I think in other cases, I have said that maybe some day people will make
applications that demonstrate the power of Lisp.  This is also not the
same as me saying "wait".  It is me saying "stop whining and go do" (the
entity addressed in this imperative being the whole community generally,
not you personally).

If you have to sum up my position, please sum it up as "don't wait" not
"wait".

> My position is: we have already waited ten years.

If this were true, then you misunderstood me.  I didn't say "wait".

However, I don't think it's true that you have waited.  I think you 
misstate yourself here, too.  My understanding is that you have been
involved in the use of Lisp and that such use was successful.  It's
too bad that NASA itself apparently does not see the value in that 
success, but people outside of NASA were greatly heartened by that 
success and perhaps you are just in the wrong place now politically.

In these economic times, I'm not going to tell you to turn down a steady
paycheck nor the classy name of NASA in your email address, however I am
going to say that if a demonstrated success does not carry weight 
internally over what we'll call "other factors", then I think you've
mostly lost.  We can (probably best in another venue) strategize other
more low-tech approaches, but I honestly think that using this as a guide
for how to redesign the language is not worthwhile.

Moreover, if you really are going to redesign the language, I continue
to strongly suggest that you just design something from scratch, NOT
because I am casting you out, but because I am trying to streamline the
path between where you stand and success.  Please stop thinking that this
is a power play to get you to shut up.  This is an attempt to enable you.

> I see some indication
> that things are changing, but not much.

Not enough for you, perhaps.  I think others read it differently.
Not all others, but enough that I'm heartened.

I think your situation is not uncommon. There are many organizations
permanently prejudiced against Lisp.  However, in none of those
organizations do I expect to make headway by changing the language.
Why? Because the "Lisp" they are prejudiced against is probably "Franz
Lisp" or "Maclisp" or "Rutgers Lisp" or "VM Lisp" or "Lisp 1.5" or ...
and they don't even know that CLTL came along, much less ANSI CL.  And
they don't care.  All they care is that it still has the telltale
parentheses so they can recognize and sabotage it on sight (and on
site).  If the changes since those times when they developed their
prejudice haven't impressed them, I don't think there's any hope in
that organization other than to "wait" for the death of that or those
roadblock employees.  And even then I'm not going to advocate that
kind of "wait" just because I don't think it's proper to wish someone,
even someone you don't like, would hurry up and die (nor even just get
laid off, frankly).

> The world seems destined to reinvent Lisp badly (c.f. Java, XML,
> etc. etc.)  rather than actually convert to using Lisp.

Perhaps so.  Incidentally, the SGML community is saying the same about
the reinvention of SGML through XML, Multics said the same about the
reinvention of Multics--such as it was--through Unix, and so on.  This
is the way of the world.  Even within our own community, we reinvented
the Lisp Machine badly by taking only half of it into CL because it
was all some could tolerate at the time, even though now some of those
same people regret not bringing more.  When enlightenment strikes of
what was lost, don't expect XML fans to go back to SGML, CL programmers
to go back to LispMs, or Unix folks to go back to Multics.

>I would rather try to win over the world by
> reinventing Lisp well rather than pin my hopes on the rest of the world
> coming around.

I have never said a word to keep you from doing this. I have only
sincerely and repeatedly recommended that you would be infinitely
better off doing this as a new political party rather than by trying
to convince this existing political party to change its planks.  EVEN
IF you think this party is ripe to have its planks changed, it doesn't
mean they're going to go your direction.  You're behaving somewhat
like this is a number line with only one direction to go, when in fact
it's a multi-d space where the result of "change" may be a motion in
space that leaves you no closer to "acceptable space" than when you
started.  However, if you take control of your own private process,
you have a LOT more control of your own destiny.  And when you achieve
project success, if you do--you're taking the risk upon yourself--you
can push the tool you've used.  Nothing wrong even with finding allies
here to do a private project if you can find people who will truly
help and not hamper you.  But don't assume that everyone here, even
advocates of change, is your ally.
From: Pascal Costanza
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <costanza-4ED832.04544629052003@news.netcologne.de>
In article <···············@shell01.TheWorld.com>,
 Kent M Pitman <······@world.std.com> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > My problem description is: I can't use Lisp because my management
> > perceives it as too far out of the mainstream.  (And I don't think my
> > situation is unique.)
> > 
> > Is that OK as a problem description?
> 
> No.
> 
> Because it puts the standard at the whim of people who don't like
> Common Lisp.  Language design is not a popularity contest.

Language design is purely a popularity contest. If you don't manage to 
get your language design accepted by your target audience then you have 
failed.

The real question is: What is your target audience?

I think the whole discussion can be distilled to this question. One 
group of people is concerned mainly about the current users of Common 
Lisp while the other group is concerned more about the prospective 
future users of Common Lisp. (And, of course, there are people in 
between...)


Pascal
From: Kent M Pitman
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <sfwd6i2s7i0.fsf@shell01.TheWorld.com>
Pascal Costanza <········@web.de> writes:

> Language design is purely a popularity contest. If you don't manage to 
> get your language design accepted by your target audience then you have 
> failed.

By this metric, CL's technical problems can be fixed by simply gutting it
and replacing it with the text of any language that has more users.  This
seems a weak theory of guidance.

It also does not allow for niche languages nor visionary languages.

As a matter of personal practice, I try never to make arguments based
on happenstance.  For example, if I'm working for a vendor that makes
the fastest Lisp, I still don't encourage people to buy it for that
reason alone.  I might say "Buy it because it's fast." but never "Buy
it because it's fastest."  If I assert the latter, I'm saying that if
someone ever exceeds our speed, you should buy them instead. I simply
don't agree with that.  If CL had more users, I would not say it was
best on that basis.  Because it has fewer users, I will not say it is
worst on that basis.  The number of users is not a property of the
language.  It is an interesting statistic to look at, but that's all it
is--a statistic.

  "There are three kinds of lies: lies, damned lies and statistics."
  --Benjamin Disraeli and/or Mark Twain
    http://www.bartleby.com/66/99/16799.html

> I think the whole discussion can be distilled to this question. One 
> group of people is concerned mainly about the current users of Common 
> Lisp while the other group is concerned more about the prospective 
> future users of Common Lisp. (And, of course, there are people in 
> between...)

To say that the Common Lisp design was not aimed at the future is 
not only untrue, but is laughably so.

This is a common rhetorical device in political forums, used by people
who don't like the direction things are going to say that they are the
voice of the future.

Languages have many possible futures, and what you are really saying is
that you devalue the goals of the original designers.  This is a legitimate
position to take politically.  But don't insult the designers by saying 
they had no thought of nor care about the future.  We care about it as deeply
as you, we simply don't agree with you about what the issues are that matter
for language survival.
From: Janis Dzerins
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <twku1bechxd.fsf@gulbis.latnet.lv>
Kent M Pitman <······@world.std.com> writes:

> As a matter of personal practice, I try never to make arguments based
> on happenstance.  For example, if I'm working for a vendor that makes
> the fastest Lisp, I still don't encourage people to buy it for that
> reason alone.  I might say "Buy it because it's fast." but never "Buy
> it because it's fastest."  If I assert the latter, I'm saying that if
> someone ever exceeds our speed, you should buy them instead. I simply
> don't agree with that.  If CL had more users, I would not say it was
> best on that basis.  Because it has fewer users, I will not say it is
> worst on that basis.  The number of users is not a property of the
> language.  It is an interesting statistic to look at, but that's all it
> is--a statistic.
> 
>   "There are three kinds of lies: lies, damned lies and statistics."
>   --Benjamin Disraeli and/or Mark Twain
>     http://www.bartleby.com/66/99/16799.html

There's a fourth kind: benchmarks.  (I thought it was kind of relevant
in this context.)

-- 
Janis Dzerins

  If a million people say a stupid thing, it's still a stupid thing.
From: Pascal Costanza
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <costanza-81FE7C.13312129052003@news.netcologne.de>
In article <···············@shell01.TheWorld.com>,
 Kent M Pitman <······@world.std.com> wrote:

> Pascal Costanza <········@web.de> writes:
> 
> > Language design is purely a popularity contest. If you don't manage to 
> > get your language design accepted by your target audience then you have 
> > failed.
> 
> By this metric, CL's technical problems can be fixed by simply gutting it
> and replacing it with the text of any language that has more users.  This
> seems a weak theory of guidance.
> 
> It also does not allow for niche languages nor visionary languages.

Sure, but I did include the notion of a target audience in my 
description. I am not proposing that Common Lisp should be changed in 
such a way that _anybody_ likes it.

People have certain goals they want to accomplish. When a language that 
is best suited for a certain set of goals does not get used for those 
goals, then something is going wrong. I can't think of a better way to 
describe it than to say that it lacks popularity.

> > I think the whole discussion can be distilled to this question. One 
> > group of people is concerned mainly about the current users of Common 
> > Lisp while the other group is concerned more about the prospective 
> > future users of Common Lisp. (And, of course, there are people in 
> > between...)
> 
> To say that the Common Lisp design was not aimed at the future is 
> not only untrue, but is laughably so.

I didn't say that.

> This is a common rhetorical device in political forums, used by people
> who don't like the direction things are going to say that they are the
> voice of the future.

I didn't say that either. I am just talking about my personal 
experiences - the experiences I had while I was learning Common 
Lisp, and the experiences I have when I try to explain Common Lisp to 
other people. Some of Common Lisp's ingredients are hard by nature - 
things like metacircularity, macros, and so on take a while to 
grasp. Other things are just accidentally hard and could be more 
consistent and hence easier to understand.

Lisp-2 vs. Lisp-1 is another such issue: I am convinced that Lisp-2 is 
largely better than Lisp-1, but Lisp-1 seems to be more attractive at 
first sight. (I am not guessing here, that's what some people have told 
me.) I don't propose that Common Lisp should be turned into a Lisp-1 
just to make it more popular - I think people should rather take a 
closer look at the issues involved in the Lisp-2 vs. Lisp-1 issue.
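
For what it's worth, the classic two-liner showing what separate
namespaces buy you (PAIR-UP is just a made-up name for the example):

```lisp
;; In a Lisp-2, the variable LIST and the function LIST live in
;; separate namespaces, so shadowing the name as a parameter is
;; harmless:
(defun pair-up (list)
  (list list list))   ; the function LIST applied to the variable LIST

(pair-up '(1 2))      ; => ((1 2) (1 2))
```

In a Lisp-1 the parameter would shadow the function, and the inner
call would fail.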

Of course, it might be that I have just failed miserably in my efforts 
to get things across. But I am pretty sure that this is not the case.

> Languages have many possible futures, and what you are really saying is
> that you devalue the goals of the original designers.  This is a legitimate
> position to take politically.  But don't insult the designers by saying 
> they had no thought of nor care about the future.  We care about it as deeply
> as you, we simply don't agree with you about what the issues are that matter
> for language survival.

I don't understand how you can possibly interpret my statements like 
that. I have never intended to say such a thing. I am sorry if I have 
made such an impression.


Pascal
From: Adam Warner
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <pan.2003.05.29.04.51.34.674372@consulting.net.nz>
Hi Kent M Pitman,

> Pascal Costanza <········@web.de> writes:
> 
>> Language design is purely a popularity contest. If you don't manage to
>> get your language design accepted by your target audience then you have
>> failed.
> 
> By this metric, CL's technical problems can be fixed by simply gutting
> it and replacing it with the text of any language that has more users.
> This seems a weak theory of guidance.
> 
> It also does not allow for niche languages nor visionary languages.

While I agree with your points, Kent, I think you missed Pascal's. CL's
target audience includes those who want to use one of the most expressive,
powerful, standardised and industrial strength programming languages ever
designed. I'm part of that target audience and I'm using it.

So this isn't purely a popularity contest.

>> I think the whole discussion can be distilled to this question. One
>> group of people is concerned mainly about the current users of Common
>> Lisp while the other group is concerned more about the prospective
>> future users of Common Lisp. (And, of course, there are people in
>> between...)

In this case I just read Pascal as commenting that some people don't care
about backwards compatibility.

Regards,
Adam
From: Adam Warner
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <pan.2003.05.29.04.53.08.598029@consulting.net.nz>
Hi Kent M Pitman,

> Pascal Costanza <········@web.de> writes:
> 
>> Language design is purely a popularity contest. If you don't manage to
>> get your language design accepted by your target audience then you have
>> failed.
> 
> By this metric, CL's technical problems can be fixed by simply gutting
> it and replacing it with the text of any language that has more users.
> This seems a weak theory of guidance.
> 
> It also does not allow for niche languages nor visionary languages.

While I agree with your points, Kent, I think you missed Pascal's. CL's
target audience includes those who want to use one of the most expressive,
powerful, standardised and industrial strength programming languages ever
designed. I'm part of that target audience and I'm using it.

[Superseding my earlier response because it was unclear:] The metric is
the type of developer being targeted. And I think Common Lisp would win
a popularity contest on such a metric.

>> I think the whole discussion can be distilled to this question. One
>> group of people is concerned mainly about the current users of Common
>> Lisp while the other group is concerned more about the prospective
>> future users of Common Lisp. (And, of course, there are people in
>> between...)

In this case I just read Pascal as commenting that some people don't care
about backwards compatibility.

Regards,
Adam
From: Kirk Kandt
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <vddi9fsn1av1b2@corp.supernews.com>
"Erann Gat" <···@jpl.nasa.gov> wrote in message
·························@k-137-79-50-101.jpl.nasa.gov...
> Your solution is (as I understand it): simply wait.  Lisp is out of the
> mainstream because it was ahead of its time (and because of AI winter),
> and sooner or later the world will realize this and Lisp's popularity will
> increase as part of the natural course of things.

I prefer to program in Lisp more than any other language, but I doubt it
will ever be mainstream. I have been struggling to understand why "power"
languages like Lisp, APL, SNOBOL (Icon), SETL, etc. never became mainstream.
To me, the mainstream languages are Fortran, Cobol, C, C++, Java, and Ada.
I am not exactly sure why Cobol became successful. I assume Ada became
successful simply because it was mandated to be used in lots of government
projects from about 1985 to 1995, although that could be wrong. FORTRAN was
successful because it was the first high level language that gave people,
primarily scientists, a much easier way to program. According to F. P.
Brooks, Jr. it provided at least a 600% increase in productivity. I think C
was successful not because it was a great language, but because Unix was
developed in it and people really wanted to use Unix. Unix was written in C
and provided a simple way for people to change it. Hence, lots of people
learned C as a way to more effectively use or extend Unix. (It also came
with Unix and was free. In those days compilers cost thousands of dollars so
that may have had something to do with it too.) C++ was more-or-less an
"upgrade" to C, which allowed lots of C programmers to do object-oriented
programming with relatively modest start-up costs (e.g., training). Then,
Java comes along, and again, it required a modest investment in learning how
to use it, and the provided Sun Java books made adoption that much easier.

What I believe is that people are going to switch from the current
mainstream languages to a "power" language like Lisp, or to an
enhancement of one, in only two ways. First, someone can develop Knuth's fictional
Utopia84, which is a programming language that people perceive to be at
least an order of magnitude better (in every respect) than what is currently
available today in the main stream. Second, someone has to build some
"gee-whiz" product that everyone wants that happens to be written in a
"power" language like Lisp. Basically, people must perceive that the cost of
giving up what they know (e.g., the tools they use) will be less than what
they gain by switching. There are so many ways that Java can hook people in
now through J2EE, JDBC, speech APIs, telephony APIs, and so on that I think
many people already have a huge investment in Java and will not switch. I
would not be surprised if in 5 years half the people are programming in
Java. It seems to me that something like SISC will be more successful than
other Lisp/Scheme dialects because it offers a migration path for people.
But I doubt that is enough. No matter how stupid the argument, whenever I
mention Lisp to a common layperson I always hear something like "Oh yeah,
Lots of Insidious Stupid Parentheses!" even though they never tried the
language. I heard similar things when I did some programming in APL about
its one-liners 30 years ago.

-- Kirk Kandt
From: Ray Blaak
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <u65ns3o4z.fsf@STRIPCAPStelus.net>
"Kirk Kandt" <······@dslextreme.com> writes:
> To me, the main stream languages are [...] and Ada. [...] I assume Ada
> became successful simply because it was mandated to be used in lots of
> government projects from about 1985 to 1995, although that could be wrong.

I wouldn't call Ada mainstream. Hang out in comp.lang.ada and you hear very
similar moanings as to "why doesn't everyone just use Ada?".

Ada freaks like Ada for similar reasons that Lisp freaks like Lisp: it feels
like such a smarter language.

Both Ada and Lisp are much smarter languages than C++, C, or Fortran. Compared
to each other? I can't say, they belong to different universes. Ada is
seriously about strong typing and safety; Lisp about flexibility, expressivity,
all hanging together in some seriously cool ways.

I used to be an Ada freak, a true believer. Now I use Java every day, and
actually prefer it, due to its simplicity, runtime flexibility, and garbage
collection. However, I want to get paid to program in Lisp.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Greg Menke
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <m3y90ofvte.fsf@europa.pienet>
"Kirk Kandt" <······@dslextreme.com> writes:

> "Erann Gat" <···@jpl.nasa.gov> wrote in message
> ·························@k-137-79-50-101.jpl.nasa.gov...
> > Your solution is (as I understand it): simply wait.  Lisp is out of the
> > mainstream because it was ahead of its time (and because of AI winter),
> > and sooner or later the world will realize this and Lisp's popularity will
> > increase as part of the natural course of things.
> 
> I prefer to program in Lisp more than any other language, but I doubt it
> will ever be mainstream. I have been struggling to understand why "power"
> languages like Lisp, APL, SNOBOL (Icon), SETL, etc. never became mainstream.
> To me, the main stream languages are Fortran, Cobol, C, C++, Java, and Ada.
> I am not exactly sure why Cobol became successful. I assume Ada became

Cobol is successful because in some respects it's superb for its
intended application.  It's a dreadful language in others.  It's quite
good at representing and processing record-oriented data, where the
processing steps don't involve complex work, offering tremendous
leverage when it comes to representing database records and fiddling
around with searching & sorting.  It sucks (or at least classical
Cobol sucks) in some ways too; expressions can be very tedious,
complicated logic can become pathologically complicated to write and
read, and there are no local variables & functions.  It wouldn't
surprise me a bit if some implementations overcame these limitations
years ago.

One of the assertions of the Cobol doctrine is that it's "self
documenting", which is true in the sense that you have to write code
in sentences that are more descriptive in the English-language sense
of the term.  However, spaghetti logic is pasta in any programming
language, and nasty Cobol is just as unreadable as nasty C, so real
commenting is essential anyway.

Gregm
From: Michael Sullivan
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <1fvzh5a.m8hcnlhty0qN%michael@bcect.com>
Erann Gat <···@jpl.nasa.gov> wrote:

> In article <···············@shell01.TheWorld.com>, Kent M Pitman
> <······@world.std.com> wrote:
> 
> > ···@jpl.nasa.gov (Erann Gat) writes:
> > 
> > > I want there to be a mechanism by which the community can act collectively
> > > to effect change in the language.
> > 
> > This is the core of the problem in this discussion.
> > 
> > I want a mechanism by which the community can act collectively to solve
> > its perceived problems.
> 
> Yes, this is indeed the core.
> 
> Some people in the community are well served by stability, and others
> (like me) are harmed by it. 

No.  It has not been demonstrated to my satisfaction that you (or
anyone) are harmed by CL's stability.  You have been harmed by some
combination of CL's teeny market-share and mind-share, and some
particular technical and political problems at JPL.

You have asserted a causal connection between CL's stability and its
lack of [market|mind]share.  Most of the people who have addressed this
assertion at all, think it is wrong.  You certainly haven't given more
than scattered, vague, inconclusive evidence to support the assertion.

Yet others have presented what I consider iron-clad cases in *favor* of
stability for most users and developers.  As a user, a manager, and a
developer, I can only *dream* of most of the products I am forced to
work with having the stability of Common Lisp.  If they did, then most of
the non-AI problems in my shop would be successfully automated by now,
and I might even have had time and money to spare for some of the more
interesting problems.  I would not have had to waste over half of my
development time (of which I can spare little) in the last 10 years
fixing and rewriting scripts and utilities that worked perfectly fine
until some API/OS/App changed or died out from under them, or bandaging
problems with running a 3-5 year old system (to minimize rewriting) when
customers are continually sending files using the latest software and
formats.

Stability is only a problem when something is *broken*, and CL doesn't
appear to be broken.  If there's any change to the language necessary to
grab new market share, it's bigger and better OS and domain specific
libraries, and I don't think it makes sense to open up an otherwise
non-broken core language standard to add those.  

Little stuff like special variable difficulties, or the inability to
program entirely in scheme-style without a million 'funcall's is just
not likely to be a big deal to newbies.  Anyone who even groks the kind
of scheme-style that issue would solve is already firmly in the larger
lisp camp anyway.
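
To make the funcall point concrete, a sketch of the Scheme style in
question (COMPOSE here is an illustrative definition, not a standard
CL function):

```lisp
;; A Lisp-1 writes ((compose f g) x); in CL the separate function
;; namespace forces #' when naming a function and FUNCALL when
;; calling a function held in a variable:
(defun compose (f g)
  (lambda (x) (funcall f (funcall g x))))

(funcall (compose #'1+ #'1+) 40)   ; => 42
```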

What's needed is better/easier tutorials, better libraries and more
visible software.  IMO, decreased stability, far from being a help to
CL's growth, might turn out to be a good way to *kill* CL completely.

> The issue is what do we do when different
> people perceive different problems that need solving.
> 
> > Changing the language is a possible solution, not a possible problem
> > description.

> My problem description is: I can't use Lisp because my management
> perceives it as too far out of the mainstream.  (And I don't think my
> situation is unique.)

> Is that OK as a problem description?

To my mind, yes, as a statement of your problem.  Unlike your statement
above, it doesn't conflate your debatable opinions about *why* this
problem exists with the problem itself.

> Your solution is (as I understand it): simply wait.  Lisp is out of the
> mainstream because it was ahead of its time (and because of AI winter),
> and sooner or later the world will realize this and Lisp's popularity will
> increase as part of the natural course of things.

> My position is: we have already waited ten years.  I see some indication
> that things are changing, but not much.  The world seems destined to
> reinvent Lisp badly (c.f. Java, XML, etc. etc.) rather than actually
> convert to using Lisp.  I would rather try to win over the world by
> reinventing Lisp well rather than pin my hopes on the rest of the world
> coming around.

And I agree with you completely, except for the part where I disagree
completely.

We need to do something active to win over the world, but IMO, by
advocating core language change, you're barking up a tree in the wrong
forest.

Really awesome language specs only win over the kind of people who are
already sold on Lisp.  CL is already good enough that a large number of
people with a varied CS background who begin to understand it for the
first time have a "Where have you been all my life?" reaction.

The problem is the people who can't make it that far on what they see
before they have an allergic reaction to the parentheses or some other
bugaboo.  Web pages, better tutorials and software are what will help win
some of these folks over.  It would take a fairly radical (and
potentially catastrophic) change to the language to make any difference
to these people's initial impressions.  If you really want that as your
focus, then it seems something like the ARC project is where to focus,
no?  That way if what you come up with fails miserably -- CL as we have
it now still exists.


Michael
From: Paul F. Dietz
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <RIicnSukU-1OtECjXTWcrg@dls.net>
Michael Sullivan wrote:

> What's needed is better/easier tutorials, better libraries and more
> visible software.  IMO, decreased stability, far from being a help to
> CL's growth, might turn out to be a good way to *kill* CL completely.

Mr. Gat's time would be much better spent making one of these positive
contributions, rather than complaining about things that the rest
of the contributors don't think need changing.

Erann, write some libraries, or at least write some test cases for
libraries, or some specifications for libraries.  Do *something*
positive, please.

	Paul
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-0306032003450001@192.168.1.51>
In article <······················@dls.net>, "Paul F. Dietz"
<·····@dls.net> wrote:

> Michael Sullivan wrote:
> 
> > What's needed is better/easier tutorials, better libraries and more
> > visible software.  IMO, decreased stability, far from being a help to
> > CL's growth, might turn out to be a good way to *kill* CL completely.
> 
> Mr. Gat's time would be much better spent making one of these positive
> contributions, rather than complaining about things that the rest
> of the contributors don't think need changing.

Or defending myself against these continual unwarranted attacks.

Before anyone else posts another comment about me in this thread please do
me a favor and go read: gat-0705031122330001%40192.168.1.51

Pay particular attention to the following passages:

> > I don't think your problem would be solved by changing the standard.
> 
> Yes, you've convinced me that I was wrong about this.

and:

> OK.  You have made a convincing argument, I respect your views, and so I
> am dropping it.

Also note the date that article was posted.

E.
From: Pascal Costanza
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <costanza-F2AABC.04434529052003@news.netcologne.de>
In article <···············@shell01.TheWorld.com>,
 Kent M Pitman <······@world.std.com> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > I want there to be a mechanism by which the community can act collectively
> > to effect change in the language.
> 
> This is the core of the problem in this discussion.
> 
> I want a mechanism by which the community can act collectively to solve
> its perceived problems.
> 

Let me assure you that the tedious learning process is a real problem of 
Common Lisp. I went through this experience myself not so long ago. I 
just had a deep trust in my gut feeling that I was doing the right 
thing in learning Common Lisp, in spite of the many stumbling blocks. 
More often than not I have asked myself why the fuck Common Lisp is 
designed in such complicated ways. I wouldn't have been so patient with 
many other languages.

I am pretty sure that I have a fair understanding of most of what's in 
Common Lisp. And I am very sure that most of it can be 
simplified/generalized/unified in relatively straightforward ways.

I am not disregarding the objections to changes, but please acknowledge 
the fact that there are good reasons for changing the language. 
Beginners' problems are real problems!

The steep learning curve of Common Lisp can't be solved by purely 
pedagogical means. I prefer to learn languages by studying their specs, 
and I am pretty sure that I am not the only one.


Pascal
From: Nils Goesche
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <87y90p6drc.fsf@darkstar.cartan>
Pascal Costanza <········@web.de> writes:

> In article <···············@shell01.TheWorld.com>,
>  Kent M Pitman <······@world.std.com> wrote:
> 
> > ···@jpl.nasa.gov (Erann Gat) writes:
> > 
> > > I want there to be a mechanism by which the community can
> > > act collectively to effect change in the language.
> > 
> > This is the core of the problem in this discussion.
> > 
> > I want a mechanism by which the community can act
> > collectively to solve its perceived problems.
> 
> Let me assure that the tedious learning process is a real
> problem of Common Lisp. I have made this experience by myself
> not so long ago. I just had a deep trust in my gut feelings
> that I was doing the right thing in learning Common Lisp, in
> spite of the many stumbling blocks.  More often than not I have
> asked myself why the fuck Common Lisp is designed in such
> complicated ways. I wouldn't have been so patient with many
> other languages.
> 
> I am not disregarding the objections to changes, but please
> acknowledge the fact that there are good reasons for changing
> the language.  Beginners' problems are real problems!
> 
> The steep learning curve of Common Lisp can't be solved by
> purely pedagogical means. I prefer to learn languages by
> studying their specs, and I am pretty sure that I am not the
> only one.

Learning a language by studying its spec is fine if you already
know a similar language.  I already knew (some) C++ when I
learned Java, so reading Java's definition was the way to go for
me.  I already knew OCaml when I learned SML, so I learned SML
solely by its spec (after I had learned to read that denotational
semantics stuff ;-).  But it would have been a rather dumb
approach for me to learn, say, OCaml, only by reading its
definition without knowing any other language like it.  That
would have been unnecessarily complicated.  I'd have to
understand the whole spec before I could write a mere hello world
program.  Understanding this pattern matching thing alone would
have been very hard, whereas understanding it by looking at some
tutorial and example programs is very easy.  I even learned some
French to be able to read O'Reilly's OCaml book.  /After/ that,
reading OCaml's definition was really easy.

A language definition is not a Lisp tutorial.  If people try to
learn Lisp by reading the HyperSpec alone, because that's how
they learned C, C++, Java and C#, it's their own fault if they
find that unnecessarily hard.  One thing you have to learn is the
Lisp way of thinking about programs, and a raw definition cannot
communicate this.  A book like Norvig's PAIP, on the other hand, does
a wonderful job of teaching Lisp and its spirit.

> I am pretty sure that I have a fair understanding of most of
> what's in Common Lisp. And I am very sure that most of it can
> be simplified/generalized/unified in relatively straightforward
> ways.

This seems to be a bit arrogant to me.  A lot of bright people
have worked together on the standard, and typically there will be
very real reasons why certain things are the way they are.  Often
technical, often political reasons.  Lisp wasn't designed by some
researcher who worked alone until he came up with one ``unified''
whole.  If you had the power to apply a list of changes to the
standard in order to ``simplify'' and ``unify'' things, you would
certainly be pleased by the result.  But I claim that you would be
very surprised when you realized that a great lot of people
totally disagree and think you have ruined the language :-)
Kent just told a very similar, not hypothetical, story about how
Steele came up with such a list at the first X3J13 meeting or
some such.

Many people have worked very long and very hard to arrive at what
there is.  Every single one of them probably knows a way to
``unify'' and ``simplify'' things.  But things aren't as simple
as that in real life.



Both you and Erann seem to make a similar mistake: You ignore
that some things just /are/ hard to grasp.  Even if they seem
totally trivial once you've understood them.  That questions
about, say, special variables come up over and over again is
/not/ evidence that there is anything wrong with the way they are
handled.  We simply have to explain them over and over again, and
that is that.  Other languages don't have anything like special
variables; that's why people find them hard to grasp.  They think
they know what a variable is, and are offended that Lisp seems to
disagree :-)

I have spent many years of my life explaining mathematics to
students.  Every single one of them had trouble with integrals.
Almost all of them finally got it, but they had to do some hard
work on their own.  I could tell them how things are, and try to
make it as transparent as possible, but some things simply /are/
hard to grasp at first.

In some German universities, the fact that so small a percentage
of mathematics majors survives the first few semesters led
certain professors, who could not bear the idea of some
proletarians being more able than others, to attempt to
``simplify'' things for them, so more of them would get a degree,
and everybody could see that Marx was right and every proletarian
would be a great mathematician once mathematics was stripped of
its bourgeois ways of thinking (yessir, such were the times).
Unfortunately, simplifying things proved to be not as easy as
they thought (they probably castigated themselves daily before
their Lenin bust so he would help them get rid of the evil
bourgeois spirits they were obviously still possessed by).

In the end they resorted to teaching the old Riemann integral
instead of the Lebesgue integral.  The Riemann integral is /much/
harder to use than the Lebesgue integral, but it is a bit easier
to understand at first.  More students passed, goal achieved.
Unfortunately, these students were now seriously crippled: they had
trouble understanding more advanced texts, which of course assumed
their readers would know the Lebesgue integral, and they could
no longer switch to other German universities that were not yet
led by the avantgarde of the proletariat (other universities
would require that students take /their/ calculus courses first
so they would know at least something as basic as the Lebesgue
integral).  But, in hindsight, it was probably a good thing, they
thought, that students were prevented from getting to places
where they would undoubtedly be subject to brainwashing by the
class enemy and turn into fascists, anyway.



So, some things /are/ hard to grasp and consequently hard to
teach.  We simply have to live with that; if anybody lacks the
patience to explain something for the 100th time, let somebody
else explain it.

Regards,
-- 
Nils Gösche
Ask not for whom the <CONTROL-G> tolls.

PGP key ID #xD26EF2A0
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-2905030934030001@192.168.1.51>
In article <··············@darkstar.cartan>, Nils Goesche <···@cartan.de> wrote:

> Both you and Erann seem to make a similar mistake: You ignore
> that some things just /are/ hard to grasp.

Richard Feynman once said that if you can't explain something so that an
undergraduate can understand it then you haven't really understood it
yourself.  And he was talking about quantum physics.

In my entire life I have never encountered anything that was hard to grasp
once it was properly explained.  By contrast, I have encountered many,
many things that were hard to grasp because they were not properly
explained.  (My canonical example of this is quantum mechanics, which, it
turns out to my great astonishment, even Feynman got wrong.)

If you're still not convinced then you should read the preface to
"Structure and Interpretation of Classical Mechanics."  Here's an excerpt:

"Classical mechanics is deceptively simple. It is surprisingly easy to get
the right answer with fallacious reasoning or without real understanding.
Traditional mathematical notation contributes to this problem."
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

>  That questions
> about, say, special variables come up over and over again is
> /not/ evidence that there is anything wrong with the way they are
> handled.  We simply have to explain them over and over again and
> that is that.

Hogwash.  Special variables are pretty easy to explain.  It's just that
once they are explained one is left wondering: why the hell did they do it
that way?  What's the point?  Why not some other way?  Why is the
programmer left with the burden of following a typographical convention in
order to stay out of trouble?  Is there no alternative design that would
preserve the functionality while relieving the programmer of that burden? 
(What does makunbound do when called on a symbol which has a dynamic
binding in the current scope?)

> Other languages don't have anything like special
> variables, that's why people find it hard to grasp.

Again, hogwash.  It is possible to implement the functionality of special
variables in both Java and C++.

> They think
> they know what a variable is, and are offended that Lisp seems to
> disagree :-)

No, they think that the functionality of special variables is disjoint
from the functionality of lexical variables, and they are confused by the
fact that Lisp conflates the two in a way that is not lexically apparent
(except by following a typographical convention).

> I have spent many years of my life explaining mathematics to
> students.  Every single one of them had trouble with integrals.

Then the fault lies with you.  I learned integrals in High School from an
excellent teacher and understood them immediately.  (I entered college
with credit for both first and second year calculus.)  By contrast, I
struggled mightily with differential equations because the teacher (and
the textbook) sucked.  By further contrast, partial differential equations
(which are supposed to be harder) were again a breeze because I had an
excellent teacher.

If *all* of your students are having trouble with something (and
particularly if that something is integrals) then the fault is without
question yours.

E.
From: Kent M Pitman
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <sfw4r3dsk4i.fsf@shell01.TheWorld.com>
···@jpl.nasa.gov (Erann Gat) writes:

> In my entire life I have never encountered anything that was hard to grasp
> once it was properly explained.

Me either. But then, it helped a lot that I never got out much and 
I tried never to read about new things.
From: Nils Goesche
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <87ptm162c3.fsf@darkstar.cartan>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <··············@darkstar.cartan>, Nils Goesche <···@cartan.de> wrote:
> 
> > Both you and Erann seem to make a similar mistake: You ignore
> > that some things just /are/ hard to grasp.
> 
> Richard Feynman once said that if you can't explain something
> so that an undergraduate can understand it then you haven't
> really understood it yourself.  And he was talking about
> quantum physics.

Feynman said a lot of dumb things in his life.

> In my entire life I have never encountered anything that was
> hard to grasp once it was properly explained.  By contrast, I
> have encountered many, many things that were hard to grasp
> because they were not properly explained.  (My canonical
> example of this is quantum mechanics, which, it turns out to my
> great astonishment, even Feynman got wrong.)

In fact, I agree that physical and mathematical concepts are
often badly explained.  But that doesn't mean there is always a
way that makes it totally easy.  There is always some effort
required on the side of the student; sometimes more, sometimes
less, and of course much more if the teacher is bad.  Once we
have fully understood something, it will indeed look totally
obvious to us.  But that doesn't mean I should be able to explain
general relativity to my wife in terms of apples and oranges if I
am not a bad teacher.  Note that I never said that things are
impossible to grasp.  Only that some effort on the side of the
listener is always required.  I never said it is impossible, or even
hard, to explain special variables to anybody.  Only that they
have to be explained because the concept is foreign to most
people, who are surprised that there is anything about variables
they don't already know.

> If you're still not convinced then you should read the preface
> to "Structure and Interpretation of Classical Mechanics."
> Here's an excerpt:
> 
> "Classical mechanics is deceptively simple. It is surprisingly
> easy to get the right answer with fallacious reasoning or
> without real understanding.  Traditional mathematical notation
> contributes to this problem."
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

I agree with the first part, but disagree with the last sentence.
It is not ``traditional mathematical notation''.  It is the old
mathematical models that are used by physicists to explain
classical mechanics that are at fault.  Notation has little or
nothing to do with it.  Classical mechanics ought to be taught
using the invariant tensor calculus on manifolds.  Then it is a
piece of cake.  If you understand that calculus, that is.  And
there lies your fallacy: You simply forget about /your/ effort
until you finally got it.  I sometimes do that, too.  Now,
classical mechanics looks totally easy to me, because I forget
about the years of study I spent on Differential Geometry and
don't count them as part of my effort to understand classical
mechanics.

> > That questions about, say, special variables come up over and
> > over again is /not/ evidence that there is anything wrong
> > with the way they are handled.  We simply have to explain
> > them over and over again and that is that.
> 
> Hogwash.  Special variables are pretty easy to explain.

Yes.  But they have to be explained.

> It's just that once they are explained one is left wondering:
> why the hell did they do it that way?  What's the point?  Why
> not some other way?  Why is the programmer left with the burden
> of following a typographical convention in order to stay out of
> trouble?  Is there no alternative design that would preserve
> the functionality while relieving the programmer of that
> burden?  (What does makunbound do when called on a symbol which
> has a dynamic binding in the current scope?)

/You/ are left wondering about that.  Not the student, I bet.

> > Other languages don't have anything like special variables,
> > that's why people find it hard to grasp.
> 
> Again, hogwash.  It is possible to implement the functionality
> of special variables in both Java and C++.

To /implement/?  Come on, that's not the same.  Where are special
variables mentioned in the C++ standard, for instance?

> > They think they know what a variable is, and are offended
> > that Lisp seems to disagree :-)
> 
> No, they think that the functionality of special variables is
> disjoint from the functionality of lexical variables, and they
> are confused by the fact that Lisp conflates the two in a way
> that is not lexically apparent (except by following a
> typographical convention).

Actually, I doubt they are confused in that way.  I certainly
never was.  A variable is either special or lexical, which
determines its semantics.  What's so hard about that?  Nothing.
What's somewhat hard (or at least requires some thinking effort)
is to understand the /concept/.

> > I have spent many years of my life explaining mathematics to
> > students.  Every single one of them had trouble with
> > integrals.
> 
> Then the fault lies with you.  I learned integrals in High
> School from an excellent teacher and understood them
> immediately.

Wow, you learned measure theory and the Lebesgue integral in High
School?  That's pretty impressive.  And I bet you could solve

 \int dx / cos x

ten minutes after you first saw an integral sign.  You are truly
a genius, Erann.
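(For reference, the standard trick for that integral: multiply and
divide by \sec x + \tan x, so the numerator becomes exactly the
derivative of the denominator:)

```latex
\int \frac{dx}{\cos x}
  = \int \sec x \cdot \frac{\sec x + \tan x}{\sec x + \tan x}\, dx
  = \int \frac{\sec^2 x + \sec x \tan x}{\sec x + \tan x}\, dx
  = \ln\left|\sec x + \tan x\right| + C
```

Which rather makes Nils's point: easy once you know the trick, not
at all obvious the first time you see an integral sign.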

> (I entered college with credit for both first and second year
> calculus.)  By contrast, I struggled mightily with differential
> equations because the teacher (and the textbook) sucked.  By
> further contrast, partial differential equations (which are
> supposed to be harder) were again a breeze because I had an
> excellent teacher.

I have certain doubts that PDEs are a ``breeze'' for anyone.

> If *all* of your students are having trouble with something
> (and particularly if that something is integrals) then the
> fault is without question yours.

*sigh*

Regards,
-- 
Nils Gösche
Ask not for whom the <CONTROL-G> tolls.

PGP key ID #xD26EF2A0
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-2905031345110001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@darkstar.cartan>, Nils Goesche <···@cartan.de> wrote:

> Feynman said a lot of dumb things in his life.

People who live in glass houses shouldn't throw stones.

> But that doesn't mean I should be able to explain
> general relativity to my wife in terms of apples and oranges if I
> am not a bad teacher.

The best teacher of general relativity I ever saw was Kip Thorne, and he
explained general relativity in terms of basketballs.  He threw a
basketball up in the air and said: "When there are no forces acting on
this basketball it moves in a straight line.  Einstein figured out that
its motion appears curved because spacetime is curved."  Even my wife
understood.

Taylor and Wheeler (and even Einstein himself) also wrote excellent,
easily accessible books on both special and general relativity.  Mermin is
also very good.

> > Again, hogwash.  It is possible to implement the functionality
> > of special variables in both Java and C++.
> 
> To /implement/?  Come on, that's not the same.  Where are special
> variables mentioned in the C++ standard, for instance?

Read what I wrote more carefully.  I didn't say that C++ had special
variables.  I said it was possible to implement the same functionality.

> Wow, you learned measure theory and the Lebesgue integral in High
> School?

No, we learned the Riemann integral.  Your point would be?

>  You are truly a genius, Erann.

Why thank you.  <blush>

Oh, you were being sarcastic?  I never would have guessed.  Maybe I'm not
so smart after all.

Seriously though, while I don't think I'm the dimmest bulb in the box, I
would not call myself a genius.  That is precisely my point.  I don't know
if I was ever able to solve the integral of 1/cos(x) -- I certainly can't
today.  But if I ever could it's not because I figured out on my own how
to do it, it's because I was taught how to do it.

> > (I entered college with credit for both first and second year
> > calculus.)  By contrast, I struggled mightily with differential
> > equations because the teacher (and the textbook) sucked.  By
> > further contrast, partial differential equations (which are
> > supposed to be harder) were again a breeze because I had an
> > excellent teacher.
> 
> I have certain doubts that PDEs are a ``breeze'' for anyone.

"Breeze" is perhaps overstating the case a bit, but they did not seem to
me to be particularly difficult at the time.  Of course, there was a lot
of grunt work involved in working out the details of a solution, but I can
remember just cranking through it, getting to the answer (an hour or two
later), and thinking, "boy, what a pain in the ass that was" but never, "I
don't get this at all."

By way of further further contrast, in electromagnetic fields I had a very
bad teacher.  I was able to crank through the problems in much the same
way, but I never "got it" the way I "got" PDEs, and I put a whole lot
more effort into fields than I did into PDEs.  I had to.  It's a damn
good thing my fields prof graded on a curve or everyone in that class
(including me) would have failed.

E.
From: Nils Goesche
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <878ysp5rtx.fsf@darkstar.cartan>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <··············@darkstar.cartan>, Nils Goesche <···@cartan.de> wrote:
> 
> > Feynman said a lot of dumb things in his life.
> 
> People who live in glass houses shouldn't throw stones.

I didn't mean to throw stones at Feynman.  I think he was a
genius.  But some statement is not automatically true just
because he said so.  He enjoyed making pithy statements of all
kinds.  Not all of them are absolute truths.

> > But that doesn't mean I should be able to explain general
> > relativity to my wife in terms of apples and oranges if I am
> > not a bad teacher.
> 
> The best teacher of general relativity I ever saw was Kip
> Thorne, and he explained general relativity in terms of
> basketballs.  He threw a basketball up in the air and said:
> "When there are no forces acting on this basketball it moves in
> a straight line.  Einstein figured out that its motion appears
> curved because spacetime is curved."  Even my wife understood.

She understood what he said, but I doubt that she could write
down the gravitational tensor now.  And I wouldn't say somebody
who can't do even that has understood general relativity.
Moreover, I seriously doubt that she has the slightest idea what
it means for a spacetime to be ``curved'' (and what the hell is a
``spacetime'', after all?)

> Taylor and Wheeler (and even Einstein himself) also wrote
> excellent, easily accessible books on both special and general
> relativity.  Mermin is also very good.

My favorite one is O'Neill's ``Semi-Riemannian Geometry''.  Only
after that one could I understand the others, too.

> > > Again, hogwash.  It is possible to implement the
> > > functionality of special variables in both Java and C++.
> > 
> > To /implement/?  Come on, that's not the same.  Where are
> > special variables mentioned in the C++ standard, for
> > instance?
> 
> Read what I wrote more carefully.  I didn't say that C++ had
> special variables.  I said it was possible to implement the
> same functionality.

It is possible to implement pretty much anything in C++.  But I
was talking about the concept of special variables.  No C++
programmer I know (except those who are also Lisp programmers ;-)
has the slightest idea what that is (at least until I told them
;-).

> > Wow, you learned measure theory and the Lebesgue integral in
> > High School?
> 
> No, we learned the Riemann integral.  Your point would be?

My point would be that we are talking about different things.
What the hell made you insult me like this?  You have never seen
me explaining integrals or anything else to an audience, but you
claim several times that I am a bad teacher.  Nobody I ever
explained mathematics to said anything like that.  For a while I
used to pay my rent by giving private lessons to people who
wouldn't otherwise survive their mathematics courses at the
university.  When I gave courses, people from other courses would
come listen to me instead because word of mouth had it that what I
say can actually be understood by mere mortals.  Even now, years
after I left the university, total strangers approach me in
public to thank me and tell me how much they appreciated my
lessons.  And you have the gall to claim that I suck as a teacher
even though you have never watched me teach?  Fuck you, too!

You're out of line.  I never said anything about your qualities
as a physicist or engineer, even though we disagree and quarrel
all the time, did I?

> Seriuosly though, while I don't think I'm the dimmest bulb in
> the box, I would not call myself a genius.  That is precisely
> my point.  I don't know if I was ever able to solve the
> integral of 1/cos(x) -- I certainly can't today.  But if I ever
> could it's not because I figured out on my own how to do it,
> it's because I was taught how to do it.

Well, I would say that you haven't ``understood'' the whole thing
until you /can/ solve that one on your own.  And when I say
people had trouble understanding integrals, I meant that they had
some work to do until they were able to solve integrals like that
even without being taught how to do that particular one.  Does it
make more sense now?

> > I have certain doubts that PDEs are a ``breeze�� for anyone.
> 
> "Breeze" is perhaps overstating the case a bit, but they did
> not seem to me to be particularly difficult at the time.  Of
> course, there was a lot of grunt work involved in working out
> the details of a solution, but I can remember just cranking
> through it, getting to the answer (an hour or two later), and
> thinking, "boy, what a pain in the ass that was" but never, "I
> don't get this at all."

Ok, but that's exactly what I am talking about, too.  You had to
put some effort into it.  /Some/ effort is always needed for
understanding PDEs.  That doesn't mean there is something wrong
with PDEs or the way they are explained.  Actually, PDEs are a
particularly bad example.  The whole theory of PDEs is horrible.
It is extremely hard to understand (mostly because there /is/ no
``whole theory of PDEs''; mostly it is a patchwork of unrelated
and tedious results).  I can somewhat guess what that
course was like -- like in most courses, they have probably
picked out the few examples that actually /are/ relatively simple
and elegant.  /Linear/ PDEs probably.  If you had ever tried to
find some non-trivial solutions of something like, say, the
Sine-Gordon equation, or even just to understand what other people
have done in the field, things would probably look very
different.  PDEs are hell (but that's also what makes them
somewhat attractive at the same time :-)

> By way of further further contrast, in electromagnetic fields I
> had a very bad teacher.  I was able to crank through the
> problems in much the same way, but I never "got it" the way I
> "got" PDF's, and I put a whole lot more effort into fields than
> I did into PDF's.  I had to.  It's a damn good thing my fields
> prof graded on a curve or everyone in that class (including me)
> would have failed.

Aargh, don't remind me.  Same here :-)

Regards,
-- 
Nils Gösche
Ask not for whom the <CONTROL-G> tolls.

PGP key ID #xD26EF2A0
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-2905031623030001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@darkstar.cartan>, Nils Goesche <···@cartan.de> wrote:

> It is possible to implement pretty much anything in C++.  But I
> was talking about the concept of special variables.  No C++
> programmer I know (except those who are also Lisp programmers ;-)
> has the slightest idea what that is (at least until I told them
> ;-).

Actually, I think any C++ or Java programmer who has done thread
programming would recognize specials as per-thread variables with stack
unwinding.
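Concretely, that analogy can be sketched in Python (chosen here just
for brevity; the `Special` class and all its names are invented for
illustration, not any real library): each thread gets its own binding
stack, and try/finally plays the role of stack unwinding, restoring
the outer value on exit from the dynamic extent.

```python
import threading
from contextlib import contextmanager

class Special:
    """Hypothetical emulation of a Lisp special variable: a global
    default plus a per-thread stack of dynamic bindings."""

    def __init__(self, global_value):
        self._global = global_value
        self._local = threading.local()   # per-thread binding stacks

    def _stack(self):
        if not hasattr(self._local, "stack"):
            self._local.stack = []
        return self._local.stack

    def get(self):
        # Innermost dynamic binding on *this* thread, else the global.
        stack = self._stack()
        return stack[-1] if stack else self._global

    @contextmanager
    def binding(self, value):
        # Roughly (let ((*var* value)) ...): push on entry, pop on
        # exit -- the finally clause is the "stack unwinding".
        stack = self._stack()
        stack.append(value)
        try:
            yield
        finally:
            stack.pop()

# Usage, in the spirit of rebinding *print-base*:
print_base = Special(10)

def show(n):
    # A callee sees whatever binding is dynamically in effect.
    return format(n, "x" if print_base.get() == 16 else "d")

assert show(255) == "255"
with print_base.binding(16):
    assert show(255) == "ff"     # rebound within the dynamic extent
assert show(255) == "255"        # restored on unwind
```

The `with print_base.binding(16): ...` form corresponds roughly to
`(let ((*print-base* 16)) ...)` on a declared-special variable.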


> My point would be that we are talking about different things.
> What the hell made you insult me like this?

I don't believe I did.  Teaching anything well is hard, and teaching a
complex subject well is really, really hard.  There is no shame in not
being able to teach a complex technical subject well.  Very few people can.
(There is, however, shame in blaming the resulting difficulties in
understanding entirely on your students.)

In any case, if you felt insulted I apologise.  That was not my intent.


> > Seriuosly though, while I don't think I'm the dimmest bulb in
> > the box, I would not call myself a genius.  That is precisely
> > my point.  I don't know if I was ever able to solve the
> > integral of 1/cos(x) -- I certainly can't today.  But if I ever
> > could it's not because I figured out on my own how to do it,
> > it's because I was taught how to do it.
> 
> Well, I would say that you haven't ``understood'' the whole thing
> until you /can/ solve that one on your own.

Well, that just begs the question of what it means to "understand"
something.  I understand that an integral of F is the function which when
differentiated yields F.  But as with many inverse problems, finding a
solution often relies on remembering a particular trick or insight.  You
say that remembering those tricks is part of understanding; I don't. 
We'll just have to agree to disagree on that.

> Ok, but that's exactly what I am talking about, too.  You had to
> put some effort into it.  /Some/ effort is always needed for
> understanding PDEs.  That doesn't mean there is something wrong
> with PDEs or the way they are explained.

But the effort I put into PDEs was vastly less than the effort I put into
fields, and the net result was that I (at least apparently) understood
vastly more about PDEs than I did about fields despite having put vastly
less effort into them.  The most reasonable explanation for this is that
my PDE teacher was excellent and my fields teacher sucked.  (I remember
this particular fields prof with great fondness by the way.  He was very
friendly and charismatic, spoke clearly, and seemed to know what he was
talking about.  But I couldn't make heads or tails out of what he was
saying, and neither apparently could anyone else in the class because the
high score on the final was something like 30%.  And this was in a class
of 300 people.  We couldn't *all* have been that dumb.  What happened was
that he tried to push us too hard and ended up losing the whole class, and
that made him a bad teacher.  Not a bad person, or a bad scholar, just a
bad teacher.)

Bad teaching starts from the premise that teaching is a process of
unidirectionally transferring information from the teacher to the
student.  It isn't.  Good teaching requires a bidirectional flow of
information.  Teaching is about making targeted incremental changes to a
student's mental state, so it requires that the teacher understand what
that mental state is.  Good teaching requires pacing.  Go too slow and you
don't tell the student anything he doesn't already know.  Go too fast and
you lose him.  And good teaching requires humility, the ability for the
teacher to step back and say, OK, that didn't work.  I wonder why not, and
what I should do differently.

Bad teaching generally involves pontification and blaming the students
when they don't understand.

I believe that the range of apparent complexities that result from
variation in how subjects are taught is much larger than the range of
complexity that is inherent in the topics themselves.  Take computer
programming.  The apparent complexity of programming varies over a vast
range, from hairy beyond comprehension (C++, especially once templates are
involved), to so simple a child can do it (Lisp, Logo).  I believe that
the inherent complexity of programming is much less than the artificial
complexity that is introduced by bad infrastructure design (both
programming languages and operating systems) and bad teaching.  I again
point to myself as a data point in support of this thesis.  People's
perceptions of my coding abilities range from guru to complete moron. 
There is a high correlation between those who think I'm a guru and those
who know me as a Lisp programmer, and those who think I'm a moron and
those who know me as a C++ or Java programmer.  I do not believe this is
merely coincidence.  I think this is causal.  I think Lisp is better
designed than C++.  A result of that better design is that programming in
Lisp is easier than programming in C++, and so you can take a person like
me who is a hopeless C++ programmer and put them to work coding in Lisp
and suddenly they appear to be brilliant.  (The contrast is truly
astonishing.  My few forays into C++ have been largely disastrous, while
in Lisp I am so productive that I can get a week's worth of work done in a
few hours.  That's one of the reasons I can spend so much time posting on
usenet.)

I also believe that the current design of Common Lisp does not represent
the limits of what can be accomplished by taking advantage of this
phenomenon.

E.
From: Marc Spitzer
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <86of1l57af.fsf@bogomips.optonline.net>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <··············@darkstar.cartan>, Nils Goesche <···@cartan.de> wrote:
> 
> > > Seriuosly though, while I don't think I'm the dimmest bulb in
> > > the box, I would not call myself a genius.  That is precisely
> > > my point.  I don't know if I was ever able to solve the
> > > integral of 1/cos(x) -- I certainly can't today.  But if I ever
> > > could it's not because I figured out on my own how to do it,
> > > it's because I was taught how to do it.
> > 
> > Well, I would say that you haven't ``understood'' the whole thing
> > until you /can/ solve that one on your own.
> 
> Well, that just begs the question of what it means to "understand"
> something.  I understand that an integral of F is the function which when
> differentiated yields F.  But as with many inverse problems, finding a
> solution often relies on remembering a particular trick or insight.  You
> say that remembering those tricks is part of understanding; I don't. 
> We'll just have to agree to disagree on that.
> 

I think you both are talking about different kinds of understanding:

1: fact based understanding, you can independently recreate the proof.

2: faith based understanding, he/she said so.

Both have their place in the world, but they are different.

marc
From: Fred Gilham
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <u7llwoqw02.fsf@snapdragon.csl.sri.com>
Marc Spitzer wrote:
> I think you both are talking about different kinds of understanding:
> 
> 1: fact based understanding, you can independently recreate the
>    proof. 
> 
> 2: faith based understanding, he/she said so.
> 
> Both have their place in the world, but they are different.

Well, I'd call #2 "authority based understanding."  Kind of like
believing what Feynman said about understanding meaning being able to
explain it to undergrads --- Feynman was an authority, so people tend
to accept what he said.

Both of the above understandings require faith.  The first requires
faith in the validity of what you are doing when you recreate the
proof, and *that* faith, ironically enough, is for most people based
on the authority of those from whom they learned their methodology.
The second, of course, involves faith in the authority.  "Faith" here
does not mean "blind unquestioning acceptance", of course.

Also, one can have blind unquestioning acceptance of one's own ability
to independently recreate a proof (or whatever) just as much as blind
unquestioning acceptance of an authority.  One of the oft-recurring
problems we have with "newbie" Lispers is this very trait --- assuming
that the means they used in the past to understand things about
programming are applicable without modification to Lisp, and refusing
to modify their views even when told they aren't.

The point of this little diatribe is to show that faith, which can be
defined as opening yourself to something you aren't sure of, is an
essential aspect of almost all human knowledge.

-- 
Fred Gilham                                        ······@csl.sri.com
A common sense interpretation of the facts suggests that a
superintellect has monkeyed with physics, as well as with chemistry
and biology, and that there are no blind forces worth speaking about
in nature. --- Fred Hoyle
From: Gareth McCaughan
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <slrnbdf47l.2eb1.Gareth.McCaughan@g.local>
Nils Goesche wrote:

[Erann Gat:]
> > By way of further further contrast, in electromagnetic fields I
> > had a very bad teacher.  I was able to crank through the
> > problems in much the same way, but I never "got it" the way I
> > "got" PDF's, and I put a whole lot more effort into fields than
> > I did into PDF's.  I had to.  It's a damn good thing my fields
> > prof graded on a curve or everyone in that class (including me)
> > would have failed.

[Nils:]
> Aargh, don't remind me.  Same here :-)

May I suggest an application of what you've been saying
to Erann? Electromagnetics is just hard; harder, I think,
than most other courses usually done at the same level.
Of course I'm guessing what that level was. For me, it
was 1st-year undergraduate. I stopped attending lectures
about half way through the course because I hated it so
much. :-)

-- 
Gareth McCaughan  ················@pobox.com
.sig under construc
From: Nils Goesche
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <8765nsgvan.fsf@darkstar.cartan>
Gareth McCaughan <················@pobox.com> writes:

> Nils Goesche wrote:
> 
> [Erann Gat:]
> > > By way of further further contrast, in electromagnetic
> > > fields I had a very bad teacher.  I was able to crank
> > > through the problems in much the same way, but I never "got
> > > it" the way I "got" PDF's, and I put a whole lot more
> > > effort into fields than I did into PDF's.  I had to.  It's
> > > a damn good thing my fields prof graded on a curve or
> > > everyone in that class (including me) would have failed.
> 
> [Nils:]
> > Aargh, don't remind me.  Same here :-)
> 
> May I suggest an application of what you've been saying
> to Erann? Electromagnetics is just hard; harder, I think,
> than most other courses usually done at the same level.
> Of course I'm guessing what that level was. For me, it
> was 1st-year undergraduate. I stopped attending lectures
> about half way through the course because I hated it so
> much. :-)

You're right, of course.  Although I was really happy when I
finally found a treatment for mathematicians that basically
reduced it to

d \omega = 0    and   d * \omega = 0.  So what?

:-)
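Spelled out (my gloss, assuming ω here stands for the electromagnetic field 2-form F in vacuum), the reduction Nils mentions is:

```latex
d\omega = 0 \quad\text{(Faraday's law; no magnetic monopoles)}
\qquad
d{\star}\omega = 0 \quad\text{(Gauss's and Amp\`ere's laws, source-free)}
```

where d is the exterior derivative and ★ the Hodge star determined by the spacetime metric.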

Regards,
-- 
Nils Gösche
Ask not for whom the <CONTROL-G> tolls.

PGP key ID #xD26EF2A0
From: Gareth McCaughan
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <slrnbdfrq8.fac.Gareth.McCaughan@g.local>
Nils Goesche wrote:

> You're right, of course.  Although I was really happy when I
> finally found a treatment for mathematicians that basically
> reduced it to
> 
> d \omega = 0    and   d * \omega = 0.  So what?
> 
> :-)

:-), indeed. Unfortunately, (1) you can't teach it
to first-year undergraduates in a form that looks
so simple, and (2) if you actually want to *use*
the theory then you end up having to translate it
back into less abstract, and uglier, forms.

-- 
Gareth McCaughan  ················@pobox.com
.sig under construc
From: Andreas Eder
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <m3k7c9nwe8.fsf@elgin.eder.de>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <··············@darkstar.cartan>, Nils Goesche <···@cartan.de> wrote:
> 
> > Feynman said a lot of dumb things in his life.
> 
> People who live in glass houses shouldn't throw stones.
> 
> > But that doesn't mean I should be able to explain
> > general relativity to my wife in terms of apples and oranges if I
> > am not a bad teacher.
> 
> The best teacher of general relativity I ever saw was Kip Thorne, and he
> explained general relativity in terms of basketballs.  He threw a
> basketball up in the air and said: "When there are no forces acting on
> this basketball it moves in a straight line.  Einstein figured out that
> its motion appears curved because spacetime is curved."  Even my wife
> understood.

Why is it that people always try to understand in terms of something
else?
Did your wife really understand? Or did she just think she understood?
That's a hell of a difference.

> No, we learned the Riemann integral.  Your point would be?
Did you really learn the Riemann integral or was it just some rule to
use to find out integrals? Did you really construct the measure? I
doubt it.

Andreas
-- 
Wherever I lay my .emacs, there's my $HOME.
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-2905031940130001@192.168.1.51>
In article <··············@elgin.eder.de>, Andreas Eder
<············@t-online.de> wrote:

> Why is it that people always try to understand in terms of something
> else?

I think that's how our brains work.

> Did your wife really understand? Or did she just think she understood?
> That's a hell of a difference.

How can one tell the difference?

> > No, we learned the Riemann integral.  Your point would be?
> Did you really learn the Riemann integral or was it just some rule to
> use to find out integrals?

Truthfully, I don't know.  We learned the stuff that one normally learns
in introductory calculus classes.  I am given to understand (view
mathworld) that that is the Riemann integral.  But I could be mistaken.

Your point would be?

E.
From: Christophe Rhodes
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <sqisrsq1zx.fsf@lambda.jcn.srcf.net>
···@jpl.nasa.gov (Erann Gat) writes:

>> > No, we learned the Riemann integral.  Your point would be?
>> Did you really learn the Riemann integral or was it just some rule to
>> use to find out integrals?
>
> Truthfully, I don't know.  We learned the stuff that one normally learns
> in introductory calculus classes.  I am given to understand (view
> mathworld) that that is the Riemann integral.  But I could be mistaken.
>
> Your point would be?

Maybe that if Nils says that "integration intrinsically requires an
effort to understand", and you say "sucks to that, it's easy, I was
integrating functions at 15", maybe you're talking about different
things, either "integrating" or "understanding".

Christophe
-- 
http://www-jcsu.jesus.cam.ac.uk/~csr21/       +44 1223 510 299/+44 7729 383 757
(set-pprint-dispatch 'number (lambda (s o) (declare (special b)) (format s b)))
(defvar b "~&Just another Lisp hacker~%")    (pprint #36rJesusCollegeCambridge)
From: Gareth McCaughan
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <slrnbdf4rb.2eb1.Gareth.McCaughan@g.local>
Erann Gat wrote:

[Andreas Eder:]
> > Did you really learn the Riemann integral or was it just some rule to
> > use to find out integrals?
> 
> Truthfully, I don't know.  We learned the stuff that one normally learns
> in introductory calculus classes.  I am given to understand (view
> mathworld) that that is the Riemann integral.  But I could be mistaken.
> 
> Your point would be?

Presumably that when Nils (I think it was) said that
integration is hard, he didn't mean "the idea of
antidifferentiation is hard", but "the underlying
theory is hard". I think integration is harder than
differentiation in two ways.

  - Viewed as antidifferentiation, it's algorithmically
    harder. There are simple algorithms anyone can
    learn to follow to differentiate a function; there
    are no such algorithms for integration in general.
    So you end up having to learn a whole bag of random
    techniques, none of which you can be sure will work
    on a given problem until you've tried it. Some
    things just can't be integrated in closed form,
    and there's no easy (read: accessible to students)
    way to recognize them.

  - Viewed in terms of measure theory and all that stuff,
    it's technically harder. Especially for Lebesgue
    as opposed to Riemann integration, and unfortunately
    the Lebesgue integral is more useful when you get
    onto more advanced topics.

Neither of these has anything to do with the idea of
an antiderivative being difficult.
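The asymmetry in the first point can be made concrete. A toy symbolic differentiator (a sketch of my own, in Python for brevity; nothing from this thread) needs only a handful of recursive rules, while no comparably small rule set produces antiderivatives:

```python
# Expressions are nested tuples: the string 'x' is the variable,
# numbers are constants, ('+', a, b), ('*', a, b), ('sin', a),
# ('cos', a) build compound terms.

def d(e):
    """Differentiate expression e with respect to x (no simplification)."""
    if e == 'x':
        return 1
    if isinstance(e, (int, float)):
        return 0
    op = e[0]
    if op == '+':                       # sum rule
        return ('+', d(e[1]), d(e[2]))
    if op == '*':                       # product rule
        return ('+', ('*', d(e[1]), e[2]), ('*', e[1], d(e[2])))
    if op == 'sin':                     # chain rule
        return ('*', ('cos', e[1]), d(e[1]))
    if op == 'cos':
        return ('*', ('*', -1, ('sin', e[1])), d(e[1]))
    raise ValueError(op)

# d/dx [x * sin(x)] -> sin(x)*... + x*cos(x)*... as an unsimplified tree
print(d(('*', 'x', ('sin', 'x'))))
```

Every rule is local and always applies; that is exactly what is missing for antidifferentiation, where no such case analysis terminates in general.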

-- 
Gareth McCaughan  ················@pobox.com
.sig under construc
From: Andreas Eder
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <m3fzmwnwah.fsf@elgin.eder.de>
Gareth McCaughan <················@pobox.com> writes:

> I think integration is harder than
> differentiation in two ways.
> 
>   - Viewed as antidifferentiation, it's algorithmically
>     harder. There are simple algorithms anyone can
>     learn to follow to differentiate a function; there
>     are no such algorithms for integration in general.

Oh, there is (at least for a very large class of functions); look up 
the Risch algorithm. But you are right in that this is a far more 
complicated algorithm than what people learn for differentiating.
It is in a totally different ballpark.

>   - Viewed in terms of measure theory and all that stuff,
>     it's technically harder. Especially for Lebesgue
>     as opposed to Riemann integration, and unfortunately
>     the Lebesgue integral is more useful when you get
>     onto more advanced topics.

This is the point. That (the measure) is what integration is all
about. The above mentioned algorithm for finding the antiderivative is
a completely different thing - mostly differential
algebra. Integration and measure theory is completely different, and I
guess it was that which Nils talked about.

Andreas
-- 
Wherever I lay my .emacs, there's my $HOME.
From: Gareth McCaughan
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <slrnbdfrk9.fac.Gareth.McCaughan@g.local>
Andreas Eder wrote:
>  Gareth McCaughan <················@pobox.com> writes:
>  
> > I think integration is harder than
> > differentiation in two ways.
> > 
> >   - Viewed as antidifferentiation, it's algorithmically
> >     harder. There are simple algorithms anyone can
> >     learn to follow to differentiate a function; there
> >     are no such algorithms for integration in general.
> 
> Oh, there is (at least for a very large class of functions); look up 
> the Risch algorithm. But you are right in that this is a far more 
> complicated algorithm than what people learn for differentiating.
> It is in a totally different ballpark

I wonder whether you missed the word "simple" in
what I wrote. I know about the Risch algorithm;
it's hardly a simple algorithm anyone can learn. :-)

> >   - Viewed in terms of measure theory and all that stuff,
> >     it's technically harder. Especially for Lebesgue
> >     as opposed to Riemann integration, and unfortunately
> >     the Lebesgue integral is more useful when you get
> >     onto more advanced topics.
>  
> This is the point. That (the measure) is what integration is all
> about. The above mentioned algorithm for finding the antiderivative is
> a completely different thing - mostly differential
> algebra. Integration and measure theory is completely different, and I
> guess it was that which Nils talked about.

Very likely. On the other hand, despite what you say,
antiderivatives are *also* what integration is about,
and it wasn't stupid of Erann to think that's what
Nils meant. (I shall refrain from commenting on his
decision to assume it was and then accuse Nils of
being an incompetent teacher on the grounds that his
students found "integration" difficult, however.)

-- 
Gareth McCaughan  ················@pobox.com
.sig under construc
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-3005032351430001@192.168.1.51>
In article <·······························@g.local>,
················@pobox.com wrote:

> Very likely. On the other hand, despite what you say,
> antiderivatives are *also* what integration is about,
> and it wasn't stupid of Erann to think that's what
> Nils meant. (I shall refrain from commenting on his
> decision to assume it was and then accuse Nils of
> being an incompetent teacher on the grounds that his
> students found "integration" difficult, however.)

You know, I had never heard of the Lebesgue integral before Nils mentioned
it, so I did the obvious thing and looked it up on Google.  That led me to
mathworld, where I found this:


"""
The Riemann integral is the integral normally encountered in calculus
texts and used by physicists and engineers. Other types of integrals exist
(e.g., the Lebesgue integral), but are unlikely to be encountered outside
the confines of advanced mathematics texts. In fact, according to Jeffreys
and Jeffreys (1988, p. 29), "it appears that cases where these methods
[i.e., generalizations of the Riemann integral] are applicable and
Riemann's [definition of the integral] is not are too rare in physics to
repay the extra difficulty."
"""

In other words, no one but mathematicians use Lebesgue integrals, and the
rest of the world uses the term "integral" to mean the Riemann integral
(and this convention is so prevalent that most people don't even know that
there is any other kind).

BTW, I did not "accuse" Nils of being an incompetent teacher.  All I said
was that if all his students weren't getting something then it was his
fault, not theirs.  That doesn't necessarily make him incompetent, just
not as good as he could be if he put his mind to it IMO.

However, I can't help but wonder in light of this new information why Nils
chose to use the unadorned term "integration" to mean "Lebesgue
integration."  Either he didn't know that by overwhelming majority the
world takes "integration" to mean "Riemann integration" (which seems
unlikely) or he intentionally used the term in a way that he knew would be
misleading.  Either way, this may not be altogether unrelated to the fact
that his students have trouble understanding some of the things he tries
to teach them.

E.
From: Klaus Momberger
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <80a8af7d.0305310129.2adb252b@posting.google.com>
···@jpl.nasa.gov (Erann Gat) wrote in message news:<····················@192.168.1.51>...
> In article <·······························@g.local>,
> ················@pobox.com wrote:
> 
.......
> In other words, no one but mathematicians use Lebesgue integrals, and the
> rest of the world uses the term "integral" to mean the Riemann integral
> (and this convention is so prevalent that most people don't even know that
> there is any other kind).
> 
....... 
> However, I can't help but wonder in light of this new information why Nils
> chose to use the unadorned term "integration" to mean "Lebesgue
> integration."  Either he didn't know that by overwhelming majority the
> world takes "integration" to mean "Riemann integration" (which seems
> unlikely) or he intentionally used the term in a way that he knew would be
> misleading.  Either way, this may not be altogether unrelated to the fact
> that his students have trouble understanding some of the things he tries
> to teach them.
> 
> E.

I haven't been following your discussion; I just wanted to note that the
Lebesgue integral is not as esoteric as it appears from your Google results.
In fact, it is used in physics all the time, particularly in quantum
mechanics. To the physicist, it is what the Riemann integral is to an engineer. ;-) 

-klaus.
From: Nils Goesche
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <87y90n5i9s.fsf@darkstar.cartan>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <·······························@g.local>,
> ················@pobox.com wrote:
> 
> > Very likely. On the other hand, despite what you say,
> > antiderivatives are *also* what integration is about, and it
> > wasn't stupid of Erann to think that's what Nils meant.

Actually, I meant both; anybody who wants to learn integration
has to learn at least /some/ kind of measure, and without being
able to compute the anti-derivative, always computing integrals
using the measure alone would be quite ... tedious for students
:-)  And even if all you are taught is Riemann sums in the
simplest case (functions of one real variable), explaining why
anti-derivatives have anything to do with it is still not trivial.
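That connection can at least be checked by machine. A throwaway sketch (my own, not Nils's) comparing a Riemann sum against the antiderivative, i.e. the fundamental theorem of calculus done numerically:

```python
import math

def riemann(f, a, b, n=100000):
    """Left Riemann sum approximating the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + i * h) for i in range(n)) * h

# The integral of cos over [0, 1] should agree with the
# antiderivative sin evaluated at the endpoints.
approx = riemann(math.cos, 0.0, 1.0)
exact = math.sin(1.0) - math.sin(0.0)
print(abs(approx - exact) < 1e-4)   # the two agree to a few decimals
```

Of course the numerical check is the easy direction; explaining *why* the sum converges to F(b) - F(a) is the nontrivial part Nils is pointing at.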

> You know, I had never heard of the Lebesgue integral before
> Nils mentioned it, so I did the obvious thing and looked it up
> on Google.

I mentioned both the Riemann and the Lebesgue integral in the
very posting you were following up to.

> That led me to mathworld, where I found this:
> 
> """
> The Riemann integral is the integral normally encountered in
> calculus texts and used by physicists and engineers. Other
> types of integrals exist (e.g., the Lebesgue integral), but are
> unlikely to be encountered outside the confines of advanced
> mathematics texts. In fact, according to Jeffreys and Jeffreys
> (1988, p. 29), "it appears that cases where these methods
> [i.e., generalizations of the Riemann integral] are applicable
> and Riemann's [definition of the integral] is not are too rare
> in physics to repay the extra difficulty."
> """

I would say this is complete bullshit :-) It is the typical
statement sometimes made by people who erroneously think the main
point of using the Lebesgue integral was that you can integrate
weird functions over weird sets.  Instead, the main points are
that you have nice convergence theorems (rules about when you are
allowed to exchange the limit and integral sign) and that
function spaces like L^2 are /complete/.  As Klaus Momberger
correctly said, there is hardly any physics undergraduate who
hasn't encountered L^2 spaces in quantum mechanics.  Sure, you
can try to omit the Lebesgue integral when teaching mathematics
for physicists, but then you'll have big trouble justifying
whatever you do with integrals... because Riemann integrals are a
pain in the ass to use correctly in practice.

> In other words, no one but mathematicians use Lebesgue
> integrals, and the rest of the world uses the term "integral"
> to mean the Riemann integral (and this convention is so
> prevalent that most people don't even know that there is any
> other kind).

There are a lot of people who use the Riemann integral in a way
that would only be correct if they were using the Lebesgue
integral :-) But that's basically cheating.

> BTW, I did not "accuse" Nils of being an incompetent teacher.
> All I said was that if all his students weren't getting
> something then it was his fault, not theirs.  That doesn't
> necessarily make him incompetent, just not as good as he could
> be if he put his mind to it IMO.

So, if it were only taught by good teachers, we could start
teaching the Lebesgue measure to three-year-olds?

> However, I can't help but wonder in light of this new
> information why Nils chose to use the unadorned term
> "integration" to mean "Lebesgue integration."  Either he didn't
> know that by overwhelming majority the world takes
> "integration" to mean "Riemann integration" (which seems
> unlikely) or he intentionally used the term in a way that he
> knew would be misleading.

I was explicitly mentioning the Lebesgue integral four times in
the posting you were following up to.  If you don't even know
what I'm talking about, how can you be so sure that what I'm
saying is wrong?  However, even if I hadn't, my point would
/still/ be that learning integration, Lebesgue or otherwise, is
hard.  Sometimes more, sometimes less.  You perfectly made my
point when you said that you can't solve

  \int dx / cos x,

and even if you once could, then only because somebody had told
you how to solve this particular one, or some kind of ``trick''.
I simply wouldn't say that somebody who cannot solve this one on
his own has learned integration.  There is exactly one way to
solve this one, and it is to use your brain and do it.  It is not
a particularly easy one, but when I say I taught somebody
integration and he got it, he will be able to solve this one,
too, on his own.  Until he gets that far, he will have to heavily
use his brain several times, on easier problems.  That's why he
perceives it as being ``hard''.
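For the record, a standard trick for this one (perhaps the one Nils has in mind, perhaps not) is to multiply through by (sec x + tan x)/(sec x + tan x), which turns the integrand into a logarithmic derivative:

```latex
\int \frac{dx}{\cos x}
  = \int \sec x \cdot \frac{\sec x + \tan x}{\sec x + \tan x}\, dx
  = \int \frac{\sec x \tan x + \sec^2 x}{\sec x + \tan x}\, dx
  = \ln\lvert \sec x + \tan x \rvert + C.
```

The point stands: nothing mechanical suggests that multiplication; you either see it or you don't.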

Regards,
-- 
Nils Gösche
Ask not for whom the <CONTROL-G> tolls.

PGP key ID #xD26EF2A0
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-3105031010140001@192.168.1.51>
In article <··············@darkstar.cartan>, Nils Goesche <···@cartan.de> wrote:

> I mentioned both the Riemann and the Lebesgue integral in the
> very posting you were following up to.

That's true, but your first reference to it was *after* you said:

> I have spent many years of my life explaining mathematics to
> students.  Every single one of them had trouble with integrals.

In that context you were still using the term unmodified.

You also said:

> In the end they resorted to teaching the old Riemann integral
> instead of the Lebesgue integral.  The Riemann integral is /much/
> harder to use than the Lebesgue integral, but it is a bit easier
> to understand at first.

If the Riemann integral is only "a bit easier to understand at first", why
should it be so much harder to teach?

> > That led me to mathworld, where I found this:
> > 
> > """
> > The Riemann integral is the integral normally encountered in
> > calculus texts and used by physicists and engineers. Other
> > types of integrals exist (e.g., the Lebesgue integral), but are
> > unlikely to be encountered outside the confines of advanced
> > mathematics texts. In fact, according to Jeffreys and Jeffreys
> > (1988, p. 29), "it appears that cases where these methods
> > [i.e., generalizations of the Riemann integral] are applicable
> > and Riemann's [definition of the integral] is not are too rare
> > in physics to repay the extra difficulty."
> > """
> 
> I would say this is complete bullshit :-)

Well, you'll have to take that up with Jeffreys, Jeffreys, and Wolfram.  I
am pretty much clueless about all this, so at this point I can only repeat
back what I read.

> > BTW, I did not "accuse" Nils of being an incompetent teacher.
> > All I said was that if all his students weren't getting
> > something then it was his fault, not theirs.  That doesn't
> > necessarily make him incompetent, just not as good as he could
> > be if he put his mind to it IMO.
> 
> So, if it were only taught by good teachers, we could start
> teaching the Lebesgue measure to three-year-olds?

Of course not.  But I'll bet you could teach them to college students.


> You perfectly made my point when you said that you can't solve
> 
>   \int dx / cos x,

I can't solve this because I don't know what your notation means.  (If it
means the integral of 1/cos(x) with respect to x then I can solve it,
which is why I presume it means something else.)

E.
From: Jens Axel Søgaard
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <3ed8bd13$0$97188$edfadb0f@dread12.news.tele.dk>
Nils Goesche wrote:
> ···@jpl.nasa.gov (Erann Gat) writes:

> I mentioned both the Riemann and the Lebesgue integral in the
> very posting you were following up to.
> 
> 
>>That led me to mathworld, where I found this:

>>"
>>The Riemann integral is the integral normally encountered in
>>calculus texts and used by physicists and engineers. Other
>>types of integrals exist (e.g., the Lebesgue integral), but are
>>unlikely to be encountered outside the confines of advanced
>>mathematics texts. ..."

> I would say this is complete bullshit :-) It is the typical
> statement sometimes made by people who erroneously think the main
> point of using the Lebesgue integral was that you can integrate
> weird functions over weird sets.  Instead, the main points are
> that you have nice convergence theorems (rules about when you are
> allowed to exchange the limit and integral sign) and that
> function spaces like L^2 are /complete/.  

Well put.

The interesting thing is that there exists an integral that is
easier to teach than the Lebesgue integral and still gives the
sought-after convergence theorems.

Some people argue that it would be better if that integral were
the only one taught. But tradition is strong in mathematics...

See
   <http://www.maths.uq.edu.au/~rv/preface.html>

-- 
Jens Axel Søgaard
From: Nils Goesche
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <87ptlz5g33.fsf@darkstar.cartan>
Jens Axel Søgaard <······@jasoegaard.dk> writes:

> The interesting thing is that there exists an integral that is
> easier to teach than the Lebesgue integral and still gives
> the sought-after convergence theorems.
> 
> Some people argue that it would be better if that integral
> were the only one taught. But tradition is strong in mathematics...
> 
> See
>    <http://www.maths.uq.edu.au/~rv/preface.html>

I didn't know about that one.  Thanks for the reference!

Regards,
-- 
Nils Gösche
Ask not for whom the <CONTROL-G> tolls.

PGP key ID #xD26EF2A0
From: Joe Marshall
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <smqxbnb2.fsf@ccs.neu.edu>
> ···@jpl.nasa.gov (Erann Gat) writes:
>
> > If you're still not convinced then you should read the preface
> > to "Structure and Interpretation of Classical Mechanics."
> > Here's an excerpt:
> > 
> > "Classical mechanics is deceptively simple. It is surprisingly
> > easy to get the right answer with fallacious reasoning or
> > without real understanding.  Traditional mathematical notation
> > contributes to this problem."
> > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> 

Nils Goesche <···@cartan.de> writes:
> I agree with the first part, but disagree with the last sentence.
It is not ``traditional mathematical notation''.  It is the old
> mathematical models that are used by physicists to explain
> classical mechanics that are at fault.  Notation has little or
> nothing to do with it.  

I have to agree with Erann (and indirectly with Sussman & Wisdom).
Traditional notation is a mess.  In any particular formula, the bound
and free variables can appear *anywhere* --- subscript, superscript,
in parenthesis, etc.  Inputs and outputs are distinguished by typeface
only.  Outputs are *embedded* (I just *love* induction formulas where
they give you x0 and the formula that takes you from xn to xn-1.)
Equals signs are used to denote equivalence, definition, relations,
mappings, etc.
From: Nils Goesche
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <87llwp60be.fsf@darkstar.cartan>
Joe Marshall <···@ccs.neu.edu> writes:

> > ···@jpl.nasa.gov (Erann Gat) writes:
> >
> > > If you're still not convinced then you should read the preface
> > > to "Structure and Interpretation of Classical Mechanics."
> > > Here's an excerpt:
> > > 
> > > "Classical mechanics is deceptively simple. It is surprisingly
> > > easy to get the right answer with fallacious reasoning or
> > > without real understanding.  Traditional mathematical notation
> > > contributes to this problem."
> > > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

> Nils Goesche <···@cartan.de> writes:

> > I agree with the first part, but disagree with the last
> > sentence.  It is not ``traditional mathematical notation''.
> > It is the old mathematical models that are used by physicists
> > to explain classical mechanics that are at fault.  Notation
> > has little or nothing to do with it.
> 
> I have to agree with Erann (and indirectly with Sussman &
> Wisdom).  Traditional notation is a mess.  In any particular
> formula, the bound and free variables can appear *anywhere* ---
> subscript, superscript, in parenthesis, etc.  Inputs and
> outputs are distinguished by typeface only.  Outputs are
> *embedded* (I just *love* induction formulas where they give
> you x0 and the formula that takes you from xn to xn-1.)  Equals
> signs are used to denote equivalence, definition, relations,
> mappings, etc.

Ah, that.  Don't know, I never had any trouble with mathematical
notation.  What I meant is that the classical index tensor
calculus makes things very hard to understand.  And no matter how
you change its notation, it will keep being hard because it is so
hard to see what is a tensor and what isn't, what operations will
create new tensors and which won't, which coordinate systems are
being used at any given time, and so on.  To overcome these
difficulties, you'll have to invent something essentially
different (the invariant calculus).  And I think you can't call
that just a change of notation; it's more.

Regards,
-- 
Nils Gösche
Ask not for whom the <CONTROL-G> tolls.

PGP key ID #xD26EF2A0
From: Joe Marshall
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <n0h5bju4.fsf@ccs.neu.edu>
> Joe Marshall <···@ccs.neu.edu> writes:
> 
> > I have to agree with Erann (and indirectly with Sussman &
> > Wisdom).  Traditional notation is a mess.  In any particular
> > formula, the bound and free variables can appear *anywhere* ---
> > subscript, superscript, in parenthesis, etc.  Inputs and
> > outputs are distinguished by typeface only.  Outputs are
> > *embedded* (I just *love* induction formulas where they give
> > you x0 and the formula that takes you from xn to xn-1.)  Equals
> > signs are used to denote equivalence, definition, relations,
> > mappings, etc.

Nils Goesche <···@cartan.de> writes:
> 
> Ah, that.  Don't know, I never had any trouble with mathematical
> notation.  

I guess I don't mind feeling stupid in this forum.  I'd already been
through college when it dawned on me that a definite integral was a
function of two arguments, but that a derivative was a function of
one.  If I had learned lambda notation before calculus, I'd have known
that years before.
From: Nils Goesche
Subject: Integrals [OT] WAS: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <87he7d5vf3.fsf_-_@darkstar.cartan>
Joe Marshall <···@ccs.neu.edu> writes:

> I'd already been through college when it dawned on me that a
> definite integral was a function of two arguments, but that a
> derivative was a function of one.  If I had learned lambda
> notation before calculus, I'd have known that years before.

Uhm, well, I think you could look at it that way ;-)  But I'd
prefer to say that

    b
    /
    | f(x) dx
    /
    a

is just a number (in the most simple case ;-).  Of course, you
/can/ regard it as a function of two variables:

             b
             /
(a,b) |->    | f(x) dx
             /
             a

If you regard it as a function of one variable, as in

             t
             /
    t |->    | f(x) dx
             /
             a

you might say that it was at least derived from a function of two
variables.  However, you could also look at it this way:

             b
             /
    f |->    | f(x) dx
             /
             a

where f is the variable (this is done very often in modern
mathematics, in fact) (f is a variable that lives in some
infinite-dimensional space here, like C[a,b] :-)

So, to avoid all these complications, it is probably best to say
that it is just a number :-)

Regards,
-- 
Nils Gösche
Ask not for whom the <CONTROL-G> tolls.

PGP key ID #xD26EF2A0
From: Ingvar Mattsson
Subject: Re: Integrals [OT] WAS: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <87he7c911x.fsf@gruk.tech.ensign.ftech.net>
Nils Goesche <···@cartan.de> writes:

> Joe Marshall <···@ccs.neu.edu> writes:
> 
> > I'd already been through college when it dawned on me that a
> > definite integral was a function of two arguments, but that a
> > derivative was a function of one.  If I had learned lambda
> > notation before calculus, I'd have known that years before.
> 
> Uhm, well, I think you could look at it that way ;-)  But I'd
> prefer to say that
[SNIP]
> is just a number (in the most simple case ;-).  Of course, you
> /can/ regard it as a function of two variables:
[SNIP]
> If you regard it as a function of one variable, as in
[SNIP]
> you might say that it was at least derived from a function of two
> variables.  However, you could also look at it this way:
[SNIP]
> where f is the variable (this is done very often in modern
> mathematics, in fact) (f is a variable that lives in some
> infinite-dimensional space here, like C[a,b] :-)
> 
> So, to avoid all these complications, it is probably best to say
> that it is just a number :-)

I would probably go as far as saying that

  /
  | f dx
  /

is a function of two variables (function to integrate and variable to
integrate with) into a function space. Then

  b
  /
  | f dx
  /
  a

is a function of three (or four) arguments, depending on whether one
sees it as plugging numbers into the indefinite integral or doing it
all "in one go".

//Ingvar (plus a constant)
-- 
Self-referencing
Five, seven, five syllables
This haiku contains
From: Joe Marshall
Subject: Re: Integrals [OT] WAS: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <s5KBa.53831$_t5.42661@rwcrnsc52.ops.asp.att.net>
"Nils Goesche" <···@cartan.de> wrote in message ······················@darkstar.cartan...
>
> Uhm, well, I think you could look at it that way ;-)  But I'd
> prefer to say that
>
>     b
>     /
>     | f(x) dx
>     /
>     a
>
> is just a number (in the most simple case ;-).

True.  It is a function of f, a, and b.
But of course you can curry the function to get
the various other forms you discussed.

The `epiphany' (such as it was) that I had was that
the definite integral was not the inverse of the
derivative as I had been taught in high school because
the arity differs.  It would have been obvious if the
teacher wrote:

integral is (lambda (f a b) ...)
derivative is (lambda (f x) ...)
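Spelled out as runnable Common Lisp, the arity difference is plain
(a hedged numerical sketch; DEFINITE-INTEGRAL, DERIVATIVE, and the
step sizes are illustrative, not standard functions):

```lisp
;; Crude numerical sketches of the two operators being contrasted.
;; Note the arities: the definite integral takes a function and two
;; endpoints; the derivative takes a function and one point.
(defun definite-integral (f a b &optional (dx 1/1000))
  "Left Riemann sum of F over [A, B]; three essential arguments."
  (loop for x from a below b by dx
        sum (* (funcall f x) dx)))

(defun derivative (f x &optional (h 1/100000))
  "Forward difference quotient of F at X; two essential arguments."
  (/ (- (funcall f (+ x h)) (funcall f x)) h))

(definite-integral (lambda (x) (* x x)) 0 1)  ; => close to 1/3
(derivative (lambda (x) (* x x)) 3)           ; => close to 6
```

Currying either function over its first argument recovers the
higher-order views discussed earlier in the thread.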
From: Michael Park
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <ff20888b.0305301216.2de8a279@posting.google.com>
Joe Marshall <···@ccs.neu.edu> wrote in message news:<············@ccs.neu.edu>...
> > Joe Marshall <···@ccs.neu.edu> writes:
> > 
> > > I have to agree with Erann (and indirectly with Sussman &
> > > Wisdom).  Traditional notation is a mess.  In any particular
> > > formula, the bound and free variables can appear *anywhere* ---
> > > subscript, superscript, in parenthesis, etc.  Inputs and
> > > outputs are distinguished by typeface only.  Outputs are
> > > *embedded* (I just *love* induction formulas where they give
> > > you x0 and the formula that takes you from xn to xn-1.)  Equals
> > > signs are used to denote equivalence, definition, relations,
> > > mappings, etc.
> 
> Nils Goesche <···@cartan.de> writes:
> > 
> > Ah, that.  Don't know, I never had any trouble with mathematical
> > notation.  
> 
> I guess I don't mind feeling stupid in this forum.  I'd already been
> through college when it dawned on me that a definite integral was a
> function of two arguments, but that a derivative was a function of
> one.  If I had learned lambda notation before calculus, I'd have known
> that years before.

What college was that?

integral of sqrt(4 - x^2) from 0 to 2 = pi

You must feel pretty superior to all those stupid dorks still
believing pi is a constant :)
From: Joe Marshall
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <WISBa.782749$OV.714480@rwcrnsc54>
"Michael Park" <···········@whoever.com> wrote in message
·································@posting.google.com...
> Joe Marshall <···@ccs.neu.edu> wrote in message news:<············@ccs.neu.edu>...
> >
> > I guess I don't mind feeling stupid in this forum.  I'd already been
> > through college when it dawned on me that a definite integral was a
> > function of two arguments, but that a derivative was a function of
> > one.  If I had learned lambda notation before calculus, I'd have known
> > that years before.
>
> What college was that?

I can't blame the college, I learned calculus at my high school
and local community college.  And I'm sure that there are *many*
calculus books that tell you that integration and differentiation
are inverse operations.
From: Pascal Costanza
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <costanza-1F35FF.20555029052003@news.netcologne.de>
In article <··············@darkstar.cartan>,
 Nils Goesche <···@cartan.de> wrote:

> > I am pretty sure that I have a fair understanding of most of
> > what's in Common Lisp. And I am very sure that most of it can
> > be simplified/generalized/unified in relatively straightforward
> > ways.
> 
> This seems to be a bit arrogant to me.  A lot of bright people
> have worked together on the standard, and typically there will be
> very real reasons why certain things are the way they are.  Often
> technical, often political reasons.  Lisp wasn't designed by some
> researcher who worked alone until he came up with one ``unified''
> whole.  If you had the power to apply a list of changes to the
> standard in order to ``simplify'' and ``unify'' things, you would
> certainly be pleased by the result.  But I claim that you would be
> very surprised when you realize that a great lot of people
> totally disagree and think you have ruined the language :-)
> Kent just told a very similar, not hypothetical, story about how
> Steele came up with such a list at the first X3J13 meeting or
> some such.
> 
> Many people have worked very long and very hard to arrive at what
> there is.  Every single one of them probably knows a way to
> ``unify'' and ``simplify'' things.  But things aren't as simple
> as that in real life.

I totally agree, and I wonder what has driven me to write the above 
statement. ;) (It was late at night.)

What I wanted to say is that most of the _weak spots_ can be simplified.

The story about Guy Steele is indeed an interesting one in this regard 
and makes me want to rethink my position.

> So, some things /are/ hard to grasp and consequently hard to
> teach.  We simply have to live with that; if anybody lacks the
> patience to explain something for the 100th time, let somebody
> else explain it.

You have a point here.


Pascal
From: Lars Magne Ingebrigtsen
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <m3el2jdph4.fsf@quimbies.gnus.org>
···@jpl.nasa.gov (Erann Gat) writes:

> I do believe that some change to how dynamic binding is done would
> be pedagogically useful, but that issue is dominated by the fact
> that at the moment there seems to be no mechanism for collectively
> managing change -- any change -- in the language, except to resist
> it at all costs.

I think you're going about this the wrong way.

If you have some sort of language feature that you think is essential,
then it's up to you to start hacking away at (say) any of the free
Common Lisp implementations to implement that feature.  Since the
feature you're implementing is so important, your patches will be
accepted with joy by the maintainers of that implementation.  Users
will flock to that implementation to be able to use your feature,
making it the most important implementation.  Other implementations
will have to implement the feature, too, since the users all demand
to have that feature.

And then the standard will change.

-- 
(domestic pets only, the antidote for overdose, milk.)
   ·····@gnus.org * Lars Magne Ingebrigtsen
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-2805031011080001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@quimbies.gnus.org>, Lars Magne Ingebrigtsen
<·····@gnus.org> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > I do believe that some change to how dynamic binding is done would
> > be pedagogically useful, but that issue is dominated by the fact
> > that at the moment there seems to be no mechanism for collectively
> > managing change -- any change -- in the language, except to resist
> > it at all costs.
> 
> I think you're going about this the wrong way.
> 
> If you have some sort of language feature that you think is essential,
> then it's up to you to start hacking away at (say) any of the free
> Common Lisp implementations to implement that feature.  Since the
> feature you're implementing is so important, your patches will be
> accepted with joy by the maintainers of that implementation.  Users
> will flock to that implementation to be able to use your feature,
> making it the most important implementation.  Other implementations
> will have to implement the feature, too, since the users all demand
> to have that feature.
> 
> And then the standard will change.

That is a very constructive suggestion, but there are two problems: first,
I do not have the technical skills to do what you suggest.  I am not a
Lisp implementor, I am just a Lisp user.  That is not to say that I could
not acquire those skills, but life is short and I have to decide how best
to allocate my time.

Second (and this seems to be the thing that no one gets) I am not so much
interested in any particular change as I am in the *process* by which
change happens (or doesn't as the case may be).  The process you suggest
is, essentially: implement it and see if they come.  Well, that is in fact
precisely what I did (another fact that no one seems to get).  My proposal
included an implementation of (most of) the change I was proposing since
(most of) it could be implemented entirely within Common Lisp.  (What
little could not be implemented within Common Lisp was 100%
backwards-compatible with the current standard).

So you see, despite the fact that I did as much of what you suggest as I
could, my proposal was still met (at least initially) not with any
constructive criticism but rather with hostility towards the mere fact
that I was proposing a change.  Now, perhaps my proposal wasn't very good,
but that's beside the point.  As a *process*, meeting new proposals, even
bad ones, with hostility discourages people from making proposals, even
good ones.  The result is (let's put a positive spin on it) more
stability.  Stability can be a good thing, but only up to a point.  Taken
to an extreme it begins to be harmful.

E.
From: Lars Magne Ingebrigtsen
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <m3smqyc3x5.fsf@quimbies.gnus.org>
···@jpl.nasa.gov (Erann Gat) writes:

> So you see, despite the fact that I did as much of what you suggest as I
> could, my proposal was still met (at least initially) not with any
> constructive criticism but rather with hostility towards the mere fact
> that I was proposing a change.  Now, perhaps my proposal wasn't very good,
> but that's beside the point. 

No, that's exactly the point.  If it had been a feature that people
found useful, the reaction probably would have been different.

You implemented a new feature, not many people found it useful, and
that was that.  

-- 
(domestic pets only, the antidote for overdose, milk.)
   ·····@gnus.org * Lars Magne Ingebrigtsen
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-2805031226430001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@quimbies.gnus.org>, Lars Magne Ingebrigtsen
<·····@gnus.org> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > So you see, despite the fact that I did as much of what you suggest as I
> > could, my proposal was still met (at least initially) not with any
> > constructive criticism but rather with hostility towards the mere fact
> > that I was proposing a change.  Now, perhaps my proposal wasn't very good,
> > but that's beside the point. 
> 
> No, that's exactly the point.  If it had been a feature that people
> found useful, the reaction probably would have been different.

Would it?  Just because many people did not find it useful does not mean
that there were not also many people who did find it useful. 
But even conceding that the particular proposal I put forth as a test case
was not useful, surely over the last ten years *someone* has put forth a
proposal on *something* that people *did* find useful.  What has become of
those?  They have not resulted in what I would call "change to the
language", they have resulted in, at best, a Babel of vendor-specific
extensions.  The result is that there is in Lisp no common way of doing
something as simple and undeniably useful as, say, opening a socket. 
Every implementation is different.  Because of that, if you want to write
a book on, say, "Web programming in Common Lisp" it is not long before you
have to hang your head in shame and admit that it is in fact impossible to
do Web programming in Common Lisp without relying on vendor-specific
extensions to the language.

E.
From: Lars Magne Ingebrigtsen
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <m3k7cabzaz.fsf@quimbies.gnus.org>
···@jpl.nasa.gov (Erann Gat) writes:

> The result is that there is in Lisp no common way of doing something
> as simple and undeniably useful as, say, opening a socket.  Every
> implementation is different.

That's true, and is really annoying.  However, we're seeing the
emergence of things like CLOCC which has the potential to form a
common interface to many of these things.  Which is as it should be.

(The sad thing is that many of the Lisp implementations have similar
feature sets, with purely syntactical differences.  Which is pretty
weird.)

-- 
(domestic pets only, the antidote for overdose, milk.)
   ·····@gnus.org * Lars Magne Ingebrigtsen
From: Daniel Barlow
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <87r86ircz5.fsf@noetbook.telent.net>
···@jpl.nasa.gov (Erann Gat) writes:

> extensions.  The result is that there is in Lisp no common way of doing
> something as simple and undeniably useful as, say, opening a socket. 

Yeah.  This problem has definitely held C back.


-dan

-- 

   http://www.cliki.net/ - Link farm for free CL-on-Unix resources 
From: Florian Weimer
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <87d6i27vlj.fsf@deneb.enyo.de>
Lars Magne Ingebrigtsen <·····@gnus.org> writes:

> No, that's exactly the point.  If it had been a feature that people
> found useful, the reaction probably would have been different.

I hate to disappoint you, but features are sometimes rejected for
political reasons. 8-(

(For example, think of the C backend for GCC.)
From: Wade Humeniuk
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <t58Ba.2296$vi4.918726@news0.telusplanet.net>
"Erann Gat" <···@jpl.nasa.gov> wrote in message
·························@k-137-79-50-101.jpl.nasa.gov...
>
> So you see, despite the fact that I did as much of what you suggest as I
> could, my proposal was still met (at least initially) not with any
> constructive criticism but rather with hostility towards the mere fact
> that I was proposing a change.  Now, perhaps my proposal wasn't very good,
> but that's beside the point.  As a *process*, meeting new proposals, even
> bad ones, with hostility discourages people from making proposals, even
> good ones.  The result is (let's put a positive spin on it) more
> stability.  Stability can be a good thing, but only up to a point.  Taken
> to an extreme it begins to be harmful.

New ideas take strong advocates.  It's the nature of a democracy and free
choice.  This is part of the system.  To be peer reviewed and accepted, it
HAS to be challenged, by people and by use in real life.  What you are calling
hostility is just hard questioning.  It's you getting up, putting your thesis to the
review panel and them questioning you DEEPLY about it.  If you cave
at the first tough question, what does that show about the idea and your
work?

Your musings are all academic; come up with a better proposal and we
shall see what happens.  The world and the CL community are not
against you or change, they are against revisiting the invention of the
wheel.  If you wish to propose a change, they are asking for a NEW
change.

Wade
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-2805031338020001@k-137-79-50-101.jpl.nasa.gov>
In article <·····················@news0.telusplanet.net>, "Wade Humeniuk"
<····@nospam.nowhere> wrote:

> "Erann Gat" <···@jpl.nasa.gov> wrote in message
> ·························@k-137-79-50-101.jpl.nasa.gov...
> >
> > So you see, despite the fact that I did as much of what you suggest as I
> > could, my proposal was still met (at least initially) not with any
> > constructive criticism but rather with hostility towards the mere fact
> > that I was proposing a change.  Now, perhaps my proposal wasn't very good,
> > but that's beside the point.  As a *process*, meeting new proposals, even
> > bad ones, with hostility discourages people from making proposals, even
> > good ones.  The result is (let's put a positive spin on it) more
> > stability.  Stability can be a good thing, but only up to a point.  Taken
> > to an extreme it begins to be harmful.
> 
> New ideas take strong advocates.  It's the nature of a democracy and free
> choice.  This is part of the system.  To be peer reviewed and accepted, it
> HAS to be challenged, by people and by use in real life.  What you are calling
> hostility is just hard questioning.

I just went back and re-read the original thread where I first made my
"modest proposal" and I see I got my history wrong.  In the original
discussion the feedback was mainly along the lines of:  1) this is not a
very serious problem and therefore does not warrant a change in the
language and 2) someone should write "the complete idiot's guide to
special variables."

(The original discussion was actually surprisingly civilized and constructive.)

Since then, the discussion has been rekindled periodically when a confused
newbie asks about something having to do with special variables, and I
take the opportunity to point out that this is evidence that perhaps there
is a problem here after all and that this issue perhaps ought to be
revisited.

During the most recent rekindling, Kent Pitman insisted rather
vociferously (but with no actual support) that there was not a problem,
and that Lisp needed to "protect itself from people like [me]" and went on
at great length about the evils that change can bring.  That's where I
thought things went a little out of control.

> It's you getting up, putting your thesis to the
> review panel and them questioning you DEEPLY about it.  If you cave
> at the first tough question, what does that show about the idea and your
> work?

I've got no problem with that.

> Your musings are all academic; come up with a better proposal and we
> shall see what happens.  The world and the CL community are not
> against you or change, they are against revisiting the invention of the
> wheel.  If you wish to propose a change, they are asking for a NEW
> change.

Very well:

Along the lines of the topic of this thread, I have found that it is
actually not very hard to make Common Lisp act like a Lisp1 for those who
like that sort of thing.  There is only one thing that can't be done
within the confines of the current standard, and that is correctly handle
forms of the sort:

  ((f ...) ...)

This is defined in the standard to be an error.

I propose to change this so that instead of being an error such forms
instead get rewritten to:

  (funcall (f ...) ...)

or

  (some-user-redefinable-macro-that-signals-an-error-by-default (f ...) ...)

This would be easy to implement and 100% backwards-compatible with the
existing standard (though it might break some code walkers).
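A toy illustration of the proposed rewrite (hedged: portable CL cannot
change the top-level evaluator, so this sketch confines the rewrite to
a macro body; WITH-LISP1 and its naive walker are illustrative only,
not part of the proposal):

```lisp
;; Toy sketch of the ((f ...) ...) => (funcall (f ...) ...) rewrite,
;; limited to forms inside the macro body.  The walker is naive: it
;; does not treat special forms, quoting, or bindings correctly.
(defmacro with-lisp1 (&body body)
  (labels ((walk (form)
             (cond ((not (consp form)) form)
                   ((and (consp (first form))
                         (not (eq (first (first form)) 'lambda)))
                    ;; The case the standard leaves as an error.
                    `(funcall ,(walk (first form))
                              ,@(mapcar #'walk (rest form))))
                   (t (mapcar #'walk form)))))
    `(progn ,@(mapcar #'walk body))))

;; Usage: COMPOSE returns a function, which is then called directly.
(defun compose (f g) (lambda (x) (funcall f (funcall g x))))
(with-lisp1 ((compose #'1+ #'1+) 3))  ; => 5
```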

E.
From: Kent M Pitman
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <sfwznl67oiu.fsf@shell01.TheWorld.com>
···@jpl.nasa.gov (Erann Gat) writes:

> Along the lines of the topic of this thread, I have found that it is
> actually not very hard to make Common Lisp act like a Lisp1 for those who
> like that sort of thing.  There is only one thing that can't be done
> within the confines of the current standard, and that is correctly handle
> forms of the sort:
> 
>   ((f ...) ...)
> 
> This is defined in the standard to be an error.
> 
> I propose to change this so that instead of being an error such forms
> instead get rewritten to:
> 
>   (funcall (f ...) ...)
> 
> or
> 
>   (some-user-redefinable-macro-that-signals-an-error-by-default (f ...) ...)
> 
> This would be easy to implement and 100% backwards-compatible with the
> existing standard (though it might break some code walkers).

But it's not the only possible interpretation of ((xxx) ...).

 - There is still value in catching errors, for example.
   I suspect it's common for this case to catch typos.

 - Some people have suggested that all function names should be permissible
   in this position, hence
    ((setf car) 3 some-list)
   should work.  This can't be done compatibly with your proposal.

 - Your proposal doesn't address ((lambda ...) ...).

 - Your proposal precludes "lambda macros" like the LispM offers.  That is,
   the Lisp Machine has a facility for defining ((xxx) ...) separately 
   from (xxx ...), such that compatibility with other dialects like Interlisp
   can be conveniently enabled, and allowing things like
    ((nlambda (a b c) (list a b c)) x y z) => (X Y Z)
   [That is, NLAMBDA does not evaluate its arguments in Interlisp.]
   However, the Lisp Machine's facility for "lambda macros" would allow
   a compatible definition of Scheme's DEFINE to make each Scheme-defined
   name be also defined as a lambda macro that rewrites itself as a 
   regular form, so it is more powerful than the solution you propose.

   (I'm pretty sure I proposed lambda macros during the ANSI process and
   they were rejected as "excess baggage".  Then again, nearly everything
   that I proposed in an attempt to bring the CL and Scheme communities
   together got this same response...)

 - I have a partial design for a compatibility layer that unifies CL and
   Scheme which is not completed.  You've seen it, most people haven't.
   The proposal you are making would make that layer much more cumbersome
   because it does the same thing a different way.

But most importantly:

 - Your proposal does not address a computational need.  It is not carefully
   described in terms of a problem to be solved other than "Problem: CL doesn't
   let me do X.  Solution: Let me do X."  This is not a well-formed proposal.

 - Further, your proposal does not analyze alternatives that you might be 
   precluding by an ad hoc stopgap solution such as the one you propose.
   Before making any change that would preclude alternatives, I strongly
   recommend the vendors enter into SERIOUS discussion with the community.

It's not that I think that vendors should never make changes like this, but
I do think they should hold the bar pretty high, and should listen to all
comers.

100% backward compatibility is not the only criterion for something being
a no-brainer.  You also want 100% certainty that you are not cutting off
better solutions.  There are a number of things you could propose which 
are 100% backward compatible changes that are commonly asked for that I
would oppose absent SERIOUS other discussion merely because it precludes
other people from doing other, unrelated, backward-compatible things.
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-2805031533410001@k-137-79-50-101.jpl.nasa.gov>
In article <···············@shell01.TheWorld.com>, Kent M Pitman
<······@world.std.com> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > Along the lines of the topic of this thread, I have found that it is
> > actually not very hard to make Common Lisp act like a Lisp1 for those who
> > like that sort of thing.  There is only one thing that can't be done
> > within the confines of the current standard, and that is correctly handle
> > forms of the sort:
> > 
> >   ((f ...) ...)
> > 
> > This is defined in the standard to be an error.
> > 
> > I propose to change this so that instead of being an error such forms
> > instead get rewritten to:
> > 
> >   (funcall (f ...) ...)
> > 
> > or
> > 
> >   (some-user-redefinable-macro-that-signals-an-error-by-default (f ...) ...)
> > 
> > This would be easy to implement and 100% backwards-compatible with the
> > existing standard (though it might break some code walkers).
> 
> But it's not the only possible interpretation of ((xxx) ...).

The second alternative is Turing complete and so covers all possible
interpretations of ((xxx) ...) with a suitably defined macro.  The
particular behavior of the macro could be further standardized.

>  - There is still value in catching errors, for example.
>    I suspect it's common for this case to catch editos.

That is why the default behavior of the macro signals an error. 
Redefining this macro would not be something that a newbie would be well
advised to do.
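The redefinable-hook variant can be sketched in a few lines
(ILLEGAL-CAR-FORM is an invented, illustrative name; the actual
proposal names no such macro):

```lisp
;; Default behavior: signal an error at macroexpansion time, so the
;; typo-catching value of the current rule is preserved.
;; ILLEGAL-CAR-FORM is an illustrative, invented name.
(defmacro illegal-car-form (head &rest args)
  (declare (ignore args))
  (error "~S is not a legal function-call head." head))

;; A Lisp-1 fan could later redefine it to trampoline through FUNCALL:
;; (defmacro illegal-car-form (head &rest args)
;;   `(funcall ,head ,@args))
```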

>  - Some people have suggested that all function names should be permissible
>    in this position, hence
>     ((setf car) 3 some-list)
>    should work.  This can't be done compatibly with your proposal.

Yes it can.

>  - Your proposal doesn't address ((lambda ...) ...).

I didn't explicitly say so, but I meant that this expansion should happen
only for forms which would otherwise be errors, which is to say, not
((lambda ...) ...) forms.

>  - Your proposal precludes "lambda macros" like the LispM offers.

No it doesn't.

>  - I have a partial design for a compatibility layer that unifies CL and
>    Scheme which is not completed.  You've seen it,

I have?  You'll have to refresh my memory.


> But most importantly:
> 
>  - Your proposal does not address a computational need.  It is not carefully
>    described in terms of a problem to be solved other than "Problem: CL doesn't
>    let me do X.  Solution: Let me do X."  This is not a well-formed proposal.

Problem: the user's inability to define the behavior of ((non-lambda
...) ...) to be anything other than an error makes it impossible to change
Common Lisp's behavior to be compatible with other Lisp dialects in
circumstances under which this might be desirable.

>  - Further, your proposal does not analyze alternatives that you might be 
>    precluding by an ad hoc stopgap solution such as the one you propose.

True, but I am not aware of any alternative proposals.  I don't even know
where I would go to look for them.  This is part of the problem that I am
trying to highlight.

> 100% backward compatibility is not the only criterion for something being
> a no-brainer.  You also want 100% certainty that you are not cutting off
> better solutions.

Good point.  That's why I proposed the user-redefinable-macro solution in
addition to the funcall solution.  I am pretty sure that solution subsumes
all other possibilities.

> There are a number of things you could propose which 
> are 100% backward compatible changes that are commonly asked for that I
> would oppose absent SERIOUS other discussion merely because it precludes
> other people from doing other, unrelated, backward-compatible things.

That's fine.  But how, where, and when is this discussion going to take place?

E.
From: Kent M Pitman
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <sfw4r3e3aq2.fsf@shell01.TheWorld.com>
···@jpl.nasa.gov (Erann Gat) writes:

> > > This would be easy to implement and 100% backwards-compatible with the
> > > existing standard (though it might break some code walkers).
> > 
> > But it's not the only possible interpretation of ((xxx) ...).
> 
> The second alternative is Turing complete and so covers all possible
> interpretations of ((xxx) ...) with a suitably defined macro.  The
> particular behavior of the macro could be further standardized.

Actually, I acknowledged this issue in a later bullet item.  I understand
what you mean about Turing complete, however, that isn't necessarily as
compelling an argument as you might think.

Another story from Kent's mental archive of standards process weirdness:

In trying to get X3J13 to accept "compiler macros", I ran up against
people saying "but we already have compiler optimizers that would be
broken by this".  I said "You have something that allows me to get ahold
of a form and rewrite it arbitrarily?" and they said "Yes".  And I said
"Why can't I write one that just calls COMPILER-MACROEXPAND then?" and
it was not a simple "Oh, ok" from there.  People kept saying things like
"But then there would be two ways to do this" (as if this were a bad thing)
and other kinds of weird arguments.

I guess the thing I'm getting at is that once you install a way of doing
something, you're increasing the difficulty of installing yet another
facility for doing what is apparently the same thing.  And I guess I'm
saying the trampoline idea is ugly to me, and the idea of layering a more
elegant solution atop it leaves me feeling not so good.

But I wish we were not discussing this in this forum, which I find to be
an inappropriate one for a number of reasons.
From: Don Geddis
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <87r86hlj5w.fsf@sidious.geddis.org>
Kent M Pitman <······@world.std.com> writes:
> But I wish we were not discussing this in this forum, which I find to be
> an inappropriate one for a number of reasons.

Care to elaborate on those reasons?  On the surface, it would seem like a
discussion on the future of the Lisp language is perfectly appropriate to
the group.  I'd be curious what issues you see with using this forum for it.

_______________________________________________________________________________
Don Geddis                    http://don.geddis.org              ···@geddis.org
From: Kent M Pitman
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <sfwn0h5sbxz.fsf@shell01.TheWorld.com>
Don Geddis <···@geddis.org> writes:

> Kent M Pitman <······@world.std.com> writes:
>
> > But I wish we were not discussing this in this forum, which I find to be
> > an inappropriate one for a number of reasons.
> 
> Care to elaborate on those reasons?  On the surface, it would seem like a
> discussion on the future of the Lisp language is perfectly appropriate to
> the group.  I'd be curious what issues you see with using this forum for it.

I wasn't saying that the discussion was out of order per se.  Just that 
a newsgroup is a poor substitute for an organized "design forum".

What design forum, you ask?  Yeah, yeah, I'm working on that...

I did say "I wish" for a reason.
From: Nils Goesche
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <87d6i15v92.fsf@darkstar.cartan>
Don Geddis <···@geddis.org> writes:

> Kent M Pitman <······@world.std.com> writes:

> > But I wish we were not discussing this in this forum, which I
> > find to be an inappropriate one for a number of reasons.

> Care to elaborate on those reasons?  On the surface, it would
> seem like a discussion on the future of the Lisp language is
> perfectly appropriate to the group.  I'd be curious what issues
> you see with using this forum for it.

There are the newsgroups comp.lang.c and comp.std.c, for
instance.  Maybe something like comp.std.lisp would be more
appropriate.

Regards,
-- 
Nils Gösche
Ask not for whom the <CONTROL-G> tolls.

PGP key ID #xD26EF2A0
From: Ivan Boldyrev
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <fhtlqxueh.ln2@elaleph.borges.cgitftp.uiggm.nsc.ru>
Nils Goesche <···@cartan.de> writes:

> Don Geddis <···@geddis.org> writes:
>
>> Kent M Pitman <······@world.std.com> writes:
>
>> > But I wish we were not discussing this in this forum, which I
>> > find to be an inappropriate one for a number of reasons.
>
>> Care to elaborate on those reasons?  On the surface, it would
>> seem like a discussion on the future of the Lisp language is
>> perfectly appropriate to the group.  I'd be curious what issues
>> you see with using this forum for it.
>
> There are the newsgroups comp.lang.c and comp.std.c, for
> instance.  Maybe something like comp.std.lisp would be more
> appropriate.

There is a comp.std.lisp, and it would be a good idea to move some
threads there.

-- 
Ivan Boldyrev

                       Perl is a language where 2 x 2 is not equal to 4.
From: Mario S. Mommer
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <fz7k8alkew.fsf@cupid.igpm.rwth-aachen.de>
···@jpl.nasa.gov (Erann Gat) writes:
> During the most recent rekindling, Kent Pitman insisted rather
> vociferously (but with no actual support) that there was not a problem,
> and that Lisp needed to "protect itself from people like [me]" and went on
> at great length about the evils that change can bring.  That's where I
> thought things went a little out of control.

Well, then let me seize the opportunity to voice my support for the
opinions he inflicted on you so ferociously on that occasion. It was a
great article, so here's the link, in case anyone missed it:

http://groups.google.com/groups?selm=sfwznlyx3y9.fsf%40shell01.TheWorld.com

and just in case, a link to a different post, with an "interesting"
interpretation of what happened:

http://groups.google.com/groups?selm=gat-2705031116200001%40k-137-79-50-101.jpl.nasa.gov

(No ill will?)
From: Wade Humeniuk
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <PUeBa.4030$vi4.1175758@news0.telusplanet.net>
"Erann Gat" <···@jpl.nasa.gov> wrote in message
·························@k-137-79-50-101.jpl.nasa.gov...
> Very well:
>
> Along the lines of the topic of this thread, I have found that it is
> actually not very hard to make Common Lisp act like a Lisp1 for those who
> like that sort of thing.  There is only one thing that can't be done
> within the confines of the current standard, and that is correctly handle
> forms of the sort:
>
>   ((f ...) ...)

1) Why do you think the standard's way of handling it is wrong?  Is the
standard currently technically in error?  What is the nature of the mistake?

1a) Does Lisp need to support all possible syntactic forms?

2) Why do you think the correct way is to expand to (funcall (f ..) ..)?

3) Do you have code (a macro, say your own defun) that implements this
expansion before compilation?  Or is this done with a change to the
standard readtable?  Are there any issues with apps which shadow
funcall?

4) Does this increase the functionality of the language?

5) What impact would this change have on the rest of the standard?

6) What impact would this change have on the vendors?

7) What impact would this change have on existing code legacies,
for instance, code which currently implements this type of extension
internally?


>
> This is defined in the standard to be an error.
>

8) Which section is it in?  If the standard explicitly mentions this as an error,
then why?

9) What is the estimated cost, in dollars, to the CL community?  What is
the estimated dollar benefit of adopting the change?

10) Who currently uses this extension?  Are there coding examples of
its use?


> I propose to change this so that instead of being an error such forms
> instead get rewritten to:
>
>   (funcall (f ...) ...)
>
> or
>
>   (some-user-redefinable-macro-that-signals-an-error-by-default (f ...) ...)
>
> This would be easy to implement and 100% backwards-compatible with the
> existing standard (though it might break some code walkers).
>
> E.
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-2805032331220001@192.168.1.51>
In article <······················@news0.telusplanet.net>, "Wade Humeniuk"
<····@nospam.nowhere> wrote:

> "Erann Gat" <···@jpl.nasa.gov> wrote in message
> ·························@k-137-79-50-101.jpl.nasa.gov...
> > Very well:
> >
> > Along the lines of the topic of this thread, I have found that it is
> > actually not very hard to make Common Lisp act like a Lisp1 for those who
> > like that sort of thing.  There is only one thing that can't be done
> > within the confines of the current standard, and that is correctly handle
> > forms of the sort:
> >
> >   ((f ...) ...)
> 
> 1) Why do you think the standards way of handling it is wrong?

I do not mean wrong in an absolute sense, I mean wrong with respect to the
goal of making CL behave more like a Lisp1 on the assumption that this is
what one wants.  The standard makes it impossible to handle ((f ...) ...)
"correctly" on this definition beause the standard defines this to be an
error.  (See below.)

> 1a) Does Lisp need to support all possible syntatic forms?

No, but this is a case where functionality is strictly monotonically
increased.  Surely there are no legacy applications that depend on ((f ...)
...) signalling an error.  So the usual argument that this change entails
an enormous cost does not apply.  (BTW, it took me about half an hour to
implement this in OpenMCL.  Someone who actually knew what they were doing
could probably have done it even faster.)

> 2) Why do you think the correct way is to expand to (funcall (f ..) ..)?

Because the extension was proposed in the context of supporting a
particular style of functional programming, and funcall is the correct
expansion for that application.  However, I am not dogmatic about this. 
That's why I proposed the macro alternative.
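For concreteness, the proposed rewrite can be approximated at the source
level.  This is only a sketch: REWRITE-HEADS is a made-up helper, not part
of any implementation, and a naive walker like this would also descend into
quoted data and binding forms, which is why a real version needs a proper
code walker (or the in-compiler change):

```lisp
;; Sketch only: REWRITE-HEADS is a hypothetical helper.  It wraps forms
;; whose car is itself a compound form in FUNCALL.
(defun rewrite-heads (form)
  (if (atom form)
      form
      (let ((head (first form))
            (args (mapcar #'rewrite-heads (rest form))))
        (cond ((and (consp head) (eq (first head) 'lambda))
               (cons head args))          ; a lambda car is already legal CL
              ((consp head)               ; ((f ...) ...) => (funcall (f ...) ...)
               (list* 'funcall (rewrite-heads head) args))
              (t (cons head args))))))

;; (rewrite-heads '((foo) x))  =>  (FUNCALL (FOO) X)
```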

> 3) Do you have code (a macro, say your own defun) that implements this
> expansion before compilation?  Or is this done with a change to the
> standard readtable?  Are there any issues with apps which shadow
> funcall?

I have implemented this in MCL, but it required hacking the compiler.  The
only other way to do it is with a code walker.

> 4) Does this increase the functionality of the language?

Not in a theoretical sense (obviously since CL is Turing-complete), but it
does provide better support for certain styles of coding.

> 5) What impact would this change have on the rest of the standard?

None as far as I know, but I confess I have not thought it through.  (In
particular, I have not thought through the issues of what sorts of
additional later extensions this would preclude.)

> 6) What impact would this change have on the vendors?

Like I said, it took me half an hour to implement in MCL.  It was a
trivial change.  There is no reason to believe it would be significantly
harder in any other Lisp.

> 7) What impact would this change have on existing code legacies,
> for instance, code which currently implements this type of extension
> internally?

The standard currently requires ((f ...) ...) to be an error, so I can't
imagine any legacy code relying on this behavior.

> 8) Which section is it in?  If the standard explicitly mentions this as
an error,
> then why?

Section 3.1.2.1.2:

 "If the car of the compound form is not a symbol, then that car must be a
lambda expression..."

I don't know what the rationale was.  The standard doesn't say.  Kent
probably knows.

> 9) What is the estimated cost in dollars to the CL community in terms of
> dollars?  What is the estimated dollar benefit in adopting the change?

Cost is probably negligible (or at least as small as any change could
possibly be).  Benefit is that some users who would otherwise choose
Scheme might now choose Common Lisp.  Actual dollar benefit is impossible
to estimate, but it would probably take only a handful of converts to
recover the actual costs.

> 10) Who currently uses this extension?  Are there coding examples of
> its use?

There are no examples of its use in CL because it doesn't exist (except in
my copy of MCL).  It's an idiom that is widely used in Scheme.

E.
From: Wade Humeniuk
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <PKiBa.4125$vi4.1326838@news0.telusplanet.net>
"Erann Gat" <···@jpl.nasa.gov> wrote in message ·························@192.168.1.51...
> > 1) Why do you think the standards way of handling it is wrong?
> 
> I do not mean wrong in an absolute sense, I mean wrong with respect to the
> goal of making CL behave more like a Lisp1 on the assumption that this is
> what one wants.  The standard makes it impossible to handle ((f ...) ...)
> "correctly" on this definition beause the standard defines this to be an
> error.  (See below.)
> 
> > 1a) Does Lisp need to support all possible syntatic forms?
> 
> No, but this is a case where functionality is strictly monotonically
> increased.  Surely there are no legacy application that depend on ((f ...)
> ...) signalling an error.  So the usual argument that this change entails
> an enormous cost does not apply.  (BTW, it took me about half an hour to
> implement this in OpenMCL.  Someone who actually knew what they were doing
> could probably have done it even faster.)
> 
> > 2) Why do you think the correct way is to expand to (funcall (f ..) ..)?
> 
> Because the extension was proposed in the context of supporting a
> particular style of functional programming, and funcall is the correct
> expansion for that application.  However, I am not dogmatic about this. 
> That's why I proposed the macro alternative.
> 
> > 3) Do you have code (a macro, say your own defun) that implements this
> > expansion before compilation?  Or is this done with a change to the
> > standard readtable?  Are there any issues with apps which shadow
> > funcall?
> 
> I have implemented this in MCL, but it required hacking the compiler.  The
> only other way to do it is with a code walker.
> 

How was this change actually coded (code please)?  Is there a general
expansion/parser hook in the compiler to have users add additional functionality?
Or, as an alternative, should CL add a standard way to extend the compiler?

What changes, if any are needed to the implementation's debugger? How
is this new form displayed in the debugger, in its original form or some
expanded version?

> 
> > 10) Who currently uses this extension?  Are there coding examples of
> > its use?
> 
> There are no examples of its use in CL because it doesn't exist (except in
> my copy of MCL).  It's an idiom that is widely used in Scheme.

Please provide Scheme Coding examples of this idiom of programming.

Wade
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-2905030836430001@192.168.1.51>
In article <······················@news0.telusplanet.net>, "Wade Humeniuk"
<····@nospam.nowhere> wrote:

> > > 3) Do you have code (a macro, say your own defun) that implements this
> > > expansion before compilation?  Or is this done with a change to the
> > > standard readtable?  Are there any issues with apps which shadow
> > > funcall?
> > 
> > I have implemented this in MCL, but it required hacking the compiler.  The
> > only other way to do it is with a code walker.
> > 
> 
> How was this change actually coded (code please)?

Here's how I did it:

(defun nx1-combination (form env)
  (destructuring-bind (sym &rest args)
                      form
    (if (symbolp sym)
      (let* ((*nx-sfname* sym) special)
        (if (and (setq special (gethash sym *nx1-alphatizers*))
                 ;(not (nx-lexical-finfo sym env))
                 (not (memq sym *nx1-target-inhibit*))
                 (not (nx-declared-notinline-p sym *nx-lexical-environment*)))
          (funcall special form env) ; pass environment arg ...
          (progn
            (when (memq sym *nx1-target-inhibit*)
              (warn "Wrong platform for call to ~s in ~s ." sym form))
            (nx1-typed-call sym args))))
      (if (lambda-expression-p sym)
        (nx1-lambda-bind (%cadr sym) args (%cddr sym))
        (nx1-combination (cons 'funcall form) env)))))
                         ^^^^^^^^^^^^^^^^^^^^

The underlined part is the only change from the original.

Here it is in action:

? (defun foo () 'car)
FOO
? (defun baz (x) ((foo) x))
BAZ
? (baz '(1 2 3))
1

This is not a complete solution.  MCL handles simple top-level forms
differently, so ((foo) x) at the top-level still doesn't work.  Also, the
stepper needs to be modified.  But as a data point to prove that this
isn't hard I believe this is adequate.

>  Is there a general
> expansion/parser hook in the compiler to have users add additional
> functionality?

Not that I know of.

> Is this an alternative, have CL add a standard way to extend the compiler?

There is already a CL-standard way to extend the compiler.  They are
called macros, but they do not cover this case.  One way to look at this
proposal is as a proposal to extend this facility to define an "anonymous
macro" that gets invoked on forms that are currently illegal in CL.
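
To illustrate the shape such an "anonymous macro" facility might take,
here is a sketch.  *ANONYMOUS-MACRO* and the compiler hook that would call
it are entirely hypothetical, invented here purely for illustration:

```lisp
;; Hypothetical: nothing here is standard CL or any vendor's API.
;; An (unshown) compiler hook would call this on any compound form
;; whose car is neither a symbol nor a lambda expression.
(defvar *anonymous-macro*
  (lambda (form)
    (error "~S is not a symbol or lambda expression in ~S."
           (first form) form)))

;; A Lisp-1 fan could then opt in to the funcall expansion:
(setf *anonymous-macro*
      (lambda (form) (cons 'funcall form)))

;; so ((f ...) ...) would be rewritten as (funcall (f ...) ...):
;; (funcall *anonymous-macro* '((foo) x))  =>  (FUNCALL (FOO) X)
```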

> What changes, if any are needed to the implementation's debugger?

None.  Since this is a simple code rewrite the debugger can't tell that
the code was originally written without the funcall.

> How
> is this new form displayed in the debugger, in its original form or some
> expanded version?

It's displayed as if one had written it in the expanded form.  Whether the
debugger actually displays the expanded form obviouslty depends on whether
or not the compiler optimizes it away.

> Please provide Scheme Coding examples of this idiom of programming.

Here's a self-evaluating meta-circular evaluator that uses this idiom heavily:

(define code-jas-eval
  '(lambda (ev)
     (lambda (e)
       (lambda (r)
         (cond
          ((symbol? e) (r e))
          ((not (pair? e)) e)
          (t (cond
              (((ceq? (first e)) 'quote)  (first (rest e)))
              (((ceq? (first e)) 'cond)
               (cond
                ((((ev ev) (first (second e))) r) (((ev ev) (second
(second e))) r))
                (t (((ev ev) ((ccons 'cond) (rest (rest e)))) r))))
              (((ceq? (first e)) 'lambda)
               (lambda (x)
                 (((ev ev) (third e))
                  (lambda (n)
                    (cond
                     (((ceq? (first (second e))) n)   x)
                     (t (r n)))))))
              (t ((((ev ev) (first e)) r) (((ev ev) (second e)) r))))))))))

Here's the CL equivalent.  As you can see, it's rather laden with funcalls:

(defun jas-eval (ev)
  (lambda (e)
    (lambda (r)
      (cond
       ((symbol? e) (funcall r e))
       ((not (pair? e))  e)
       ((pair? e)
        (cond
         ((funcall (ceq? (first e)) 'quote)  (first (rest e)))
         ((funcall (ceq? (first e)) 'cond)
          (cond
           ((funcall (funcall (funcall ev ev) (first (second e))) r)
            (funcall (funcall (funcall ev ev) (second (second e))) r))
           (t (funcall (funcall (funcall ev ev) (funcall (ccons 'cond)
(rest (rest e)))) r))))
         ((funcall (ceq? (first e)) 'lambda)
          (lambda (x)
            (funcall
             (funcall (funcall ev ev) (third e))
             (lambda (n)
               (cond
                ((funcall (ceq? (first (second e))) n)   x)
                (t (funcall r n)))))))
         (t (funcall (funcall (funcall (funcall ev ev) (first e)) r)
                     (funcall (funcall (funcall ev ev) (second e)) r)))))))))

E.
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-2905031036570001@k-137-79-50-101.jpl.nasa.gov>
In article <····················@192.168.1.51>, ···@jpl.nasa.gov (Erann
Gat) wrote:

>         (nx1-combination (cons 'funcall form) env)))))
>                          ^^^^^^^^^^^^^^^^^^^^
> 
> The underlined part is the only change from the original.

Correction: that whole line is changed from the original, which was (of course):

          (nx-error "~S is not a symbol or lambda expression in the form ~S ."
                    sym form)))))

E.
From: Wade Humeniuk
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <ofuBa.191$sq2.123138@news1.telusplanet.net>
"Erann Gat" <···@jpl.nasa.gov> wrote in message ·························@192.168.1.51...
> In article <······················@news0.telusplanet.net>, "Wade Humeniuk"
> <····@nospam.nowhere> wrote:
> 
> > > > 3) Do you have code (a macro, say your own defun) that implements this
> > > > expansion before compilation?  Or is this done with a change to the
> > > > standard readtable?  Are there any issues with apps which shadow
> > > > funcall?
> > > 
> > > I have implemented this in MCL, but it required hacking the compiler.  The
> > > only other way to do it is with a code walker.
> > > 
> > 
> > How was this change actually coded (code please)?
> 
> Here's how I did it:
> 
> (defun nx1-combination (form env)
>   (destructuring-bind (sym &rest args)
>                       form
>     (if (symbolp sym)
>       (let* ((*nx-sfname* sym) special)
>         (if (and (setq special (gethash sym *nx1-alphatizers*))
>                  ;(not (nx-lexical-finfo sym env))
>                  (not (memq sym *nx1-target-inhibit*))
>                  (not (nx-declared-notinline-p sym *nx-lexical-environment*)))
>           (funcall special form env) ; pass environment arg ...
>           (progn
>             (when (memq sym *nx1-target-inhibit*)
>               (warn "Wrong platform for call to ~s in ~s ." sym form))
>             (nx1-typed-call sym args))))
>       (if (lambda-expression-p sym)
>         (nx1-lambda-bind (%cadr sym) args (%cddr sym))
>         (nx1-combination (cons 'funcall form) env)))))
>                          ^^^^^^^^^^^^^^^^^^^^
> 

Your code seems in error.  Should it be:

(defun nx1-combination (form env)
  (destructuring-bind (sym &rest args)
                      form
    (cond
     ((symbolp sym)
      (let* ((*nx-sfname* sym) special)
        (if (and (setq special (gethash sym *nx1-alphatizers*))
                 ;(not (nx-lexical-finfo sym env))
                  (not (memq sym *nx1-target-inhibit*))
                  (not (nx-declared-notinline-p sym *nx-lexical-environment*)))
             (funcall special form env) ; pass environment arg ...
           (progn
             (when (memq sym *nx1-target-inhibit*)
               (warn "Wrong platform for call to ~s in ~s ." sym form))
             (nx1-typed-call sym args)))))
     ((lambda-expression-p sym)
      (nx1-lambda-bind (%cadr sym) args (%cddr sym)))
     ((listp sym)
      (nx1-combination (cons 'funcall form) env))
     (t
      (nx-error "~S is not a symbol or expression in the form ~S ."
                    sym form)))))

In nx1-combination, can sym be a special, macro, or lambda form?  Or are
all forms macroexpanded before this call?


Wade
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-2905031447180001@k-137-79-50-101.jpl.nasa.gov>
In article <····················@news1.telusplanet.net>, "Wade Humeniuk"
<····@nospam.nowhere> wrote:

> Your code seems in error.

No, it isn't.

>  Should it be?

No, it shouldn't.  Perhaps you should go take a look at the OpenMCL
sources instead of making me do all your legwork for you.

E.
From: Wade Humeniuk
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <7PvBa.3979$Si2.134486@news2.telusplanet.net>
"Erann Gat" <···@jpl.nasa.gov> wrote in message
·························@k-137-79-50-101.jpl.nasa.gov...
> In article <····················@news1.telusplanet.net>, "Wade Humeniuk"
> <····@nospam.nowhere> wrote:
>
> > Your code seems in error.
>
> No, it isn't.
>
> >  Should it be?
>
> No, it shouldn't.  Perhaps you should go take a look at the OpenMCL
> sources instead of making me do all your legwork for you.

It is your proposal, why should I do that?

Wade
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-2905031807000001@k-137-79-50-101.jpl.nasa.gov>
In article <·····················@news2.telusplanet.net>, "Wade Humeniuk"
<····@nospam.nowhere> wrote:

> "Erann Gat" <···@jpl.nasa.gov> wrote in message
> ·························@k-137-79-50-101.jpl.nasa.gov...
> > In article <····················@news1.telusplanet.net>, "Wade Humeniuk"
> > <····@nospam.nowhere> wrote:
> >
> > > Your code seems in error.
> >
> > No, it isn't.
> >
> > >  Should it be?
> >
> > No, it shouldn't.  Perhaps you should go take a look at the OpenMCL
> > sources instead of making me do all your legwork for you.
> 
> It is your proposal, why should I do that?

Because you might want more information.

Because providing complete responses to speculations born of ignorance
cannot be a legitimate part of a review process.  If it were, anyone who
wished to obstruct the process could drag it on forever with random
questions.

Because your question is a non-sequitur.  The point of this implementation
was merely to show that it's easy to do, so whether or not the code has a
bug is irrelevant unless that bug is hard to fix, which it clearly
isn't.

Notwithstanding, you do (obliquely) raise a legitimate (if minor) point,
which is that my implementation is not quite consistent with the
description I gave.  I accidentally captured not only forms of the form
((f ...) ...) but all forms whose car was not a symbol or lambda
expression.  The only actual difference in behavior is trying to call
non-callable objects now generates a runtime error rather than a
compile-time error, and trying to call function objects now works as one
might expect whereas before it would be an error.  It's not at all clear
to me whether this is a bug or a serendipitous feature.

If this proposal were actually close to being adopted as a change in the
standard then these sorts of details would matter, but at this preliminary
stage in the discussion this seems like a pretty minor issue to me.

E.
From: Wade Humeniuk
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <LWzBa.6093$Si2.201386@news2.telusplanet.net>
"Erann Gat" <···@jpl.nasa.gov> wrote in message
·························@k-137-79-50-101.jpl.nasa.gov...
> > It is your proposal, why should I do that?
>
> Because you might want more information.
>
> Because providing complete responses to speculations born of ignorance
> cannot be a legitimate part of a review process.  If it were, anyone who
> wished to obstruct the process could drag it on forever with random
> questions.
>

Well, to this point, first:

1) How do you know that I am ignorant?
2) In a real process, with a more complex issue, there will be tons of ignorance,
out of necessity.  I would suggest that people get ready for that.
3) Yes, your point is valid, and procedures have to be in place to handle
obstruction, BUT, what are those???  Kent's postings of the last while
talk about the politics of this.  It's just you and I right now; think of what will
happen when more are involved over a contentious issue.  Frustrated enough
yet?

I can keep on questioning you about your proposal until the cows come
home.  I have no wish to, but in part I have been playing the Devil's advocate.

> Because your question is a non-sequitur.  The point of this implementation
> was merely to show that it's easy to do, so whether or not the code has a
> bug is a irrelevant unless that bug is hard to fix, which it clearly
> isn't.
>

It shows that there is no working implementation, no proving ground, no
compliance testing, and thus the proposal is a non sequitur.

> Notwithstanding, you do (obliquely) raise a legitimate (if minor) point,
> which is that my implementation is not quite consistent with the
> description I gave.  I accidentally captured not only forms of the form
> ((f ...) ...) but all forms whose car was not a symbol or lambda
> expression.  The only actual difference in behavior is trying to call
> non-callable objects now generates a runtime error rather than a
> compile-time error, and trying to call function objects now works as one
> might expect whereas before it would be an error.  It's not at all clear
> to me whether this is a bug or a serendipitous feature.
>

The error is in your original statement of the syntax you wanted.  It
should be a formally worded replacement for the current
spec's wording.  A formal process will entail many formalities.
That's why they used ANSI's process for the original spec.

> If this proposal were actually close to being adopted as a change in the
> standard then these sorts of details would matter, but at this preliminary
> stage in the discussion this seems like a pretty minor issue to me.

It's a (seemingly) pretty minor change.  Think of what is going to happen if
something more substantial comes up.

Wade
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-2905032155420001@192.168.1.51>
In article <·····················@news2.telusplanet.net>, "Wade Humeniuk"
<····@nospam.nowhere> wrote:

> I have been playing the Devil's advocate.

Yes, I could tell.  So I decided to push back a little.

> It shows that there is no working implementation

Not yet.  It's only been 72 hours.

> The error is in your original statement of the syntax you wanted.  It
> should be a formally worded replacement to the current
> specs's wording.  A formal process will entail many formalities.

Yes, but at the moment there is no process, formal or otherwise.

E.
From: Wade Humeniuk
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <1mLBa.8381$Wg4.330468@news0.telusplanet.net>
"Erann Gat" <···@jpl.nasa.gov> wrote in message ·························@192.168.1.51...
> In article <·····················@news2.telusplanet.net>, "Wade Humeniuk"
> <····@nospam.nowhere> wrote:
> 
> > I have been playing the Devil's advocate.
> 
> Yes, I could tell.  So I decided to push back a little.

Do not get me wrong though, my behaviour is likely to be the
same in any real process.

> 
> > It shows that there is no working implementation
> 
> Not yet.  It's only been 72 hours.
> 
> > The error is in your original statement of the syntax you wanted.  It
> > should be a formally worded replacement to the current
> > specs's wording.  A formal process will entail many formalities.
> 
> Yes, but at the moment there is no process, formal or otherwise.

I assume there is: as the spec is an ANSI standard document and
there were rules for creating the document, there are already ANSI
rules for revising the document.

Wade
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-3005031040070001@k-137-79-50-101.jpl.nasa.gov>
In article <·····················@news0.telusplanet.net>, "Wade Humeniuk"
<····@nospam.nowhere> wrote:

> "Erann Gat" <···@jpl.nasa.gov> wrote in message
·························@192.168.1.51...
> > In article <·····················@news2.telusplanet.net>, "Wade Humeniuk"
> > <····@nospam.nowhere> wrote:
> > 
> > > I have been playing the Devil's advocate.
> > 
> > Yes, I could tell.  So I decided to push back a little.
> 
> Do not get me wrong though, my behaviour is likely to be the
> same in any real process

I understand.  If this were a real process and there were a real hope of
actually accomplishing something then the effort would be worth it.  As
things stand, this is just practice for the real thing, and I don't have
quite as much time and energy to devote to it.

> > Yes, but at the moment there is no process, formal or otherwise.
> 
> I assume there is, as the spec is an ANSI standard document and
> there are rules in creating the document, then are already ANSI 
> rules in revising the document. 

You're probably right, and I should look into this.

However, I don't necessarily want to change the standard.  I am,
notwithstanding people's opinions of my motives, mindful of the value of
stability.  What I would really like to see is something more like SRFI's,
or sub-standards.  So for now I am going to wait until Kent finishes his
work.

E.
From: Wade Humeniuk
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <eJyBa.5834$Si2.182079@news2.telusplanet.net>
"Erann Gat" <···@jpl.nasa.gov> wrote in message ·························@192.168.1.51...
> Here's the CL equivalent.  As you can see, it's rather laden with funcalls:
> 
> (defun jas-eval (ev)
>   (lambda (e)
>     (lambda (r)
>       (cond
>        ((symbol? e) (funcall r e))
>        ((not (pair? e))  e)
>        ((pair? e)
>         (cond
>          ((funcall (ceq? (first e)) 'quote)  (first (rest e)))
>          ((funcall (ceq? (first e)) 'cond)
>           (cond
>            ((funcall (funcall (funcall ev ev) (first (second e))) r)
>             (funcall (funcall (funcall ev ev) (second (second e))) r))
>            (t (funcall (funcall (funcall ev ev) (funcall (ccons 'cond)
> (rest (rest e)))) r))))
>          ((funcall (ceq? (first e)) 'lambda)
>           (lambda (x)
>             (funcall
>              (funcall (funcall ev ev) (third e))
>              (lambda (n)
>                (cond
>                 ((funcall (ceq? (first (second e))) n)   x)
>                 (t (funcall r n)))))))
>          (t (funcall (funcall (funcall (funcall ev ev) (first e)) r)
>                      (funcall (funcall (funcall ev ev) (second e)) r)))))))))

Here is another CL equivalent.  I am sure with a little more cleaning up it could
be even tidier and more understandable.

(defun jas-eval (ev)
  (macrolet ((fc (ev &rest args)
                 (let ((inner `(funcall ,ev ,ev)))
                   (loop for arg in args do
                         (setf inner
                               (list 'funcall inner arg))
                         finally (return inner))))
             (cceq? (place sym)
                    `(funcall (ceq? ,place) ,sym)))
    (lambda (e)
      (lambda (r)
        (cond
         ((symbol? e) (funcall r e))
         ((not (pair? e)) e)
         ((pair? e)
          (cond
           ((cceq? (first e) 'quote)  (first (rest e)))

           ((cceq? (first e) 'cond)
            (cond
             ((fc ev (first (second e)) r) (fc ev (second (second e)) r))
             (t (fc ev (funcall (ccons 'cond) (cddr e)) r))))

           ((cceq? (first e) 'lambda)
            (lambda (x) (funcall (fc ev (third e))
                                 (lambda (n)
                                   (cond
                                    ((cceq? (first (second e)) n) x)
                                    (t (funcall r n)))))))
           
           (t (funcall (fc ev (first e) r)
                       (fc ev (second e) r))))))))))

Wade
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-2905031945140001@192.168.1.51>
In article <·····················@news2.telusplanet.net>, "Wade Humeniuk"
<····@nospam.nowhere> wrote:

> Here is another CL equivalent.  I am sure with a little more cleaning up
it could
> be even tidier and more understandable.

The original has the interesting property that it is written in the same
dialect that it evaluates, that is, it can evaluate itself.  Neither of
the CL versions have that property.

E.
From: Wade Humeniuk
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <wYzBa.6094$Si2.206263@news2.telusplanet.net>
"Erann Gat" <···@jpl.nasa.gov> wrote in message ·························@192.168.1.51...
> In article <·····················@news2.telusplanet.net>, "Wade Humeniuk"
> <····@nospam.nowhere> wrote:
> 
> > Here is another CL equivalent.  I am sure with a little more cleaning up
> it could
> > be even tidier and more understandable.
> 
> The original has the interesting property that it is written in the same
> dialect that it evaluates, that is, it can evaluate itself.  Neither of
> the CL versions have that property.

Cuteness is not a property of a good program. :)

Wade
From: Jens Axel Søgaard
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <3ed78629$0$97275$edfadb0f@dread12.news.tele.dk>
Wade Humeniuk wrote:
> "Erann Gat" <···@jpl.nasa.gov> wrote in message ·························@192.168.1.51...

>>The original has the interesting property that it is written in the same
>>dialect that it evaluates, that is, it can evaluate itself.  Neither of
>>the CL versions have that property.

> Cuteness is not a property of a good program. :)

The meta-circular interpreter was written as an answer to Gat's
question "What's the smallest self-evaluating interpreter?"

Thus the coding style is not one I usually use in Scheme.

Joe Marshall has provided some examples from SICM, but
there are common Scheme libraries that use this technique too.
As an example, there is Oleg's implementation of treaps.
Here is a typical use:

   ((treap 'get) a-key)

I admit that the main reason for this sort of interface
is that there is no module system in the standard.
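
The closure-as-module style behind ((treap 'get) a-key) is easy to
reproduce in CL.  Here is a minimal sketch (the counter object is invented
for illustration), with an explicit funcall at each point where Scheme
writes a bare double call:

```lisp
(defun make-counter ()
  "Return a closure that dispatches on a message symbol, Oleg-style."
  (let ((n 0))
    (lambda (msg)
      (ecase msg
        (:get (lambda () n))
        (:add (lambda (k) (incf n k)))))))

;; Scheme would write ((counter ':add) 5); CL needs explicit funcalls:
(let ((counter (make-counter)))
  (funcall (funcall counter :add) 5)
  (funcall (funcall counter :get)))   ; => 5
```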



If you want to play with the metacircular interpreter, there
are examples of use in:

<http://groups.google.com/groups?hl=da&lr=&ie=UTF-8&selm=3da6fdc4%240%2472280%24edfadb0f%40dspool01.news.tele.dk&rnum=25>


See
   <http://okmij.org/ftp/Scheme/lib/treap.scm>
for Oleg's treap code.

-- 
Jens Axel Søgaard
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-3005031048120001@k-137-79-50-101.jpl.nasa.gov>
In article <·························@dread12.news.tele.dk>,
 Jens Axel Søgaard <······@jasoegaard.dk> wrote:

> Wade Humeniuk wrote:
> > "Erann Gat" <···@jpl.nasa.gov> wrote in message
·························@192.168.1.51...
> 
> >>The original has the interesting property that it is written in the same
> >>dialect that it evaluates, that is, it can evaluate itself.  Neither of
> >>the CL versions have that property.
> 
> > Cuteness is not a property of a good program. :)
> 
> The metacircular interpreter was written as an answer to Gat's
> question "What's the smallest self-evaluating interpreter?"
> 
> Thus the coding style is not one I usually use in Scheme.

Just to complete the background: my motive in asking that question was
pedagogical.  I was putting together an introductory talk on Lisp
and was writing a metacircular interpreter that I was trying to optimize
for clarity of presentation.  The problem is, it interpreted a different
language than it was written in, which I found unaesthetic.  When I tried
to write an MCI in Common Lisp that could interpret itself it ended up
being an awful mess.  That's what prompted me to ask the question.

I do not consider this merely an academic exercise.  I believe that
pedagogy is a legitimate practical concern, particularly for a
non-mainstream language.

E.
From: Jens Axel Søgaard
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <3ed847b1$0$97215$edfadb0f@dread12.news.tele.dk>
Erann Gat wrote:
> Jens Axel Søgaard <······@jasoegaard.dk> wrote:

>>The metacircular interpreter was written as an answer to Gat's
>>question "What's the smallest self-evaluating interpreter?"
>>
>>Thus the coding style is not one I usually use in Scheme.

> Just to complete the background: my motive in asking that question is for
> pedagogical purposes.  I was putting together an introductory talk on Lisp
> and was writing a metacircular interpreter that I was trying to optimize
> for clarity of presentation. The problem is, it interpreted a different
> language than it was written in, which I found unaesthetic.  

Ok - I see. So you needed something short and easy to read.

Have you checked "Lisp in Small Pieces" by Queinnec?
It contains several interpreters and excellent explanations.

> I do not consider this merely an academic exercise.  I believe that
> pedagogy is a legitimate practical concern, particularly for a
> non-mainstream language.

I do too. It's difficult to find the right balance though.
A language like Python is way too "pedagogical" for my taste -
I never understood the argument that there should only be
one way to do things in order not to confuse people.

-- 
Jens Axel Søgaard
From: Pascal Costanza
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <bb79fc$q76$1@f1node01.rhrz.uni-bonn.de>
Wade Humeniuk wrote:

>>>10) Who currently uses this extension?  Are there coding examples of
>>>its use?
>>
>>There are no examples of its use in CL because it doesn't exist (except in
>>my copy of MCL).  It's an idiom that is widely used in Scheme.
> 
> Please provide Scheme Coding examples of this idiom of programming.

Joe Marshall has recently posted a (IMHO convincing) example.

See http://makeashorterlink.com/?C12E61FB4

> You might want to take a look at `Structure and Interpretation of
> Classical Mechanics' by Sussman and Wisdom.  They use Scheme to
> explain Lagrangian and Hamiltonian mechanics.  In many of the
> examples, the objects being dealt with are higher order functions.
> For instance, an object's position may be described as a function of
> time to coordinates.  A coordinate transformation is a function from
> one coordinate system to another.  Applying the coordinate transform
> to the function allows you to view an object's position in a different
> coordinate frame.  You end up with some very abstract functions and
> some deep nesting.  This leads to an abundance of FUNCALLs in the
> Lisp-2 version.
> 
> ;; Scheme version
> (define ((F->C F) local)
>   (->local (time local)
>            (F local)
>            (+ (((partial 0) F) local)
>               (* (((partial 1) F) local)
>                  (velocity local)))))
> 
> ;; CL version
> (defun F->C (F)
>   #'(lambda (local)
>       (->local (time local)
>                (funcall F local)
>                (+ (funcall (funcall (partial 0) F) local)
>                   (* (funcall (funcall (partial 1) F) local)
>                      (velocity local))))))
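
In Python, which like Scheme has a single namespace for functions and
values, the same nesting reads as plain chained application. Here is a
rough sketch (my own construction, not the SICM code) of the
((partial 1) F) shape:

```python
# Hypothetical analogue of the SICM (partial i) operator: partial(i)
# maps a function to a finite-difference approximation of its i-th
# partial derivative.  Every nesting level is plain application; in
# a Lisp-2 each level would need a FUNCALL.
def partial(i):
    def operator(f):
        def derivative(args, h=1e-6):
            bumped = list(args)
            bumped[i] += h                    # perturb the i-th slot
            return (f(*bumped) - f(*args)) / h
        return derivative
    return operator

g = lambda x, y: x * x + 3 * y
dg_dx = partial(0)(g)      # like ((partial 0) g) -- no funcall needed
dg_dx([2.0, 1.0])          # close to 4.0
```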

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Wade Humeniuk
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <6%KBa.8108$Wg4.325563@news0.telusplanet.net>
"Pascal Costanza" <········@web.de> wrote in message ·················@f1node01.rhrz.uni-bonn.de...
> Wade Humeniuk wrote:

> > coordinate frame.  You end up with some very abstract functions and
> > some deep nesting.  This leads to an abundance of FUNCALLs in the
> > Lisp-2 version.

This is only because it is simple-mindedly translated into CL without
any thought.  It could easily be something like

;; CL version
(defun partial1 (n f l) (funcall (funcall (partial n) f) l))
(defun ->local1 (f local) (->local (time local)
                                   (funcall f local)
                                   (+ (partial1 0 f local)
                                      (* (partial1 1 f local) (velocity local)))))

(defun f->c (f)
  (lambda (local) (->local1 f local)))

It is also likely that if partial and ->local were rewritten
this would eliminate partial1 and ->local1.  This seems to 
be a problem with the two Scheme examples so far: the
CL versions are just directly translated.  I would almost go so
far as to say that this example was contrived just to illuminate
some Scheme syntax.

Another way to resolve this issue would be to write a Scheme
implementation within CL; then these programs could be run
directly.  This could be a layered Scheme compatibility standard.
One of the many Scheme-within-CL implementations would suffice.
Then the Schemers would have nothing to reject, as Scheme would
now be a compatibility mode within CL. :))  How's that for a
layered standard?

Wade
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-3005031033540001@k-137-79-50-101.jpl.nasa.gov>
In article <·····················@news0.telusplanet.net>, "Wade Humeniuk"
<····@nospam.nowhere> wrote:

> Another way to resolve this issue would be to write a Scheme
> implementation within CL; then these programs could be run
> directly.

There exists a dichotomy in the world right now between Scheme coding
practices and styles and CL coding practices and styles because there is a
dichotomy between Scheme and CL.  The issue is not that some people want
to write Scheme code in Common Lisp, the issue is that some people would
like to write Common Lisp code in Common Lisp but would nonetheless like
to use certain coding styles and practices that are currently not possible
there.  Simply implementing Scheme in Common Lisp can be done, has been
done, but it doesn't address this issue.

E.
From: Pascal Costanza
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <bb7vih$104a$1@f1node01.rhrz.uni-bonn.de>
Wade Humeniuk wrote:
> "Pascal Costanza" <········@web.de> wrote in message ·················@f1node01.rhrz.uni-bonn.de...
> 
>>Wade Humeniuk wrote:
> 
>>>coordinate frame.  You end up with some very abstract functions and
>>>some deep nesting.  This leads to an abundance of FUNCALLs in the
>>>Lisp-2 version.
> 
> This is only because it is simple-mindedly translated into CL without
> any thought.  It could easily be something like
> 
> ;; CL version
> (defun partial1 (n f l) (funcall (funcall (partial n) f) l))
> (defun ->local1 (f local) (->local (time local)
>                                    (funcall f local)
>                                    (+ (partial1 0 f local)
>                                       (* (partial1 1 f local) (velocity local)))))
> 
> (defun f->c (f)
>   (lambda (local) (->local1 f local)))

A Lisp-1 would still look nicer to me.

> It is also likely that if partial and ->local were rewritten
> this would eliminate partial1 and ->local1.  This seems to 
> be a problem with the two Scheme examples so far: the
> CL versions are just directly translated.  I would almost go so
> far as to say that this example was contrived just to illuminate
> some Scheme syntax.

Joe claims that this is an actual example from SICM. I haven't checked 
this myself, but I tend to believe him. ;)

Anyway, an embedded Lisp-1 would apparently still require less work on 
the programmer's side. And that's what programming languages are there 
for: making the programmer's job easier. The fact that you can implement 
anything in some more or less complicated way is just trivially true 
because of computational completeness.

> Another way to resolve this issue would be to write a Scheme
> implementation within CL; then these programs could be run
> directly.  This could be a layered Scheme compatibility standard.
> One of the many Scheme-within-CL implementations would suffice.
> Then the Schemers would have nothing to reject, as Scheme would
> now be a compatibility mode within CL. :))  How's that for a
> layered standard?

If you embed it as an interpreter you get unnecessary performance 
penalties. If you want to embed Lisp-1-ness [1] on the same level as 
Common Lisp itself (for example, some variant of define-compiler-macro 
could do the job - not the current one - I am just brainstorming), you 
run up against the requirement that the compiler must reject any form 
that has a cons in its car, except for ((lambda (...) ...) ...).

Or do you suggest that a layered standard is allowed to reword parts of 
the ANSI specs? I don't think that this would be feasible because you 
would probably need to find a way to deal with feature interactions 
between several additional layers.


Pascal

[1] We are actually not talking about embedding Scheme here, just the 
Lisp-1 feature.

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Kent M Pitman
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <sfwof1kbb1z.fsf@shell01.TheWorld.com>
Pascal Costanza <········@web.de> writes:

> Anyway, an embedded Lisp-1 would apparently still require less work on
> the programmer's side. And that's what programming languages are there
> for: making the programmer's job easier.

I'm curious how you support this claim.

> The fact that you can implement anything in some more or less
> complicated way is just trivially true because of computational
> completeness.

This statement was not clearly enough worded for me to know what question to
ask about it.

> If you embed it as an interpreter you get unnecessary performance
> penalties. If you want to embed Lisp-1-ness [1] on the same level as
> Common Lisp itself (for example, some variant of define-compiler-macro
> could do the job - not the current one - I am just brainstorming), the
> requirement stands in the way that the compiler must reject any form
> that has a cons in its car, except for ((lambda (...) ...) ...).
> ...
> [1] We are actually not talking about embedding Scheme here, just the
> Lisp-1 feature.

You also then need a macro hygiene system (which a Lisp2 avoids) and
you need a smarter compiler (to be equally efficient), since you
cannot jump directly to a function that you have not proved is really
a function.  In a Lisp2, you can check for function-ness at _store_
time into the Lisp2 namespace to assure that only functions are stored
there, so you can avoid the check at runtime.  A runtime check can be
avoided in a Lisp1 only by compiler smarts.  You don't want to jump
to the raw address of whatever F points to in (F X) if you don't know
that it's a real function object.
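
A minimal sketch of that store-time check, in Python rather than Lisp
(my own construction, purely illustrative): a dedicated function
namespace validates callability once, when a definition is stored, so
call sites can dispatch without any runtime test.

```python
# Sketch of the Lisp-2 argument: with a separate function namespace,
# callability is checked once at definition time; a single-namespace
# Lisp-1 must check (or prove via compiler analysis) at each call.
functions = {}   # the "function cell" namespace
values = {}      # the ordinary value namespace

def define_fn(name, obj):
    if not callable(obj):              # store-time check
        raise TypeError(f"{name!r} is not a function")
    functions[name] = obj

def call(name, *args):
    return functions[name](*args)      # safe to jump: no runtime test

define_fn("double", lambda x: 2 * x)
values["double"] = 42                  # the value binding cannot shadow it
call("double", 21)                     # => 42
```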

So if all you're doing is trying to "patch" Common Lisp to make it less
complex, you're not really managing to do that by the steps you describe.

> Or do you suggest that a layered standard is allowed to reword parts
> of the ANSI specs?  I don't think that this would be feasible because [...]

Forget "feasible".  It isn't permissible.

This is NOT the conventional meaning of a layered standard.

There are specific criteria for what it means to say you conform to CL
and this is not it.  Ditto for conforming to a subset.

There was a movement at some point in the ANSI process to define "conforming"
to mean that either a program would run in the specified language or would
run in it after mechanical transformation.  It was observed that this 
definition would allow FORTRAN programs to conform to ANSI CL.  It seemed
meaningless to allow such an interpretation.
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-3005031102450001@k-137-79-50-101.jpl.nasa.gov>
In article <···············@shell01.TheWorld.com>, Kent M Pitman
<······@world.std.com> wrote:

> > [1] We are actually not talking about embedding Scheme here, just the
> > Lisp-1 feature.
> 
> You also then need a macro hygiene system (which a Lisp2 avoids) and
> you need a smarter compiler (to be equally efficient), since you
> cannot jump directly to a function that you have not proved is really
> a function.

Permit me to try to short-circuit some of this discussion by pointing out
that it is in fact possible to make CL behave like a Lisp1 entirely within
CL.  I posted a partial solution here earlier, and sent a more complete
solution to Pascal via email.  There is a small performance penalty (one
funcall) for calling Lisp1 functions, but if that's a show stopper then
you probably need to explore other solutions anyway.

I thought for a long time that this couldn't be done, but it turns out I
was suffering from some long-standing misunderstandings about how symbol
macros work.  Once Kent straightened me out the solution was pretty
straightforward.  So as an argument about changing the standard, the Lisp1
issue is largely moot.

There is only one exception, and that is the ((f ...) ...) syntax, which
cannot be handled under the current standard except with a code walker.

E.
From: Pascal Costanza
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <bb84p2$vqq$1@f1node01.rhrz.uni-bonn.de>
Kent M Pitman wrote:
> Pascal Costanza <········@web.de> writes:
> 
>>Anyway, an embedded Lisp-1 would apparently still require less work on
>>the programmer's side. And that's what programming languages are there
>>for: making the programmer's job easier.
> 
> I'm curious how you support this claim.

I can't imagine any counterarguments. If it wasn't for convenience, why 
would I want to have a high-level programming language?

> So if all you're doing is trying to "patch" Common Lisp to make it less
> complex, you're not really managing to do that by the steps you describe.

OK.

>>Or do you suggest that a layered standard is allowed to reword parts
>>of the ANSI specs?  I don't think that this would be feasible because [...]
> 
> Forget "feasible".  It isn't permissible.
> 
> This is NOT the conventional meaning of a layered standard.

OK.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Kent M Pitman
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <sfwfzmwb9fw.fsf@shell01.TheWorld.com>
Pascal Costanza <········@web.de> writes:

> Kent M Pitman wrote:
> > Pascal Costanza <········@web.de> writes:
> >
> >>Anyway, an embedded Lisp-1 would apparently still require less work on
> >>the programmer's side. And that's what programming languages are there
> >>for: making the programmer's job easier.
> > I'm curious how you support this claim.
> 
> I can't imagine any counter arguments. If it wasn't for convenience
> why would I want to have a high-level programming language?

I was asking you to justify your need to insert "Lisp-1" in that sentence.
If your reply here means you feel identically about "Lisp-2", then you
can safely ignore my question.  However, as it was, I took your remark 
to be a claim that Lisp-1 was less work for programmers than Lisp-2.
From: Pascal Costanza
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <bb87oa$ra8$1@f1node01.rhrz.uni-bonn.de>
Kent M Pitman wrote:
> Pascal Costanza <········@web.de> writes:
> 
>>Kent M Pitman wrote:
>>
>>>Pascal Costanza <········@web.de> writes:
>>>
>>>
>>>>Anyway, an embedded Lisp-1 would apparently still require less work on
>>>>the programmer's side. And that's what programming languages are there
>>>>for: making the programmer's job easier.
>>>
>>>I'm curious how you support this claim.
>>
>>I can't imagine any counter arguments. If it wasn't for convenience
>>why would I want to have a high-level programming language?
> 
> I was asking you to justify your need to insert "Lisp-1" in that sentence.
> If your reply here means you feel identically about "Lisp-2", then you
> can safely ignore my question.  However, as it was, I took your remark 
> to be a claim that Lisp-1 was less work for programmers than Lisp-2.

Ah, I see. Well in this case (the code example given by Joe), I find the 
Lisp-1 code easier to understand. Of course, I don't want to imply that 
Lisp-1 is generally superior.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Adam Warner
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <pan.2003.05.30.10.49.40.11177@consulting.net.nz>
> Joe Marshall has recently posted a (IMHO convincing) example.
> 
> See http://makeashorterlink.com/?C12E61FB4

<http://groups.google.com/groups?selm=vfwpcivr.fsf%40ccs.neu.edu>
From: Kenny Tilton
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <3ED4E2B1.4020207@nyc.rr.com>
Erann Gat wrote:
> I introduced a particular change as a test case mainly to take the issue
> out of the abstract.  I don't care very much about that particular change
> (and I have in fact stopped advocating it).

Then it is still abstract. CL is perfect. Well, OK, (- 3) should be 3, 
but other than that...

-- 

  kenny tilton
  clinisys, inc
  http://www.tilton-technology.com/
  ---------------------------------------------------------------
"Everything is a cell." -- Alan Kay
From: Pascal Costanza
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <bb2r55$vfi$1@f1node01.rhrz.uni-bonn.de>
Kenny Tilton wrote:
> Erann Gat wrote:
> 
>> I introduced a particular change as a test case mainly to take the issue
>> out of the abstract.  I don't care very much about that particular change
>> (and I have in fact stopped advocating it).
> 
> Then it is still abstract. CL is perfect. Well, OK, (- 3) should be 3, 
> but other than that...

There are lots of directions in which you could change the standard in 
minor but still meaningful ways.

- The deprecated features could be removed. It doesn't make sense to 
keep them in the standard because implementations already support them.

- defstruct could be deprecated - it's redundant. (But don't remove it 
immediately - only in ten years, or so. ;)

- The data structures could be reworked into a CLOS-based collections 
API. (Yes, this would be against the rule to include only things that 
have already been implemented, but implementing a collections API is 
easy.) This probably means that some of the current forms in this regard 
could be deprecated.

- The MOP could be included. (This implies changes to the core because 
it needs to be clarified how the defXXX macros interact with the MOP. 
For example, ANSI CL doesn't require that defgeneric/defmethod uses 
ensure-generic-function.)

- The linearization algorithm for superclasses could be corrected.

- The condition system could be integrated better with CLOS. (I think 
this requires changes to the core, but I am not sure about this one.)

- Pathnames could be amended. (I don't know if this requires changes to 
the core - just my gut feeling.)

- A defsystem could be added. (This interacts with some of the currently 
deprecated features.)


I think that these are all useful and desirable changes (at least from 
some perspectives). Note: I am not advocating any of these changes in 
particular - the reasons why the standard should not be changed still 
apply. This is just to mention that there are quite a bunch of possible 
improvements.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Kent M Pitman
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <sfwvfvuuab3.fsf@shell01.TheWorld.com>
Pascal Costanza <········@web.de> writes:

> Kenny Tilton wrote:
> > Erann Gat wrote:
> >
> >> I introduced a particular change as a test case mainly to take the issue
> >> out of the abstract.  I don't care very much about that particular change
> >> (and I have in fact stopped advocating it).
> > Then it is still abstract. CL is perfect. Well, OK, (- 3) should be
> > 3, but other than that...
> 
> There are lots of directions in which you could change the standard in
> minor but still meaningful ways.

I don't agree that "figuring out how to change the standard" should be
a goal.  This is a technique for fixing problems, not a problem itself.

The standard can be changed any time just by setting its read-only bit
to false, gritting your teeth, and preparing for chaos.  This can and
will be done if/when the community as a whole feels the potential negative
costs of destabilization are outweighed by the particular benefits of 
repairing what awful thing finally causes the need for the change.

For now, we should be talking about solving problems, not changing standards.
It has not, that I've seen, been demonstrated that the standard can't be 
worked with.  Hence, I don't understand why anyone cares about changing it
as a goal.

Syntactically, I think anyone who says they want to change the
standard rather than solve a problem really doesn't have a funded
project and a nearterm (within one year) product deadline sitting in
front of them, because if they did, I'm personally quite sure they
would find a more efficient and reliable way to use their funds to
solve the problem by deadline than "change the standard".  In fact, I
can't think of a less reliable nor more expensive way to solve an
observed problem with Common Lisp than "change the standard".
From: Pascal Costanza
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <costanza-DED894.04294629052003@news.netcologne.de>
In article <···············@shell01.TheWorld.com>,
 Kent M Pitman <······@world.std.com> wrote:

> For now, we should be talking about solving problems, not changing standards.
> It has not, that I've seen, been demonstrated that the standard can't be 
> worked with.  Hence, I don't understand why anyone cares about changing it
> as a goal.

It's hard to work with the standard as a teaching and learning tool. 
Other language standards are much better in this regard.

There are many exceptions and historical accidents incorporated in the 
standard, and this usually means that you have to provide lots of 
background information in order to make someone understand why certain 
features are the way they are.

I think this is probably one of the reasons why Scheme is preferred in 
university settings as a teaching language.

Even if this is not the most pressing and important problem of Common 
Lisp, it still is a valid problem.

> Syntactically, I think anyone who says they want to change the
> standard rather than solve a problem really doesn't have a funded
> project and a nearterm (within one year) product deadline sitting in
> front of them, because if they did, I'm personally quite sure they
> would find a more efficient and reliable way to use their funds to
> solve the problem by deadline than "change the standard".  In fact, I
> can't think of a less reliable nor more expensive way to solve an
> observed problem with Common Lisp than "change the standard".

Yes, you're right that I don't have any near-term problems to solve 
with Common Lisp. So what?


Pascal
From: Kent M Pitman
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <sfwhe7es81h.fsf@shell01.TheWorld.com>
Pascal Costanza <········@web.de> writes:

> It's hard to work with the standard as a teaching and learning tool. 
> Other language standards are much better in this regard.

If so, it's news to me.

I was told that most language standards are utterly unreadable and that
we did an above average job, but I was continually instructed by the 
committee that "tutorial nature" was a nongoal of the standard.

As far as I know, some language standards are not even written in
English at all, but are written in formal languages of one kind or another.

Nevertheless, a large number of people have said they found the standard 
quite approachable, so this is probably also a matter of personal taste.

> I think this is probably one of the reasons why Scheme is preferred in 
> university settings as a teaching language.

Also the fact that the Scheme spec practically makes its _only_ goal
to be teaching.  Numerous times I asked for clarifications of things
like branch cuts, error handling, porting concerns, and other
practical information and was told that this was just "clutter" and
would make the spec look messy.  In my personal opinion, no _serious_
language specification would omit such information.  

That the Scheme community has managed to build products based on the 
Scheme spec is a testimony not to the Scheme spec but to the endurance
and sometimes good sense of humor of the various Scheme implementors,
in the absence of strong support in this regard. (Again, just my opinion.)

> Even if this is not the most pressing and important problem of Common 
> Lisp, it still is a valid problem.
> 
> > Syntactically, I think anyone who says they want to change the
> > standard rather than solve a problem really doesn't have a funded
> > project and a nearterm (within one year) product deadline sitting in
> > front of them, because if they did, I'm personally quite sure they
> > would find a more efficient and reliable way to use their funds to
> > solve the problem by deadline than "change the standard".  In fact, I
> > can't think of a less reliable nor more expensive way to solve an
> > observed problem with Common Lisp than "change the standard".
> 
> Yes, you're right that I don't have any near-term problems to solve 
> with Common Lisp. So what?

I'm tired and am going to leave this for someone else to answer for
me.  I'm confident that (a) I expressed myself clearly and (b) there
are people besides me who understood me, even if you don't.
From: Gareth McCaughan
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <slrnbdbg2s.2eb1.Gareth.McCaughan@g.local>
Kent Pitman wrote:

> > > Syntactically, I think anyone who says they want to change the
> > > standard rather than solve a problem really doesn't have a funded
> > > project and a nearterm (within one year) product deadline sitting in
> > > front of them, because if they did, I'm personally quite sure they
> > > would find a more efficient and reliable way to use their funds to
> > > solve the problem by deadline than "change the standard".  In fact, I
> > > can't think of a less reliable nor more expensive way to solve an
> > > observed problem with Common Lisp than "change the standard".
> > 
> > Yes, you're right that I don't have any near-term problems to solve 
> > with Common Lisp. So what?
> 
> I'm tired and am going to leave this for someone else to answer for
> me.  I'm confident that (a) I expressed myself clearly and (b) there
> are people besides me who understood me, even if you don't.

OK, I'll try. Kent's point wasn't "aha, you don't have any
short-term problems that need solving; isn't that interesting".
It was "there are better ways to do this for real problems
as opposed to pseudo-problems, and I'm confident enough of
this to use it to predict that you don't *have* real problems,
because if you had then you'd be approaching them differently".
So the relevance of your not having any "near-term problems"
is that it's evidence in favour of Kent's claim that changing
the standard isn't a good way to deal with problems with CL.

I make no comment on the goodness of this argument. I'm just
providing a translation service :-).

-- 
Gareth McCaughan  ················@pobox.com
.sig under construc
From: Pascal Costanza
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <costanza-1D134C.14181229052003@news.netcologne.de>
In article <································@g.local>,
 Gareth McCaughan <················@pobox.com> wrote:

> > > > Syntactically, I think anyone who says they want to change the
> > > > standard rather than solve a problem really doesn't have a funded
> > > > project and a nearterm (within one year) product deadline sitting in
> > > > front of them, because if they did, I'm personally quite sure they
> > > > would find a more efficient and reliable way to use their funds to
> > > > solve the problem by deadline than "change the standard".  In fact, I
> > > > can't think of a less reliable nor more expensive way to solve an
> > > > observed problem with Common Lisp than "change the standard".
> > > 
> > > Yes, you're right that I don't have any near-term problems to solve 
> > > with Common Lisp. So what?
> > 
> > I'm tired and am going to leave this for someone else to answer for
> > me.  I'm confident that (a) I expressed myself clearly and (b) there
> > are people besides me who understood me, even if you don't.
> 
> OK, I'll try. Kent's point wasn't "aha, you don't have any
> short-term problems that need solving; isn't that interesting".
> It was "there are better ways to do this for real problems
> as opposed to pseudo-problems, and I'm confident enough of
> this to use it to predict that you don't *have* real problems,
> because if you had then you'd be approaching them differently".
> So the relevance of your not having any "near-term problems"
> is that it's evidence in favour of Kent's claim that changing
> the standard isn't a good way to deal with problems with CL.

Hmm, I think I understand the argument, but I am not really sure. Why 
would this argument be relevant to the current discussion?

Yes, certain invasive changes to the standard would result in 
unnecessary hard work for current Common Lisp users with near-term 
problems to solve, and the current standard provides enough rope to deal 
with near-term problems in sufficiently good ways. I can only repeat: So 
what?

A change of the standard is unlikely to happen anyway, so this is a 
purely hypothetical scenario. I really don't get the point...

> I make no comment on the goodness of this argument. I'm just
> providing a translation service :-).

:) Thanks a lot!


Pascal
From: Pascal Costanza
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <costanza-AAA1FB.14051329052003@news.netcologne.de>
In article <···············@shell01.TheWorld.com>,
 Kent M Pitman <······@world.std.com> wrote:

> Pascal Costanza <········@web.de> writes:
> 
> > It's hard to work with the standard as a teaching and learning tool. 
> > Other language standards are much better in this regard.
> 
> If so, it's news to me.
> 
> I was told that most language standards are utterly unreadable and that
> we did an above average job, but I was continually instructed by the 
> committee that "tutorial nature" was a nongoal of the standard.

Yes, you did an above average job. Other language specs that are even 
better in this regard IMHO are those of Modula-2, Oberon, Java and 
ISLISP, to name a few.

It's important to distinguish the things that are described from the way 
they are described. Most of the ANSI spec contains very good and 
understandable descriptions. But the language itself is 
unnecessarily inconsistent in places.

However, my impression is that the discussion has turned into a very 
emotional one. I have the feeling that you have started to take my 
statements out of context and interpret them as absolute judgements.

To make the context clear, again: We are talking about high level 
problems of a language that is the best for many problem domains. It's 
hard for me to imagine a language that can really beat Common Lisp from 
my point of view. If we don't have any other option than to leave the 
standard unchanged then that's really fine by me.

The next best thing is the substandards notion that you are trying to 
establish, which I am really looking forward to.

A piecemeal growth approach would be the best IMHO - but I know that 
this is very unlikely to happen for many good reasons. (Yes, I mean it! 
The reasons that, for example, you, Nils Goesche and Marc Spitzer have 
given, are good reasons!)

You were just asking for real problems that would require the language 
standard to change by stating that you can't imagine them. However, 
teaching problems are real problems, and consistent languages are easier 
to teach than languages that are slightly less consistent than 
necessary. You have acknowledged the fact several times that Common Lisp 
has its weak parts and that it is just "good enough" in some regards, 
haven't you? I am only asking you to acknowledge the fact that these 
weak parts can be stumbling blocks when learning or teaching the 
language.


Pascal
From: Ingvar Mattsson
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <874r3ecb8l.fsf@gruk.tech.ensign.ftech.net>
Pascal Costanza <········@web.de> writes:

> In article <···············@shell01.TheWorld.com>,
>  Kent M Pitman <······@world.std.com> wrote:
> 
> > For now, we should be talking about solving problems, not changing standards.
> > It has not, that I've seen, been demonstrated that the standard can't be 
> > worked with.  Hence, I don't understand why anyone cares about changing it
> > as a goal.
> 
> It's hard to work with the standard as a teaching and learning tool. 
> Other language standards are much better in this regard.

I think that standards are (and should be) references, rather than
tutorials. What is (in that case) needed is good tutorials, probably
aimed at different subsets of the standards and for different skill
levels. If I actually had any faith in my ability to (a) do a decent
job of it and (b) actually finish it, I'd probably give it a try.

I did find Winston/Horn's book "Lisp", in its latest edition, most
valuable for getting my head around CLOS, even though CLOS
isn't extensively touched on.

> There are many exceptions and historical accidents incorporated in the 
> standard, and this usually means that you have to provide lots of 
> background information in order to make someone understand why certain 
> features are the way they are.
> 
> I think this is probably one of the reasons why Scheme is preferred in 
> university settings as a teaching language.

At the only university I have been a student, at the time I was a
student there, the "initial extensive programming course" uses Common
Lisp for students in "Computer Science" and "Computer Science &
Engineering". For all other engineering students, the first language
was Scheme. However, this is just a single data point and possibly no
longer correct.

//Ingvar
-- 
When in doubt, debug-on-entry the function you least suspect have
anything to do with something.
From: Pascal Costanza
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <costanza-E2DE0B.13061829052003@news.netcologne.de>
In article <··············@gruk.tech.ensign.ftech.net>,
 Ingvar Mattsson <······@cathouse.bofh.se> wrote:

> > It's hard to work with the standard as a teaching and learning tool. 
> > Other language standards are much better in this regard.
> 
> I think that standards are (and should be) references, rather than
> tutorials. What is (in that case) needed is good tutorials, probably
> aimed at different subsets of the standards and for different skill
> levels. 

Tutorials for Common Lisp need to explain the "weird" features of the 
language sooner or later, some of them pretty soon. An improvement of 
the language standard would always mean an improvement of its 
presentation, for example in tutorials.

Tutorial writers need to understand the specification in order to 
portray things correctly, so at some stage it always becomes a learning 
and teaching tool.


Pascal
From: Kent M Pitman
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <sfwd6i1sn63.fsf@shell01.TheWorld.com>
Pascal Costanza <········@web.de> writes:

> In article <··············@gruk.tech.ensign.ftech.net>,
>  Ingvar Mattsson <······@cathouse.bofh.se> wrote:
> 
> > > It's hard to work with the standard as a teaching and learning tool. 
> > > Other language standards are much better in this regard.
> > 
> > I think that standards are (and should be) references, rather than
> > tutorials. What is (in that case) needed is good tutorials, probably
> > aimed at different subsets of the standards and for different skill
> > levels. 
> 
> Tutorials for Common Lisp need to explain the "weird" features of the 
> language sooner or later, some of them pretty soon. An improvement of 
> the language standard would always mean an improvement of its 
> presentation, for example in tutorials.
> 
> Tutorial writers need to understand the specification in order to 
> portray things correctly, so at some stage it always becomes a learning 
> and teaching tool.

If you can't figure it out, perhaps you should purchase someone else's
book.  There are people who did figure it out and can explain it.
It's not necessary that everyone be such a person.  After all, if everyone
were a book author, who would be left to _buy_ their books?

It _was_ a specific goal of the standard to not be redundant.  (We
missed in some places, but it was something we sought to achieve.)  We
considered and explicitly decided as a group _not_ to include things
like pictures of the class hierarchy, because the information is
already recoverable from what is there, and listing it redundantly was
an opportunity to get it wrong (i.e., internally inconsistent) and
have people fussing forever about whether the picture or the text was
correct.

Tutorial goals are often at odds with this kind of thing.  Redundancy is
good in teaching tools.  So inevitably, the document will fall short in
someone's eyes as a training text.  Nevertheless, just as a law teacher
must resort to reading actual law in order to know what to teach and cannot
demand that it somehow be "simplified" so that s/he can teach it better,
so too Lisp teachers will have to learn to just tough it out.

Producing the standard took a long time.  Doing what you want would have
made it take even longer.  Time was the more critical quantity.

Also, you or someone has made reference to the idea that various
simplifications would have made the presentation easier.  No doubt.
However, the language was designed to cater to people who didn't want
their programs gratuitously broken.  We actually started with the
actual text of CLTL and morphed it into the ANSI CL standard.  (At
some point, Steele said the amount of vestige of CLTL was small enough
that he conceded there was no meaningful copyright claim remaining,
but this was nevertheless the process that we took.)  The important
point here is that I, as editor, was not allowed to make _any_ change
to the standard (i.e., to CLTL) that I could not prove was
semantics-preserving unless directed by the committee.  A consequence
is that there are lots of places where I'd imagine a modular
definition of something would have been better than the redundant way
some parts are presented (especially in light of my comment about
redundancy above). However, CLTL was full of places that said things
repeatedly, sometimes with tiny differences in wording.  In some
cases, one might even reasonably make the claim that this was a "bug",
but the problem is that one would have to know whether to fix it in
one direction or the other.  And I didn't want to do this.  Had I
started down that line, people would have started to distrust
everything I did.  They needed the confidence that if a definition had
started convoluted, it might get reworded but it would still end up
meaning the same thing.  Often, when this happened, we'd submit
cleanup proposals to the committee requesting change.  But doing it
for all possible things was prohibitively expensive given our
available resources and the number of other tasks we had at hand.

It was important, also, to have a trace somewhere of all the changes we 
intentionally made (modulo out-and-out edit-o's on my part, which no 
doubt happened but hopefully not often).  

Other people as editor would quite certainly have done a lot to simplify
the language more than I did.  My contribution might be said, in part,
not to have done what was tempting to do.  I believe, and I think many
on the committee believe, that the only reason the standard succeeded in
gaining the respect and trust of the committee was that people came to
believe that I was not busy diddling with the semantics when they were
not looking.

Among other things, I worked for a vendor at any given time
(Symbolics at one point, Harlequin at another), and people would have 
worried that my doing so meant that I was secretly making changes that
would make my vendor happy and other vendors unhappy.  The level of trust
required to have your competition write the spec you're going to live
by, and in many cases even to hand that competitor money in order to pay
for the process, is something that I think is ill-understood.

People always think these things are about technical issues. Would that
that were so.  And maybe if we make a properly streamlined process, it
can mostly be about technical issues in the future.  But the politics of
community consensus was and still is enormously complex.

It's like "prayer in school" -- everyone (well, all the religious
folk, anyway :) thinks it's a good idea until they start to see the
details filled in.  And then suddenly they realize it's someone else's
prayer being said, and it starts to feel like a less good idea...
From: Ingvar Mattsson
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <87y90palmf.fsf@gruk.tech.ensign.ftech.net>
Pascal Costanza <········@web.de> writes:

> In article <··············@gruk.tech.ensign.ftech.net>,
>  Ingvar Mattsson <······@cathouse.bofh.se> wrote:
> 
> > > It's hard to work with the standard as a teaching and learning tool. 
> > > Other language standards are much better in this regard.
> > 
> > I think that standards are (and should be) references, rather than
> > tutorials. What is (in that case) needed is good tutorials, probably
> > aimed at different subsets of the standards and for different skill
> > levels. 
> 
> Tutorials for Common Lisp need to explain the "weird" features of the 
> language sooner or later, some of them pretty soon. An improvement of 
> the language standard would always mean an improvement of its 
> presentation, for example in tutorials.
> 
> Tutorial writers need to understand the specification in order to 
> portray things correctly, so at some stage it always becomes a learning 
> and teaching tool.

It is, to a degree, as is any other specification. However, that is
very much secondary to its purpose of documenting and specifying the
language standard. And, in that respect, it does at least as good a
job as (and probably a better one than) any other language standard I
have read.

As I said, a reference, not a tutorial.

//Ingvar
-- 
When C++ is your hammer, everything looks like a thumb
	Latest seen from Steven M. Haflich, in c.l.l
From: Björn Lindberg
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <hcsd6i0sf0t.fsf@knatte.nada.kth.se>
Ingvar Mattsson <······@cathouse.bofh.se> writes:

> > There are many exceptions and historical accidents incorporated in the 
> > standard, and this usually means that you have to provide lots of 
> > background information in order to make someone understand why certain 
> > features are the way they are.
> > 
> > I think this is probably one of the reasons why Scheme is preferred in 
> > university settings as a teaching language.
> 
> At the only university I have been a student, at the time I was a
> student there, the "initial extensive programming course" uses Common
> Lisp for students in "Computer Science" and "Computer Science &
> Engineering". For all other engineering students, the first language
> was Scheme. However, this is just a single data point and possibly no
> longer correct.

Out of curiosity, may I ask which university you went to, and when? I
am especially interested since my intuition tells me that it may have
been a Swedish university. :-)

(I am a student at KTH, the Royal Institute of Technology. When I
started, the introductory course used Scheme and C, but has since
changed, and now they are using Java, AFAIK. :-( )


Björn
From: Ingvar Mattsson
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <871xyg8p8d.fsf@gruk.tech.ensign.ftech.net>
·······@nada.kth.se (Björn Lindberg) writes:

> Ingvar Mattsson <······@cathouse.bofh.se> writes:
> 
> > > There are many exceptions and historical accidents incorporated in the 
> > > standard, and this usually means that you have to provide lots of 
> > > background information in order to make someone understand why certain 
> > > features are the way they are.
> > > 
> > > I think this is probably one of the reasons why Scheme is preferred in 
> > > university settings as a teaching language.
> > 
> > At the only university I have been a student, at the time I was a
> > student there, the "initial extensive programming course" uses Common
> > Lisp for students in "Computer Science" and "Computer Science &
> > Engineering". For all other engineering students, the first language
> > was Scheme. However, this is just a single data point and possibly no
> > longer correct.
> 
> Out of curiosity, may I ask which university you went to, and when? I
> am especially interested since my intuition tells me that it may have
> been a Swedish university. :-)

It was the University of Linköping (LiU). The Lisp course was commonly
called PINK ("Programmering i INKrementella system" - programming in
incremental systems) and I can't recall what the Scheme-based course
was named.

It probably helps that the prefect (department head) of the CS department is a Lisp fan
(Anders Haraldsson).

//Ingvar
-- 
Warning: Pregnancy can cause birth from females
From: Don Geddis
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <87llwplime.fsf@sidious.geddis.org>
> In article <···············@shell01.TheWorld.com>, Kent M Pitman <······@world.std.com> wrote:
> > For now, we should be talking about solving problems, not changing
> > standards.  It has not, that I've seen, been demonstrated that the
> > standard can't be worked with.  Hence, I don't understand why
> > anyone cares about changing it as a goal.

Pascal Costanza <········@web.de> writes:
> It's hard to work with the standard as a teaching and learning tool. 
> Other language standards are much better in this regard.

That wasn't the goal of ANSI CL, and it isn't clear to me why it should be
a goal.  The standard was designed for language implementors, not as a teaching
or learning tool for users.

Why can't you create a separate document for users, without altering the
standard?

> There are many exceptions and historical accidents incorporated in the 
> standard, and this usually means that you have to provide lots of 
> background information in order to make someone understand why certain 
> features are the way they are.

That's true, but it is a well-understood consequence of the design criteria.
ANSI CL is supposed to be a powerful production language.  Compatibility with
existing historical code had some weight.

> I think this is probably one of the reasons why Scheme is preferred in 
> university settings as a teaching language.

Of course, because that's what Scheme was designed for.

I don't understand what you're actually suggesting.  If you want a simple,
easy to teach version of Lisp, then you get Scheme.  If you want a useful,
commercial-quality production Lisp, you get Common Lisp.

It seems like you're imagining some new Lisp language, that somehow has the
(teaching) simplicity of Scheme, yet the power (and community support!) of
ANSI CL.  Do you not see that these are intimately related?  ANSI CL has a
few "accidents" specifically because it was designed to be useful.  The
designers weren't stupid; they instead made a deliberate choice for usefulness
over purity.

It's not possible to just merge Scheme and CL, and somehow come up with a
language that has the benefits of both but the drawbacks of neither.  Instead,
you'll wind up with the opposite.

        -- Don
_______________________________________________________________________________
Don Geddis                    http://don.geddis.org              ···@geddis.org
I lay there watching that anthill for hours. I watched them scurrying back and
forth, carrying things, digging new tunnels, and finally it hit me: these are
the things that are biting me.
	-- Deep Thoughts, by Jack Handey [1999]
From: Kent M Pitman
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <sfwisrtsbcr.fsf@shell01.TheWorld.com>
Don Geddis <···@geddis.org> writes:

> It's not possible to just merge Scheme and CL, and somehow come up
> with a language that has the benefits of both but the drawbacks of
> neither.  Instead, you'll wind up with the opposite.

Probably.

And, in effect, ISLISP (www.islisp.info) already tried this.
I'll leave it to others to assess which of the intended traits show
through in the result of this cross-breeding.  I don't know that
ISLISP is the aforementioned "opposite", but it is certainly not
what a lot of people expected going in...

It's also worth noting though that CL itself ALREADY has Scheme as a
parent, so when people try to merge Scheme and CL, they're repeating
history, not making it.  Which may be why it tends to be so unaffected
by attempts to repeat the process.  The mark of Guy Steele is strong
on CL, he having done the lion's share of the writing work and a
considerable amount of the design work on CLTL.  It's not like Steele
was unaware of Scheme when he did it.  There was simply no need to
make CL be Scheme.  Scheme was already Scheme.  CL had a different
purpose, and hence a different design.
From: Kent M Pitman
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <sfwr86iu9o0.fsf@shell01.TheWorld.com>
Pascal Costanza <········@web.de> writes:

> There are lots of directions in which you could change the standard in
> minor but still meaningful ways. [...]
> [...]
> I think that these are all useful and desirable changes (at least from
> some perspectives). Note: I am not advocating any of these changes in
> particular - the reasons why the standard should not be changed still
> apply. This is just to mention that there are quite a bunch of
> possible improvements.

Heh.  I just have to give a historial aside here...

The very first meeting of the Common Lisp committee after CLTL, if I
recall right, was in Monterey, California.  I think it was in the
spring of 1986 (or thereabouts).  It was pre-ANSI.  We who had worked
on CLTL all appeared, but, weirdly, so did again as many other people
who had not originally worked on CLTL and who asserted they had a right to vote.

Being young and brash, I said to them "get your own group. we're in
charge here. we made this thing and if you don't like it, get your own."
But they would not be swayed.  They had used the language and they felt
this bought them a stake.  In retrospect, they were surely right.
(Some may mistake my present message as unchanged since that time. It's
not.  But let me come back to that later.)

At this meeting, Steele showed up with a list of what he had called
"non-controversial changes".  It was corrections to details that he
felt he had gotten wrong, and a few new functions he thought were fun
that he figured others would like, too.  I have the document
somewhere, I think, but it's in storage.

I remember it had XOR on the list and there was a fuss about whether
XOR would be a function or a special form; some thought it was odd for
OR to be a special form and XOR to be a function, but XOR didn't need
to be a special form since it must necessarily evaluate all of its
arguments, left to right.
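The evaluation-order point can be sketched concretely.  The definition
below is hypothetical (it is not the text from Steele's change list),
but it shows why XOR works fine as an ordinary function:

```lisp
;; Hypothetical XOR as an ordinary function.  Unlike OR, it cannot
;; short-circuit, because the result depends on how many of the
;; arguments are true in total -- so all arguments are necessarily
;; evaluated, left to right, before XOR is even called.
(defun xor (&rest args)
  "Return true if an odd number of ARGS are true."
  (let ((result nil))
    (dolist (arg args result)
      (when arg
        (setq result (not result))))))

;; (xor t nil)  => T
;; (xor t t)    => NIL
;; (xor t t t)  => T
```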

What sticks in my mind, though, was that point by point (and this was
a page or two of bulleted text, each item being about a line or two long),
it turned out that almost _nothing_ was non-controversial.  Steele was 
awestruck by this, but all of us learned a valuable lesson: predicting
the needs of a mass community is hard.

We ended up trying, ineffectively, to vote on what to do, but we had no
sense of who should be allowed to vote.  There was a sense that some people
should not be allowed to, or that some people should have more of a vote than
others.  Ultimately, we realized we were unable to fairly bootstrap an
acceptable voting scheme.

It was this realization that caused us to go to ANSI--because we had
no other decision procedure for resolving such issues, and because
ANSI came with prefabricated rules for resolving conflicts, etc.  At
the time, we didn't have any idea what the effects of those rules
would be--we were so disorganized at that meeting, and hence
powerless, that we just wanted rules of any kind...  

Now that many of us have experienced the process, we have a more
refined sense of the effect (and cost) of process.  And that's why I
(and hopefully a few others) think we need a more lightweight process
to move forward.

- - - -

Back to the part about people going away, I know it sometimes sounds
like the same young me saying "go do something elsewhere".  But, you
see, I'm not saying that out of a sense of authority any more.  I
don't define the ANSI process.  I'm not even a member any more.  My
remarks at this point are not "controlling" (though I suppose they may
feel that way) but are rather "helpful".  I'm not saying "You must go
do elsewhere."  I'm saying "I predict disaster if you don't go
elsewhere." and "I think you will be happier if you do go elsewhere."
These are, structurally, quite different things than me in my younger
days being somewhat blindly territorial.  Or so I like to think.
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-2805031539430001@k-137-79-50-101.jpl.nasa.gov>
In article <···············@shell01.TheWorld.com>, Kent M Pitman
<······@world.std.com> wrote:

> Now that many of us have experienced the process, we have a more
> refined sense of the effect (and cost) of process.  And that's why I
> (and hopefully a few others) think we need a more lightweight process
> to move forward.

Yes!

(Apologies for a "me too" post, but since Kent and I seem to be
disagreeing a lot lately I thought it would be worthwhile to point out
that we do agree on some things.)

E.
From: Marc Spitzer
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <868ysq50d1.fsf@bogomips.optonline.net>
Pascal Costanza <········@web.de> writes:

> Kenny Tilton wrote:
> > Erann Gat wrote:
> >
> >> I introduced a particular change as a test case mainly to take the issue
> >> out of the abstract.  I don't care very much about that particular change
> >> (and I have in fact stopped advocating it).
> > Then it is still abstract. CL is perfect. Well, OK, (- 3) should be
> > 3, but other than that...
> 
> There are lots of directions in which you could change the standard in
> minor but still meaningful ways.

You seem to be missing one key point: once you start revising the standard,
all bets are off; it could come out a Lisp-1, for G_d's sake.  Anyone with
an axe to grind can start grinding.  Now, since each person who shows up
gets one vote, think of the damage that 5 people with grant money can
do if they set up a voting bloc: "we all vote for our stuff and we trade
our votes as a bloc."

marc
From: Pascal Costanza
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <costanza-31E402.04305829052003@news.netcologne.de>
In article <··············@bogomips.optonline.net>,
 Marc Spitzer <········@optonline.net> wrote:

> > There are lots of directions in which you could change the standard in
> > minor but still meaningful ways.
> 
> You seem to be missing one key point, once you start revising the standard
> all bets are off, 

Please reread the thread more carefully - I have acknowledged this 
several times.


Pascal
From: Nils Goesche
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <lyof1n7ynp.fsf@cartan.de>
Pascal Costanza <········@web.de> writes:

> There are lots of directions in which you could change the standard
> in minor but still meaningful ways.
> 
> - The deprecated features could be removed. It doesn't make sense to
> keep them in the standard because implementations already support
> them.

I hope you mean they should not be called ``deprecated'' anymore.
People seem to like FIND-IF-NOT.

> - defstruct could be deprecated - it's redundant. (But don't remove
> it immediately - only in ten years, or so. ;)

A lot of things are redundant in Lisp.  People use DEFSTRUCT and CADR
anyway, including me, for various reasons.

> - The data structures could be reworked into a CLOS-based
> collections API. (Yes, this would be against the rule to include
> only things that have already been implemented, but implementing a
> collections API is easy.) This probably means that some of the
> current forms in this regard could be deprecated.

Nothing stops anybody from designing a CLOS based collections API.  It
could get a name and its own standard.
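A layered collections standard of the kind suggested here could start
very small.  All the names below (COLLECTION-SIZE, COLLECTION-REF,
VECTOR-COLLECTION) are invented for illustration and come from no
actual proposal:

```lisp
;; A hypothetical, minimal CLOS collections protocol.  Generic
;; functions dispatch on the collection class, so new collection
;; types can be layered on without touching the core language.
(defgeneric collection-size (collection)
  (:documentation "Number of elements in COLLECTION."))

(defgeneric collection-ref (collection key)
  (:documentation "Element of COLLECTION at KEY."))

;; One sample implementation of the protocol, wrapping a vector.
(defclass vector-collection ()
  ((elements :initarg :elements :reader elements)))

(defmethod collection-size ((c vector-collection))
  (length (elements c)))

(defmethod collection-ref ((c vector-collection) (key integer))
  (aref (elements c) key))

;; (collection-size
;;  (make-instance 'vector-collection :elements #(1 2 3)))  => 3
```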

> - The MOP could be included. (This implies changes to the core
> because it needs to be clarified how the defXXX macros interact with
> the MOP. For example, ANSI CL doesn't require that
> defgeneric/defmethod uses ensure-generic-function.)

Why, if the MOP specifies that they do, then they will if the compiler
is supposed to be MOP compliant.  Perfect example for a layered
standard.  ANSI C doesn't say that errno is an int, but POSIX does,
for instance.

> - The linearization algorithm for superclasses could be corrected.

Is there anything wrong with it?  Has this ever caused a problem in
actual code you wrote?

> - The condition system could be integrated better with CLOS. (I
> think this requires changes to the core, but I am not sure about
> this one.)

But it /is/ already integrated with CLOS!?

> - Pathnames could be amended. (I don't know if this requires changes to
> the core - just my gut feeling.)
> 
> - A defsystem could be added. (This interacts with some of the
> currently deprecated features.)

I don't think you have to change the core for that.

> I think that these are all useful and desirable changes (at least
> from some perspectives). Note: I am not advocating any of these
> changes in particular - the reasons why the standard should not be
> changed still apply. This is just to mention that there are quite a
> bunch of possible improvements.

But there is hardly anything in it that cannot be solved by layered
standards.  So if anybody thinks producing a new standard is such an
easy thing to do, how about producing some layered standards for a
start?  Once there are a whole bunch of them, things might look
different.

Regards,
-- 
Nils Gösche
"Don't ask for whom the <CTRL-G> tolls."

PGP key ID 0x0655CFA0
From: Paul F. Dietz
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <UlCdnRAcjq8gYUmjRTvU3Q@dls.net>
Nils Goesche wrote:

>>- The condition system could be integrated better with CLOS. (I
>>think this requires changes to the core, but I am not sure about
>>this one.)
>  
> But it /is/ already integrated with CLOS!?


No, it isn't.  Condition types may be implemented as standard classes,
but the spec is explicit in saying that this is not required.

The page for DEFINE-CONDITION could use some editing in any case.

	Paul
From: Kent M Pitman
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <sfwn0h6u9dg.fsf@shell01.TheWorld.com>
"Paul F. Dietz" <·····@dls.net> writes:

> Nils Goesche wrote:
> 
> >>- The condition system could be integrated better with CLOS. (I
> >>think this requires changes to the core, but I am not sure about
> >>this one.)
> >  But it /is/ already integrated with CLOS!?
> 
> 
> No, it isn't.  Condition types may be implemented as standard classes,
> but the spec is explicit in saying that this is not required.
> 
> The page for DEFINE-CONDITION could use some editing in any case.

Indeed, the original design was intended to heavily modularize the two systems
and that was only later changed in very minor ways.

The most major things that would help would be to take advantage of
multiple inheritance in a few places to more carefully organize the
set of conditions signaled in certain situations.  For example, the
Lisp Machine system (upon which the condition system was heavily based)
had the notion of FILE-ERROR and NETWORK-ERROR and had the notion that
some files were accessed over the net, such that some file errors might
be network errors, and might be temporary ("host is down") while others
are not ("file is missing"), but we didn't try to capture that in the
initial design.
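The FILE-ERROR/NETWORK-ERROR organization described above might be
sketched like this; NETWORK-ERROR and NETWORK-FILE-ERROR are invented
names used only to illustrate the multiple-inheritance idea, not part
of the standard:

```lisp
;; Hypothetical use of multiple inheritance to organize conditions.
;; FILE-ERROR is a standard condition type; NETWORK-ERROR and
;; NETWORK-FILE-ERROR are invented here for illustration.
(define-condition network-error (error)
  ((host :initarg :host :reader network-error-host)))

;; A file error that is also a network error, e.g. "host is down"
;; while accessing a file over the net.
(define-condition network-file-error (file-error network-error)
  ())

;; A handler established for FILE-ERROR also catches the combined
;; condition, because NETWORK-FILE-ERROR inherits from FILE-ERROR:
;; (handler-case
;;     (error 'network-file-error :pathname #p"/remote/x" :host "srv")
;;   (file-error (c) (declare (ignore c)) :caught-as-file-error))
;; => :CAUGHT-AS-FILE-ERROR
```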

We had discussed in committee, also, that the conditions might be
a READ-ONLY-METACLASS that perhaps did not permit changes to the object
after its creation. (There as a debate about whether init methods needed
to be able to do certain kinds of changes to the structure before the
structure was hardened against change, and how this might occur.)
The whole matter was left open to vendor experimentation.

Note, however, that this could all be done as a layered consensus/agreement
among implementors and does not require change to the underlying standard.  
From: Pascal Costanza
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <costanza-C46787.20495428052003@news.netcologne.de>
In article <··············@cartan.de>, Nils Goesche <······@cartan.de> 
wrote:

> Pascal Costanza <········@web.de> writes:
> 
> > There are lots of directions in which you could change the standard
> > in minor but still meaningful ways.
> > 
> > - The deprecated features could be removed. It doesn't make sense to
> > keep them in the standard because implementations already support
> > them.
> 
> I hope you mean they should not be called ``deprecated'' anymore.
> People seem to like FIND-IF-NOT.

OK, correction: some deprecated features should be removed and some 
should become undeprecated. :)

> > - defstruct could be deprecated - it's redundant. (But don't remove
> > it immediately - only in ten years, or so. ;)
> 
> A lot of things are redundant in Lisp.  People use DEFSTRUCT and CADR
> anyway, including me, for various reasons.

...but most of the defstruct features could be integrated into CLOS, 
right?

And yes, C**R should remain in the language. :) (And others will say 
this is redundant and bad, and this is why opening the standard can 
lead to a very complicated situation...)
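As a rough sketch of that mapping (class and function names made up
for illustration): most of what DEFSTRUCT generates -- slots with
defaults, accessors, a constructor, a predicate -- has a direct
DEFCLASS counterpart; what a portable CLOS version cannot promise is
DEFSTRUCT's fixed slot layout and the accessor inlining that layout
enables:

```lisp
;; A typical DEFSTRUCT ...
(defstruct point
  (x 0)
  (y 0))

;; ... and a rough CLOS counterpart of the conveniences it generates.
(defclass pt ()
  ((x :initarg :x :initform 0 :accessor pt-x)
   (y :initarg :y :initform 0 :accessor pt-y)))

(defun make-pt (&rest initargs)         ; the constructor
  (apply #'make-instance 'pt initargs))

(defun pt-p (object)                    ; the predicate
  (typep object 'pt))
```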

> > - The data structures could be reworked into a CLOS-based
> > collections API. (Yes, this would be against the rule to include
> > only things that have already been implemented, but implementing a
> > collections API is easy.) This probably means that some of the
> > current forms in this regard could be deprecated.
> 
> Nothing stops anybody from designing a CLOS based collections API.  It
> could get a name and its own standard.

Yep, you are probably right.

> > - The MOP could be included. (This implies changes to the core
> > because it needs to be clarified how the defXXX macros interact with
> > the MOP. For example, ANSI CL doesn't require that
> > defgeneric/defmethod uses ensure-generic-function.)
> 
> Why, if the MOP specifies that they do, then they will if the compiler
> is supposed to be MOP compliant.  Perfect example for a layered
> standard.  ANSI C doesn't say that errno is an int, but POSIX does,
> for instance.

MCL doesn't implement the MOP correctly in this regard. They can claim 
that *their* MOP does things differently from AMOP. The problem here is 
that there is no standard for a MOP. (Not that Digitool have defended 
their code like this. This is just hypothetical.)

> > - The linearization algorithm for superclasses could be corrected.
> 
> Is there anything wrong with it?  Has this ever caused a problem in
> actual code you wrote?

There are several papers on this issue. No, it doesn't affect the code 
you write - but it's too easy a target for Common Lisp opponents. A 
corrected version probably wouldn't affect your code.

(I recall reading a post by a Schemer who attacked this specific 
"problem" of Common Lisp. Of course, you can defend Common Lisp here, 
but only people who have already converted to Common Lisp will 
understand you.)

> > - The condition system could be integrated better with CLOS. (I
> > think this requires changes to the core, but I am not sure about
> > this one.)
> 
> But it /is/ already integrated with CLOS!?

No, I don't think so.

> > - Pathnames could be amended. (I don't know if this requires changes to
> > the core - just my gut feeling.)
> > 
> > - A defsystem could be added. (This interacts with some of the
> > currently deprecated features.)
> 
> I don't think you have to change the core for that.

I think the widely used defsystems redefine require and provide for 
their purposes.

> > I think that these are all useful and desirable changes (at least
> > from some perspectives). Note: I am not advocating any of these
> > changes in particular - the reasons why the standard should not be
> > changed still apply. This is just to mention that there are quite a
> > bunch of possible improvements.
> 
> But there is hardly anything in it that cannot be solved by layered
> standards.  So if anybody thinks producing a new standard is such an
> easy thing to do, how about producing some layered standards for a
> start?  Once there are a whole bunch of them, things might look
> different.

Probably you are right - the substandards would be an important step in 
the right direction. Only an institutionalized standardization process 
can lead to an agreement on the need for more fundamental changes.


Pascal
From: Michal Przybylek
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <bb4tqq$ni3$1@nemesis.news.tpi.pl>
"Pascal Costanza" <········@web.de> wrote:

> > > - defstruct could be deprecated - it's redundant. (But don't remove
> > > it immediately - only in ten years, or so. ;)
> >
> > A lot of things are redundant in Lisp.  People use DEFSTRUCT and CADR
> > anyway, including me, for various reasons.
>
> ...but most of the defstruct features could be integrated into CLOS,
> right?


How ?

For example - nullary-methods/fields selection cannot be done at compile
time (and so inline substitution can't be applied) in the presence of
"change-class".
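A small illustration of the point (class names made up): the compiler
cannot pin an accessor to a fixed slot offset, because the instance's
class -- and with it the slot layout -- can change at run time. (Per
the standard, a slot of the same name keeps its value across
CHANGE-CLASS.)

```lisp
(defclass point-2d ()
  ((x :initarg :x :accessor px)))

(defclass point-3d ()
  ((z :initform 0)                 ; a new slot, possibly stored first
   (x :initarg :x :accessor px)))  ; same name, potentially new location

(defun demo ()
  (let ((p (make-instance 'point-2d :x 1)))
    (change-class p 'point-3d)  ; layout changes under the running code
    (px p)))                    ; X keeps its value, but PX could not
                                ; have been compiled to a fixed offset
```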


> > > - The linearization algorithm for superclasses could be corrected.
> >
> > Is there anything wrong with it?  Has this ever caused a problem in
> > actual code you wrote?
>
> There are several papers on this issue.


Which papers ?


I'm sure that this is a matter of taste :-)



mp
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-2905030903470001@192.168.1.51>
In article <············@nemesis.news.tpi.pl>, "Michal Przybylek"
<········@poczta.onet.pl> wrote:

> For example - nullary-methods/fields selection cannot be done at compile
> time (and so inline substitution can't be applied) with presence of
> "change-class".

This capability could be recovered by adding class-sealing.

E.
From: Pascal Costanza
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <costanza-1D650B.14345929052003@news.netcologne.de>
In article <············@nemesis.news.tpi.pl>,
 "Michal Przybylek" <········@poczta.onet.pl> wrote:

> > > > - The linearization algorithm for superclasses could be corrected.
> > >
> > > Is there anything wrong with it?  Has this ever caused a problem in
> > > actual code you wrote?
> >
> > There are several papers on this issue.
> 
> 
> Which papers ?

See http://doi.acm.org/10.1145/191080.191110, http://doi.acm.org/10.1145/141936.141939 and http://www.webcom.com/haahr/dylan/linearization-oopsla96.html

Also see the book "Putting Metaclasses to Work" by Ira Forman and Scott 
Danforth, which has good material on this issue. 



Pascal
From: Michal Przybylek
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <bb8irk$2ft$1@atlantis.news.tpi.pl>
"Pascal Costanza" <········@web.de> wrote:

> > > > > - The linearization algorithm for superclasses could be corrected.
> > > >
> > > > Is there anything wrong with it?  Has this ever caused a problem in
> > > > actual code you wrote?
> > >
> > > There are several papers on this issue.
> >
> >
> > Which papers ?
>
> See http://doi.acm.org/10.1145/191080.191110,
> http://doi.acm.org/10.1145/141936.141939 and
> http://www.webcom.com/haahr/dylan/linearization-oopsla96.html


I don't have access to the ACM library, so I have read only the last one.


Ok, I agree with you that the linearization algorithm could be corrected. But
it's not obvious how this should be done. Maybe C3 is better than the
standard CLOS linearization. Still, it is not good enough.

In general:


A < B  <=>  some[E \in direct-supertypes(A)] B = E or E < B

A <= B  <=>  A < B or A = B

m1 <= m2 <=>
    all[i \in positions-of-dispatchable-inputs(m) +
        positions-of-non-dispatchable-outputs(m)] arg(m1, i) <= arg(m2, i)
    and all[i \in positions-of-non-dispatchable-inputs(m) +
        positions-of-dispatchable-outputs(m)] arg(m2, i) <= arg(m1, i)

where + denotes set union and m is the generic function to which both
m1 and m2 belong


MSA(m, (A1, A2, ... An)) = {k \in methods(m) : (A1, A2, ... An) <= k,
    all[l \in methods(m), (A1, A2, ... An) <= l] k <= l}



Only when MSA(m, (A1, A2, ... An)) = {} is there no determined method
to select. Both C3 and CLOS fail in this case.



mp
From: Paul F. Dietz
Subject: Dylan's superclass linearization algorithm (was Re: ANN: Lisp-2 vs Lisp-1 argument has been settled)
Date: 
Message-ID: <X4WcnVLZ0tyvMkWjXTWcoQ@dls.net>
Pascal Costanza wrote:

>>>- The linearization algorithm for superclasses could be corrected.
>>
>>Is there anything wrong with it?  Has this ever caused a problem in
>>actual code you wrote?
> 
> There are several papers on this issue. No, it doesn't affect the code 
> you write - but it's a too easy target for Common Lisp opponents. A 
> corrected version probably wouldn't affect your code.


The 'surprise' is CLOS's nonmonotonicity in some situations.  It's been
claimed that Dylan's algorithm (which is identical to CLOS's when CLOS
produces monotonic inheritance) interacts better with sealed generic
functions. [1]

[1] Barrett et al., "A Monotonic Superclass Linearization for Dylan",
   OOPSLA '96, pages 69-82.
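The example from that paper can be tried directly. DAY-BOAT precedes
WHEEL-BOAT in PEDAL-WHEEL-BOAT's class precedence list, but the CLOS
algorithm reverses the two in the precedence list of the subclass
PEDALO -- and the flip is visible through ordinary method dispatch:

```lisp
;; The boat hierarchy from Barrett et al., OOPSLA '96.
(defclass boat () ())
(defclass day-boat (boat) ())
(defclass wheel-boat (boat) ())
(defclass engine-less (day-boat) ())
(defclass small-multihull (day-boat) ())
(defclass pedal-wheel-boat (engine-less wheel-boat) ())
(defclass small-catamaran (small-multihull) ())
(defclass pedalo (pedal-wheel-boat small-catamaran) ())

(defgeneric kind (b))
(defmethod kind ((b day-boat)) 'day-boat)
(defmethod kind ((b wheel-boat)) 'wheel-boat)

(kind (make-instance 'pedal-wheel-boat)) ; => DAY-BOAT
(kind (make-instance 'pedalo))           ; => WHEEL-BOAT (nonmonotonic!)
```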

	Paul
From: Paul Foley
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <m2vfvu1uk6.fsf@mycroft.actrix.gen.nz>
On Wed, 28 May 2003 19:20:28 +0200, Pascal Costanza wrote:

> Kenny Tilton wrote:
>> Erann Gat wrote:
>> 
>>> I introduced a particular change as a test case mainly to take the issue
>>> out of the abstract.  I don't care very much about that particular change
>>> (and I have in fact stopped advocating it).
>> Then it is still abstract. CL is perfect. Well, OK, (- 3) should be
>> 3, but other than that...

And (/ 3) should be an error?  Perhaps, but the current special-case
meanings are more convenient.

> There are lots of directions in which you could change the standard in
> minor but still meaningful ways.

> - The deprecated features could be removed. It doesn't make sense to
> keep them in the standard becaues implementations already support them.

I'd prefer they just be un-deprecated (except perhaps for :test-not
arguments).

> - The data structures could be reworked into a CLOS-based collections
> API. (Yes, this would be against the rule to include only things that
> have already been implemented, but implementing a collections API is
> easy.) This probably means that some of the current forms in this
> regard could be deprecated.

You can do that in a layered standard.

> - The MOP could be included. (This implies changes to the core because
> it needs to be clarified how the defXXX macros interact with the
> MOP. For example, ANSI CL doesn't require that defgeneric/defmethod
> uses ensure-generic-function.)

Another layered standard.

> - The linearization algorithm for superclasses could be corrected.

Yes.

[Though I've never heard of anyone who was actually hurt by it in
practice, just contrived examples; and you can fix it yourself with a
half dozen lines of MOP code, if you are]
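For the curious, the Dylan/C3 merge itself is short and portable; only
the hook -- a metaclass specializing COMPUTE-CLASS-PRECEDENCE-LIST --
is implementation-specific MOP territory. A sketch of the
linearization over plain symbols, so it can be tried without the MOP
(C3-LINEARIZE and the SUPERS-FN protocol are made up for the
illustration):

```lisp
(defun c3-merge (sequences)
  ;; Repeatedly take the first head that appears in the tail of no
  ;; remaining sequence; error out if the constraints conflict.
  (loop with result = '()
        do (setf sequences (remove nil sequences))
           (when (null sequences)
             (return (nreverse result)))
           (let ((next (loop for s in sequences
                             for c = (first s)
                             unless (some (lambda (other)
                                            (member c (rest other)))
                                          sequences)
                               return c)))
             (unless next
               (error "Inconsistent precedence constraints: ~S" sequences))
             (push next result)
             (setf sequences
                   (mapcar (lambda (s) (remove next s)) sequences)))))

(defun c3-linearize (class supers-fn)
  ;; SUPERS-FN maps a class designator to its direct superclasses.
  (let ((supers (funcall supers-fn class)))
    (cons class
          (c3-merge
           (append (mapcar (lambda (s) (c3-linearize s supers-fn))
                           supers)
                   (list (copy-list supers)))))))
```

On the Dylan paper's boat hierarchy this yields a monotonic precedence
list for PEDALO, unlike the standard CLOS order.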

> - The condition system could be integrated better with CLOS. (I think
> this requires changes to the core, but I am not sure about this one.)

It is.  [Well, it's allowed to not be, but it's also allowed to be]

> - Pathnames could be amended. (I don't if this requires changes to the
> core - just my gut feeling.)

Amended how?  You mean something other than "better specified" (which,
again, can be done without opening up the standard)?

> - A defsystem could be added. (This interacts with some of the
> currently deprecated features.)

Yet another layered standard.

> I think that these are all useful and desirable changes (at least from
> some perspectives). Note: I am not advocating any of these changes in
> particular - the reasons why the standard should not be changed still
> apply. This is just to mention that there are quite a bunch of
> possible improvements.

IMO, the only one that's even a candidate for a change in the CL
standard is the CPL ordering issue.

-- 
There is no reason for any individual to have a computer in their home.
                                           -- Ken Olson, President of DEC
(setq reply-to
  (concatenate 'string "Paul Foley " "<mycroft" '(··@) "actrix.gen.nz>"))
From: Mario S. Mommer
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <fz8ysqdgar.fsf@cupid.igpm.rwth-aachen.de>
···@jpl.nasa.gov (Erann Gat) writes:
> 
> Well, thanks for at least granting that I'm not dumb.

You are welcome.

> I introduced a particular change as a test case mainly to take the issue
> out of the abstract.  I don't care very much about that particular change
> (and I have in fact stopped advocating it).

I see. It was some sort of stratagem to "test" the willingness of the
community to change.

> I do believe that some change
> to how dynamic binding is done would be pedagogically useful, but that
> issue is dominated by the fact that at the moment there seems to be no
> mechanism for collectively managing change -- any change -- in the
> language, except to resist it at all costs.

So you propose a bad idea, and when people do not react positively to
it (you have said that not even you really care about it - sure, I'm
misrepresenting what you wrote) you conclude in a stroke of genius
that people want to resist change at all costs. That doesn't follow.
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-2805031244350001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@cupid.igpm.rwth-aachen.de>, Mario S. Mommer
<········@yahoo.com> wrote:

> > I introduced a particular change as a test case mainly to take the issue
> > out of the abstract.  I don't care very much about that particular change
> > (and I have in fact stopped advocating it).
> 
> I see. It was some sort of stratagem to "test" the willingness of the
> community to change.

In part.  The proposal was put forward in good faith.  I actually thought
at the time that it was a good idea.  I have since changed my mind.  (Not
about the concept -- I still think that specials can and ought to be
simplified, and that this can be done in a backward-compatible way, but
Kent convinced me that the particular design I proposed wasn't very good.)

> > I do believe that some change
> > to how dynamic binding is done would be pedagogically useful, but that
> > issue is dominated by the fact that at the moment there seems to be no
> > mechanism for collectively managing change -- any change -- in the
> > language, except to resist it at all costs.
> 
> So you propose a bad idea, and when people do not react positively to
> it (you have said that not even you really care about it - sure, I'm
> misrepresenting what you wrote) you conclude in a stroke of genius
> that people want to resist change at all costs. That doesn't follow.

True, that does not follow, but you have set up a straw man by leaving out
some important facts, not least of which is that people have explicitly
said that they want to resist change.  Their argument is: change can cause
harm (which I do not deny) therefore change must be resisted.  Actually,
to be fair, their argument is more like: change is much more likely to do
harm than good (which I also do not deny) therefore it must be resisted. 
My counter-argument is, essentially: a lack of change is also likely to do
harm, therefore, setting up some process for managing change is a better
strategy for minimizing the net harm than simply resisting change.

(Additional support for the claim that Lisp resists change is the simple
fact that the language has pretty much not changed in ten years.)

E.
From: Kent M Pitman
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <sfwel2iu8u5.fsf@shell01.TheWorld.com>
···@jpl.nasa.gov (Erann Gat) writes:

> ... you have set up a straw man by leaving out
> some important facts, not least of which is that people have explicitly
> said that they want to resist change.  Their argument is: change can cause
> harm (which I do not deny) therefore change must be resisted.

No. 

One of their arguments is this.

Another of their arguments is "there are other ways to accomplish the
high level goal you have".

- - - 

Incidentally, it might interest you to know that I also hold the following
two seemingly contradictory thoughts in my head:

 - The US Constitution made some major blunders that rewriting the
   constitution would fix.

 - It would be a disaster to attempt to rewrite the US Constitution.

My point here not being to open a bunch of debate on US politics, but
rather to say that I am capable of holding this same opinion about relative
dangers of change even in situations where I'm on "the other side".
From: Mario S. Mommer
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <fzwugay95m.fsf@cupid.igpm.rwth-aachen.de>
···@jpl.nasa.gov (Erann Gat) writes:
> > I see. It was some sort of stratagem to "test" the willingness of the
> > community to change.
> 
> In part.  The proposal was put forward in good faith.  I actually thought
> at the time that it was a good idea.  I have since changed my mind.  (Not
> about the concept -- I still think that specials can and ought to be
> simplified, and that this can be done in a backward-compatible way, but
> Kent convinced me that the particular design I proposed wasn't very good.)

This is good news.

> > So you propose a bad idea, and when people do not react positively to
> > it (you have said that not even you really care about it - sure, I'm
> > misrepresenting what you wrote) you conclude in a stroke of genius
> > that people want to resist change at all costs. That doesn't follow.
> 
> True, that does not follow, but you have set up a straw man by leaving out
> some important facts, not least of which is that people have explicitly
> said that they want to resist change.

That is exactly what worries me: you did not get that this was a
non-sequitur? Excuse me but I can't help thinking that you have an
agenda here that you cannot hold openly.

>  Their argument is: change can cause
> harm (which I do not deny) therefore change must be resisted.  Actually,
> to be fair, their argument is more like: change is much more likely to do
> harm than good (which I also do not deny) therefore it must be resisted. 
> My counter-argument is, essentially: a lack of change is also likely to do
> harm, therefore, setting up some process for managing change is a better
> strategy for minimizing the net harm than simply resisting change.
> 
> (Additional support for the claim that Lisp resists change is the simple
> fact that the language has pretty much not changed in ten years.)

Talk about sub-standards has reached almost an annoying level lately
(not that it annoys me). Proposals have been made and have not
triggered flamefests. That seems to be the sort of change people
want *badly*. Do you copy, Erann? /change!/

The fact that the change that people seem to support is not the change
you care about does not imply that people do not want change. Does
/that/ follow?
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-2805031718250001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@cupid.igpm.rwth-aachen.de>, Mario S. Mommer
<········@yahoo.com> wrote:

> That is exactly what worries me: you did not get that this was a
> non-sequitur? Excuse me but I can't help thinking that you have an
> agenda here that you cannot hold openly.

I don't see what the one has to do with the other, but I have no hidden
agenda.  I have been very open and honest about my goals and motivations.

E.
From: Pascal Costanza
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <bb2g4p$vjk$1@f1node01.rhrz.uni-bonn.de>
Mario S. Mommer wrote:

> I was being sarcastic. Erann has been arguing with an incredibly
> faulty logic. He is not dumb, so I cannot but see ill will in his
> actions.
> 
> He just wants to get his pet idea into the standard, even though he is
> the only one who cares about the issues he is raising, and is ready to
> spill FUD and misrepresent what other people have said if he doesn't
> get it his way (which he won't, afaict).

I don't think that this is a fair criticism of Erann's behavior.

Before I come to what I understand as the core of the problem, here is a 
quote from an interview with James Gosling wrt criticisms of the size of 
the Java Standard Edition:

"What has really gone nuts is all the different APIs that are part of 
[the Java 2 Platform, Standard Edition,] J2SETM. And this question, in 
some sense, is unanswerable. It says, if you could take a few things out 
of [the] J2SE [platform], what would they be? One of the tragedies we 
have is that we've got so many customers and everything that is in the 
platform is critical to a pretty large group of customers. So, for any 
particular person, any particular developer, not all of [the] J2SE 
[platform] is going to matter. But for every developer, the slice of the 
platform that they care about is different." 
(http://java.sun.com/features/2002/03/gosling.html)

I think this characterization is true of any large piece of software.

So it might be true that the features Erann wants to see as part of the 
language might be just minor ones that are not very important to many 
people, but even if they are important to only a few people this means 
that there is a certain level of justification to have them in the 
language (exactly the level represented by the group of people who 
want these features).

I understand Kent's usual reactions to such requests for changing the 
standard as follows: The ANSI standardization process just doesn't suit 
such a way of changing the language.

What Erann wants (AFAICS) is the continuous improvement of the language 
in the sense of piecemeal growth (see 
http://c2.com/cgi/wiki?PiecemealGrowth). This includes both adding new 
layers on top of the language and changing internals of the language.

What Kent says (AFAICS) is that the ANSI standardization process is only 
suited for "big bang" approaches - as soon as you open up the 
standardization process for one change you have to consider almost any 
requests for changes. This might turn out to be worse than keeping the 
language just as it is.

So I think this is the core of the problem: It would probably be better 
to have a piecemeal growth process for improving Common Lisp, but we can 
only have a big bang process if any.

At the moment, I can see only two ways out of this: Either the Common 
Lisp standard is handed over to another standardization process (but I 
don't know whether this can be done), or else new Lisp dialects have to 
emerge.

The latter is already happening: Scheme implementations have improved 
over time, some more Lisp dialects are maintained, and new ones like Arc 
and Goo have been started.

Just some food for thought: Lisp had its best time when there were many 
dialects and not one big standardized one. Maybe it wouldn't be too bad 
to have a new wave of Lisp dialects and think about another round of 
integration of good ideas only in maybe 10 or 20 years from now on.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Kent M Pitman
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <sfwr86jb04e.fsf@shell01.TheWorld.com>
Pascal Costanza <········@web.de> writes:

[Considerable good summarizing omitted.]

> So I think this is the core of the problem: It would probably be
> better to have a piecemeal growth process for improving Common Lisp,
> but we can only have a big bang process if any.

Yes and no.  I think the key is to understand that the 'growth'
doesn't have to take place as "Common Lisp" per se.  A lot of the
fighting is over the name.  But Java is more than the Java language,
and C is more than the C language.

For a long time, the CL approach was to unify the language and
library.  We give you a lot more with the language than other
languages do.  And then one day we woke up and said "oh my goodness,
the world has gotten big."  We can no longer afford to re-release a
whole new language in order to address what are effectively "library"
needs.  Moreover, the language we have is suitably good that while
tweaking it could (under laboratory conditions) yield a better
language, it would _also_ (under committee conditions) yield a much
worse language.  Many who have not been through the process assume
that their own good will or even the shared good will of many people
would be enough to keep catastrophically bad things from happening if
the process were really opened, but those of us who have been there
know that these processes can result in surprising things--like things
getting added to the language that no one (not just figuratively,
but literally) wanted.  

How can this happen?  Because person A wants a certain feature and
introduces a proposal for it.  Person B feels that's a bad idea so
proceeds to offer amendments.  Person A doesn't like the amendments,
but person C, being helpful, thinks it will improve A's proposal so
helps to get the amendment added.  Eventually, after several rounds of
this, B and C (who never wanted the original thing in the first place) are
now comfortable adding the feature because its ill effects have been 
neutralized.  Person A, who did want the feature, sees none of the 
original good effects but still has a proposal on the table in danger
of passing.  Person A may even try to withdraw the proposal, but the
others having spent all this time on it are now proud of their work
on it and want to see it not go to waste.  This exact scenario almost
happened with special variables and lexical variables, btw.  We almost
got a badly amended proposal for lexical variables introduced, and only
by pleading that this would hurt the case for later introduction of a
proper proposal did we manage to back out of it in time.

By contrast, if an individual vendor has a working implementation of
something it's harder for some single person at a committee meeting to
offer an amendment because they can't speak to how it will affect things.
There's a lot of testing of the vendor implementation and it works fine.
If multiple vendors have implemented the same thing, the item is even 
more resilient against meddling.  This is why it's important to have
people test things in vendor implementations or user code first.

But, moreover, if it's possible for a vendor to do this work compatibly
then that's usually a proof that the spec didn't have to change.
So in that case, you really don't need a change anyway. You just need
layered standards, such as we need for pathnames, to clarify just the
details of how pathnames work.  And in that case, vendors who don't
like the layered standard can opt not to support it--which has the nice
feature that they aren't opting out of the whole language, just one
extension.  It doesn't mean they have to secede from the community.

At this point in CL's lifetime, I think we have to accept that the 
language+library model most languages use is a good one, so that individual
libraries can each have timelines of their own.  Making the entire 
community stop and re-synchronize in its entirety is too much cost. 
No one would _think_ of writing a parallel processing system that did a
without-interrupts across multiple processors, and yet the creation of
a new standard is really a lot like that.  Everyone has to stop and re-synch
on new code across the board.  It's a very messy synchronization process.
It can be staged a little, but there is a quick desire to re-unify the
community around a single dialect.  The cltl->cltl2 and cltl2->ansi
transitions were very unpleasant for vendors, who had to either
maintain  two full implementations for a period (requiring double staffs
or requiring putting half resources on each project) or had to just 
suddenly dump their existing customer base who wanted to stay in the past
at the time the vendor wanted to move ahead...  This should only be done
in a dire circumstance that the community has generated within itself; it
should not be done by those in the standards community "just because they
can".

> At the moment, I can see only two ways out of this: Either the Common
> Lisp standard is handed over to another standardization process (but I
> don't know whether this can be done), or else new Lisp dialects have
> to emerge.

This is an all or nothing approach.

The process I'm engaged in producing (and yes, I'm behind schedule in 
having gotten to it because of stupidity you don't want to know about
with my house renovations, etc., but I have been actively working on it
last week and this week) is a thing I call "substandards", which is a way of
addressing the lingering problems in a small, layered way.

I'm still working out some details of the approach, but I'll be publishing
stuff here as soon as I have things ready to go.

> The latter is already happening: Scheme implementations have improved
> over time, some more Lisp dialects are maintained, and new ones like
> Arc and Goo have been started.
> 
> Just some food for thought: Lisp had its best time when there were
> many dialects and not one big standardized one. Maybe it wouldn't be
> too bad to have a new wave of Lisp dialects and think about another
> round of integration of good ideas only in maybe 10 or 20 years from
> now on.

Yes, it was not necessarily intended that CL would be the only Lisp
dialect.  But the tradeoff was well-understood:  rapid change is the
enemy of stability.  Some things don't require stability, but other things
do.  Ultimately, you have to deal with the issue that anything that becomes
popular will require stability.  And stability is less fun, less flashy,
etc. than rapid change.  Some people seem to equate it with death... 
certainly the death of new ideas.  But it's not.  It just means that not
every vehicle is the place for such new ideas to appear.  And, at some 
point, you have to ask people to use the language itself.  That's what it's
there for.  Not all new ideas have to happen in the design of the language.
It's a malleable language.  Use it for what it's intended to be used for.
To overcome its own shortcomings.
From: Kenny Tilton
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <3ED4C5E8.4030908@nyc.rr.com>
Pascal Costanza wrote:
> Just some food for thought: Lisp had its best time when there were many 
> dialects and not one big standardized one.

CMIIW, but I hope you do not mean to imply "/because/ there were many 
dialects".

> Maybe it wouldn't be too bad 
> to have a new wave of Lisp dialects and think about another round of 
> integration of good ideas ...

I was starting to think nothing could stop CL from taking over the 
world, but that would do it in a heartbeat.

 > ...only in maybe 10 or 20 years from now on.

Whew! You had me going there for a second.

:)

-- 

  kenny tilton
  clinisys, inc
  http://www.tilton-technology.com/
  ---------------------------------------------------------------
"Everything is a cell." -- Alan Kay
From: Nils Goesche
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <lyznl785tt.fsf@cartan.de>
Pascal Costanza <········@web.de> writes:

> I understand Kent's usual reactions to such requests for changing
> the standard as follows: The ANSI standardization process just
> doesn't suit such a way of changing the language.
> 
> What Erann wants (AFAICS) is the continuous improvement of the
> language in the sense of piecemeal growth (see
> http://c2.com/cgi/wiki?PiecemealGrowth). This includes both adding
> new layers on top of the language and changing internals of
> the language.
> 
> What Kent says (AFAICS) is that the ANSI standardization process is
> only suited for "big bang" approaches - as soon as you open up the
> standardization process for one change you have to consider almost
> any requests for changes. This might turn out to be worse than
> keeping the language just as it is.

This sounds as if Kent was also against new layered standards on top
of CL, but I think neither he nor anybody else is.

> So I think this is the core of the problem: It would probably be
> better to have a piecemeal growth process for improving Common Lisp,
> but we can only have a big bang process if any.

No, that wouldn't be better.  Note that this ``Big Bang'' process only
applies to changes to the core.  Vendors can make compatible
extensions on top of the core at any time and nobody will complain.

I for one would immediately stop using Lisp the moment it becomes a
playground for theorists and language designers changing the core of
the language every year.  I write programs in Lisp.  I can do that
better in Lisp than in any other language.  I reuse stuff I wrote a
long time ago.  I want things to stay like this.  There is absolutely
nothing any of these proponents of ``change'' have come up with so far
that I feel would make me any more productive.  For me, all these
discussions about ``what if things were like this or like that in Lisp
instead of the way they are?'' are nothing but idle games and
speculations.  So Erann wants to change specials, for one thing.  I
use specials all the time and they just ... work the way they are.
Learning how to use them is not rocket science (Erann should know ;-).
Why should I rewrite all my code just because someone thinks he found
a more newbie-friendly way of using them?  The next guy wants to have
continuations in the language, thus making all of my code slower and
breaking everything I wrote with UNWIND-PROTECT.  The next guy will
change CL into a Lisp-1.  There is probably not a single page of code
of mine that would survive this particular change alone.  Next is
static typing, probably.  Paul Graham, OTOH, will remove CLOS.  Oh,
and none of my macros is going to work anymore because people will
change the macro system to use ``hygienic'' macros exclusively,
because..., well, ``hygienic'' is good and clean, isn't it?
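Nils's UNWIND-PROTECT worry can be made concrete.  The idiom below is a
minimal sketch (ACQUIRE-LOCK and RELEASE-LOCK are hypothetical names,
not from the CL standard) of code that assumes control leaves the
protected body exactly once; re-entrant continuations in the style of
Scheme's call/cc would break that assumption:

```lisp
;; Sketch of the UNWIND-PROTECT idiom under discussion.  The cleanup
;; form is guaranteed to run when control leaves the body, whether
;; normally or via a non-local exit (THROW, RETURN-FROM, an error).
;; That guarantee presumes control exits the body exactly once; if
;; continuations let control leave and later RE-ENTER the body, the
;; lock below would already have been released.
(defun call-with-lock (lock thunk)      ; hypothetical helper
  (acquire-lock lock)                   ; hypothetical lock API
  (unwind-protect
       (funcall thunk)                  ; body: may exit non-locally
    (release-lock lock)))               ; cleanup: runs once on exit
```

This is why adding first-class continuations to CL is not a purely
additive change: existing UNWIND-PROTECT-based resource code encodes
the one-exit assumption.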

No, that's not paradise.  It is hell.  The only people who like this
are theorists who do not write programs for a living but are sitting
in a university being paid for conducting such experiments.  And there
already /is/ a playground for them where they can do no harm: That's
precisely what Scheme is for.  It is perfect for that.  Scheme is
small.  You can implement all of your Favorite Things into your Scheme
dialect and impress your friends and colleagues.  Scheme people like
this kind of general talk about how things would be if ...  They also
change their language all the time.  Ever read books and papers that
use Scheme as an example language?  Hardly any two of those use the
same language: Scheme is constantly changing.  There is your paradise.

It is not mine, though.

Regards,
-- 
Nils Gösche
"Don't ask for whom the <CTRL-G> tolls."

PGP key ID 0x0655CFA0
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-2805031042250001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@cartan.de>, Nils Goesche <······@cartan.de> wrote:

> No, that wouldn't be better.  Note that this ``Big Bang'' process only
> applies to changes to the core.  Vendors can make compatible
> extensions on top of the core at any time and nobody will complain.

And they do indeed do this all the time, but that leaves a crucial need
unmet: pedagogy.  If I want to teach Lisp to someone I have to either
teach them the standard with all its warts, or a dialect that is
implementable within the standard, or choose a particular vendor's
solution.

The problem with choosing a particular vendor's solution is that I'm not
really teaching them Common Lisp, I'm teaching them Vendor-X Lisp, in
much the same way that someone who learns Visual C++ with MFC isn't really
learning C++.  In the case of Visual C++ the fact that you're locked in a
cage can often be safely ignored because it's an awfully big cage, but if
what I want to do can only be done in, say, Macintosh Common Lisp, then
the fact that you're locked in a cage is suddenly much more manifest.

Sub-standards will help a lot, but so far they are vaporware.

> I for one would immediately stop using Lisp the moment it becomes a
> playground for theorists and language designers changing the core of
> the language every year.

Just because someone opposes one extreme does not mean that they advocate
the opposite extreme.

>  There is absolutely
> nothing any of these proponents of ``change'' have come up with so far
> that I feel would make me any more productive.

So that is your proposed criterion for evaluating change, whether or not
it makes Nils Goesche more productive?

>  So Erann wants to change specials, for one thing.

No, I don't.  (If I were designing a language for my own personal use I'd
do dynamic binding differently, but unlike you I recognize that my opinion
is not the only one that matters.)

> I use specials all the time and they just ... work the way they are.

And that is all that matters.  That they work for Nils Goesche.

> Learning how to use them is not rocket science (Erann should know ;-).

True, but it is more complicated than it needs to be (1), and it is a
continual source of confusion for newcomers.

> Why should I rewrite all my code just because someone thinks he found
> a more newbie-friendly way of using them?

Do you not understand the concept of backwards-compatibility?

> The next guy wants to have
> continuations in the language, thus making all of my code slower and
> breaking everything I wrote with UNWIND-PROTECT.  The next guy will
> change CL into a Lisp-1.  There is probably not a single page of code
> of mine that would survive this particular change alone.  Next is
> static typing, probably.  Paul Graham, OTOH, will remove CLOS.  Oh,
> and none of my macros is going to work anymore because people will
> change the macro system to use ``hygienic�� macros exclusively,
> because..., well, ``hygienic�� is good and clean, isn't it?

Now there is some sound reasoning:  some changes are bad.  Therefore all
changes are bad.  Q.E.D.  And people accuse me of spreading FUD?  Bah!

E.

Notes:

(1) I base this claim on the fact that I have tried to write a beginner's
guide to special variables and found that the easiest way to explain them
involves first describing a different, hypothetical language, and then
explaining Common Lisp's behavior in terms of that language.
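The confusion the footnote alludes to can be sketched in a few lines
(variable names here are hypothetical, chosen only for illustration).
The gotcha: DEFVAR proclaims a name globally special, so every later
LET of that same name is a dynamic rebinding, not a fresh lexical
variable, which surprises newcomers who expect LET to always be lexical:

```lisp
;; DEFVAR makes *X* globally special: all bindings of it are dynamic.
(defvar *x* 1)

(defun get-x () *x*)     ; reads whatever dynamic binding is current

(let ((*x* 2))           ; dynamically rebinds *x* for the extent of the LET
  (get-x))               ; => 2, even though GET-X closes over nothing

(let ((y 2))             ; an ordinary lexical binding, by contrast
  (get-x))               ; => 1; Y is invisible to GET-X
```

A newcomer who wrote `(let ((*x* 2)) ...)` expecting a private lexical
variable gets dynamic behavior instead, and nothing in the LET form
itself signals the difference; that is the pedagogical wart at issue.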
From: Kent M Pitman
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <sfw4r3evpgr.fsf@shell01.TheWorld.com>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <··············@cartan.de>, Nils Goesche <······@cartan.de> 
> wrote:
...
> >  There is absolutely
> > nothing any of these proponents of ``change'' have come up with so far
> > that I feel would make me any more productive.
> 
> So that is your proposed criterion for evaluating change, whether or not
> it makes Nils Goesche more productive?

And why shouldn't it be?

Note that the ANSI process, in effect, defines this to be THE
legitimate criterion for anyone voting.  That is, it does not contain
any verbiage about "the good of the community" nor "coherence" nor
"future" nor "people not represented".  It simply assumes (and
therefore defines) that any organization (including an individual qua
organization) can join, and that enlightened self-interest will rule
the votes of any member.

Sometimes members trade votes, giving in on things they don't care
about in one place to get something in return elsewhere that they do
care about.  This practice is not per se supported by ANSI, but it is
a de facto part of the process, and informally I believe it to be
essential to keeping the process from being even more of "tyranny of
the majority" than it already is.

> [...discussion omitted...]
> Now there is some sound reasoning:  some changes are bad.  Therefore all
> changes are bad.  Q.E.D.  And people accuse me of spreading FUD?  Bah!

Well, first, all Nils is saying is that he values stability.  I don't
see a problem with that.  The term "FUD" is correctly used to describe 
people who are trying to promote fear, uncertainty, and doubt in others;
it isn't correctly used to talk about Nils's desire to self-identify as
someone who personally has (as a result of informed conversation) fear,
uncertainty, and doubt of his own.

But beyond that, if you can't predict which change you're going to get,
it's not a bad theory to be conservative, I claim.

It sometimes doesn't even matter that the participants are all
intelligent; they may well be, and that may not be enough.  As nearly
as I can tell, pure democratic process is barely more powerful than a
perceptron.  And I kind of imagine it's already proven that a
perceptron, quite literally, can't recognize a good idea if it sees
one.  This is because the parallel nature of individual votes does not
achieve necessary consistency constraints.  [I say "barely more
powerful" because perceptron components always act independently, and
people sometimes talk to one another; also, sometimes people arrange
votes in order to head off known bad cases where independent voting is
going to cause a problem.  But the problem is that it's superhuman
effort of individuals that manages to ever have a positive effect;
it's not part of the voting process itself working in its natural mode
that leads to a good outcome.]

For example, it's often the case that people in the real world realize
that a republican in the white house should be counterbalanced by a
democratic congress, and that a democrat in the White House should be
counterbalanced by a republican congress.  But even if everyone had
this point of view, it is possible for the democratic process to end
up electing a republican president with a republican congress or a
democratic president with a democratic congress because the voting
process doesn't support contingent choices and voters are often at the
mercy of a kind of parallelism that screens out common sense
consistency-checks that each individual voting would have applied.

Language design decisions often end up following this same paradigm.
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-2805031408320001@k-137-79-50-101.jpl.nasa.gov>
In article <···············@shell01.TheWorld.com>, Kent M Pitman
<······@world.std.com> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > In article <··············@cartan.de>, Nils Goesche <······@cartan.de> 
> > wrote:
> ...
> > >  There is absolutely
> > nothing any of these proponents of ``change'' have come up with so far
> > > that I feel would make me any more productive.
> > 
> > So that is your proposed criterion for evaluating change, whether or not
> > it makes Nils Goesche more productive?
> 
> And why shouldn't it be?

What's so special about Nils Goesche?  Why not evaluate changes on the
basis of whether they make Erann Gat more productive?  Or Pascal
Costanza?  Or the village idiot?


> But beyond that, if you can't predict which change you're going to get,
> it's not a bad theory to be conservative, I claim.

That depends on the circumstances.  If you are in a situation where there
is a good chance that conservatism will lead to disaster then it might be
worth taking the risk despite the uncertainty.

E.
From: Kent M Pitman
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <sfw1xyipux3.fsf@shell01.TheWorld.com>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <···············@shell01.TheWorld.com>, Kent M Pitman
> <······@world.std.com> wrote:
> 
> > ···@jpl.nasa.gov (Erann Gat) writes:
> > 
> > > In article <··············@cartan.de>, Nils Goesche <······@cartan.de> 
> > > wrote:
> > ...
> > > >  There is absolutely
> > > nothing any of these proponents of ``change'' have come up with so far
> > > > that I feel would make me any more productive.
> > > 
> > > So that is your proposed criterion for evaluating change, whether or not
> > > it makes Nils Goesche more productive?
> > 
> > And why shouldn't it be?
> 
> What's so special about Nils Goesche?  Why not evaluate changes on the
> basis of whether they make Erann Gat more productive?  Or Pascal
> Costanza?  Or the village idiot?

Precisely my point.  You don't know which of these people will be voting
against you in ANSI when you sign up for that process.

The substandards process I'm working out mostly does not have that
particular problem.  (Well, not because it's exclusive.  But because it
deals with control differently.)

The process I'm proposing won't let you change the base standard either.
From: Nils Goesche
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <87add6vhzm.fsf@darkstar.cartan>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <···············@shell01.TheWorld.com>, Kent M Pitman
> <······@world.std.com> wrote:
> 
> > ···@jpl.nasa.gov (Erann Gat) writes:

> > > In article <··············@cartan.de>, Nils Goesche <······@cartan.de> 
> > > wrote:

> > > > There is absolutely nothing any of these proponents of
> > > > ``change'' have come up with so far that I feel would
> > > > make me any more productive.

> > > So that is your proposed criterion for evaluating change,
> > > whether or not it makes Nils Goesche more productive?

> > And why shouldn't it be?

> What's so special about Nils Goesche?

You tell us.  Because if there is nothing special about me, I am
just an ordinary Lisp user, and this was actually precisely my
point: The most important thing to consider when evaluating
proposed changes to a language is whether these changes make its
users more productive.  Because that's what Lisp is for, its
purpose in this world.  Unlike Scheme or OCaml, which are
playgrounds for language experiments (as at least OCaml's
designers openly admit).  If it makes /me/ more productive, it
will make most other users more productive, too.  I could have
said ``Joe Lisp User'' instead of ``me'' but figured my readers
would be bright enough to understand what I mean.  And even if
there /is/ something special about me, then there is something
special about pretty much every other Lisp user, too.  If anybody
evaluates proposals on the basis of whether they'd make /himself/
more productive, we'd get a rough estimate of the needs of the
user base by voting.  That's much better than to decide on the
grounds of pure speculation about what some hypothetical users
might need in the imagination of some CS professor or something.

> Why not evaluate changes on the basis of whether they make
> Erann Gat more productive?  Or Pascal Costanza?  Or the village
> idiot?

See above.  Did you really not understand this?

Regards,
-- 
Nils Gösche
Ask not for whom the <CONTROL-G> tolls.

PGP key ID #xD26EF2A0
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-2805031521380001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@darkstar.cartan>, Nils Goesche <···@cartan.de> wrote:

> If it makes /me/ more productive, it
> will make most other users more productive, too.

Not necessarily.  Different people use Lisp for different things.  What
makes one person more productive does not necessarily do the same for
another.

More to the point, things that fail to make Nils Goesche more productive
do not necessarily fail to make other users more productive.

E.
From: Nils Goesche
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <8765nuvf62.fsf@darkstar.cartan>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <··············@darkstar.cartan>, Nils Goesche <···@cartan.de> wrote:
> 
> > If it makes /me/ more productive, it will make most other
> > users more productive, too.
> 
> Not necessarily.  Different people use Lisp for different
> things.  What makes one person more productive does not
> necessarily do the same for another.
> 
> More to the point, things that fail to make Nils Goesche more
> productive do not necessarily fail to make other users more
> productive.

That's why I also said in that very same posting:

> > And even if there /is/ something special about me, then there
> > is something special about pretty much every other Lisp user,
> > too.  If anybody evaluates proposals on the basis of whether
> > they'd make /himself/ more productive, we'd get a rough
> > estimate of the needs of the user base by voting.  That's
> > much better than to decide on the grounds of pure speculation
> > about what some hypothetical users might need in the
> > imagination of some CS professor or something.

You did not really think I was proposing that standards are
henceforth produced by testing their effect on Nils Goesche as a
general rule, did you? ;-)

Regards,
-- 
Nils Gösche
Ask not for whom the <CONTROL-G> tolls.

PGP key ID #xD26EF2A0
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-2805031727520001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@darkstar.cartan>, Nils Goesche <···@cartan.de> wrote:

> You did not really think I was proposing that standards are
> henceforth produced by testing their effect on Nils Goesche as a
> general rule, did you? ;-)

Obviously not.  However, I do think that you are using yourself as a
stand-in for an *experienced* Lisp user and neglecting the needs of
newcomers, whose views generally tend to be underrepresented here.

> > > And even if there /is/ something special about me, then there
> > > is something special about pretty much every other Lisp user,
> > > too.

This may in fact be true, but if it is then I think it is unfortunate. 
Because what is special about you and every other Lisp user is that they
know Lisp.  That's precisely the problem IMO.  I would like to see more
people using Lisp who don't know it (yet).

E.
From: Marc Spitzer
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <86el2iecct.fsf@bogomips.optonline.net>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <··············@darkstar.cartan>, Nils Goesche <···@cartan.de> wrote:
> 
> > You did not really think I was proposing that standards are
> > henceforth produced by testing their effect on Nils Goesche as a
> > general rule, did you? ;-)
> 
> Obviously not.  However, I do think that you are using yourself as a
> stand-in for an *experienced* Lisp user and neglecting the needs of
> newcomers, whose views generally tend to be underrepresented here.

Could this perhaps be because they learn and then become experienced lisp
users?  Or they go away.  I would think it is a great disservice to the
new people to set things up such that they stay new for longer than needed.
I personally do not like staying ignorant about things and would not
consider it a friendly act if a community helped me stay ignorant
about things, for my own good of course.

The idea is to grow competent peers, not create an underclass.

The fact that learning something new takes work is not a bad thing.
It is a great filter for weeding out people who can never become
peers, because they do not want to work for it or are incapable.

> 
> > > > And even if there /is/ something special about me, then there
> > > > is something special about pretty much every other Lisp user,
> > > > too.
> 
> This may in fact be true, but if it is then I think it is unfortunate. 
> Because what is special about you and every other Lisp user is that they
> know Lisp.  That's precisely the problem IMO.  I would like to see more
> people using Lisp who don't know it (yet).
> 
> E.

So go out and teach them lisp.  Set up a course at work where you teach
CL and market it as a means of helping them become better programmers 
at their day job.  If you do this you will have a growing base of 
support at work of programmers saying "I could do X in lisp and I wish
I could do it in (C, C++, Java, python ...)".  Then they start asking
to use lisp and you have your political base to enact CL use at your
site.  But this is a lot of work and a long term commitment.

marc
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-2805032343540001@192.168.1.51>
In article <··············@bogomips.optonline.net>, Marc Spitzer
<········@optonline.net> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > In article <··············@darkstar.cartan>, Nils Goesche
<···@cartan.de> wrote:
> > 
> > > You did not really think I was proposing that standards are
> > > henceforth produced by testing their effect on Nils Goesche as a
> > > general rule, did you? ;-)
> > 
> > Obviously not.  However, I do think that you are using yourself as a
> > stand-in for an *experienced* Lisp user and neglecting the needs of
> > newcomers, whose views generally tend to be underrepresented here.
> 
> Could this perhaps be because they learn and then become experienced lisp
> users?  Or they go away.

Yes, obviously.  I would like to increase the proportion who stay.

>  I would think it is a great disservice to the
> new people to set things up such that they stay new for longer than needed.

Yes indeed.  That is why I think that improved pedagogy is (all else being
equal) a sufficient reason for change.

> The fact that learning something new takes work is not a bad thing.
> It is a great filter for weeding out people who can never become
> peers, because they do not want to work for it or are incapable.

Take this philosophy to an extreme and you can use it to argue that C++ is
a well designed language.  Some of my peers actually believe that.

> So go out and teach them lisp.

I've actually tried.  I once prepared an introductory talk on Lisp
specifically tailored for my peers.  I scheduled it twice.  Only three
people showed up, and all of them were already Lisp programmers.

> Set up a course at work where you teach
> CL and market it as a means of helping them become better programmers 
> at their day job.

The problem is that people already have their hands full trying to learn
the things that they actually have to use.  Very few people here  believe
(or care) that learning Lisp will make them better programmers.  They are
just trying to get by.  (It's actually quite astonishing.  There is a
significant population of NASA engineers who violently resist all attempts
to make their jobs simpler or easier.  For them, complexity equals job
security, and any possible improvement is seen as a threat.  It's quite a
serious problem.)

>  If you do this you will have a growing base of 
> support at work of programmers saying "I could do X in lisp and I wish
> I could do it in (C, C++, Java, python ...)".

The problem with that theory is that there are very few X's that can be
done in Lisp that can't be done in C/C++/Java/Python/whatever.  It's just
(typically) harder.  But, as I explained above, people around here don't
care about that.  In fact, harder is often seen as a good thing.

>  Then they start asking
> to use lisp and you have your political base to enact CL use at your
> site.  But this is a lot of work and a long term commitment.

Believe me, I have no problem with hard work and long term commitment. 
I've been pushing Lisp with varying degrees of success for a very long
time.  But your approach is based on faulty assumptions.  I've tried it,
and it doesn't work.

E.
From: Marc Spitzer
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <867k8aqey0.fsf@bogomips.optonline.net>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <··············@bogomips.optonline.net>, Marc Spitzer
> <········@optonline.net> wrote:
> 
> > ···@jpl.nasa.gov (Erann Gat) writes:
> > 
> > > In article <··············@darkstar.cartan>, Nils Goesche
> <···@cartan.de> wrote:
> > > 
> > > > You did not really think I was proposing that standards are
> > > > henceforth produced by testing their effect on Nils Goesche as a
> > > > general rule, did you? ;-)
> > > 
> > > Obviously not.  However, I do think that you are using yourself as a
> > > stand-in for an *experienced* Lisp user and neglecting the needs of
> > > newcomers, whose views generally tend to be underrepresented here.
> > 
> > Could this perhaps be because they learn and then become experienced lisp
> > users?  Or they go away.
> 
> Yes, obviously.  I would like to increase the proportion who stay.

please clarify, do you want them to become experienced CLers or
do you just want them to stay?

> 
> >  I would think it is a great disservice to the
> > new people to set things up such that they stay new for longer than needed.
> 
> Yes indeed.  That is why I think that improved pedagogy is (all else being
> equal) a sufficient reason for change.

But all else is not equal.  Your proposed changes do not have $0 cost.

> 
> > The fact that learning something new takes work is not a bad thing.
> > It is a great filter for weeding out people who can never become
> > peers, because they do not want to work for it or are incapable.
> 
> Take this philosophy to an extreme and you can use it to argue that C++ is
> a well designed language.  Some of my peers actually believe that.

bully for them.  As Einstein said, "as simple as possible, but no simpler"

> 
> > So go out and teach them lisp.
> 
> I've actually tried.  I once prepared an introductory talk on Lisp
> specifically tailored for my peers.  I scheduled it twice.  Only three
> people showed up, and all of them were already Lisp programmers.

after the first time did you ask them to bring a friend?

> 
> > Set up a course at work where you teach
> > CL and market it as a means of helping them become better programmers 
> > at their day job.
> 
> The problem is that people already have their hands full trying to learn
> the things that they actually have to use.  Very few people here  believe
> (or care) that learning Lisp will make them better programmers.  They are
> just trying to get by.  (It's actually quite astonishing.  There is a
> significant population of NASA engineers who violently resist all attempts
> to make their jobs simpler or easier.  For them, complexity equals job
> security, and any possible improvement is seen as a threat.  It's quite a
> serious problem.)

That is a management issue.  It is funny that you are supporting Erik's 
claim that most of the people who got into computing in the boom years 
are working at a level above their competency and are ruled by the fear
that someone will figure it out.  

> 
> >  If you do this you will have a growing base of 
> > support at work of programmers saying "I could do X in lisp and I wish
> > I could do it in (C, C++, Java, python ...)".
> 
> The problem with that theory is that there are very few X's that can be
> done in Lisp that can't be done in C/C++/Java/Python/whatever.  It's just
> (typically) harder.  But, as I explained above, people around here don't
> care about that.  In fact, harder is often seen as a good thing.

My, they are scared.

> 
> >  Then they start asking
> to use lisp and you have your political base to enact CL use at your
> > site.  But this is a lot of work and a long term commitment.
> 
> Believe me, I have no problem with hard work and long term commitment. 
> I've been pushing Lisp with varying degrees of success for a very long
> time.  But your approach is based on faulty assumptions.  I've tried it,
> and it doesn't work.

Since you already have the talk done try it again, have your 3 Clers
bring 1 or 2 friends each.

marc
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-2905030900130001@192.168.1.51>
In article <··············@bogomips.optonline.net>, Marc Spitzer
<········@optonline.net> wrote:

> > Yes, obviously.  I would like to increase the proportion who stay.
> 
> please clarify, do you wan them to become experienced CLers or 
> do you just want them to stay?

Staying is a prerequisite to becoming experienced, so to begin with I want
them to stay.  Once they stay, becoming experienced will surely follow.

For my purposes, though, it is enough that they stay.  What I need in
order to use Lisp is simply to show that lots of other people use it.  I
do not have to show that they use it well.

> > Yes indeed.  That is why I think that improved pedagogy is (all else being
> > equal) a sufficient reason for change.
> 
> But all else is not equal.  Your proposed changes do not have $0 cost.

Where did you get from "all else being equal" to "zero cost"?  Nothing has
zero cost.  Just carrying on this conversation does not have zero cost.

> > > The fact that learning something new takes work is not a bad thing.
> > > It is a great filter for weeding out people who can never become
> > > peers, because they do not want to work for it or are incapable.
> > 
> > Take this philosophy to an extreme and you can use it to argue that C++ is
> > a well designed language.  Some of my peers actually believe that.
> 
> bully for them.  As Einstein said, "as simple as possible, but no simpler"

Push that idea to its extreme and you get Scheme.

> > > Set up a course at work where you teach
> > > CL and market it as a means of helping them become better programmers 
> > > at their day job.
> > 
> > The problem is that people already have their hands full trying to learn
> > the things that they actually have to use.  Very few people here  believe
> > (or care) that learning Lisp will make them better programmers.  They are
> > just trying to get by.  (It's actually quite astonishing.  There is a
> > significant population of NASA engineers who violently resist all attempts
> > to make their jobs simpler or easier.  For them, complexity equals job
> > security, and any possible improvement is seen as a threat.  It's quite a
> > serious problem.)
> 
> That is a management issue.  It is funny that you are supporting Erik's 
> claim that most of the people who got into computing in the boom years 
> are working at a level above their competency and are ruled by the fear
> that someone will figure it out.

Even Erik wasn't wrong all the time.  ;-)

> Since you already have the talk done try it again, have your 3 Clers
> bring 1 or 2 friends each.

I don't think that would help.  Availability of Lisp programmers is not
and never has been the limiting factor.  The limiting factor, the question
the managers always ask, the one I don't have an answer for, is "Who else
uses it?"

E.
From: Joe Marshall
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <4r3dd3kg.fsf@ccs.neu.edu>
···@jpl.nasa.gov (Erann Gat) writes:

> I don't think that would help.  Availability of Lisp programmers is not
> and never has been the limiting factor.  The limiting factor, the question
> the managers always ask, the one I don't have an answer for, is "Who else
> uses it?"

And what is their answer to the question ``What difference does *that*
make?''
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-2905031306070001@k-137-79-50-101.jpl.nasa.gov>
In article <············@ccs.neu.edu>, Joe Marshall <···@ccs.neu.edu> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > I don't think that would help.  Availability of Lisp programmers is not
> > and never has been the limiting factor.  The limiting factor, the question
> > the managers always ask, the one I don't have an answer for, is "Who else
> > uses it?"
> 
> And what is their answer to the question ``What difference does *that*
> make?''

Generally it is to stop speaking to me.

The ostensible answer is something like this:

Space flight is an extremely expensive and inherently high-risk endeavor. 
Hundreds of millions of taxpayer dollars are at stake.  Hundreds, sometimes
thousands, of people have to work together in order to succeed.  The fact
that few people use Lisp is taken as an indication that either it is
untried technology, or it has been tried and found wanting.  Furthermore,
the fact that few people use it means that it will be hard to find people
to maintain any code written in it.  It means that the Lisp vendor we're
relying on could go out of business before the mission is over.  In short,
using Lisp when few others are means taking on a large risk for a
questionable (to them) payoff.

The real answer is that if they use X and it fails, then when the board of
inquiry asks them, "Why did you risk a $100-million spacecraft on X?" they
want to be able to say, "Because that's what everyone else was using. 
Because it was industry-standard practice."

E.
From: Greg Menke
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <m37k89gxp2.fsf@europa.pienet>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <············@ccs.neu.edu>, Joe Marshall <···@ccs.neu.edu> wrote:
> 
> > ···@jpl.nasa.gov (Erann Gat) writes:
> > 
> > > I don't think that would help.  Availability of Lisp programmers is not
> > > and never has been the limiting factor.  The limiting factor, the question
> > > the managers always ask, the one I don't have an answer for, is "Who else
> > > uses it?"
> > 
> > And what is their answer to the question ``What difference does *that*
> > make?''
> 
> Generally it is to stop speaking to me.
> 
> The ostensible answer is something like this:
> 
> Space flight is an extremely expensive and inherently high-risk endeavor. 
> Hundreds of millions of taxpayer dollars are at stake.  Hundred, sometimes
> thousands, of people have to work together in order to succeed.  The fact
> that few people use Lisp is taken as an indication that either it is
> untried technology, or it has been tried and found wanting.  Furthermore,
> the fact that few people use it means that it will be hard to find people
> to maintain any code written in it.  It means that the Lisp vendor we're
> relying on could go out of business before the mission is over.  In short,
> using Lisp when few others are means taking on a large risk for a
> questionable (to them) payoff.
> 
> The real answer is that if they use X and it fails, then when the board of
> inquiry asks them, "Why did you risk a $100-million spacecraft on X?" they
> want to be able to say, "Because that's what everyone else was using. 
> Because it was industry-standard practice."
> 


I think you're oversimplifying the issue.  Being in the flight
software racket myself, I see it a little differently.  Spacecraft
flight software engineering tends to re-use as much code and
architecture as possible, so including something radically different
requires considerable effort in broadly cultivating the new skills,
even if the projects themselves want it.  On the other hand, if the
project people wanted Common Lisp for something and were willing to
pay for the work to put it all together, then I'm sure the flight
software people would be quite happy to do it.  Vendor support is
important, but there's already lots of dead-end hardware and software
in use and there will be lots more.  I think open source software in
general has a huge advantage in this space because you can CM
everything from the cross-compiler source on up through the flight
software and OS, then freely recompile it when and how you choose
without playing support games with whomever bought the defunct
vendor's IP.

I'm not saying you're wrong about political prejudices; they are key
factors in the choices for <all> the technology that goes onto a
spacecraft.  You'll find other "different" languages like C++ and Java
have similar problems in this space.

Truly, no-one wants to answer questions about how their new-fangled
software subsystem caused a mission problem, but that is no different
in an entirely traditional flight software design: very few software
subsystems are entirely unchanged even when modules from a prior
mission are incorporated in their same roles in a new mission.

I think the major hurdle that must be overcome is the current reliance
on mapping every single OS and application variable in all operating
modes to the spacecraft telecommanding protocols.  In such an
architecture, software is entirely static and you can't exploit
"out-of-band" tools such as gdb to debug in-situ.  When TCP/IP starts
to show up onboard in a big way and missions can do things like ssh
into a spacecraft to check things out, I think opportunities for less
orthodox software designs will pop up all over the place.

Gregm
From: Kirk Kandt
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <vddk9vc13m9hdf@corp.supernews.com>
> >
> > The real answer is that if they use X and it fails, then when the board of
> > inquiry asks them, "Why did you risk a $100-million spacecraft on X?" they
> > want to be able to say, "Because that's what everyone else was using.
> > Because it was industry-standard practice."
> >

I agree with Erann on this point. I have tired of beating myself up just to
be right. I choose my battles now. The risk of using an unconventional X is
too great, and if you choose the unconventional X and the project succeeds,
there is no reward because success was expected anyway. That is, using an
unconventional X means high risk and no reward.

-- Kirk Kandt
From: Joe Marshall
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <chKBa.1075192$S_4.1082104@rwcrnsc53>
"Erann Gat" <···@jpl.nasa.gov> wrote in message ·························@k-137-79-50-101.jpl.nasa.gov...
> In article <············@ccs.neu.edu>, Joe Marshall <···@ccs.neu.edu> wrote:
>
> > ···@jpl.nasa.gov (Erann Gat) writes:
> >
> > > I don't think that would help.  Availability of Lisp programmers is not
> > > and never has been the limiting factor.  The limiting factor, the question
> > > the managers always ask, the one I don't have an answer for, is "Who else
> > > uses it?"
> >
> > And what is their answer to the question ``What difference does *that*
> > make?''
>
> The real answer is that if they use X and it fails, then when the board of
> inquiry asks them, "Why did you risk a $100-million spacecraft on X?" they
> want to be able to say, "Because that's what everyone else was using.
> Because it was industry-standard practice."

That's a sad observation, but no doubt true.

I suppose that it does no good to point out that space flight
is *not* doing what everyone else is doing.  I'll bet these
same people wouldn't hesitate to use `non industry-standard'
mechanical and electrical parts, though.
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-3005030919090001@192.168.1.51>
In article <·························@rwcrnsc53>, "Joe Marshall"
<·············@attbi.com> wrote:

> > The real answer is that if they use X and it fails, then when the board of
> > inquiry asks them, "Why did you risk a $100-million spacecraft on X?" they
> > want to be able to say, "Because that's what everyone else was using.
> > Because it was industry-standard practice."

> I suppose that it does no good to point out that space flight
> is *not* doing what everyone else is doing.

It doesn't seem to help, no.

Here at JPL we do unmanned solar system exploration.  Technologies are
evaluated in terms of how well they help a mission meet its objectives. 
The problem is that missions always define their objectives so that they
don't need new technology to meet them, in order to reduce risk;
it's a catch-22.  It's damned annoying.

> I'll bet these
> same people wouldn't hesitate to use `non industry-standard'
> mechanical and electrical parts, though.

It depends on which industry you mean.  By and large they use
aerospace-industry-standard hardware.  The only exception is when they are
building something for the first time, like an ion drive or a new
instrument.

But that is a false analogy.  Lisp is not a part, it's more like a machine
tool, or a CAD package.  And if a whizzy new machine tool or CAD package
comes along it tends to meet with less resistance.  Part of the problem is
that it's a very hardware-centric culture and all software is viewed with
a certain amount of suspicion.

E.
From: Kent M Pitman
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <sfwsmqwbbfy.fsf@shell01.TheWorld.com>
"Joe Marshall" <·············@attbi.com> writes:

> "Erann Gat" <···@jpl.nasa.gov> wrote in message ·························@k-137-79-50-101.jpl.nasa.gov...
...
> > The real answer is that if they use X and it fails, then when the board of
> > inquiry asks them, "Why did you risk a $100-million spacecraft on X?" they
> > want to be able to say, "Because that's what everyone else was using.
> > Because it was industry-standard practice."
> 
> That's a sad observation, but no doubt true.
> 
> I suppose that it does no good to point out that space flight
> is *not* doing what everyone else is doing.  I'll bet these
> same people wouldn't hesitate to use `non industry-standard'
> mechanical and electrical parts, though.

Nor probably does it work to point out that when pushed on why they would use
a language that was prone to memory leaks, buffer overruns, and whatnot,
they probably reply "well, it's not like what we're doing is mission critical".

Curious that the people who are doing something mission critical look to the
ordinary practice of people who are not for advice in what to use...
From: Paolo Amoroso
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <JZ3XPmTr1sur1yRejDS1KJt8SKmu@4ax.com>
On Thu, 29 May 2003 13:06:07 -0700, ···@jpl.nasa.gov (Erann Gat) wrote:

> Space flight is an extremely expensive and inherently high-risk endeavor. 
> Hundreds of millions of taxpayer dollars are at stake.  Hundred, sometimes
> thousands, of people have to work together in order to succeed.  The fact
> that few people use Lisp is taken as an indication that either it is
> untried technology, or it has been tried and found wanting.  Furthermore,
> the fact that few people use it means that it will be hard to find people
> to maintain any code written in it.  It means that the Lisp vendor we're
> relying on could go out of business before the mission is over.  In short,

The problem with vendors going out of business or not delivering adequate
supplies was even worse in the early 1960s, when integrated circuits were
in their infancy, yet the technology was chosen to build the Apollo
guidance computer.

By the way, this is fascinating reading, especially recommended to
hardware junkies:

  "Journey to the Moon. The History of the Apollo Guidance Computer"
  Eldon C. Hall
  American Institute of Aeronautics and Astronautics, 1996
  ISBN 1-56347-185-X


> The real answer is that if they use X and it fails, then when the board of
> inquiry asks them, "Why did you risk a $100-million spacecraft on X?" they
> want to be able to say, "Because that's what everyone else was using. 
> Because it was industry-standard practice."

Board of inquiry question: "Why did you use toilet paper as a storage
device for a $100-million spacecraft's computer?". Answer: "Because that's
what everyone else was using. Because it was industry-standard practice."


Paolo
-- 
Paolo Amoroso <·······@mclink.it>
From: Marc Spitzer
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <86smqx58ps.fsf@bogomips.optonline.net>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <··············@bogomips.optonline.net>, Marc Spitzer
> <········@optonline.net> wrote:
> 
> > > Yes, obviously.  I would like to increase the proportion who stay.
> > 
> > please clarify, do you wan them to become experienced CLers or 
> > do you just want them to stay?
> 
> Staying is a prerequisite to becoming experienced, so to begin with I want
> them to stay.  Once they stay, becoming experienced will surely follow.

They are completely unrelated.  It may shorten the ramp-up time to be
here (when you get stuck you can ask a question and get an answer), but
you only become a peer through a lot of hard work.
  
> 
> For my purposes, though, it is enough that they stay.  What I need in
> order to use Lisp is simply to show that lots of other people use it.  I
> do not have to show that they use it well.

So you fundamentally do not want peers to be the core goal of cll; you
want this group flooded by droves of idiots who cannot master the
idea of googling it first.  This is dangerous and short-sighted.

What would probably happen, in short order, would be an invitation-only
mailing list for the CL *peers* who got fed up and left.  This
would leave the idiots in charge and do more harm than good for CL.
But we might get Erik to join.

> 
> > > Yes indeed.  That is why I think that improved pedagogy is (all
> > > else being equal) a sufficient reason for change.
> > 
> > But all else is not equal.  Your proposed changes do not have $0 cost.
> 
> Where did you get from "all else being equal" to "zero cost"?  Nothing has
> zero cost.  Just carrying on this conversation does not have zero cost.

ok please post your financial models for no change and for your changes.

> 
> > > > The fact that learning something new takes work is not a bad thing.
> > > > It is a great filter for weeding out people who can never become
> > > > peers, because they do not want to work for it or are incapable.
> > > 
> > > Take this philosophy to an extreme and you can use it to argue
> > > that C++ is a well designed language.  Some of my peers actually
> > > believe that.
> > 
> > bully for them.  As Einstein said "as simple as possable, but no simpler"
> 
> Push that idea to its extreme and you get Scheme.

not when you read the *whole* quote, remember "but no simpler".  And scheme
fills a different niche than CL.  CL is designed for large, complex, ugly
and practical things.  Scheme is designed for academic/toy things. 

> 
> > > > Set up a course at work where you teach
> > > > CL and market it as a means of helping them become better programmers 
> > > > at their day job.
> > > 
> > > The problem is that people already have their hands full trying to learn
> > > the things that they actually have to use.  Very few people here  believe
> > > (or care) that learning Lisp will make them better programmers.  They are
> > > just trying to get by.  (It's actually quite astonishing.  There is a
> > > significant population of NASA engineers who violently resist all attempts
> > > to make their jobs simpler or easier.  For them, complexity equals job
> > > security, and any possible improvement is seen as a threat.  It's quite a
> > > serious problem.)
> > 
> > That is a management issue.  It is funny that you are supporting Erik's 
> > claim that most of the people who got into computing in the boom years 
> > are working at a level above there competency and are ruled by the fear
> > that someone will figure it out.
> 
> Even Erik wasn't wrong all the time.  ;-)

I would say he was damn near always right.
 
> 
> > Since you already have the talk done try it again, have your 3 Clers
> > bring 1 or 2 friends each.
> 
> I don't think that would help.  Availability of Lisp programmers is not
> and never has been the limiting factor.  The limiting factor, the question
> the managers always ask, the one I don't have an answer for, is "Who else
> uses it?"

Previously you have said that you had no political base to support the
use of CL at your current employer, now you say that is not an issue.
Which is it?

marc
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-3005030908100001@192.168.1.51>
In article <··············@bogomips.optonline.net>, Marc Spitzer
<········@optonline.net> wrote:

> > Staying is a prerequisite to becoming experienced, so to begin with I want
> > them to stay.  Once they stay, becoming experienced will surely follow.
> 
> They are completely unrelated.  It may shorten the ramp up time to be
> here, when you get stuck you can ask a question and get an answer, but
> you only become a peer by a lot of hard work.

I think you may have misunderstood what I mean by "stay".  I did not mean
stay in comp.lang.lisp.  I meant "stay" in the sense of continuing to use
and learn Lisp.

> > For my purposes, though, it is enough that they stay.  What I need in
> > order to use Lisp is simply to show that lots of other people use it.  I
> > do not have to show that they use it well.
> 
> So you fundamentally do not want peers to be the core goal of cll, you
> want this group flooded by droves of idiots who can not master the
> idea of google for it first.  This is dangerous and short sighted.

This isn't about c.l.l., this is about the Lisp community at large, and
no, I do not want that community to consist entirely of peers.  I do not
think that is good for the long-term viability of the language.  There has to
be a next generation of users.

> ok please post your financial model's for no change and your changes.

With no change, Lisp remains a niche language with slow or nonexistent growth.

With some change (not necessarily my change, but *some* change, which can
only come about if we first put in place a viable process for managing
change) there is no guarantee of growth, but I believe that the
probability of growth is increased.  Stability precludes growth.  Change
enables it, even if it does not guarantee it.

> CL is designed for large, complex, ugly
> and practile things.  Scheme is designed for academic/toy things. 

Practical things often tend to be complex and ugly, but nowhere is it
written that they have to be.  My philosophy is: as ugly as necessary, but
no uglier.

> > > Since you already have the talk done try it again, have your 3 Clers
> > > bring 1 or 2 friends each.
> > 
> > I don't think that would help.  Availability of Lisp programmers is not
> > and never has been the limiting factor.  The limiting factor, the question
> > the managers always ask, the one I don't have an answer for, is "Who else
> > uses it?"
> 
> Previously you have said that you had no political base to support the
> use of CL at you current employer, now you say that is not an issue.
> Which is it?

A programming team and a political base are not the same thing.  The
former I have (or can quickly assemble).  The latter I do not have.

E.
From: Marc Spitzer
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <86k7c859e7.fsf@bogomips.optonline.net>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <··············@bogomips.optonline.net>, Marc Spitzer
> <········@optonline.net> wrote:
> 
> > > Staying is a prerequisite to becoming experienced, so to begin with I want
> > > them to stay.  Once they stay, becoming experienced will surely follow.
> > 
> > They are completely unrelated.  It may shorten the ramp up time to be
> > here, when you get stuck you can ask a question and get an answer, but
> > you only become a peer by a lot of hard work.
> 
> I think you may have misunderstood what I mean by "stay".  I did not mean
> stay in comp.lang.lisp.  I meant "stay" in the sense of continuing to use
> and learn Lisp.

fine

> 
> > > For my purposes, though, it is enough that they stay.  What I need in
> > > order to use Lisp is simply to show that lots of other people use it.  I
> > > do not have to show that they use it well.
> > 
> > So you fundamentally do not want peers to be the core goal of cll, you
> > want this group flooded by droves of idiots who can not master the
> > idea of google for it first.  This is dangerous and short sighted.
> 
> This isn't about c.l.l., this is about the Lisp community at large, and
> no, I do not want that community to consist entirely of peers.  I do no
> think that is good for long-term viability of the language.  There has to
> be a next generation of users.

ok, you do not want people to be peers, or in the process of becoming
one; you just want bodies.  The next generation of users is the next
generation of peers, not 10 million ilias's.  The whole idea is to
make more peers, not to encourage more idiots.  These people, who will
fail all by themselves, will blame lisp to shift the responsibility off
of them, and this is bad for lisp.

> 
> > ok please post your financial model's for no change and your changes.
> 
> With no change, Lisp remains a niche language with slow or nonexistent growth.

I see the use of CL growing at a consistent steady rate.  This is good
because it takes real time to internalize the mindset necessary to use
lisp as it should be used, and if CL grew too fast you would have a "boom
of death" where lots of people claim to know CL because it pays well,
and they damn near all screw up royally, and CL gets blamed again.

> 
> With some change (not necessarily my change, but *some* change, which can
> only come about if we first put in place a viable process for managing
> change) there is no guarantee of growth, but I believe that the
> probability of growth is increased.  Stability precludes growth.  Change
> enables it, even if it does not guarantee it.

The community does have a process for this, here it is:
1: people write code for things they think are good ideas
2: It then is used or not by more and more of the community
3: if it is successful enough it becomes a de facto standard (I
think Grey streams are in this category) and more people use
and implement it
4: add it to the standard as a layered standard; this is a work in
progress but coming along.
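[To make step 3 concrete: Gray streams let user code define new stream
classes by specializing a handful of generic functions.  A minimal sketch,
assuming SBCL's SB-GRAY package; other implementations expose the same
protocol under other package names, often via the trivial-gray-streams
portability layer:]

```lisp
;; A character output stream that upcases everything written to it.
;; Class and generic-function names are from the Gray streams proposal,
;; here via SBCL's SB-GRAY package.
(defclass upcasing-stream (sb-gray:fundamental-character-output-stream)
  ((target :initarg :target :reader target
           :documentation "Underlying stream that receives the output.")))

;; Specializing STREAM-WRITE-CHAR is enough: the default methods for
;; higher-level operations (WRITE-STRING, FORMAT, ...) funnel through it.
(defmethod sb-gray:stream-write-char ((s upcasing-stream) (c character))
  (write-char (char-upcase c) (target s)))

;; (format (make-instance 'upcasing-stream :target *standard-output*)
;;         "hello, ~a" "world")
;; prints: HELLO, WORLD
```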

> 
> > CL is designed for large, complex, ugly
> > and practile things.  Scheme is designed for academic/toy things. 
> 
> Practical things often tend to be complex and ugly, but nowhere is it
> written that they have to be.  My philosophy is: as ugly as necessary, but
> no uglier.

yes, and that *is* where CL excels: much of the ugliness is standardized.
The scheme community, on the other hand, has each implementation create
its own incompatible version of ugly.  So you end up with
much more ugliness running around.

> 
> > > > Since you already have the talk done try it again, have your 3 Clers
> > > > bring 1 or 2 friends each.
> > > 
> > > I don't think that would help.  Availability of Lisp programmers is not
> > > and never has been the limiting factor.  The limiting factor, the question
> > > the managers always ask, the one I don't have an answer for, is "Who else
> > > uses it?"
> > 
> > Previously you have said that you had no political base to support the
> > use of CL at you current employer, now you say that is not an issue.
> > Which is it?
> 
> A programming team and a political base are not the same thing.  The
> former I have (or can quickly assemble).  The latter I do not have.

Then cheat: tell them we need to prototype in CL because it is much faster
than C++ or whatever.  Then when you are done, port it to C++ or whatever.

Then write a CL to C++ or C translator in CL.  Look ma, we are delivering in
C or Ada or whatever.

marc
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-3005032312410001@192.168.1.51>
In article <··············@bogomips.optonline.net>, Marc Spitzer
<········@optonline.net> wrote:

> ok you do not people to be peers, or are in the process of becoming
> one, you just want bodies.

I want both.  It is not an either-or proposition.
 
> The community does have a process for this, here it is:
> 1: people write code for things they think are good ideas
> 2: It then is used or not by more and more of the community
> 3: if it is successful enough it becomes a defacto standard, I
> think Grey streams are in this category, and more people use
> and implement it
> 4: add it to the standard as a layered standard, this is a work in
> progress but coming along.

If Grey streams are the big success story then I'd say that's an
indication that we need a better process.

What about networking?  Graphics?  Database access?  Regular expressions?

> yes and that *is* where CL excels, much of the ugliness is standardized.
> The scheme community on the other hand has each implementation create
> its own incompatible version of ugly, for example.  So you end up with 
> much more ugliness running around.

Yes, I know.  There's a reason I'm here beating my head against this wall
instead of off writing Scheme code.

> Then cheat, tell them we need to prototype in CL because it is much faster
> then C++ or whatever.  Then when you are done port it to C++ or whatever.
> 
> Then write a CL to C++ or C translator in CL.  Look ma we are delivering in 
> C or ADA or whatever.   

By doing that I lose many of the advantages that motivated me to use Lisp
in the first place, not least of which is the ability to debug and change
the code once it is deployed.

E.
From: Kent M Pitman
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <sfwptlzeebj.fsf@shell01.TheWorld.com>
···@jpl.nasa.gov (Erann Gat) writes:

> ... Grey streams [...]

Gray.

David Gray invented them.

He's not a colo(u)r, and consequently he doesn't have a British spelling.
From: Marc Spitzer
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <86fzmv5kzb.fsf@bogomips.optonline.net>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <··············@bogomips.optonline.net>, Marc Spitzer
> <········@optonline.net> wrote:
> 
> > ok you do not people to be peers, or are in the process of becoming
> > one, you just want bodies.
> 
> I want both.  It is not an either-or proposition.

In all honesty I do not want a lot of ilias's.

>  
> > The community does have a process for this, here it is:
> > 1: people write code for things they think are good ideas
> > 2: It then is used or not by more and more of the community
> > 3: if it is successful enough it becomes a defacto standard, I
> > think Grey streams are in this category, and more people use
> > and implement it
> > 4: add it to the standard as a layered standard, this is a work in
> > progress but coming along.
> 
> If Grey streams are the big success story then I'd say that's an
> indication that we need a better process.
> 
> What about networking?  Graphics?  Database access?  Regular expressions?

ok: acl-compat for networking,
McCLIM (CLIM) or Garnet for graphics,
CLSQL or ODCL for db access,
META for REs, or one of the RE libraries that are floating around.

> 
> > yes and that *is* where CL excels, much of the ugliness is standardized.
> > The scheme community on the other hand has each implementation create
> > its own incompatible version of ugly, for example.  So you end up with 
> > much more ugliness running around.
> 
> Yes, I know.  There's a reason I'm here beating my head against this wall
> instead of off writing Scheme code.
> 
> > Then cheat, tell them we need to prototype in CL because it is much faster
> > then C++ or whatever.  Then when you are done port it to C++ or whatever.
> > 
> > Then write a CL to C++ or C translator in CL.  Look ma we are delivering in 
> > C or ADA or whatever.   
> 
> By doing that I lose many of the advantages that motivated me to use Lisp
> in the first place, not least of which is the ability to debug and change
> the code once it is deployed.

But you gain the fact that CL is being used in projects; it is now in
the system.  You still have to sell it for deployment, but that has
gotten easier because it is in use.

In fact, when you demo the CL prototype, have a section called "bug found
between Mars and Jupiter" and show them how you *can* fix the running
system using lisp.  And point out how this capability will be lost in the
production version in lang X.
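[Such a demo could be as small as redefining one function in the running
image; everything else picks up the new definition on the next call.  A
sketch, with a purely hypothetical function name:]

```lisp
;; Suppose the deployed system computes a sensor reading with a scaling
;; bug (TELEMETRY-SCALE is hypothetical, for illustration only):
(defun telemetry-scale (raw)
  (* raw 0.39))    ; oops: should have been 0.039

;; At the REPL of the *running* system, simply redefine it:
(defun telemetry-scale (raw)
  (* raw 0.039))

;; Every caller uses the corrected definition from the next call on;
;; no stop, relink, or reboot required.
```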

marc
From: Kirk Kandt
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <vddji1lljriq15@corp.supernews.com>
> ... There is a
> significant population of NASA engineers who violently resist all attempts
> to make their jobs simpler or easier.  For them, complexity equals job
> security, and any possible improvement is seen as a threat.  It's quite a
> serious problem.)

I expect this is a problem in most organizations and in other disciplines
too. One of the things I have grown to believe is that programmers are
fascinated by computers, compilers, operating systems, etc. (all things
computer related, but mostly software). Hence, they would rather learn
every little detail than solve problems. In 1990, I was told to
use combination mixins in Flavors when I was developing a knowledge-based
scheduling system for satellites because it was a cool thing to do, although
it hindered my progress in solving the problem.

As part of my current software process improvement work, I interviewed
numerous programmers at work. Many of these programmers programmed in Java
but did not use an IDE. Every explanation I heard was false, based on my
direct knowledge from using all the Java IDEs. Basically, they did not take
the time to learn the tools or what they were doing wrong because they
preferred to revert to using makefiles, for example, and tinker around
with the build scripts. That's what some of these guys wanted to do; they
didn't want an IDE to automate the build process for them. I heard highly
respected programmers tell me that they do not trust tools, so they do not
use profilers, debuggers, etc. They only use an editor and a compiler. They
use print statements for debugging. I only did that back in 1973, then I
learned UCI Lisp, later used Interlisp, and then went to work for Symbolics
in 1982 and continued using those machines until 1990. I used Lisp because
it made me more productive. But that's what drives me; I do not believe that
is the driver for most people within our profession.

-- Kirk Kandt
From: Steven E. Harris
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <q67add4ml5b.fsf@raytheon.com>
"Kirk Kandt" <······@dslextreme.com> writes:

> I used Lisp because it made me more productive. But that's what
> drives me; I do not believe that is the driver for most people
> within our profession.

There are social rewards for being able to master the most complicated
tools, even if avoiding those tools would be the more productive
choice. Complexity -- even poorly-conceived accidental complexity
-- is a challenge that many programmer types /must/ answer
to. Ignoring it opens the door for someone else to learn it and lord
it over you. In some environments, the consequent reputation earned as
social currency is more valuable than the effort and mental space
wasted.

It's sad to bring this game to light, knowing I've had to play it in
the past and will likely have to continue doing so.

-- 
Steven E. Harris        :: ········@raytheon.com
Raytheon                :: http://www.raytheon.com
From: Pascal Costanza
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <costanza-436610.05100729052003@news.netcologne.de>
In article <··············@bogomips.optonline.net>,
 Marc Spitzer <········@optonline.net> wrote:

> The idea is to grow competent peers not create an underclass.

No one has ever suggested the opposite in this thread.

> The fact that learning something new takes work is not a bad thing.
> It is a great filter for weeding out people who can never become
> peers, because they do not want to work for it or are incapable.

The fact that learning something new takes more work than necessary is a 
bad thing. It is a filter that weeds out people who might become peers. 

Pascal
From: Wade Humeniuk
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <fXeBa.4044$vi4.1178224@news0.telusplanet.net>
"Pascal Costanza" <········@web.de> wrote in message
···································@news.netcologne.de...
> The fact that learning something new takes more work than necessary is a
> bad thing. It is a filter that weeds out people who might become peers.

Would you quantify that please.  Lisp in 24 hours?  Lisp in 24 Days?
What do you consider necessary?  When are they peers?  Lots of
subjective words.

Wade
From: Ingvar Mattsson
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <87wugaaviz.fsf@gruk.tech.ensign.ftech.net>
Pascal Costanza <········@web.de> writes:

> In article <··············@bogomips.optonline.net>,
>  Marc Spitzer <········@optonline.net> wrote:
> 
> > The idea is to grow competent peers not create an underclass.
> 
> No one has ever suggested the opposite in this thread.
> 
> > The fact that learning something new takes work is not a bad thing.
> > It is a great filter for weeding out people who can never become
> > peers, because they do not want to work for it or are incapable.
> 
> The fact that learning something new takes more work than necessary is a 
> bad thing. It is a filter that weeds out people who might become peers. 

Look, I once wrote a piece of lisp code at a previous workplace. This
was part of a rather central thing (machine status checking for an
application service provider; monitoring system health was business
critical) and we desperately needed something that was graphical and
easy to tweak (to have exactly the functionality we wanted) and easy
to use (for the not necessarily technical monitoring people that were
to be hired).

So, being the person with the best idea on how to implement this, I
did a quick stab at an interface over a weekend, then showed it off on
the Monday. It was accepted, then polished until it suited our needs.

Now, a few months later, I was about to quit, so I had to bring
someone up to speed on maintaining this business critical piece of
monitoring software. And, lo, one of my co-workers got tagged as my
successor. Problem, he didn't know more lisp than he needed to tweak
his emacs settings (and those were, as far as I could tell, mostly
cargo-culted and/or grabbed from other people). It took me one week,
with daily two-hour Q&A sessions, pointing him at cmucl and generally
having him play with it until he extended the software to play sounds
depending on alarm level, using CLOS.

If *that* is too hard, I wouldn't want to see a language that's easy
to learn.

//Ingvar (furrfu!)
-- 
A routing decision is made at every routing point, making local hacks
hard to permeate the network with.
From: Ray Blaak
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <uptm2p7xk.fsf@STRIPCAPStelus.net>
···@jpl.nasa.gov (Erann Gat) writes:
> However, I do think that you are [...] neglecting the needs of newcomers,
> whose views generally tend to be underrepresented here.

Fear not. This reasonably emotional thread has been carrying on with nary a
fuck you.

c.l.l has come a long way in a short while.

Newcomers no longer need be nearly as scared as they used to.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Kirk Kandt
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <vddknibns126e6@corp.supernews.com>
"Alain Picard" <·······················@optushome.com.au> wrote in message
···················@ibook.local....
> ···@jpl.nasa.gov (Erann Gat) writes:
>
> > Obviously not.  However, I do think that you are using yourself as a
> > stand-in for an *experienced* Lisp user and neglecting the needs of
> > newcomers, whose views generally tend to be underrepresented here.
>
> This can't possibly be a serious argument, can it?  Heck,
> C++ is 2 or 3 orders of magnitude more difficult to learn
> than CL, and I don't see it withering away.
>

I taught Lisp at UCLA for two years in the late 1980s. Recursion and problem
decomposition were amazingly difficult concepts for students to grasp.
Also, they had a lot of trouble learning s-expressions and special forms
like let and cond.

-- Kirk Kandt
From: Russell Wallace
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <3eda37f1.149463103@news.eircom.net>
On Thu, 29 May 2003 21:15:26 +1000, Alain Picard
<·······················@optushome.com.au> wrote:

>This can't possibly be a serious argument, can it?  Heck,
>C++ is 2 or 3 orders of magnitude more difficult to learn
>than CL, and I don't see it withering away.

Having taught C++ commercially, I can say it's not all that difficult
to learn. The main problem is it's not usually taught properly; C is
the foundation of C++'s semantics, and memory access is the foundation
of C's semantics. The fundamental problem people have with learning
C++ is that none of the mounds of material they've studied ever once
mentioned that char* is an integer data type, not a string data type;
if you understand that fact, the rest of C++ is just trivia, if you
don't, it's an impenetrable fog.

I wonder is there a similar blind spot in the way Lisp is commonly
taught?

-- 
"Sore wa himitsu desu."
To reply by email, remove
the small snack from address.
http://www.esatclear.ie/~rwallace
From: Kenny Tilton
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <3EDA3E9E.6040306@nyc.rr.com>
Russell Wallace wrote:
> On Thu, 29 May 2003 21:15:26 +1000, Alain Picard
> <·······················@optushome.com.au> wrote:
> 
> 
>>This can't possibly be a serious argument, can it?  Heck,
>>C++ is 2 or 3 orders of magnitude more difficult to learn
>>than CL, and I don't see it withering away.
> 
> 
> Having taught C++ commercially, I can say it's not all that difficult
> to learn. The main problem is it's not usually taught properly; C is
> the foundation of C++'s semantics, and memory access is the foundation
> of C's semantics. The fundamental problem people have with learning
> C++ is that none of the mounds of material they've studied ever once
> mentioned that char* is an integer data type, not a string data type;
> if you understand that fact, the rest of C++ is just trivia, if you
> don't, it's an impenetrable fog.
> 
> I wonder is there a similar blind spot in the way Lisp is commonly
> taught?
> 

When I taught C I emphasized that there are no such things as strings, 
just some syntactic sugar which supported "Hello, world" in source.

When I teach Lisp I emphasize that there are no such things as lists, 
just cons cells.

Then again, most Lisp texts include cute little diagrams of cons cells 
early on, so it is not exactly an instructional blind spot. But I think 
newbies do tend to get into trouble by glossing over those sections.

-- 

  kenny tilton
  clinisys, inc
  http://www.tilton-technology.com/
  ---------------------------------------------------------------
"Everything is a cell." -- Alan Kay
From: Jeff Massung
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <vdkjft4tofe0c1@corp.supernews.com>
················@eircom.net (Russell Wallace) wrote in 
·······················@news.eircom.net:
> 
> The fundamental problem people have with learning
> C++ is that none of the mounds of material they've studied ever once
> mentioned that char* is an integer data type, not a string data type;
> if you understand that fact, the rest of C++ is just trivia, if you
> don't, it's an impenetrable fog.
> 
> I wonder is there a similar blind spot in the way Lisp is commonly
> taught?
> 

I agree completely.

My take on this has been, and always will be, that most teachers and 
authors of "learn ... in x days" style books believe that programming is 
for the masses: mistake 1. They then proceed to attempt to simplify what 
are believed to be complex ideas: mistakes 2 and 3.

A topic (such as pointers) is not complex. To attempt to simplify it to 
the point that the topic no longer exists (such as claiming char* is the 
same as a string) only hinders those learning. And the reason for doing 
so - that programming is for everyone - is plain wrong. I am a terrible 
artist and musician. I will /never/ paint as good as a friend I have - 
no matter how many classes I take. I find programming is the same. Some 
just "get" it; others don't.

-- 
Best regards,
 Jeff                          ··········@mfire.com
                               http://www.simforth.com
From: Russell Wallace
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <3edafbdb.199625066@news.eircom.net>
On Sun, 01 Jun 2003 19:03:25 -0000, Jeff Massung
<···@NOSPAM.mfire.com> wrote:

>A topic (such as pointers) is not complex. To attempt and simplify it to 
>the point that the topic no longer exists (such as claiming char* is the 
>same as a string) only hinders those learning. And the reason for doing 
>so - that programming is for everyone - is plain wrong. I am a terrible 
>artist and musician. I will /never/ paint as good as a friend I have - 
>no matter how many classes I take. I find programming is the same. Some 
>just "get" it; others don't.

I agree. Programming (like painting and music, which like you I've no
real talent for) isn't a monkey-see-monkey-do activity; and you're
probably right about the belief that it is, being responsible for
teaching methods which make it harder for even those with talent to
really "get" what's going on.

-- 
"Sore wa himitsu desu."
To reply by email, remove
the small snack from address.
http://www.esatclear.ie/~rwallace
From: Paolo Amoroso
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <2EzWPlcsWxXzkZJO267Q56A+Nds4@4ax.com>
On Wed, 28 May 2003 10:42:25 -0700, ···@jpl.nasa.gov (Erann Gat) wrote:

> Sub-standards will help a lot, but so far they are vaporware.

Do you mean Kent's standardization process? If so, as far as I know he is
working on it alone in his spare time. It is understandable that progress
is slow. But describing this work as vaporware is unfair.


Paolo
-- 
Paolo Amoroso <·······@mclink.it>
From: Christophe Rhodes
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <sqllwoq2c0.fsf@lambda.jcn.srcf.net>
Paolo Amoroso <·······@mclink.it> writes:

> On Wed, 28 May 2003 10:42:25 -0700, ···@jpl.nasa.gov (Erann Gat) wrote:
>
>> Sub-standards will help a lot, but so far they are vaporware.
>
> Do you mean Kent's standardization process? If so, as far as I know he is
> working on it alone in his spare time. It is understandable that progress
> is slow. But describing this work as vaporware is unfair.

And if you don't mean Kent's standardization process, what about those
things which have been discussed here and elsewhere, such as the
uniform interface to POSIX and the class sealing that's implemented in
CMUCL recently?   Do you have no hope at all that these will produce
something?

Christophe
-- 
http://www-jcsu.jesus.cam.ac.uk/~csr21/       +44 1223 510 299/+44 7729 383 757
(set-pprint-dispatch 'number (lambda (s o) (declare (special b)) (format s b)))
(defvar b "~&Just another Lisp hacker~%")    (pprint #36rJesusCollegeCambridge)
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-3005031031090001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@lambda.jcn.srcf.net>, Christophe Rhodes
<·····@cam.ac.uk> wrote:

> Paolo Amoroso <·······@mclink.it> writes:
> 
> > On Wed, 28 May 2003 10:42:25 -0700, ···@jpl.nasa.gov (Erann Gat) wrote:
> >
> >> Sub-standards will help a lot, but so far they are vaporware.
> >
> > Do you mean Kent's standardization process? If so, as far as I know he is
> > working on it alone in his spare time. It is understandable that progress
> > is slow. But describing this work as vaporware is unfair.
> 
> And if you don't mean Kent's standardization process, what about those
> things which have been discussed here and elsewhere, such as the
> uniform interface to POSIX and the class sealing that's implemented in
> CMUCL recently?   Do you have no hope at all that these will produce
> something?

Of course I have hope, but hope should not be allowed to lull one into
complacency.

E.
From: Pascal Costanza
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <bb2mh8$s8u$1@f1node01.rhrz.uni-bonn.de>
Nils Goesche wrote:
> Pascal Costanza <········@web.de> writes:
> 
>>I understand Kent's usual reactions to such requests for changing
>>the standard as follows: The ANSI standardization process just
>>doesn't suit such a way of changing the language.
>>
>>What Erann wants (AFAICS) is the continuous improvement of the
>>language in the sense of piecemeal growth (see
>>http://c2.com/cgi/wiki?PiecemealGrowth). This includes both adding
>>new layers on top of the language, but also changing internals of
>>the language.
>>
>>What Kent says (AFAICS) is that the ANSI standardization process is
>>only suited for "big bang" approaches - as soon as you open up the
>>standardization process for one change you have to consider almost
>>any requests for changes. This might turn out to be worse than
>>keeping the language just as it is.
> 
> This sounds as if Kent was also against new layered standards on top
> of CL, but I think neither he nor anybody else is.

I recall Kent stating that the ANSI process doesn't provide a process 
for layered standards. If you open it up for one change you open it up 
for any change.

This doesn't say anything about the usefulness of layered standards. 
(But thanks for helping to make this point clear. I really think that 
Kent's substandards idea is a good one. It doesn't give you the whole notion 
of piecemeal growth but might be the best compromise.)

>>So I think this is the core of the problem: It would probably be
>>better to have a piecemeal growth process for improving Common Lisp,
>>but we can only have a big bang process if any.
> 
> No, that wouldn't be better.  Note that this ``Big Bang'' process only
> applies to changes to the core.  Vendors can make compatible
> extensions on top of the core at any time and nobody will complain.
> 
> I for one would immediately stop using Lisp the moment it becomes a
> playground for theorists and language designers changing the core of
> the language every year.  I write programs in Lisp.  I can do that
> better in Lisp than in any other language.  I reuse stuff I wrote a
> long time ago.  I want things to stay like this.  There is absolutely
> that I feel would make me any more productive.  For me, all these
> that I feel would make me any more productive.  For me, all these
> discussions about ``what if things were like this or like that in Lisp
> instead of the way they are?'' are nothing but idle games and
> speculations. 

I don't see any problems here. Of course, any new Lisp dialect 
(including an imaginary Common Lisp 3.0 with a changed core) would need 
to provide a Common Lisp compatibility layer in order to give people 
like you a healthy upgrade path. That is, unless it doesn't want 
Common Lispers in the same boat.

Changes to the core of the language would need to be "culturally 
compatible" in order to be accepted by the current users. (I know that 
this is very hard to achieve.)

> So Erann wants to change specials, for one thing.  I
> use specials all the time and they just ... work the way they are.
> Learning how to use them is not rocket science (Erann should know ;-).
> Why should I rewrite all my code just because someone thinks he found
> a more newbie-friendly way of using them?  The next guy wants to have
> continuations in the language, thus making all of my code slower and
> breaking everything I wrote with UNWIND-PROTECT.  The next guy will
> change CL into a Lisp-1.  There is probably not a single page of code
> of mine that would survive this particular change alone.  Next is
> static typing, probably.  Paul Graham, OTOH, will remove CLOS.  Oh,
> and none of my macros is going to work anymore because people will
> change the macro system to use ``hygienic'' macros exclusively,
> because..., well, ``hygienic'' is good and clean, isn't it?
> 
> No, that's not paradise.  It is hell. 

It might be hell, but it's the reality. The things you describe happen 
all the time. They just don't happen in the Common Lisp world. And 
you're right: this is definitely good from a certain perspective.

But as with all things in life, this has both advantages and 
disadvantages. The stability of Common Lisp is also a disadvantage 
because even the parts of the language that are acknowledged as flaws 
cannot be changed.

 > The only people who like this
> are theorists who do not write programs for a living but are sitting
> in a university being paid for conducting such experiments.  And there
> already /is/ a playground for them where they can do no harm: That's
> precisely what Scheme is for.  It is perfect for that.  Scheme is
> small.  You can implement all of your Favorite Things into your Scheme
> dialect and impress your friends and collegues.  Scheme people like
> this kind of general talk about how things would be if ...  They also
> change their language all the time.  Ever read books and papers that
> use Scheme as an example language?  Hardly any two of those use the
> same language: Scheme is constantly changing.  There is your paradise.
> 
> It is not mine, though.

1) I am one of those guys enjoying language experiments. And I highly 
prefer Common Lisp over Scheme. (Please keep this always in mind: I am a 
big fan of Common Lisp. It is the best language available at the moment 
IMHO. Period.)

2) The dichotomy between theorists and practitioners is a very bad 
notion IMHO. Several advances in computer science wouldn't have been 
made if theorists hadn't decided to play around with wild ideas. Of 
course, the same holds for practitioners.

Good ideas come from bad ideas 
(http://www.uiweb.com/issues/issue08.htm). Only if you are willing to 
experiment will you eventually find something useful.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Kent M Pitman
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <sfw8ysqvqmo.fsf@shell01.TheWorld.com>
Pascal Costanza <········@web.de> writes:

> I recall Kent stating that the ANSI process doesn't provide a process
> for layered standards. If you open it up for one change you open it up
> for any change.

I didn't say it doesn't contain a process for layered standards.  It does.

It doesn't have a process for opening up an existing standard to "only
certain kinds of changes".  In part, I believe this is because there is
no person outside of the voting committee qualified to judge what is in
bounds and what is not; viz, the fact that a certain person responded
with an editorial change ("please remove so-and-so from the credits page")
and we had to go through a half year of process _as if_ this were a technical
change, because ANSI had no process even for detecting the difference between
a technical and non-technical request, much less an ability to tell that 
something is "only a change to CLOS" or "only a change needed to support
multiprocessing" or something like that.  Once you take an existing standard
and open it for change under ANSI, it is possible for someone to say 
"I propose we void all of the existing standard and start from scratch."  
Of course, the committee might not vote for that, but it becomes an issue
of who is present to vote on the day that motion is proposed.

I think the reason people get confused about what I did and didn't say
is that to many people "layered standards" and "modular changes" are the
same thing.  But if it helps, what I'm saying is that monotonic additions
are good and backtracking is bad, at least as far as ANSI goes, because
of the degree of control of process that ANSI offers. (This is a subjective
assessment on my part; YMMV.)

I also think, independently of the above, that for the kinds of layered
standards that we need (i.e., very small ones), the ANSI process is way
too slow, expensive, and heavyweight (in terms of rules, process, etc.)
The substandards process I'm working on is highly streamlined for throughput,
but retains the parts of the standards process that I personally have come
to value.

(I'm being vague on its details because in between messages here [and a
few other RL things] I'm working pretty actively on things related to the
process, which includes some web pages describing what I'm up to, so that
you can comment in detail.)
From: Nils Goesche
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <lysmqz802o.fsf@cartan.de>
Pascal Costanza <········@web.de> writes:

> Nils Goesche wrote:
> > Pascal Costanza <········@web.de> writes:
> >
> >>I understand Kent's usual reactions to such requests for changing
> >>the standard as follows: The ANSI standardization process just
> >>doesn't suit such a way of changing the language.
> >>
> >>What Erann wants (AFAICS) is the continuous improvement of the
> >>language in the sense of piecemeal growth (see
> >>http://c2.com/cgi/wiki?PiecemealGrowth). This includes both adding
> >>new layers on top of the language, but also changing internals of
> >>the language.
> >>
> >>What Kent says (AFAICS) is that the ANSI standardization process is
> >>only suited for "big bang" approaches - as soon as you open up the
> >>standardization process for one change you have to consider almost
> >>any requests for changes. This might turn out to be worse than
> >>keeping the language just as it is.
> > This sounds as if Kent was also against new layered standards on top
> > of CL, but I think neither he nor anybody else is.
> 
> I recall Kent stating that the ANSI process doesn't provide a process
> for layered standards. If you open it up for one change you open it up
> for any change.
> 
> This doesn't say anything about the usefulness of layered
> standards. (But thanks for helping to make this point clear. I really
> think that Kent's substandards idea is a good one. It doesn't give you
> whole notion of piecemeal growth but might be the best compromise.)
> 
> >>So I think this is the core of the problem: It would probably be
> >>better to have a piecemeal growth process for improving Common Lisp,
> >>but we can only have a big bang process if any.
> > No, that wouldn't be better.  Note that this ``Big Bang'' process
> > only
> > applies to changes to the core.  Vendors can make compatible
> > extensions on top of the core at any time and nobody will complain.
> > I for one would immediately stop using Lisp the moment it becomes a
> > playground for theorists and language designers changing the core of
> > the language every year.  I write programs in Lisp.  I can do that
> > better in Lisp than in any other language.  I reuse stuff I wrote a
> > long time ago.  I want things to stay like this.  There is absolutely
> > nothing any of these proponents of ``change'' have come up with so far
> > that I feel would make me any more productive.  For me, all these
> > discussions about ``what if things were like this or like that in Lisp
> > instead of the way they are?'' are nothing but idle games and
> > speculations.
> 
> I don't see any problems here. Of course, any new Lisp dialect
> (including an imaginary Common Lisp 3.0 with a changed core) would
> need to provide a Common Lisp compatibility layer in order to give
> people like you a healthy upgrade path. Of course, only unless it
> doesn't want to have Common Lispers on the same boat.
> 
> Changes to the core of the language would need to be "culturally
> compatible" in order to be accepted by the current users. (I know that
> this is very hard to achieve.)
> 
> > So Erann wants to change specials, for one thing.

[...]

> > Oh, and none of my macros is going to work anymore because people
> > will change the macro system to use ``hygienic'' macros
> > exclusively, because..., well, ``hygienic'' is good and clean,
> > isn't it?  No, that's not paradise.  It is hell.
> 
> It might be hell, but it's the reality. The things you describe
> happen all the time. They just don't happen in the Common Lisp
> world. And you're right, and this is definitely good from a certain
> perspective.

No this does /not/ happen all the time.  Maybe you think so because
you apparently did a lot of stuff with Java before.  But Java is a
language /without/ an ANSI standard.  Java /indeed/ changes all the
time and its users curse it for that.  You can hardly distribute a
Java application without the JRE it was developed with.  And every now
and then Sun will revoke that JRE version and you may not even
distribute it anymore and have to work on very old deployed code again
or your customers will kill you.  That's one of the reasons I won't
use something like Java.  Or Python.  Or OCaml.

Stuff like that does /not/ happen with languages that have a real
standard behind them, OTOH.  Like C, for instance.  Sure, there is
C99, but it didn't break anything much and there /wasn't/ a C98, C97,
C96 or anything like that.  The C++ standard isn't going to change any
time soon, either.  There are thousands of standards on a file server
in my LAN here, and they /don't/ change all the time.  We use them to
build things and we know our products will continue to work for years.
Because standards are stable.  At most, there will be a careful,
backward compatible addendum every few years.

> But as with all things in life, this has both advantages and
> disadvantages. The stability of Common Lisp is also a disadvantage
> because even the parts of the language that are acknowledged as
> flaws cannot be changed.

So far I haven't seen anything like a ``flaw'' in it that seriously
obstructs my using it, actually.

> > The only people who like this are theorists who do not write
> > programs for a living but are sitting in a university being paid
> > for conducting such experiments.

> 1) I am one of those guys enjoying language experiments. And I
> highly prefer Common Lisp over Scheme. (Please keep this always in
> mind: I am a big fan of Common Lisp. It is the best language
> available at the moment IMHO. Period.)

Yes, yes, I know.  But that doesn't imply you have to use the CL
/standard/ as a playground for language experiments, and that's exactly
what we're talking about here.  You can do lots of experiments in CL,
too.  You define a package and shadow some CL symbols.  Create your
new language in there.  There is a lot you can do that way.  And if it
really doesn't work without changing the core, heck, you can /still/
hack some small Scheme implementation to test that one feature.  But
small things like that do not justify changing a standard for a real
world language that is used in practice.  There would have to be a
/really good/ reason for that.  Some new kind of programming paradigm,
for instance, that really cannot be implemented without making changes
to the core.  Like, lexical closures for CLTL1, CLOS integration for
ANSI CL.  But Big New Important Things like those don't come up every
year.  So far I cannot see any great new programming paradigm I miss
in CL.

> 2) The dichotomy between theorists and practitioners is a very bad
> notion IMHO. Several advances in computer science wouldn't have been
> made if theorists hadn't decided to play around with wild ideas. Of
> course, the same holds for practitioners.
> 
> Good ideas come from bad ideas
> (http://www.uiweb.com/issues/issue08.htm). Only if you are willing
> to experiment you will eventually find something useful.

Sure, but where did you get the idea that the ANSI CL standard is the
right place for this kind of experimentation?  Once anybody comes up
with a really great idea, which, I repeat, hasn't happened so far, he
will first prove that it works /without/ any changes in the standard;
instead perhaps by hacking SBCL or something, without asking anybody
for approval.  Commercial vendors will adopt it if it's really so
great, probably switchable if it is really incompatible and breaks
code.  Then it's time to think about a new standard.  Many years will
pass.  Finally, certainly more than ten years from now, a new standard
will be adopted.

/This/ is how things work, and not only in the Lisp world (think about
IPv6, for just one example).

For some strange reason Erann seems to believe that if only some
things in CL's core are changed, his managers and masses of C++
programmers will be so impressed that they will flock to the Lisp
vendors and all become new Lispers.  But that is a rather silly idea:
Both his managers (I suspect) and all those masses of C++ programmers
(I know) are totally ignorant about CL internals.  They do not even
know if CL is a Lisp-1 or Lisp-2.  So, if we make any design changes
there, these changes will go absolutely unnoticed in those circles.
Thus, it wouldn't help him at all.

And because changes need such a /long/ time until they'll make it into
a new standard, these changes won't help /you/ as a language
experimentator, either, because: ``publish or perish'', remember? :-)

Regards,
-- 
Nils Gösche
"Don't ask for whom the <CTRL-G> tolls."

PGP key ID 0x0655CFA0
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-2805031123290001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@cartan.de>, Nils Goesche <······@cartan.de> wrote:

> For some strange reason Erann seems to believe that if only some
> things in CL's core are changed, his managers and masses of C++
> programmers will be so impressed that they will flock to the Lisp
> vendors and all become new Lispers.

No, that is not what I believe (and frankly I'm starting to get pretty
annoyed with all the people pontificating incorrectly about what I
believe).

One more time for the record:

Common Lisp did not become popular ten years ago when its technical
advantages over the competition were truly overwhelming.  Since then,
Common Lisp is unchanged while other languages have evolved, which has
eroded Common Lisp's technical lead.  Therefore, I see no reason to expect
that Common Lisp's popularity will start to increase because I see no
mechanism that will bring this about.

Now, some people think that Common Lisp's lack of popularity is not a
problem.  (Some people actually seem to think it's a feature.)  But for me
it's a problem because I happen to work in an industry with a herd
mentality, especially when it comes to software.  When I say I want to use
Common Lisp the first question I get asked is, "Who else uses it?" 
Without a good answer to that question I can't use it.  So I would like to
make Common Lisp more popular, both for the selfish reason that it seems
to be a precondition for my being able to use it personally (which I would
very much like to do), and because I think more people using Lisp would
make the world a better place.

So: 1) I want Lisp to be more popular and 2) I don't think it's going to
get more popular in its current form, therefore 3) I want it to change.

This DOES NOT MEAN that I want to rip the guts out of the standard.  It does
not mean that I want to turn Common Lisp into Scheme.  It does not mean
that I want to undermine the installed base.  I do not want change merely
for the sake of change.

What I really want to do is not so much to change the language as to
change the climate.  What we have now is resistance to change out of
(legitimate) fear of the possible bad results of change.  But as a result
we rob ourselves of the possibility that a change might have a beneficial
effect.

BTW, I recognize that some people believe that Lisp's popularity will
improve without change, that it was simply ahead of its time, and that its
popularity will improve because now the world is ready to accept it
whereas it wasn't before.  These people may be right; time will tell. 
For now I am not convinced.  Nonetheless, I recognize that my attempts to
effect change have largely failed, and I am only rehashing all this
because people keep insisting on misrepresenting my position, and so I
feel the need to set the record straight.  It really bothers me when
people say that I did what I did out of stupidity (especially when they
support this view by misrepresenting my position in ways that are in fact
stupid), or for an ego trip, or, worst of all, out of malicious intent.

> Both his managers (I suspect) and all those masses of C++ programmers
> (I know) are totally ignorant about CL internals.  They do not even
> know if CL is a Lisp-1 or Lisp-2.  So, if we make any design changes
> there, these changes will go absolutely unnoticed in those circles.
> Thus, it wouldn't help him at all.

That's exactly right.  All my managers seem to care about is how many
other people use it.  I do not expect any of the changes I advocate to
have any direct effect on my management.  What I am trying to accomplish
is to lower some of the current barriers to entry for newcomers in an
attempt to make CL more popular so that the next time a manager asks me,
"Who else uses it?" I have a good answer.

> And because changes need such a /long/ time until they'll make it into
> a new standard, these changes won't help /you/ as a language
> experimenter, either, because: ``publish or perish'', remember? :-)

You make so many wrong assumptions about my motives that it makes my head spin.

E.
From: George Demmy
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <wu3ciyddds.fsf@layton-graphics.com>
···@jpl.nasa.gov (Erann Gat) writes:
> So: 1) I want Lisp to be more popular and 2) I don't think it's
> going to get more popular in its current form, therefore 3) I want
> it to change.
> 
> This DOES NOT MEAN that I want to rip the guts out the standard.  It
> does not mean that I want to turn Common Lisp into Scheme.  It does
> not mean that I want to undermine the installed base.  I do not want
> change merely for the sake of change.
> 
> What I really want to do is not so much to change the language as to
> change the climate.  What we have now is resistance to change out of
> (legitimate) fear of the possible bad results of change.  But as a
> result we rob ourselves of the possibility that a change might have
> a beneficial effect.

There is a wide open "back door", and that is application
extension. You have hinted at this in the past -- I seem to remember
you saying that Boehm GC + C++ and 200 lines of code had you in a
lisp-like REPL. An ANSI Common Lisp *implementation* that sells itself
as a "C/C++ programming framework" -- much in the spirit of Brent
Benson's libscheme[1] -- would allow one to be nominally working in
C or C++, but getting the serious work done in Lisp[2].

I'm embedding Scheme via PLT Scheme[3] in an existing C
application. The library provides garbage collection to C (Boehm's GC
by default) and a variety of low-level utility functions that folks
find themselves writing time and time again.

This is no panacea. I find myself writing Emacs Lisp to write the C
callbacks for the scheme library. But once that's in place, it is
pretty sweet to start "extending" the application in lisp (yes, a
lisp-1, but it's better than no lisp). I'm sure there are plenty of
implementation issues -- C and Lisp are on different lobes of the
brain, I 'spose, but it is an angle. Didn't Lucid offer something like
this -- Energize or Cadillac or something like that? Whatever became
of that stuff? Anyway, if folks didn't swoon before the beauty that is
Lisp back when it was science-fiction-but-no-really, they may or may
not swoon for her now that she's what the programming language du jour
will be in 10 years. Replacement by insinuation is the way to go --
can't let them see you coming. But once you're there -- you crow about
the Lisp framework that allowed you to get there.

That's what I'm trying to do, anyway.

George


[1] Brent W. Benson, Jr. "libscheme: Scheme as a C Library".
    Proceedings of the 1994 USENIX Symposium on Very High Level
    Languages. 1994.
http://www.cs.indiana.edu/scheme-repository/libscheme-vhll/final.html

[2] The reference to Scheme is in a "need to get stuff done" context
    more appropriate to CL than to the "nothing left to take away" ethos
    that makes Scheme special. Not trolling.

[3] A descendant of libscheme, but the maintainers at PLT have a more
    traditionally Schemey mission than what we're speaking of here.
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-2805031546490001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@layton-graphics.com>, George Demmy
<······@layton-graphics.com> wrote:

> There is a wide open "back door", and that is application
> extension. You have hinted at this in the past -- I seem to remember
> you saying that Boehm GC + C++ and 200 lines of code had you in a
> lisp-like REPL.

Yes, but interpreted, and therefore very slow.  The skeleton is in place
to leverage g++ as a compiler and dlopen to link in the resulting
binaries, but that's slow too, and unfortunately does not allow functions
to be redefined, at least not under Linux.  Brick walls everywhere :-(

> An ANSI Common Lisp *implementation* that sells itself
> as a "C/C++ programming framework" -- much in the spirit of Brent
> Benson's libscheme[1] -- would allow one to be nominally working in
> C or C++, but getting the serious work done in Lisp[2].

Yep, that would be a Good Thing.  CLisp seems to be very close to this. 
If only it had threads!  (Yes, Sam, I know I promised I'd take a look at
this but I just don't have time at the moment.  Sorry.)

libcl would be a Good Thing.


> That's what I'm trying to do, anyway.

Good luck.

E.
From: Michael Park
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <ff20888b.0305282058.13a68ea8@posting.google.com>
I can see that there are two kinds of Lispers:

1.) Those who want Lisp (sic) to be simpler, more modern (sockets,
threading, standard GUI), and more consistent, so Lisp would be easier
and more fun to use and, as a consequence, it could reach a new
audience.

2.) Those who resist all change at all costs for their selfish and
short-sighted reasons (Perhaps they feel that they are responsible for
what Lisp/CL is now, so they take anything other than praise as an
insult, or perhaps they spent ten years learning all of the
idiosyncrasies of CL, or they have a 1MLOC app to maintain)

I mostly fall into the first category, but because I'm a nice guy, I
can commiserate with both POVs. My primary objection was to those who
try to turn any kind of CL (sic) criticism into a flame-fest in this
Lisp (sic) newsgroup, or those who purport to speak for the community
as a whole (you know who you are), while asking others to shut up.



           _-_
         /^===^\
        /_-^^^-_\
       /=\     /=\
      (===\   /===)
       \===\ /===/      Defend Free Speech in comp.lang.lisp !
        \===/===/
         \=/===/              
          /===/.
         /===/==\
        /===/.===\
       /===/  \===\
      /===/    \===\
      \==/      \==/
       \/        \/
From: Marc Spitzer
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <86k7caqnkv.fsf@bogomips.optonline.net>
···········@whoever.com (Michael Park) writes:

> I can see that there are two kinds of Lispers:
> 
> 1.) Those who want Lisp (sic) to be simpler, more modern (sockets,
> threading, standard GUI), and more consistent, so Lisp would be easier
> and more fun to use and, as a consequence, it could reach new
> audience.

Well, let's take these things in order:

sockets: layered standard or use/improve acl-compat; ditto for threading.
GUI: use CLIM, aka McCLIM (the free version).

more consistent: remove cond, when and unless from the language.  All
you *need* is if.
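A sketch of that claim: WHEN and UNLESS (and, with more work, COND) are
definable as ordinary macros over IF, so they are conveniences rather than
primitives.  The MY- prefixes are only to avoid redefining the standard
symbols:

```lisp
;; Sketch: the "redundant" operators reduce to IF via macros.
;; MY- prefixes avoid clashing with the standard CL symbols.
(defmacro my-when (test &body body)
  `(if ,test (progn ,@body) nil))

(defmacro my-unless (test &body body)
  `(if ,test nil (progn ,@body)))

;; (my-when (> 3 2) (print "yes"))  expands to
;; (IF (> 3 2) (PROGN (PRINT "yes")) NIL)
```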

CL is fun to use; part of the fun is that you keep learning more CL
as time goes by.


> 
> 2.) Those who resist all change at all costs for their selfish and
> short-sighted reasons (Perhaps they feel that they are responsible for
> what Lisp/CL is now, so they take anything other than praise as an
> insult, or perhaps they spent ten years learning all of the
> idiosyncrasies of CL, or they have a 1MLOC app to maintain)

If you can not come up with a good answer to the question "How does
this help me?", where "me" is someone other than you, you should in all
reasonableness expect absolutely no support.  You do not have a good
reason for getting support.

Remember that people have good reasons to behave in a selfish manner;
self-interest and contractual obligation are two that come to mind.
Remember that CL came about from the self-interest of the Maclisp
community.

> 
> I mostly fall into the first category, but because I'm a nice guy, I
> can commiserate with both POV. My primary objection was to those who
> try to turn any kind of CL (sic) criticism into a flame-fest in this
> Lisp (sic) newsgroup, or those who purport to speak for the community
> as a whole (you know who you are), while asking others to shut up.

The IETF had/has(?) a rule: before anything could become a standard there
had to be at least 2 separate reference implementations to prove:
1: it worked
2: it could inter-operate between different implementations

This fixed a lot of problems with the standards (i.e. "section 3.9.2
means this" -- "no, it means this ...").  It was not perfect but it did
help.

CL has that thinking as a credibility threshold: write an implementation
of what you want, to show everyone else what a good idea it is.  They may
agree or not, or you may figure out it is not such a great idea after all.

marc
From: Adam Warner
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <pan.2003.05.29.05.26.01.51494@consulting.net.nz>
Hi Michael Park,

> I can see that there are two kinds of Lispers:

> 1.) Those who want Lisp (sic) to be simpler, more modern (sockets,
> threading, standard GUI), and more consistent, so Lisp would be easier
> and more fun to use and, as a consequence, it could reach new audience.

Standard socket, GUI and threading interfaces do not require a
non-compatible rewriting of Common Lisp.

Teaching a subset of Common Lisp does not require a simpler language.

> 2.) Those who resist all change at all costs for their selfish and
> short-sighted reasons (Perhaps they feel that they are responsible for
> what Lisp/CL is now, so they take anything other than praise as an
> insult, or perhaps they spent ten years learning all of the
> idiosyncrasies of CL, or they have a 1MLOC app to maintain)

I see you enjoy trolling.

> I mostly fall into the first category, but because I'm a nice guy, I can
> commiserate with both POV.

You fall into the category of sadly ignorant.

> My primary objection was to those who try to turn any kind of CL (sic)
> criticism into a flame-fest in this Lisp (sic) newsgroup, or those who
> purport to speak for the community as a whole (you know who you are),
> while asking others to shut up.

You're probably posting from Google's web interface for a reason.
Commiserations to any real Michael Park.

Regards,
Adam
From: Michael Park
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <ff20888b.0305301129.4b2d2fb1@posting.google.com>
"Adam Warner" <······@consulting.net.nz> wrote in message news:<·····························@consulting.net.nz>...
> Hi Michael Park,
> 
> > I can see that there are two kinds of Lispers:
>  
> > 1.) Those who want Lisp (sic) to be simpler, more modern (sockets,
> > threading, standard GUI), and more consistent, so Lisp would be easier
> > and more fun to use and, as a consequence, it could reach new audience.
> 
> Standard socket, GUI and threading interfaces do not require a
> non-compatible rewriting of Common Lisp.

Adam, you suffer from attention-deficit disorder. Placing "sic"
didn't help. I never said CL needed to be changed in any major way. I
want better Lisp dialects to emerge. And because much can be learned
_from_ CL, I want people to freely discuss both CL's strengths, of
which there are many, and weaknesses, without being flamed for
dissenting. Paul Graham said it best, when he wrote "It's Lisp I like,
not CL".
 
> Teaching a subset of Common Lisp does not require a simpler language.

Nonsense. CL is riddled with "special cases". You get into them the
moment you teach #'- . I'm not flaming you for having a different
opinion, it's just that you are too ignorant.
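(For what it's worth, the `#'-` behaviour alluded to above is a real
wrinkle one has to teach: the subtraction function is variadic, with a
special one-argument case meaning negation. A few illustrative forms:)

```lisp
(- 5)                   ; => -5   one argument: negation
(- 10 3 2)              ; => 5    two or more: left-to-right subtraction
(funcall #'- 7)         ; => -7   #'- names the function cell (Lisp-2)
(reduce #'- '(10 3 2))  ; => 5    same fold, made explicit
```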
From: Adam Warner
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <pan.2003.05.30.23.21.27.113946@consulting.net.nz>
Hi Michael Park,

>> Hi Michael Park,
>> 
>> > I can see that there are two kinds of Lispers:
>>  
>> > 1.) Those who want Lisp (sic) to be simpler, more modern (sockets,
>> > threading, standard GUI), and more consistent, so Lisp would be
>> > easier and more fun to use and, as a consequence, it could reach new
>> > audience.
>> 
>> Standard socket, GUI and threading interfaces do not require a
>> non-compatible rewriting of Common Lisp.
> 
> Adam, you suffer from the attention - deficit disorder. Placing "sic"
> didn't help. I never said CL needed to be changed in any major way. I
> want better Lisp dialects to emerge. And because much can be learned
> _from_ CL, I want people to freely discuss both CL's strengths, of which
> there are many, and weaknesses, without being flamed for dissenting.
> Paul Graham said it best, when he wrote "It's Lisp I like, not CL"
>  
>> Teaching a subset of Common Lisp does not require a simpler language.
> 
> Nonesense. CL is riddled with "special cases". You get into them the
> moment you teach #'- . I'm not flaming you for having a different
> opinion, it's just that you are too ignorant.

I was foolish to attempt dialogue with a small-minded troll.
comp.lang.scheme readers should realise that "Michael Park" wrote this to
characterise the two kinds of Lispers (posted only to comp.lang.lisp):

   1.) Those who want Lisp (sic) to be simpler, more modern (sockets,
   threading, standard GUI), and more consistent, so Lisp would be easier
   and more fun to use and, as a consequence, it could reach new audience.

   2.) Those who resist all change at all costs for their selfish and
   short-sighted reasons (Perhaps they feel that they are responsible for
   what Lisp/CL is now, so they take anything other than praise as an
   insult, or perhaps they spent ten years learning all of the
   idiosyncrasies of CL, or they have a 1MLOC app to maintain)

Then he followed up to comp.lang.scheme after he got a bite.

"Michael Park" originally broke into the thread with this statement:

   I've been following this discussion very closely, but I haven't seen a
   single valid argument in favor of Lisp2. Maybe I'm thick or something.
   The only arguments I've seen were in the "it's not as bad as everyone
   thinks", "you can get used to it if you really try" and "he made me do
   it" categories.

   P.S. This is a troll. Thanks for noticing.

It has already generated over 180 responses.

Also consider that "Michael Park" posts from Google Groups, uses a legal
but fake email address and only appeared in comp.lang.lisp on 19 May 2003.

He is also the author of this beauty in comp.lang.functional:

   From: Michael Park
   Subject: Haskell: am I missing something?
   Newsgroups: comp.lang.functional
   Date: 2003-05-22 17:56:24 PST

   Apart from wanting a slower compiler, slower code, less useful REPL and
   virtual inability to debug, what motivates some people to use Haskell as
   opposed to eager MLs?

Regards,
Adam
From: Coby Beck
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <vSRBa.46621$1s1.620187@newsfeeds.bigpond.com>
"Adam Warner" <······@consulting.net.nz> wrote in message
···································@consulting.net.nz...
> Hi Michael Park,
>
> >> Hi Michael Park,
> >>
> >> > I can see that there are two kinds of Lispers:
> >>
> >> > 1.) Those who want Lisp (sic) to be simpler, more modern (sockets,
> >> > threading, standard GUI), and more consistent, so Lisp would be
> >> > easier and more fun to use and, as a consequence, it could reach new
> >> > audience.
> >>
> >> Standard socket, GUI and threading interfaces do not require a
> >> non-compatible rewriting of Common Lisp.
> >
> > Adam, you suffer from the attention - deficit disorder. Placing "sic"
> > didn't help. I never said CL needed to be changed in any major way. I
> > want better Lisp dialects to emerge. And because much can be learned
> > _from_ CL, I want people to freely discuss both CL's strengths, of which
> > there are many, and weaknesses, without being flamed for dissenting.
> > Paul Graham said it best, when he wrote "It's Lisp I like, not CL"
> >
> >> Teaching a subset of Common Lisp does not require a simpler language.
> >
> > Nonesense. CL is riddled with "special cases". You get into them the
> > moment you teach #'- . I'm not flaming you for having a different
> > opinion, it's just that you are too ignorant.
>
> I was foolish to attempt dialogue with a small minded troll.
> comp.lang.scheme readers should realise that "Michael Park" wrote this to
> characterise the two kinds of Lispers (posted only to comp.lang.lisp):
>
>    1.) Those who want Lisp (sic) to be simpler, more modern (sockets,
>    threading, standard GUI), and more consistent, so Lisp would be easier
>    and more fun to use and, as a consequence, it could reach new audience.
>
>    2.) Those who resist all change at all costs for their selfish and
>    short-sighted reasons (Perhaps they feel that they are responsible for
>    what Lisp/CL is now, so they take anything other than praise as an
>    insult, or perhaps they spent ten years learning all of the
>    idiosyncrasies of CL, or they have a 1MLOC app to maintain)
>
> Then he followed up to comp.lang.scheme after he got a bite.
>
> "Michael Park" originally broken into the thread with this statement:
>
>    I've been following this discussion very closely, but I haven't seen a
>    single valid argument in favor of Lisp2. Maybe I'm thick or something.
>    The only arguments I've seen were in the "it's not as bad as everyone
>    thinks", "you can get used to it if you really try" and "he made me do
>    it" categories.
>
>    P.S. This is a troll. Thanks for noticing.
>
> It has already generated over 180 responses.
>
> Also consider that "Michael Park" posts from Google Groups, uses a legal
> but fake email address and only appeared in comp.lang.lisp on 19 May 2003.
>
> He is also the author of this beauty in comp.lang.functional:
>
>    From: Michael Park
>    Subject: Haskell: am I missing something?
>    Newsgroups: comp.lang.functional
>    Date: 2003-05-22 17:56:24 PST
>
>    Apart from wanting a slower compiler, slower code, less useful REPL and
>    virtual inability to debug, what motivates some people to use Haskell as
>    opposed to eager MLs?
>

Good investigative work there Adam!  It's easy to recognize a twit but not
always so easy to see deliberate and malicious ignorance.

Who knows, one day I may even start using a killfile....

-- 
Coby Beck
(remove #\Space "coby 101 @ bigpond . com")
From: Nikodemus Siivola
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <bb8est$5l9te$1@midnight.cs.hut.fi>
Michael Park <···········@whoever.com> wrote:
> I can see that there are two kinds of Lispers:

> 1.) Those who want Lisp (sic) to be simpler, more modern (sockets,
> threading, standard GUI), and more consistent, so Lisp would be easier

Simplicity (whatever it means) and modern API's are orthogonal issues.

*Everybody* is in favor of standard API's. (Ok, there may be people who
aren't but I cannot remember anyone speaking up.)

Simplicity? If you mean Lisp-1, or hygienic macros -- no thanks here.

Cheers,

  -- Nikodemus
From: Joe Marshall
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <he78bl9l.fsf@ccs.neu.edu>
Nikodemus Siivola <········@kekkonen.cs.hut.fi> writes:

> *Everybody* is in favor of standard API's. (Ok, there may be be people who
> aren't but I cannot remember anyone speaking up.)

It depends on the API.  There are certainly several standard API's
that I would shitcan in a second.  Among them are sockets and the
de-facto lisp multiprocessing api.  I think both are far too low
level.
From: Nikodemus Siivola
Subject: Standard API's (Was: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <bbfq62$5nmcf$1@midnight.cs.hut.fi>
Joe Marshall <···@ccs.neu.edu> wrote:
> Nikodemus Siivola <········@kekkonen.cs.hut.fi> writes:

>> *Everybody* is in favor of standard API's. (Ok, there may be be people who
>> aren't but I cannot remember anyone speaking up.)

> It depends on the API.  There are certainly several standard API's
> that I would shitcan in a second.  Among them are sockets and the
> de-facto lisp multiprocessing api.  I think both are far too low
> level.

But they aren't standard - unless you mean "something that it's
nearly standard to have an API for".

The way I see it is that currently there really aren't *any* standard
API's, though I for one would certainly love to have them. What we have
is:

 - Things every implementation caters for in a non-compatible way, for
   which there exist least-common-denominator portability layers.
   (Eg. sockets and FFI, and CLOCC-PORT / UFFI).

 - Semi-portable libraries that are more about implementation of
   functionality than specifying a good API and then implementing it
   (Eg. ASDF, CLX).

 - De facto standards. (Eg. MOP, CLIM)

 - Standard proposals (both old and new) like Gray Streams, and the recent
   POSIX interface and MP proposals by Daniel Barlow.

 - Well designed API's that aren't really standard proposals, but are
   already being adopted as such (Eg. Simple Streams).

I for one am eagerly waiting for a process (eg. substandards) to emerge
to provide a degree of unification...

Cheers,

  -- Nikodemus
From: Joe Marshall
Subject: Re: Standard API's (Was: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <u1b8a4cz.fsf@ccs.neu.edu>
Nikodemus Siivola <········@kekkonen.cs.hut.fi> writes:

> Joe Marshall <···@ccs.neu.edu> wrote:
> > Nikodemus Siivola <········@kekkonen.cs.hut.fi> writes:
> 
> >> *Everybody* is in favor of standard API's. (Ok, there may be be people who
> >> aren't but I cannot remember anyone speaking up.)
> 
> > It depends on the API.  There are certainly several standard API's
> > that I would shitcan in a second.  Among them are sockets and the
> > de-facto lisp multiprocessing api.  I think both are far too low
> > level.
> 
> But they aren't standard - unless you mean "something that it's
> nearly standard to have an API for".

Regardless, I'm not in favor of them.  If someone proposed a network
API based on some sort of socket derivative, I'd be against it.
From: Brian Downing
Subject: Re: Standard API's (Was: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <5cKCa.40325$M01.12651@sccrnsc02>
In article <············@ccs.neu.edu>, Joe Marshall  <···@ccs.neu.edu> wrote:
> Regardless, I'm not in favor of them.  If someone proposed a network
> API based on some sort of socket derivative, I'd be against it.

I'm just curious - what would you have in mind as a good replacement
for sockets that doesn't give up the ability to deal with arbitrary
streams (TCP) and send and receive arbitrary datagrams (UDP, etc.)?

-bcd
--
*** Brian Downing <bdowning at lavos dot net> 
From: Joe Marshall
Subject: Re: Standard API's (Was: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <d6hwa3bi.fsf@ccs.neu.edu>
Brian Downing <·············@lavos.net> writes:

> In article <············@ccs.neu.edu>, Joe Marshall  <···@ccs.neu.edu> wrote:
> > Regardless, I'm not in favor of them.  If someone proposed a network
> > API based on some sort of socket derivative, I'd be against it.
> 
> I'm just curious - what would you have in mind as a good replacement
> for sockets that doesn't give up the ability to deal with arbitrary
> streams (TCP) and send and receive arbitrary datagrams (UDP, etc.)?

I think there is a place for streams and datagrams, so I wouldn't toss
them completely.  However, most uses of TCP and UDP put an additional
protocol layer on top of the underlying stream.  I'd rather be working
at this level of abstraction.  Or, if not exactly at this level, at a
level where I can describe the higher protocol as a series of
transactions involving structured objects rather than as a stream that
I shove characters down using FORMAT statements.
From: Raymond Wiker
Subject: Re: Standard API's (Was: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <864r38tri0.fsf@raw.grenland.fast.no>
Brian Downing <·············@lavos.net> writes:

> In article <············@ccs.neu.edu>, Joe Marshall  <···@ccs.neu.edu> wrote:
> > Regardless, I'm not in favor of them.  If someone proposed a network
> > API based on some sort of socket derivative, I'd be against it.
> 
> I'm just curious - what would you have in mind as a good replacement
> for sockets that doesn't give up the ability to deal with arbitrary
> streams (TCP) and send and receive arbitrary datagrams (UDP, etc.)?

        The Java equivalent of the sockets library, for example?

-- 
Raymond Wiker                        Mail:  ·············@fast.no
Senior Software Engineer             Web:   http://www.fast.no/
Fast Search & Transfer ASA           Phone: +47 23 01 11 60
P.O. Box 1677 Vika                   Fax:   +47 35 54 87 99
NO-0120 Oslo, NORWAY                 Mob:   +47 48 01 11 60

Try FAST Search: http://alltheweb.com/
From: Pascal Costanza
Subject: Re: Standard API's (Was: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <bbfr3e$o5o$1@f1node01.rhrz.uni-bonn.de>
Joe Marshall wrote:
> Nikodemus Siivola <········@kekkonen.cs.hut.fi> writes:

>>>It depends on the API.  There are certainly several standard API's
>>>that I would shitcan in a second.  Among them are sockets and the
>>>de-facto lisp multiprocessing api.  I think both are far too low
>>>level.
>>
>>But they aren't standard - unless you mean "something that it's
>>nearly standard to have an API for".
> 
> Regardless, I'm not in favor of them.  If someone proposed a network
> API based on some sort of socket derivative, I'd be against it.

Could you explain why?

Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Joe Marshall
Subject: Re: Standard API's (Was: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <7k84a33g.fsf@ccs.neu.edu>
Pascal Costanza <········@web.de> writes:

> Joe Marshall wrote:
>  If someone proposed a network
>  API based on some sort of socket derivative, I'd be against it.
> 
> Could you explain why?

It's too low level.  If I have a URL, I want to dereference it and get
a `document' or `image' or whatever.  If I have some email, I want to
send it.  If I have some files, I want to transfer them.  These
operations are a couple of levels of abstraction above the socket
layer. 

          `transfer some files'
                   |
          `ftp commands OPEN, MPUT, etc.'
                   |
          `transaction grammar'
                   |
          `TCP connection to port 21'
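That layering might be sketched as follows.  None of these operators
exist in any standard; the names (WITH-FTP-SESSION, FTP-PUT) are invented
purely to illustrate the level of abstraction being argued for:

```lisp
;; Hypothetical high-level API corresponding to the diagram above.
;; None of these operators are standard CL; the names are made up.
(defun transfer-files (host paths)
  (with-ftp-session (session host)   ; opens the TCP connection to
    (dolist (path paths)             ; port 21 and speaks the
      (ftp-put session path))))      ; transaction grammar internally

;; The caller sees only `transfer some files':
;; (transfer-files "ftp.example.org" '("a.txt" "b.txt"))
```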
From: Karl A. Krueger
Subject: Re: Standard API's (Was: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <bbfpui$7bn$2@baldur.whoi.edu>
Joe Marshall <···@ccs.neu.edu> wrote:
> Pascal Costanza <········@web.de> writes:
> 
>> Joe Marshall wrote:
>>  If someone proposed a network
>>  API based on some sort of socket derivative, I'd be against it.
>> 
>> Could you explain why?
> 
> It's too low level.  If I have a URL, I want to dereference it and get
> a `document' or `image' or whatever.  If I have some email, I want to
> send it.  If I have some files, I want to transfer them.  These
> operations are a couple of levels of abstraction above the socket
> layer. 

In Python you can get a socket and send raw data over it ... or you can
query a Web server for a URL using an HTTP-specific module ... or you
can write a structured protocol server using Twisted, the de-facto
standard engine for that purpose.  These are not mutually exclusive. :)

-- 
Karl A. Krueger <········@example.edu>
Woods Hole Oceanographic Institution
Email address is spamtrapped.  s/example/whoi/
"Outlook not so good." -- Magic 8-Ball Software Reviews
From: Peter Seibel
Subject: Re: Standard API's (Was: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <m34r388ek9.fsf@javamonkey.com>
Joe Marshall <···@ccs.neu.edu> writes:

> Pascal Costanza <········@web.de> writes:
> 
> > Joe Marshall wrote:
> >  If someone proposed a network
> >  API based on some sort of socket derivative, I'd be against it.
> > 
> > Could you explain why?
> 
> It's too low level. If I have a URL, I want to dereference it and
> get a `document' or `image' or whatever. If I have some email, I
> want to send it. If I have some files, I want to transfer them.
> These operations are a couple of levels of abstraction above the
> socket layer.

Hmmm. I always figured part of the reason there aren't de facto
standards for these higher level abstractions is because there aren't
de facto standards for the lower levels. If there was a standard
socket API (low level as all get out) I--for one--would be more likely
to implement an FTP, HTTP, or SMTP library knowing that it would be
widely useful.

So it seems that opposing a lower-level API because you want a
higher-level API is counterproductive (that is, counter to your own
desires). What's the bad outcome of having a standardized low-level
API for them that needs it, and to serve as a basis for building
higher-level APIs?

-Peter

-- 
Peter Seibel                                      ·····@javamonkey.com

         Lisp is the red pill. -- John Fraser, comp.lang.lisp
From: Joe Marshall
Subject: Re: Standard API's (Was: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <n0h08asx.fsf@ccs.neu.edu>
Peter Seibel <·····@javamonkey.com> writes:

> Joe Marshall <···@ccs.neu.edu> writes:
> 
> > Pascal Costanza <········@web.de> writes:
> > 
> > > Joe Marshall wrote:
> > >  If someone proposed a network
> > >  API based on some sort of socket derivative, I'd be against it.
> > > 
> > > Could you explain why?
> > 
> > It's too low level. If I have a URL, I want to dereference it and
> > get a `document' or `image' or whatever. If I have some email, I
> > want to send it. If I have some files, I want to transfer them.
> > These operations are a couple of levels of abstraction above the
> > socket layer.
> 
> Hmmm. I always figured part of the reason there aren't de facto
> standards for these higher level abstractions is because there aren't
> de facto standards for the lower levels. 

Not at all!  Consider your Lisp.  You call CONS, and a cons cell is
created.  While it is *probably* implemented as a 32-bit machine word
with a special tag in the low 3 bits, it might not be.  But I doubt
you care.  You just want CONS and CAR and CDR to work, and
reasonably fast.

Now suppose there were a proposed standard `memory' API that, for
example, required that all objects have a `vtable'.  This doesn't help
the implementor of CONS, CAR, and CDR at all.  In fact, it is
detrimental because it constrains his implementation to an
inappropriate model.

> If there was a standard socket API (low level as all get out) I ---
> for one --- would be more likely to implement an FTP, HTTP, or SMTP
> library knowing that it would be widely useful.

I'm looking at this from the end-user point of view.  I don't care if
you use sockets or a distributed graph-reduction engine.  I just want
the higher-level FTP to work.

> So it seems that opposing a lower level API because you want a higher
> level API is counter productive (that is counter, to your own
> desires). What's the bad outcome of having a standardized low-level
> API for them that needs it and to serve as a basis for building
> higher-level APIs?

It constrains the low-level implementation.  You are specifying
exactly how to accomplish something rather than specfying exactly what
it is you want accomplished.  Berkely sockets fit in nicely with Unix,
but the model is very difficult to work with in a message-loop
oriented OS such as Windows or the older MacIntoshes.  A higher-level
specification would allow more flexibility to the implementors.
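The what-versus-how distinction can be sketched in a few lines (Python
here for concreteness; every name is hypothetical, not a proposed API):
the contract is "dereference a URL, get a document", and each
implementor satisfies it with whatever I/O model its host OS favors.

```python
# Sketch: specify *what* (fetch a URL), not *how* (sockets vs. event loop).
# All class and function names are illustrative assumptions.
from abc import ABC, abstractmethod

class UrlFetcher(ABC):
    """The high-level contract: dereference a URL, return its contents."""
    @abstractmethod
    def fetch(self, url: str) -> bytes: ...

class BerkeleySocketFetcher(UrlFetcher):
    """A Unix-flavored implementor might block on a socket."""
    def fetch(self, url: str) -> bytes:
        return b"<doc from %s via blocking sockets>" % url.encode()

class MessageLoopFetcher(UrlFetcher):
    """A message-loop OS might service the request from its event pump."""
    def fetch(self, url: str) -> bytes:
        return b"<doc from %s via event loop>" % url.encode()

def save_page(fetcher: UrlFetcher, url: str) -> bytes:
    # Client code is written once, against the contract only.
    return fetcher.fetch(url)
```

Client code like `save_page` never learns which I/O model was used,
which is exactly the flexibility the higher-level specification buys.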
 
From: Peter Seibel
Subject: Re: Standard API's (Was: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <m3vfvo6qwg.fsf@javamonkey.com>
Joe Marshall <···@ccs.neu.edu> writes:

> Peter Seibel <·····@javamonkey.com> writes:
> 
> > Joe Marshall <···@ccs.neu.edu> writes:
> > 
> > > Pascal Costanza <········@web.de> writes:
> > > 
> > > > Joe Marshall wrote:
> > > >  If someone proposed a network
> > > >  API based on some sort of socket derivative, I'd be against it.
> > > > 
> > > > Could you explain why?
> > > 
> > > It's too low level. If I have a URL, I want to dereference it
> > > and get a `document' or `image' or whatever. If I have some
> > > email, I want to send it. If I have some files, I want to
> > > transfer them. These operations are a couple of levels of
> > > abstraction above the socket layer.
> > 
> > Hmmm. I always figured part of the reason there aren't de facto
> > standards for these higher level abstractions is because there
> > aren't de facto standards for the lower levels.
> 
> Not at all! Consider your Lisp. You call CONS, and a cons cell is
> created. While it is *probably* implemented as a 32-bit machine word
> with a special tag in the low 3 bits, it might not be. But I doubt
> you care. You just want CONS and CAR and CDR to work, and
> reasonably fast.

Yes, sometimes (often, even usually) you want higher-level APIs. I'm
not disagreeing with that. But *my* point is that some of the
high-level APIs that don't currently exist and which would be nice to
have would be *more likely* to exist if there were standard low-level
APIs on which to build them. I don't need a lower-level API to
implement CONS, et al. because the language standard already requires
all Lisp implementations to provide them for me. But the standard does
not require Lisps to provide an FTP or higher-level network file
transfer API. So somebody's got to build those APIs. And you can't
build them out of CONS and CAR.

> Now suppose there were a proposed standard `memory' API that, for
> example, required that all objects have a `vtable'. This doesn't help
> the implementor of CONS, CAR, and CDR at all. In fact, it is
> detrimental because it constrains his implementation to an
> inappropriate model.

Well, if it's inappropriate for implementing CONS, et al, they
wouldn't have to use it for that. There's a difference between saying:

 1) We have an existing high-level API (e.g. CONS/CAR/CDR) and now we
 should standardize the layer below that so people can muck with it.

 2) There are a variety of high-level APIs (e.g. FTP, HTTP, and future
 internet protocols) that don't exist. We should standardize a layer
 below those APIs so they can be built in a standard way.

I'm talking about (2). Your hypothetical "memory API" is talking about
(1). (BTW, when I say "standardize" I do not envision any particular
standardization process--i.e. it could be ANSI or it could be all the
implementors agreeing on an API or it could be some third party who
provides a uniform API with the appropriate conditional code to make
it work on enough different implementations to be interesting.)

> > If there was a standard socket API (low level as all get out) I
> > --- for one --- would be more likely to implement an FTP, HTTP, or
> > SMTP library knowing that it would be widely useful.
> 
> I'm looking at this from the end-user point of view. I don't care if
> you use sockets or a distributed graph-reduction engine. I just want
> the higher-level FTP to work.

And who's supposed to provide the higher-level API? Each vendor
separately? That seems like a waste of effort. And what about new APIs
that could be easily built upon a standard low-level API?

I'll admit I have no idea what the heck a distributed graph-reduction
engine is, but at some point if I'm writing an FTP API that's going to
talk to actual FTP servers or clients, I need to be able to cause my
networking hardware to put appropriately formatted packets onto the
network. Given the way FTP is specified that will be easier if I have
something like a socket and datagram abstraction (as opposed to say,
an API that lets me poke directly at an ethernet card). If that API is
standardized across Lisp implementations then I can write one version
of the FTP API and have it run on all those Lisps, increasing the
leverage I get from doing it (either because *I* want to use it on
multiple Lisps, because I want to sell it to people who are using
different Lisp implementations than I may be, or because I consider
the reputation I'll gain from distributing a widely used library to be
a benefit.)
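The leverage argument can be made concrete (a hypothetical illustration
in Python, not a real library): the FTP layer is written once against a
small standardized transport contract, and only the transport needs
porting per Lisp implementation or OS.

```python
# Sketch of the layering: protocol code written once against a tiny
# standardized transport interface.  All names are illustrative.
class Transport:
    """The low-level standard: send bytes, receive a line."""
    def send(self, data: bytes) -> None: raise NotImplementedError
    def recv_line(self) -> bytes: raise NotImplementedError

class FtpControl:
    """High-level protocol code, built only on Transport."""
    def __init__(self, transport: Transport):
        self.t = transport
    def login(self, user: str, password: str) -> None:
        # RFC 959-style control-channel commands.
        self.t.send(b"USER %s\r\n" % user.encode())
        self.t.send(b"PASS %s\r\n" % password.encode())

class RecordingTransport(Transport):
    """Stand-in for a per-platform implementation; records traffic."""
    def __init__(self):
        self.sent = []
    def send(self, data: bytes) -> None:
        self.sent.append(data)

session = FtpControl(RecordingTransport())
session.login("anonymous", "guest@example.org")
```

Swapping `RecordingTransport` for a socket-backed (or completion-port
backed) implementation leaves `FtpControl` untouched, which is the
point: one FTP library, many Lisps.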

> > So it seems that opposing a lower level API because you want a
> > higher level API is counter-productive (that is, counter to your
> > own desires). What's the bad outcome of having a standardized
> > low-level API for them that needs it and to serve as a basis for
> > building higher-level APIs?
> 
> It constrains the low-level implementation. You are specifying
> exactly how to accomplish something rather than specifying exactly
> what it is you want accomplished.

Well, what if what I want to accomplish is to send and receive
correctly formatted Internet Protocol packets? Is there some reason
why there *shouldn't* be an API for such a thing? Or are you saying
there should be, but you particularly dislike "sockets" as an
abstraction? If the latter, I suppose I could agree--I'd love to have
an API that lets me send and receive IP packets asynchronously. As
long as I can do everything I need with that API to implement both
existing and yet-to-be-invented network protocols, that's fine. (Though
it certainly seems there's some benefit to using (or at least
providing) an API that is as much as possible like the OS-level APIs
people are used to.)
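The asynchronous-packet wish might be shaped something like this (a
hypothetical sketch; the in-memory loopback stands in for a real
network stack, and every name is an assumption): submit datagrams and
register a callback for arrivals, with no blocking receive in client
code.

```python
# Sketch of an asynchronous datagram API: no blocking recv; arrivals
# invoke a registered callback.  Purely illustrative.
from collections import deque

class AsyncPacketPort:
    def __init__(self):
        self._outbox = deque()   # packets queued for transmission
        self._handler = None

    def on_packet(self, handler):
        """Register a callback invoked for each arriving datagram."""
        self._handler = handler

    def send(self, dest, payload: bytes):
        """Queue a datagram; a real stack would transmit it later."""
        self._outbox.append((dest, payload))

    def deliver(self, source, payload: bytes):
        # In a real stack the OS would call this on arrival;
        # here a test (or loopback) drives it directly.
        if self._handler:
            self._handler(source, payload)

port = AsyncPacketPort()
seen = []
port.on_packet(lambda src, data: seen.append((src, data)))
port.deliver(("10.0.0.1", 21), b"220 ready\r\n")
```

An interface of this shape maps naturally onto both a Unix select loop
and a Windows message loop, which is the interoperability being asked
for.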

> Berkeley sockets fit in nicely with Unix, but the model is very
> difficult to work with in a message-loop oriented OS such as Windows
> or the older Macintoshes. A higher-level specification would allow
> more flexibility to the implementors.

Hmmm. If it's such a bad match, why does Windows provide a socket API
based on Berkeley sockets? Yet it also still provides its own
non-Unixy IO completion ports. Isn't that an argument *for* providing
layered APIs?

-Peter

-- 
Peter Seibel                                      ·····@javamonkey.com

         Lisp is the red pill. -- John Fraser, comp.lang.lisp
From: Joe Marshall
Subject: Re: Standard API's (Was: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <65nn8bkc.fsf@ccs.neu.edu>
Peter Seibel <·····@javamonkey.com> writes:

> Well, what if what I want to accomplish is to send and receive
> correctly formatted Internet Protocol packets? Is there some reason
> why there *shouldn't* be an API for such a thing? Or are you saying
> there should be but you particularly dislike "sockets" as an
> abstraction. 

That's about right.  It isn't that I *particularly* dislike `sockets',
but when I use them, I keep thinking to myself, there has *got* to be
an easier way.

> If the latter, I suppose I could agree--I'd love to have
> an API that lets me send and receive IP packets asynchronously. 

That's certainly one of the major drawbacks.

> > Berkeley sockets fit in nicely with Unix, but the model is very
> > difficult to work with in a message-loop oriented OS such as Windows
> > or the older Macintoshes. A higher-level specification would allow
> > more flexibility to the implementors.
> 
> Hmmm. If it's such a bad match, why does Windows provide a socket API
> based on Berkeley sockets? 

For compatibility.  But it isn't a seamless match; it's a kludge.

> Yet they also can still provide their own non-Unixy IO completion
> ports.

'cause that's how they *really* work in Windows.

> Isn't that an argument *for* providing layered APIs?

Not unless *all* the APIs work correctly.
From: Peter Seibel
Subject: Re: Standard API's (Was: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <m3brxf6rjq.fsf@javamonkey.com>
Joe Marshall <···@ccs.neu.edu> writes:

> Peter Seibel <·····@javamonkey.com> writes:
> 
> > Well, what if what I want to accomplish is to send and receive
> > correctly formatted Internet Protocol packets? Is there some reason
> > why there *shouldn't* be an API for such a thing? Or are you saying
> > there should be but you particularly dislike "sockets" as an
> > abstraction. 
> 
> That's about right. It isn't that I *particularly* dislike
> `sockets', but when I use them, I keep thinking to myself, there has
> *got* to be an easier way.

So I'm still curious what sort of API you *wouldn't* oppose that would
allow me to write arbitrary network protocols that use TCP and UDP
(and possibly even raw IP packets).

-Peter

-- 
Peter Seibel                                      ·····@javamonkey.com

         Lisp is the red pill. -- John Fraser, comp.lang.lisp
From: Daniel Barlow
Subject: Re: Standard API's (Was: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <873cirf5iz.fsf@noetbook.telent.net>
Peter Seibel <·····@javamonkey.com> writes:

>> That's about right. It isn't that I *particularly* dislike
>> `sockets', but when I use them, I keep thinking to myself, there has
>> *got* to be an easier way.
>
> So I'm still curious what sort of API you *wouldn't* oppose that would
> allow me to write arbitrary network protocols that use TCP and UDP
> (and possibly even raw IP packets).

Perhaps it would help to draw a distinction between "standards that
everyone should implement" and, for want of a better phrase,
"optional standards".

If a platform most conveniently supports network access using
something like the BSD socket api, I think it would be nice if (a)
Lisp access to this api was available, (b) the various Lisp
implementations available for the platform agreed on the interface to
use.  

If a platform has some other 'native' interface for talking to the
network, I'd rather its Lisp implementations support access to that
interface, instead of emulating BSD sockets.  I would _certainly_
oppose in any way mandating or encouraging that the BSD interface be
wedged in on top of a native system where it's a bad fit.

Ditto POSIX, for that matter.  The unixlike-operating-systems
interface I circulated for comment a few weeks ago is for Lisps on
unixlike-operating-systems.  If you're {lucky, unfortunate} enough to
be using something else, you shouldn't be trying to do POSIX system
programming on it.


-dan

-- 

   http://www.cliki.net/ - Link farm for free CL-on-Unix resources 
From: Peter Seibel
Subject: Re: Standard API's (Was: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <m37k836n01.fsf@javamonkey.com>
Daniel Barlow <···@telent.net> writes:

> Peter Seibel <·····@javamonkey.com> writes:
> 
> >> That's about right. It isn't that I *particularly* dislike
> >> `sockets', but when I use them, I keep thinking to myself, there has
> >> *got* to be an easier way.
> >
> > So I'm still curious what sort of API you *wouldn't* oppose that would
> > allow me to write arbitrary network protocols that use TCP and UDP
> > (and possibly even raw IP packets).
> 
> Perhaps it would help to draw a distinction between "standards that
> everyone should implement" and, for want of a better phrase,
> "optional standards".
> 
> If a platform most conveniently supports network access using
> something like the BSD socket api, I think it would be nice if (a)
> Lisp access to this api was available, (b) the various Lisp
> implementations available for the platform agreed on the interface
> to use.

I'd go a step further and say that if different platforms provide a
similar enough underlying API (such as most OS's do with regard to
BSD-style sockets) then different Lisp implementations across those
different OS's should also agree on an interface to use, at least to
expose the parts that are common across OS's. This is the Java model
of "write once, run anywhere", which actually works remarkably well
for things like networking.

> If a platform has some other 'native' interface for talking to the
> network, I'd rather its Lisp implementations support access to that
> interface, instead of emulating BSD sockets. I would _certainly_
> oppose in any way mandating or encouraging that the BSD interface be
> wedged in on top of a native system where it's a bad fit.

Just out of curiosity, what OS's that the current crop of Lisp
implementations support don't support BSD-style sockets?

-Peter

-- 
Peter Seibel                                      ·····@javamonkey.com

         Lisp is the red pill. -- John Fraser, comp.lang.lisp
From: Mario S. Mommer
Subject: Re: Standard API's (Was: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <fzr86ba9rk.fsf@cupid.igpm.rwth-aachen.de>
Peter Seibel <·····@javamonkey.com> writes:
> > It's too low level. If I have a URL, I want to dereference it and
> > get a `document' or `image' or whatever. If I have some email, I
> > want to send it. If I have some files, I want to transfer them.
> > These operations are a couple of levels of abstraction above the
> > socket layer.
> 
> Hmmm. I always figured part of the reason there aren't de facto
> standards for these higher level abstractions is because there aren't
> de facto standards for the lower levels. If there was a standard
> socket API (low level as all get out) I--for one--would be more likely
> to implement an FTP, HTTP, or SMTP library knowing that it would be
> widely useful.

Is there something wrong with the PORT stuff over at clocc?

Mario.
From: Matthias
Subject: Re: Standard API's (Was: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <bbhp6r$6hn$1@trumpet.uni-mannheim.de>
Mario S. Mommer wrote:
> Is there something wrong with the PORT stuff over at clocc?

The last time I looked there was no documentation, and no test cases or examples.

A standard API is more than just code.  (It is documentation, a more or less 
general agreement that the API should be used, probably also some code.  
The "agreement" is the difficult part.)
From: Mario S. Mommer
Subject: Re: Standard API's (Was: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <fzn0gza3qm.fsf@cupid.igpm.rwth-aachen.de>
Matthias <····@yourself.pl> writes:
> Mario S. Mommer wrote:
> > Is there something wrong with the PORT stuff over at clocc?
> 
> The last time I looked there was no documentation nor testcases or
> examples.

Fair enough. But that could be solved by writing documentation,
examples, and testcases. They really aren't rocket science.

> A standard API is more than just code.  (It is documentation, a more or less 
> general agreement that the API should be used, probably also some code.  
> The "agreement" is the difficult part.)

They seem to me pretty pragmatic, and pretty low-level, apart from
the Gray streams.

Mario.
From: Peter Seibel
Subject: Re: Standard API's (Was: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <m3fzmr6s2g.fsf@javamonkey.com>
Mario S. Mommer <········@yahoo.com> writes:

> Peter Seibel <·····@javamonkey.com> writes:
> > > It's too low level. If I have a URL, I want to dereference it and
> > > get a `document' or `image' or whatever. If I have some email, I
> > > want to send it. If I have some files, I want to transfer them.
> > > These operations are a couple of levels of abstraction above the
> > > socket layer.
> > 
> > Hmmm. I always figured part of the reason there aren't de facto
> > standards for these higher level abstractions is because there aren't
> > de facto standards for the lower levels. If there was a standard
> > socket API (low level as all get out) I--for one--would be more likely
> > to implement an FTP, HTTP, or SMTP library knowing that it would be
> > widely useful.
> 
> Is there something wrong with the PORT stuff over at clocc?

Mostly just that I haven't been able to wrap my head around them.
Which is as much a reflection on how hard I tried as anything else.
Though I did take a look and it wasn't immediately obvious to me how
to get up and running. But I didn't have a pressing need at the time
so I didn't push it very hard. I'd certainly start there if I was
trying to write an FTP library.

-Peter

-- 
Peter Seibel                                      ·····@javamonkey.com

         Lisp is the red pill. -- John Fraser, comp.lang.lisp
From: Matthias
Subject: Re: Standard API's (Was: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <bbfrqc$1d2$1@trumpet.uni-mannheim.de>
Joe Marshall wrote:

> Nikodemus Siivola <········@kekkonen.cs.hut.fi> writes:
> 
>> Joe Marshall <···@ccs.neu.edu> wrote:
>> > Nikodemus Siivola <········@kekkonen.cs.hut.fi> writes:
>> 
>> >> *Everybody* is in favor of standard API's. (Ok, there may be be people
>> >> who aren't but I cannot remember anyone speaking up.)
>> 
>> > It depends on the API.  There are certainly several standard API's
>> > that I would shitcan in a second.  Among them are sockets and the
>> > de-facto lisp multiprocessing api.  I think both are far too low
>> > level.
>> 
>> But they aren't standard - unless you mean "something that it's
>> nearly standard to have an API for".
> 
> Regardless, I'm not in favor of them.  If someone proposed a network
> API based on some sort of socket derivative, I'd be against it.

Assume you have sockets standardized and find them too low-level for a task:
You can always build a higher-level abstraction on top of them and you will 
be assured that it runs on every platform implementing the low-level API.

Now assume you have some higher-level abstraction which doesn't fit your 
needs for a particular application (let's say performance-wise).  Then you 
are stuck. 

Of course, a layered API solves this problem nicely.  As it is presumably 
easier to agree on low-level APIs than on higher-level abstractions it's 
probably a good idea to standardize the low-level stuff first and build the 
high-level stuff afterwards.  Once the latter is found useful it should 
also get standardized.  And so on.
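The layering Matthias describes can be sketched in a few lines
(hypothetical names throughout): the convenience layer is defined
purely in terms of the low-level primitive, so it runs wherever the
primitive exists, and the primitive remains available for the cases
the high level doesn't fit.

```python
# Sketch of layered standardization: read_line is built only out of
# the low-level recv primitive, which stays exported alongside it.
class ByteSource:
    """The low-level standard: recv(n) returns at most n bytes."""
    def __init__(self, data: bytes):
        self._buf = data
    def recv(self, n: int) -> bytes:
        chunk, self._buf = self._buf[:n], self._buf[n:]
        return chunk

def read_line(src: ByteSource) -> bytes:
    """Higher-level layer, defined purely in terms of recv."""
    out = bytearray()
    while True:
        b = src.recv(1)
        if not b or b == b"\n":
            return bytes(out)
        out += b

src = ByteSource(b"200 OK\nrest")
```

A caller can use `read_line` for convenience, yet drop to `recv` when
performance or an odd protocol demands it, which is the escape hatch
the layered design provides.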
From: Pascal Costanza
Subject: Re: Standard API's (Was: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <bbfsmu$u9c$1@f1node01.rhrz.uni-bonn.de>
Matthias wrote:

> Of course, a layered API solves this problem nicely.  As it is presumably 
> easier to agree on low-level APIs than on higher-level abstractions it's 
> probably a good idea to standardize the low-level stuff first and build the 
> high-level stuff afterwards.  Once the latter is found useful it should 
> also get standardized.  And so on.

This scheme doesn't necessarily work because of possible crosscutting 
concerns. For example, it can be hard to introduce quality-of-service 
aspects into a layered approach. (You might want pressing ESC to stop 
all network interactions immediately.)


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Daniel Barlow
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <87llwkijfd.fsf@noetbook.telent.net>
Joe Marshall <···@ccs.neu.edu> writes:

> It depends on the API.  There are certainly several standard API's
> that I would shitcan in a second.  Among them are sockets and the

Sorry, but which standard are you talking about there?  There's not
even a proposed socket api for lisp that I know of, though ACL enjoys
a vaguely de-facto-standard reputation in some places by virtue of the
existence of acl-compat.

> de-facto lisp multiprocessing api.  I think both are far too low
> level.

Is your objection to low-level interfaces on the grounds that they're
less convenient for the application programmer, or that (like for
example without-preemption) they sometimes constrain the
implementation techniques used?

I'd happily concur with the latter concern, but I worry that the
design of a standard high-level interface that addresses the former
concern would be non-trivial - especially for something like sockets,
where the low-level details often really are important for
interoperability, and where everyone probably wants to do different
things with their sockets anyway.



-dan

-- 

   http://www.cliki.net/ - Link farm for free CL-on-Unix resources 
From: Joe Marshall
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <1xyca2oc.fsf@ccs.neu.edu>
Daniel Barlow <···@telent.net> writes:

> Joe Marshall <···@ccs.neu.edu> writes:
> 
> > It depends on the API.  There are certainly several standard API's
> > that I would shitcan in a second.  Among them are sockets and the
> 
> Sorry, but which standard are you talking about there?  There's not
> even a proposed socket api for lisp that I know of, though ACL enjoys
> a vaguely de-facto-standard reputation in some places by virtue of the
> existence of acl-compat.

No, but the various lisps that support network code typically
implement a rather thin api on top of the underlying socket library.

> > de-facto lisp multiprocessing api.  I think both are far too low
> > level.
> 
> Is your objection to low-level interfaces on the grounds that they're
> less convenient for the application programmer, or that (like for
> example without-preemption) they sometimes constrain the
> implementation techniques used?

Both.

In addition to them being less convenient, they also tend to influence
the design that the application programmer comes up with.

> I'd happily concur with the latter concern, but I worry that the
> design of a standard high-level interface that addresses the former
> concern would be non-trivial - especially for something like sockets,
> where the low-level details often really are important for
> interoperability, and where everyone probably wants to do different
> things with their sockets anyway.

I don't think so.  The thing I want to do with my sockets (or network,
rather) is to use a resource on a remote computer.  I want an API to
the remote resource, not to the bytes flowing through the wire.  And
I'm sure that many people want the same sort of thing.  You shouldn't
have to get down and dirty with a stream to accomplish a high-level
goal of `get me a list of files'.
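A resource-level API of the kind Joe wants might be shaped like this
(purely illustrative; every name is an assumption): the user asks
about the remote *resource*, and a scheme registry hides whether
sockets or anything else carries the bytes.

```python
# Sketch: `list_files` is the whole user-facing API; per-protocol
# backends are looked up by URL scheme.  All names are hypothetical.
BACKENDS = {}

def backend(scheme):
    """Class decorator registering a protocol backend for a scheme."""
    def register(cls):
        BACKENDS[scheme] = cls
        return cls
    return register

@backend("fake")
class FakeBackend:
    """Stand-in for an FTP/HTTP/etc. implementation."""
    def list_files(self, host, path):
        return ["README", "src/"]

def list_files(url: str):
    # Dispatch on the scheme; the caller never sees a socket.
    scheme, rest = url.split("://", 1)
    host, _, path = rest.partition("/")
    return BACKENDS[scheme]().list_files(host, "/" + path)

files = list_files("fake://host.example/pub")
```

The point of the shape is that `get me a list of files` is one call,
and each backend is free to be down and dirty with streams internally.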
From: Nikodemus Siivola
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <bbftge$5j7qe$1@midnight.cs.hut.fi>
Joe Marshall <···@ccs.neu.edu> wrote:

> I don't think so.  The thing I want to do with my sockets (or network,
> rather) is to use a resource on a remote computer.  I want an API to
> the remote resource, not to the bytes flowing through the wire.  And
> I'm sure that many people want the same sort of thing.  You shouldn't
> have to get down and dirty with a stream to accomplish a high-level
> goal of `get me a list of files'.

Sure, but sometimes you *need* the low level access. Say, when interfacing
with a legacy thingie that speaks its own language to a socket.

Seems to me that there are two issues here:

 - We don't have a standard socket API
 - We don't have a standard high-level network API

You want the latter but don't care about the former. For some people it
will be the other way around. Ergo, we should have both.

Or do you mean that no matter what, you would oppose a standard socket
interface?

Cheers,

  -- Nikodemus
From: Pascal Costanza
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <bbfsrl$u9c$2@f1node01.rhrz.uni-bonn.de>
Joe Marshall wrote:

> The thing I want to do with my sockets (or network,
> rather) is to use a resource on a remote computer.  I want an API to
> the remote resource, not to the bytes flowing through the wire.  And
> I'm sure that many people want the same sort of thing.  You shouldn't
> have to get down and dirty with a stream to accomplish a high-level
> goal of `get me a list of files'.

Good point - sounds convincing to me.

Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-0206031005430001@k-137-79-50-101.jpl.nasa.gov>
In article <············@f1node01.rhrz.uni-bonn.de>, Pascal Costanza
<········@web.de> wrote:

> Joe Marshall wrote:
> 
> > The thing I want to do with my sockets (or network,
> > rather) is to use a resource on a remote computer.  I want an API to
> > the remote resource, not to the bytes flowing through the wire.  And
> > I'm sure that many people want the same sort of thing.  You shouldn't
> > have to get down and dirty with a stream to accomplish a high-level
> > goal of `get me a list of files'.
> 
> Good point - sounds convincing to me.

How are you going to implement these high-level APIs without a low-level
API to build on top of?

E.
From: Joe Marshall
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <smqs8hx2.fsf@ccs.neu.edu>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <············@f1node01.rhrz.uni-bonn.de>, Pascal Costanza
> <········@web.de> wrote:
> 
> > Joe Marshall wrote:
> > 
> > > The thing I want to do with my sockets (or network,
> > > rather) is to use a resource on a remote computer.  I want an API to
> > > the remote resource, not to the bytes flowing through the wire.  And
> > > I'm sure that many people want the same sort of thing.  You shouldn't
> > > have to get down and dirty with a stream to accomplish a high-level
> > > goal of `get me a list of files'.
> > 
> > Good point - sounds convincing to me.
> 
> How are you going to implement these high-level APIs without a low-level
> API to build on top of?

I don't care how the vendors do it.

(Putting on my `employee of vendor' hat)
Oh, I'd probably use sockets, but Berkeley style sockets don't mesh
too well with the Windows event loop, so I'd use the overlapped I/O
ability on that platform.


The point being that as an end user I really don't care how
asynchronous I/O is handled and I *really* don't want to be part of
it.  I just want to talk to other computers.
From: Don Geddis
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <87fzmxlhiz.fsf@sidious.geddis.org>
···@jpl.nasa.gov (Erann Gat) writes:
> Common Lisp did not become popular ten years ago when its technical
> advantages over the competition were truly overwhelming.  Since then,
> Common Lisp is unchanged while other languages have evolved, which has
> eroded Common Lisp's technical lead.  Therefore, I see no reason to expect
> that Common Lisp's popularity will start to increase because I see no
> mechanism that will bring this about.

What evidence do you have that CL's popularity is related to its technical
lead over competitors?

As you say, it had more technical advantages in the past, and less so now.
But, in the past, other new languages grew up and became popular, and Lisp
didn't.

Did you ever consider that market power may be only loosely correlated with
technical merit, and thus all your attempts to "fix" the core of CL will do
nothing to address your real concern of CL's popularity in the world at large?

_______________________________________________________________________________
Don Geddis                    http://don.geddis.org              ···@geddis.org
"If we can put a man on the moon, why can't we put metal in a microwave!"
	-- Dr. Frazier Crane, "Cheers"
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-2905031446100001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@sidious.geddis.org>, Don Geddis
<···@geddis.org> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> > Common Lisp did not become popular ten years ago when its technical
> > advantages over the competition were truly overwhelming.  Since then,
> > Common Lisp is unchanged while other languages have evolved, which has
> > eroded Common Lisp's technical lead.  Therefore, I see no reason to expect
> > that Common Lisp's popularity will start to increase because I see no
> > mechanism that will bring this about.
> 
> What evidence do you have that CL's popularity is related to its technical
> lead over competitors?

I don't believe it is.  I think there's an inverse relationship actually.

> Did you ever consider that market power may be only loosely correlated with
> technical merit, and thus all your attempts to "fix" the core of CL will do
> nothing to address your real concern of CL's popularity in the world at large?

You have obviously missed a huge amount of context, as you are seriously
confused about what I am trying to do and why I am trying to do it.  And I
don't feel like reiterating it again now.  If you want to know, Google is
your friend.

E.
From: Don Geddis
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <87add4kz5i.fsf@sidious.geddis.org>
> > ···@jpl.nasa.gov (Erann Gat) writes:
> > > Common Lisp did not become popular ten years ago when its technical
> > > advantages over the competition were truly overwhelming.  Since then,
> > > Common Lisp is unchanged while other languages have evolved, which has
> > > eroded Common Lisp's technical lead.  Therefore, I see no reason to
> > > expect that Common Lisp's popularity will start to increase because I see
> > > no mechanism that will bring this about.

I wrote:
> > Did you ever consider that market power may be only loosely correlated with
> > technical merit, and thus all your attempts to "fix" the core of CL will do
> > nothing to address your real concern of CL's popularity in the world at
> > large?

···@jpl.nasa.gov (Erann Gat) writes:
> You have obviously missed a huge amount of context, as you are seriously
> confused about what I am trying to do and why I am trying to do it.  And I
> don't feel like reiterating it again now.  If you want to know, Google is
> your friend.

Well, you do post a lot, and I'll admit that I may have missed some of them.
But on the other hand, I have read a large number of your postings.  You seem
to have consistently stated:

1. You wish that CL were more popular in the world, because then you would be
able to use it at work, whereas your managers prohibit it now because it isn't
common enough.

2. You believe that parts of CL design are complicated/confusing, and this
is a barrier to new people learning and embracing the language.

3. Thus you have proposed "fixes" to the CL standard, which would make it more
accessible to newbies.  In addition, you have a meta-complaint that the CL
community appears to be against any change to the ANSI standard.  Thus you
wonder how to ever fix the problems you see in #2.

Does that characterize your position reasonably well?

If so, let me re-iterate my question: even accepting, for the sake of argument,
that ANSI CL is unnecessarily confusing for newcomers (debatable, but let's
continue), I think you've made a huge leap to believe that anything about the
design of the language has a significant correlation with its global
popularity.

If you look at some of the more recent popular languages (C, Java, Perl), it's
tough to see how the specifics of their design had much to do with their market
penetration.  C rode a Unix and then Microsoft wave; Java was pushed by Sun and
grabbed a bit of the web & browser wave; and Perl started as a glue scripting
language for non-programmer sys admins, with tight connections to the host OS.

It seems a pretty weak argument to suggest that technical features of the
languages (such as that Java has garbage collection) have much to do with their
market penetration.

Hence I wonder, since your real concern seems to be Lisp's popularity, why
you are so worried about technical features in the core of the language?
Surely those decisions, however they turn out, are at best minor third-order
effects on the overall market popularity of the language.  If you care about
the popularity, why don't you spend more of your effort on the non-technical
issues that more directly affect popularity?

        -- Don
_______________________________________________________________________________
Don Geddis                    http://don.geddis.org              ···@geddis.org
I hope life isn't a big joke, because I don't get it.
	-- Deep Thoughts, by Jack Handey [1999]
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-3005031426210001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@sidious.geddis.org>, Don Geddis
<···@geddis.org> wrote:

[xnip]

> Does that characterize your position reasonably well?

Mostly.  I'd put a different spin on some things, but that's a good first
order approximation.

[xnip] 

> Hence I wonder, since your real concern seems to be Lisp's popularity, why
> you are so worried about technical features in the core of the language?

Because I don't have millions of dollars to put into a marketing campaign,
so I do what I can with what I have.  One of the most effective marketing
strategies is to make a change in your product -- any change -- and then
sell that as something "new".  When it comes to marketing, new is always
good.

> Surely those decisions, however they turn out, are at best minor third-order
> effects on the overall market popularity of the language.

Maybe, maybe not.  I don't think we have nearly enough data to draw
definitive conclusions about what does and does not make a language
popular in the long run.  Besides, when something is on the cusp, even
minor third-order effects can make the difference between success and
failure.

>  If you care about
> the popularity, why don't you spend more of your effort on the non-technical
> issues that more directly affect popularity?

Like what?  I'm always open to suggestions.

E.
From: Wade Humeniuk
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <zzQBa.2903$MM4.63268@news0.telusplanet.net>
"Erann Gat" <···@jpl.nasa.gov> wrote in message
·························@k-137-79-50-101.jpl.nasa.gov...
>
> >  If you care about
> > the popularity, why don't you spend more of your effort on the non-technical
> > issues that more directly affect popularity?
>
> Like what?  I'm always open to suggestions.

You said it just before this, Marketing, Advertising.  One may hate it
but it works.

Wade
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-3005031525300001@k-137-79-50-101.jpl.nasa.gov>
In article <····················@news0.telusplanet.net>, "Wade Humeniuk"
<····@nospam.nowhere> wrote:

> "Erann Gat" <···@jpl.nasa.gov> wrote in message
> ·························@k-137-79-50-101.jpl.nasa.gov...
> >
> > >  If you care about
> > > the popularity, why don't you spend more of your effort on the
non-technical
> > > issues that more directly affect popularity?
> >
> > Like what?  I'm always open to suggestions.
> 
> You said it just before this, Marketing, Advertising.  One may hate it
> but it works.

What exactly are you recommending I do?  Hire an advertising agency to put
together a campaign?  With what resources?  To deliver what message?

E.
From: Wade Humeniuk
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <DZTBa.704$QF2.142810@news2.telusplanet.net>
"Erann Gat" <···@jpl.nasa.gov> wrote in message
·························@k-137-79-50-101.jpl.nasa.gov...
> > You said it just before this, Marketing, Advertising.  One may hate it
> > but it works.
>
> What exactly are you recommending I do?  Hire an advertising agency to put
> together a campaign?  With what resources?  To deliver what message?

I do not know for sure, who does? It's not like CL is Coca-Cola.

A few low-cost (and maybe a little sneaky) suggestions.

1) Leave Lisp literature sitting around.  Around your desk, in the
coffee room, maybe even the washroom.  The idea that someone
is using it, evidenced by the written word is very powerful.  If someone
asks who it is, say nothing.  Just let natural human curiosity
do its thing.  The one real thing that got me interested in trying Lisp
was an ACM special magazine edition about Lisp. (about 12 years ago?)

As a corollary, leave literature around lamenting the state of the
software development art.

2) Bring an old Lisp machine in, set it up somewhere and leave it on.
Before I really knew Lisp, a co-worker brought a Symbolics LispM
into his office.  I was intrigued that such a modern looking system
existed so long ago.  He also hooked it up to DEC Net and interfaced
with the systems.  My one thought was, wow, it was way ahead of its
time.  Or maybe you have an old DEC alpha machine that has the
Symbolics emulator on it.  In a place like JPL there must be surplus
scrap like that sitting around.

3) Put a big poster up in your office with a ton of Lisp (useful and
not-useful) code for the occasional visitor to see out of the corner
of their eye.  Just put it there, do not point it out to your visitors,
no pressure sales technique.

Wade
From: Duane Rettig
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <4llwnc0xb.fsf@beta.franz.com>
"Wade Humeniuk" <····@nospam.nowhere> writes:

> "Erann Gat" <···@jpl.nasa.gov> wrote in message
> ·························@k-137-79-50-101.jpl.nasa.gov...
> > > You said it just before this, Marketing, Advertising.  One may hate it
> > > but it works.
> >
> > What exactly are you recommending I do?  Hire an advertising agency to put
> > together a campaign?  With what resources?  To deliver what message?
> 
> I do not know for sure, who does? Its not like CL is Coca-Cola.

Interesting you should mention Coca-Cola - isn't that the company that
thought they could fix sagging sales by technical adjustments to their
product?  It turned out to be such a fiasco that they had to re-market
their old, unchanged product as "Classic".

Disclaimer: from the mind of a Pepsi-One drinker.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-3105030005350001@192.168.1.51>
In article <·············@beta.franz.com>, Duane Rettig <·····@franz.com> wrote:

> "Wade Humeniuk" <····@nospam.nowhere> writes:
> 
> > "Erann Gat" <···@jpl.nasa.gov> wrote in message
> > ·························@k-137-79-50-101.jpl.nasa.gov...
> > > > You said it just before this, Marketing, Advertising.  One may hate it
> > > > but it works.
> > >
> > > What exactly are you recommending I do?  Hire an advertising agency to put
> > > together a campaign?  With what resources?  To deliver what message?
> > 
> > I do not know for sure, who does? Its not like CL is Coca-Cola.
> 
> Interesting you should mention Coca-Cola - isn't that the company that
> thought they could fix sagging sales by technical adjustments to their
> product?  It turned out to be such a fiasco that they had to re-market
> their old, unchanged product as "Classic".

If Lisp had as much market share as Coke did when they introduced New Coke
I might say that you had a point.  But we're not Coke, we're RC.  RC
happens to be my favorite cola, but I don't drink RC any more, I drink
Coke, because you can't find RC any more.  If RC had changed its formula
would things have been different?  We'll never know.

BTW, as you say, Coke's sales were sagging before the introduction of New
Coke.  It is entirely possible that the New Coke fiasco (and it was a
fiasco) did in fact save the brand simply by drawing attention to it.

E.
From: sv0f
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <none-185B88.15073731052003@news.vanderbilt.edu>
In article <····················@192.168.1.51>,
 ···@jpl.nasa.gov (Erann Gat) wrote:

>RC
>happens to be my favorite cola, but I don't drink RC any more, I drink
>Coke, because you can't find RC any more.

I liked RC best as a kid.  The underdog cola.[1]

I like Lisp best as an adult.  The underdog programming language.

I worry sometimes that I'm attracted to underdogs not because of
their underappreciated superiority, but because I then feel
better differentiated from the mob.


[1] I should mention that I live in Nashville.[2]  A small
town not too far away from here -- Bell Buckle -- hosts a
Moon Pie festival every summer, around the fourth of July.
For those who do not know, Moon Pies are baked goods in
the Hostess tradition, but slightly different: more
chemically, sometimes banana-flavored, etc.

The official drink of the festival is RC cola.  So imagine:
You're in a quaint southern small town, it's 100 degrees
with 99% humidity, your mouth is caked with Moon Pie
granules and powdered sugar from that funnel cake you
shouldn't have eaten, and some kind stranger hands you an
ice cold RC.  A small slice of heaven.

[2] If you too are a middle Tennessee Lisper, pipe up
and we'll plan a Lisp-neck meeting 'round here.
From: Andreas Eder
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <m37k87nnof.fsf@elgin.eder.de>
···@jpl.nasa.gov (Erann Gat) writes:

> If Lisp had as much market share as Coke did when they introduced New Coke
> I might say that you had a point.  But we're not Coke, we're RC.  RC
> happens to be my favorite cola, but I don't drink RC any more, I drink
> Coke, because you can't find RC any more.  If RC had changed its formula
> would things have been different?  We'll never know.

But if RC had changed its formula and won a big share of the market
you wouldn't be any better off, because it wouldn't be the RC that you
know and love. In all likelihood it would taste just like the other
crap you put up with today.

Andreas
-- 
Wherever I lay my .emacs, there's my $HOME.
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-3105031015080001@192.168.1.51>
In article <··············@elgin.eder.de>, Andreas Eder
<············@t-online.de> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > If Lisp had as much market share as Coke did when they introduced New Coke
> > I might say that you had a point.  But we're not Coke, we're RC.  RC
> > happens to be my favorite cola, but I don't drink RC any more, I drink
> > Coke, because you can't find RC any more.  If RC had changed its formula
> > would things have been different?  We'll never know.
> 
> But if RC had changed its formula and won a big share of the market
> you wouldn't be any better off, because it wouldn't be the RC that you
> know and love. In all likelyhood it would taste just like the other
> crap you put up with today.

That's a good point, but fortunately programming languages are (at least
potentially) different from cola in this regard.

E.
From: Pascal Costanza
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <costanza-65C6A9.19590731052003@news.netcologne.de>
In article <··············@elgin.eder.de>,
 Andreas Eder <············@t-online.de> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > If Lisp had as much market share as Coke did when they introduced New Coke
> > I might say that you had a point.  But we're not Coke, we're RC.  RC
> > happens to be my favorite cola, but I don't drink RC any more, I drink
> > Coke, because you can't find RC any more.  If RC had changed its formula
> > would things have been different?  We'll never know.
> 
> But if RC had changed its formula and won a big share of the market
> you wouldn't be any better off, because it wouldn't be the RC that you
> know and love. In all likelyhood it would taste just like the other
> crap you put up with today.
> 

Well, after a while they could have additionally offered RC classic. ;)

Please: No one in this thread actually promotes the notion to turn 
Common Lisp into crap!


Pascal
From: Christopher C. Stacy
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <uisrrnenw.fsf@dtpq.com>
>>>>> On Sat, 31 May 2003 19:59:08 +0200, Pascal Costanza ("Pascal") writes:
 Pascal> Please: No one in this thread actually promotes the notion to turn 
 Pascal> Common Lisp into crap!

Common Crap?
From: Gareth McCaughan
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <slrnbdh9jq.fac.Gareth.McCaughan@g.local>
Duane Rettig wrote:

> Interesting you should mention Coca-Cola - isn't that the company that
> thought they could fix sagging sales by technical adjustments to their
> product?  It turned out to be such a fiasco that they had to re-market
> their old, unchanged product as "Classic".
> 
> Disclaimer: from the mind of a Pepsi-One drinker.

Classic Erik Naggum rant[1] on the subject: <················@naggum.net>.
(From comp.lang.lisp, 2001-08-27.)


[1] In case it isn't obvious: this is not a pejorative term.

-- 
Gareth McCaughan  ················@pobox.com
.sig under construc
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-3105030902580001@192.168.1.51>
In article <·······························@g.local>,
················@pobox.com wrote:

> Duane Rettig wrote:
> 
> > Interesting you should mention Coca-Cola - isn't that the company that
> > thought they could fix sagging sales by technical adjustments to their
> > product?  It turned out to be such a fiasco that they had to re-market
> > their old, unchanged product as "Classic".
> > 
> > Disclaimer: from the mind of a Pepsi-One drinker.
> 
> Classic Erik Naggum rant[1] on the subject: <················@naggum.net>.
> (From comp.lang.lisp, 2001-08-27.)
> 
> 
> [1] In case it isn't obvious: this is not a pejorative term.

I think Erik makes some very good points here, however:

>   Microsoft has won exactly nothing.

I wouldn't mind having a piece of that kind of nothing.

E.
From: Wade Humeniuk
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <4HJCa.3885$QF2.979194@news2.telusplanet.net>
"Duane Rettig" <·····@franz.com> wrote in message ··················@beta.franz.com...
> Interesting you should mention Coca-Cola - isn't that the company that
> thought they could fix sagging sales by technical adjustments to their
> product?  It turned out to be such a fiasco that they had to re-market
> their old, unchanged product as "Classic".

Yes, indeed.  A technical adjustment (to reduce production costs) and
a marketing campaign to mitigate the reaction to the taste change.
Huge emotional response, partly rational and mostly irrational.  But
of course, through marketing Coca-Cola came back (who knows 
if it ever really suffered?).

Why is one person a Coke drinker and another Pepsi-One? I guess
mostly aesthetics and identification with a certain image.

Wade
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-3005032356250001@192.168.1.51>
In article <····················@news2.telusplanet.net>, "Wade Humeniuk"
<····@nospam.nowhere> wrote:

> "Erann Gat" <···@jpl.nasa.gov> wrote in message
> ·························@k-137-79-50-101.jpl.nasa.gov...
> > > You said it just before this, Marketing, Advertising.  One may hate it
> > > but it works.
> >
> > What exactly are you recommending I do?  Hire an advertising agency to put
> > together a campaign?  With what resources?  To deliver what message?
> 
> I do not know for sure, who does? Its not like CL is Coca-Cola.
> 
> A few low-cost (and maybe a little sneaky) suggestions.
> 
> 1) Leave Lisp literature sitting around.  Around your desk, in the
> coffee room, maybe even the washroom.  The idea that someone
> is using it, evidenced by the written word is very powerful.  If someone
> asks who it is, say nothing.  Just let natural human curiousity
> do its thing.  The one real thing that got me interested in trying Lisp
> was a ACM special magazine edition about Lisp. (about 12 years ago?)
> 
> As a collorary leave literature around lamenting the state of the
> software development art.
> 
> 2) Bring an old Lisp machine in, set it up somewhere and leave it on.
> Before I really knew Lisp a Co-worker brought a Symbolics LispM
> into his office.  I was intrigued that such a modern looking system
> existed so long ago.  He also hooked it up to DEC Net and interfaced
> with the systems.  My one thought was, wow, it was way ahead of its
> time.  Or maybe you have an old DEC alpha machine that has the
> Symbolics emulator on it.  In a place like JPL there must be surplus
> scrap like that sitting around.
> 
> 3) Put a big poster up in your office with a ton of Lisp (useful and
> not-useful) code for the occasional visitor to see out of the corner
> of their eye.  Just put it there, do not point it out to your visitors,
> no pressure sales technique.
> 
> Wade

All of these suggestions would only reach an audience of JPLers, which
wouldn't do any good at all.  I doubt there's a single person at JPL who
doesn't know that I think Lisp is the greatest thing since sliced bread. 
In fact, it's a pretty serious problem.  It happens to me fairly often now
that people will say to me, "We don't need to ask you your opinion, we
already know what you think."  (Meaning that they think I'm going to
suggest using Lisp.)  And this happens to me pretty much regardless of
what the actual topic of conversation is.

Besides, even if this were not the case it wouldn't help achieve what I
want to achieve, which is to increase the use of Lisp *outside* of JPL
(which is, at this point, I believe, the only way to increase its use
inside of JPL).

E.
From: Kenny Tilton
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <3ED8D9D1.5080804@nyc.rr.com>
Erann Gat wrote:
> Besides, even if this were not the case it wouldn't help achieve what I
> want to achieve, which is to increase the use of Lisp *outside* of JPL
> (which is, at this point, I believe, the only way to increase its use
> inside of JPL).

Yep, sounds like their minds are made up, and they have explained away 
your advocacy as blind fanaticism.

Forget about big, conservative organizations. Watch for a grass roots 
movement starting with early adopters. How can that be encouraged? To a 
degree it has to happen organically, with individuals finding their way 
to Lisp, each in their own way (but all for the same reason: 
dissatisfaction with existing languages). The good news is that is 
already happening and the outcome is inevitable. The bad news is your 
planned retirement date is when? Especially since you are working in a 
big phat organization, and those are the last adopters.

The hopeful news is that languages are reigning supreme for shorter and 
shorter periods, and the new champions are more and more like Lisp all 
the time; Norvig's bit on Python being as Lispy as Lisp (ha!) cuts both 
ways.

To the extent the grassroots thing /can/ be encouraged: The Lisp vendors 
did their bit with free trials (well, MCL has the $99 OS9-only version). 
Ray De Lacaze is single-handedly changing the landscape by arranging 
killer ILC conferences. Attend those.[1] They create this delightful 
illusion that Lisp is being used a lot, and the non-illusion of having a 
great time hanging out with other lispniks. ILC2002 caused the Lisp 
Social Club phenomenon, which encourages the faithful and gives newbies 
a place to be nurtured.

What can Erann do? Nothing, you have pitched eight innings of shutout 
ball but your team has not scored and your pitch count is maxed out. 
We're bringing in a fresh arm. What you might do is stop walking up and 
down the bench berating the hitters for not scoring. Hit the showers and 
get some ice on that shoulder.

When you feel strong again, flip the sign on your attitude and go piss 
off comp.lang.python with positive messages about CL.[2] You'll get 
reamed over there, too, but while many are called, few are chosen, some 
will come look at CL.

While you are resting up, read up on the Shackleton voyage.

    http://www.pbs.org/wgbh/nova/shackleton/

Four of the men map nicely onto lispniks.

I'll cast you as Thomas Ordes-Lee. While Shackleton fought to keep the 
men's spirits high, and group ethic was to stay positive, Ordes-Lee 
played Chicken Little. He kept a doom and gloom diary which was 
invaluable to historians but did nothing to improve the situation. As 
storekeeper he miserably and endlessly ran calculations which proved 
they were all doomed, hence was the most unpopular guy on the crew.

Ray De Lacaze gets the plum part, Ernest Shackleton, who through sheer 
force of will proved Ordes-Lee wrong and brought every man home alive. 
When reality was against him, he changed it.

I forget the gent's name, but he dropped off the ALU extended board, 
eschewing direct advocacy for the implicit advocacy of doing great 
things with Lisp, in bioinformatics. He can play the part of Frank 
Hurley, the expedition photographer who just went ahead and photographed 
the, um, expedition. The incredible photos of (among other things) the 
ice-bound Endurance allowed Shackleton to do a lecture tour and recoup 
some of the investors' capital.

I guess my interminable cheerleading has me against all odds playing the 
part of seaman Timothy McCarthy (nice last name, that):  "He is the most 
irrepressible optimist I've ever met," Worsley wrote. "When I relieve 
him at the helm, boat iced and seas pouring down your neck, he informs 
me with a happy grin, `It's a grand day, sir.'"

I'd actually rather play one of the sled dogs, but they get euthanized 
when the ice thaws and the crew has to hit the lifeboats.

[1] But don't give a "woe be us" talk this time.

[2] "Pick a fight", said Richard Gabriel at the same ILUG '99.


-- 

  kenny tilton
  clinisys, inc
  http://www.tilton-technology.com/
  ---------------------------------------------------------------
"Everything is a cell." -- Alan Kay
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-3105031314330001@192.168.1.51>
In article <················@nyc.rr.com>, Kenny Tilton
<·······@nyc.rr.com> wrote:

> Erann Gat wrote:
> > Besides, even if this were not the case it wouldn't help achieve what I
> > want to achieve, which is to increase the use of Lisp *outside* of JPL
> > (which is, at this point, I believe, the only way to increase its use
> > inside of JPL).
> 
> Yep, sounds like their minds are made up, and they have explained away 
> your advocacy as blind fanaticism.

That's right.  Among the many ironies in my life is the fact that some
people think I am a blind fanatic in love with Common Lisp, while others
(mainly in this N.G.) think I hate it.  Both groups can't be right.


> What can Erann do? Nothing, you have pitched eight innings of shutout 
> ball

Heh, not bad for a third-string relief pitcher who never expected to make
it to the major leagues.


> We're bringing in a fresh arm.

That's great.   Who is he (or she)?


> When you feel strong again, flip the sign on your attitude and go piss 
> of comp.lang.python with positive messages about CL.

Been there.  Done that.  (Still am, actually, it's just that I'm not doing
it quite as visibly as before because that seemed to backfire on me.)

> I'll cast you as Thomas Ordes-Lee. While Shackleton fought to keep the 
> men's spirits high, and group ethic was to stay positive, Ordes-Lee 
> played Chicken Little. He kept a doom and gloom diary which was 
> invaluable to historians but did nothing to improve the situation.

<sarcasm>Thanks.</sarcasm>  Will you not even concede, even if you
disagree with my methods, that I am at least trying in my own way to
improve the situation?

> Ray De Lacaze gets the plum part, Ernest Shackleton

Question for any newcomers who may still be following this thread: how
many of you would say that you have been positively influenced towards
Common Lisp by Ray de Lacaze?

> [2] "Pick a fight", said Richard Gabriel at the same ILUG '99.

Isn't that what I was doing?

E.
From: Marc Spitzer
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <861xye6c69.fsf@bogomips.optonline.net>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <················@nyc.rr.com>, Kenny Tilton
> <·······@nyc.rr.com> wrote:
> 
> > Erann Gat wrote:
> > > Besides, even if this were not the case it wouldn't help achieve what I
> > > want to achieve, which is to increase the use of Lisp *outside* of JPL
> > > (which is, at this point, I believe, the only way to increase its use
> > > inside of JPL).
> > 
> > Yep, sounds like their minds are made up, and they have explained away 
> > your advocacy as blind fanaticism.
> 
> That's right.  Among the many ironies in my life is the fact that some
> people think I am a blind fanatic in love with Common Lisp, while others
> (mainly in this N.G.) think I hate it.  Both groups can't be right.

Sure they can: from your coworkers' POV you love something that looks
like CL, to the point where they cannot tell the difference.  And 
people here can tell the difference.  Remember that Scheme and CL are
damn near identical to C/C++/Java programmers, and your changes are
closer to CL than Scheme is. 

> > I'll cast you as Thomas Ordes-Lee. While Shackleton fought to keep the 
> > men's spirits high, and group ethic was to stay positive, Ordes-Lee 
> > played Chicken Little. He kept a doom and gloom diary which was 
> > invaluable to historians but did nothing to improve the situation.
> 
> <sarcasm>Thanks.</sarcasm>  Will you not even concede, even if you
> disagree with my methods, that I am at least trying in my own way to
> improve the situation?

From what I have seen in this thread, I must say not in a meaningful 
way.  What you have proposed, revising the standard, is so risky 
that it is hard to take it as a serious constructive comment. 
Especially since you are proposing it for minor details.

> 
> > Ray De Lacaze gets the plum part, Ernest Shackleton
> 
> Question for any newcomers who may still be following this thread: how
> many of you would say that you have been positively influenced towards
> Common Lisp by Ray de Lacaze?

petty, very petty.

marc
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-3105032127380001@192.168.1.51>
In article <··············@bogomips.optonline.net>, Marc Spitzer
<········@optonline.net> wrote:

> What you have proposed, revising the standard, is so risky 
> that it is hard to take it as a serious constructive comment. 
> Especialy since you are proposing it for minor details.

You still don't seem to understand what I was trying to do.

Please distinguish between advocating change in the standard and
advocating particular changes in the standard.  I was advocating the
first, and doing the second only to take the discussion out of the
abstract, to use minor details (which from the standpoint of pedagogy are
actually not so minor IMHO) as test cases to ground the discussion.

Now, you may believe that revising the standard is so risky that it should
not even be considered.  I understand that point of view, but I happen not
to agree with it.  I believe that 1) risk is not necessarily a bad thing
-- there is no reward without risk, and 2) we have a lot of historical
data on the results of keeping the standard unchanged.  It's pretty good
from the point of view of keeping user costs down, and pretty poor in
terms of market penetration.  That latter item has come to affect me
directly, and I don't think I'm alone.  I see a lot of people saying, "I
wish I could get a job programming in Common Lisp."  Well, I wish I could
give them one.  I used to be able to.  Now I can't.  And I'm trying to do
something about it.

(Just so there's no misunderstanding, I am no longer pushing for changes
in the standard, preferring instead now to wait for Kent to publish his
sub-standard process, which I am optimistic will address my concerns.  I
am only rehashing all this because I want to set the record straight about
my motives.)

> > > Ray De Lacaze gets the plum part, Ernest Shackleton
> > 
> > Question for any newcomers who may still be following this thread: how
> > many of you would say that you have been positively influenced towards
> > Common Lisp by Ray de Lacaze?
> 
> petty, very petty.

Petty?  Why?  That was a serious question.

I had never heard of Ray before, so I looked him up to try to figure out
what he had done that would move Kenny to cite him as a role model on a
par with Ernest Shackleton.  The only thing I could find was that he'd
attended some standards meetings.  Since the topic at hand was recruiting
new people to CL I thought I'd go straight to the source.  Why do you have
a problem with that?

E.
From: Marc Spitzer
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <86wug64aka.fsf@bogomips.optonline.net>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <··············@bogomips.optonline.net>, Marc Spitzer
> <········@optonline.net> wrote:
> 
> > What you have proposed, revising the standard, is so risky 
> > that it is hard to take it as a serious constructive comment. 
> > Especialy since you are proposing it for minor details.
> 
> You still don't seem to understand what I was trying to do.

That is true

> 
> Please distinguish between advocating change in the standard and
> advocating particular changes in the standard.  I was advocating the
> first, and doing the second only to take the discussion out of the
> abstract, to use minor details (which from the standpoint of pedagogy are
> actually not so minor IMHO) as test cases to ground the discussion.

There is no need, since both are done by opening up the *entire* standard
to *absolutely any change* the committee votes on and passes.  Furthermore,
you have not presented a compelling reason to even risk throwing the baby 
out with the bath water.  

> 
> Now, you may believe that revising the standard is so risky that it should
> not even be considered.  I understand that point of view, but I happen not
> to agree with it.  I believe that 1) risk is not necessarily a bad thing
> -- there is no reward without risk, and 2) we have a lot of historical

In CL there are many places that offer a much greater reward for
much less risk.  The area of layered standards comes to mind, as does
network connectivity, and there is interest in moving these forward.

> data on the results of keeping the standard unchanged.  It's pretty good
> from the point of view of keeping user costs down, and pretty poor in
> terms of market penetration.  That latter item has come to affect me
> directly, and I don't think I'm alone.  I see a lot of people saying, "I
> wish I could get a job programming in Common Lisp."  Well, I wish I could
> give them one.  I used to be able to.  Now I can't.  And I'm trying to do
> something about it.

Well, if you want to help out, why not do so in areas the community
is trying to change for the better?  Your ideas about what to change
are just not being bought into by people here, from what I have seen.
So why not work on other areas to help things improve, standardized
sockets for example?  The lack of a standard socket API is a bigger
PR problem than some ugly, or even (if you want to call it that) stupid, 
syntax for special variables.  There are currently some real functional
gaps in standard CL, why not work on those with *layered* standards?

> 
> (Just so there's no misunderstanding, I am no longer pushing for changes
> in the standard, preferring instead now to wait for Kent to publish his
> sub-standard process, which I am optimistic will address my concerns.  I
> am only rehashing all this because I want to set the record straight about
> my motives.)

That's good.

> 
> > > > Ray De Lacaze gets the plum part, Ernest Shackleton
> > > 
> > > Question for any newcomers who may still be following this thread: how
> > > many of you would say that you have been positively influenced towards
> > > Common Lisp by Ray de Lacaze?
> > 
> > petty, very petty.
> 
> Petty?  Why?  That was a serious question.

Because you implied he has done nothing to deserve the "lead" role.  
Since you admit that you do not know what he has done (see your stuff
below), you should not imply that he has done nothing, or even less
than you have, as you did in the wording of your question.  It was
a loaded question.

> 
> I had never heard of Ray before, so I looked him up to try to figure out
> what he had done that would move Kenny to cite him as a role model on a
> par with Ernest Shackleton.  The only thing I could find was that he'd
> attended some standards meetings.  Since the topic at hand was recruiting
> new people to CL I thought I'd go straight to the source.  Why do you have
> a problem with that?

Well he is the current president of the ALU and is organizing the 2003 
conference, among other things.

marc
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-3105032334200001@192.168.1.51>
In article <··············@bogomips.optonline.net>, Marc Spitzer
<········@optonline.net> wrote:

> > > > > Ray De Lacaze gets the plum part, Ernest Shackleton
> > > > 
> > > > Question for any newcomers who may still be following this thread: how
> > > > many of you would say that you have been positively influenced towards
> > > > Common Lisp by Ray de Lacaze?
> > > 
> > > petty, very petty.
> > 
> > Petty?  Why?  That was a serious question.
> 
> Because you implied he has done nothing to deserve the "lead" role.

No, I implied that he hadn't had much direct influence on newcomers.

You're such an Erik Naggum fan, you seem to have forgotten that he was
constantly exhorting people to pay attention to context.  The whole
context of this conversation is about newcomers.
  
God, I am getting so tired of this.

E.
From: Marc Spitzer
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <86smqu45oi.fsf@bogomips.optonline.net>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <··············@bogomips.optonline.net>, Marc Spitzer
> <········@optonline.net> wrote:
> 
> > > > > > Ray De Lacaze gets the plum part, Ernest Shackleton
> > > > > 
> > > > > Question for any newcomers who may still be following this thread: how
> > > > > many of you would say that you have been positively influenced towards
> > > > > Common Lisp by Ray de Lacaze?
> > > > 
> > > > petty, very petty.
> > > 
> > > Petty?  Why?  That was a serious question.
> > 
> > Because you implied he has done nothing to deserve the "lead" role.
> 
> No, I implied that he hadn't had much direct influence on newcomers.
> 
> You're such an Erik Naggum fan, you seem to have forgotten that he was
> constantly exhorting people to pay attention to context.  The whole
> context of this conversation is about newcomers.

Part of the context is the way you worded the question.

>   
> God, I am getting so tired of this.
> 

so stop

marc
From: Kenny Tilton
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <3ED9E377.6020200@nyc.rr.com>
Erann Gat wrote:
> In article <··············@bogomips.optonline.net>, Marc Spitzer
> <········@optonline.net> wrote:
> 
> 
>>>>>>Ray De Lacaze gets the plum part, Ernest Shackleton
>>>>>
>>>>>Question for any newcomers who may still be following this thread: how
>>>>>many of you would say that you have been positively influenced towards
>>>>>Common Lisp by Ray de Lacaze?
>>>>
>>>>petty, very petty.
>>>
>>>Petty?  Why?  That was a serious question.
>>
>>Because you implied he has done nothing to deserve the "lead" role.
> 
> 
> No, I implied that he hadn't had much direct influence on newcomers.

Right, and that is a classic straw man. In the passage you failed to 
absorb, I said grassroots movements pretty much have to happen on their 
own, then said that possible ways to at least encourage the grass 
included the tremendous energy Ray puts into the ILCs. Since you read so 
hastily, you might not be aware I in turn credit the ILCs with the 
popping up of local lispnik social clubs, and if that is not grassroots, 
I do not think you know the word.

-- 

  kenny tilton
  clinisys, inc
  http://www.tilton-technology.com/
  ---------------------------------------------------------------
"Everything is a cell." -- Alan Kay
From: Don Geddis
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <874r39lnru.fsf@sidious.geddis.org>
···@jpl.nasa.gov (Erann Gat) writes:
> 2) we have a lot of historical data on the results of keeping the standard
> unchanged.  It's [...] pretty poor in terms of market penetration.

You keep asserting this as though it were accepted fact, but I doubt many
other people believe it, and you haven't yet tried to support it at all.

Yes, the standard has been fixed for some years.  Yes, Lisp has poor market
penetration during that time.

Correlation does not imply causation.  I think people here appear so
antagonistic to your suggestions because they are pretty sure that a change
to the standard would have minimal impact on Lisp's market popularity.

Meanwhile, it would also have well-understood negative impact on the existing
community: a drain on vendor resources to fix their implementations, breaking
most existing and already well-debugged code, etc.

I wish you would spend more of your effort on a much more straightforward:
"Hey, I'd like Lisp to be more popular.  Anyone agree with me?  What do you
think we should do?"  (Yes, I saw that you tried this once recently.  But
the amount of your effort spent here is dwarfed by your attempts to change
the standard.)

Moreover, you seem to interpret resistance to standards change as not caring
that Lisp is unpopular.  Can't you see that others don't accept your
implied connection, and so might be wildly enthusiastic about your goal of
making Lisp more popular, while at the same time against your plan to change
the standard?  You're creating new opponents out of existing allies!

        -- Don
_______________________________________________________________________________
Don Geddis                    http://don.geddis.org              ···@geddis.org
If I were meta-agnostic, I'd be confused over whether I'm agnostic or not---but
I'm not quite sure if I feel that way; hence I must be meta-meta-agnostic (I
guess).  -- Douglas R. Hofstadter, _Godel, Escher, Bach_
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-0106031650480001@192.168.1.51>
In article <··············@sidious.geddis.org>, Don Geddis
<···@geddis.org> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> > 2) we have a lot of historical data on the results of keeping the standard
> > unchanged.  It's [...] pretty poor in terms of market penetration.
> 
> You keep asserting this as though it were accepted fact, but I doubt many
> other people believe it, and you haven't yet tried to support it at all.

That's not true.  I have posted supporting arguments for this position in
the past.  However, you yourself concede the point in your very next
statement:

> Yes, the standard has been fixed for some years.  Yes, Lisp has poor market
> penetration during that time.

That's all I'm saying.

> Correlation does not imply causation.

True, and I do not claim that the standard being static is the cause of
Lisp's poor market penetration.  Nor do I claim that changing the standard
will necessarily help Lisp's market penetration.

All I am saying is that I have yet to see a convincing argument for why we
should expect Lisp's market penetration to change without some kind of
significant change.  These are not necessarily changes in the standard,
but something qualitatively different from what we have had in the last
decade.

(Notwithstanding the above, I am at the moment deferring to Kent Pitman's
judgement on this, partly because I have a lot of faith in Kent, and
partly out of sheer exhaustion.)

> I think people here appear so
> antagonistic to your suggestions because they are pretty sure that a change
> to the standard would have minimal impact on Lisp's market popularity.

Yes, simply changing the standard for the sake of changing the standard
wouldn't help.  But I never advocated that.

> Meanwhile, it would also have well-understood negative impact on the existing
> community: a drain on vendor resources to fix their implementations, breaking
> most existing and already well-debugged code, etc.

That depends on the kind of change.  There are kinds of changes that don't
have these negative effects.

> I wish you would spend more of your effort on a much more straightforward:
> "Hey, I'd like Lisp to be more popular.  Anyone agree with me?  What do you
> think we should do?"  (Yes, I saw that you tried this once recently.  But
> the amount of your effort spent here is dwarfed by your attempts to change
> the standard.)

I've actually spent very little effort trying to change the standard. 
(Certainly very little compared to the amount of effort it would actually
take to change the standard.)  Most of my effort lately has been directed
towards clarifying my position against misunderstandings and
misrepresentation.

> Moreover, you seem to interpret resistance to standards change with not caring
> that Lisp is unpopular.

Not at all.  There is a significant faction (perhaps even a majority) in
the community who say outright that they don't want Lisp to be mainstream,
that they want it to remain the province of an elite group of experts,
that they want to "keep out the Ilias's".  I interpret most of the
resistance to changing the standard to simply being a reasonable position
in service of this goal.  I do not agree with this goal.  I think that the
cost of "keeping out the Ilias's" is higher than the cost of inviting them
in.

> Can't you see that others don't accept your
> implied connection, and so might be wildly enthusiastic about your goal of
> making Lisp more popular, while at the same time against your plan to change
> the standard?  You're creating new opponents out of existing allies!

I do not and never did have a "plan" to change the standard.  I had a
(half-baked) plan to try to lower what seemed to me to be a barrier to
entry to help make Lisp more accessible to newcomers.  A small and
noncritical part of that plan entailed a small change to the standard.  It
was always intended to be exactly what I called it when I first presented
it: a modest proposal.

The original discussion (three years ago) got bogged down on the question
of whether the issue I was addressing was really an issue at all.  The
only evidence I had was the recurrent questions from apparently confused
newcomers about special variables, bindings, how makunbound works, etc.
etc. (and my own experiences in trying to figure all this out).  But it
was at the time not all that important to me, so I just dropped it.

The issue presented itself again in the recent thread on what makunbound
does.  By then the issue of making Lisp more popular had taken on a new
urgency for me because of my own situation, but I still didn't "push" the
proposal, I merely reminded people that it had once been made.

Kent Pitman responded with a rather scathing review, and suggested that I
personally was responsible for Lisp being as unpopular as it is.

The conversation degenerated rapidly from there.

The whole issue about the standard was a tangent that came up because Kent
started talking about all the horrible things that might happen if the
standard were changed.  I thought this was a non-sequitur because nearly
all of the horrible things were the result of making
non-backwards-compatible changes, and the changes I was proposing were
backwards-compatible.  So I argued with him not because I disagreed with
what he was saying, but merely because I thought what he was saying didn't
apply.  All of a sudden everyone thought I had this agenda to change the
standard come hell or high water.  Much of what I've written since then
(when I've been lucid) has been an effort to dispel this myth.

I still think that the standard could be changed in ways that would be a
net win, but this is a minor side issue for me, and has long since passed
the point of diminishing returns (as evidenced by the fact that I have
long since withdrawn the original proposal).  I am no longer pushing for
change in the standard.  I am no longer pushing any of my proposals.  To
borrow Kenny's baseball metaphor, I struck out.  But at least I stepped up
to the plate.

E.
From: Marc Spitzer
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <86d6hxpa4k.fsf@bogomips.optonline.net>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <··············@sidious.geddis.org>, Don Geddis
> <···@geddis.org> wrote:
> 
> > ···@jpl.nasa.gov (Erann Gat) writes:
> > > 2) we have a lot of historical data on the results of keeping the standard
> > > unchanged.  It's [...] pretty poor in terms of market penetration.
> > 
> > You keep asserting this as though it were accepted fact, but I doubt many
> > other people believe it, and you haven't yet tried to support it at all.
> 
> That's not true.  I have posted supporting arguments for this position in
> the past.  However, you yourself concede the point in your very next
> statement:
> 
> > Yes, the standard has been fixed for some years.  Yes, Lisp has poor market
> > penetration during that time.
> 
> That's all I'm saying.
> 
> > Correlation does not imply causation.
> 
> True, and I do not claim that the standard being static is the cause of
> Lisp's poor market penetration.  Nor do I claim that changing the standard
> will necessarily help Lisp's market penetration.

That is not accurate: you did claim that making the changes you asked
for and rebranding CL as "New CL" would improve the market share of CL.

And you did imply, at least, that the static standard was a contributing
factor to low market penetration.

> 
> All I am saying is that I have yet to see a convincing argument for why we
> should expect Lisp's market penetration to change without some kind of
> significant change.  These are not necessarily changes in the standard,
> but something qualitatively different from what we have had in the last
> decade.

People are coming to the language.  There are a bunch of user groups
that did not exist last year.  The volume of posts here appears to be up
from 1 or 2 years ago.

> 
> (Notwithstanding the above, I am at the moment deferring to Kent Pitman's
> judgement on this, partly because I have a lot of faith in Kent, and
> partly out of sheer exhaustion.)

If you have such faith in Kent, why did you call his "substandards" proposal
vaporware when he said he was working on it?

> 
> > I think people here appear so
> > antagonistic to your suggestions because they are pretty sure that a change
> > to the standard would have minimal impact on Lisp's market popularity.
> 
> Yes, simply changing the standard for the sake of changing the standard
> wouldn't help.  But I never advocated that.
> 
> > Meanwhile, it would also have well-understood negative impact on the existing
> > community: a drain on vendor resources to fix their implementations, breaking
> > most existing and already well-debugged code, etc.
> 
> That depends on the kind of change.  There are kinds of changes that don't
> have these negative effects.

They are called enhancements, not changes; they can be done by layered standards.

> 
> > I wish you would spend more of your effort on a much more straightforward:
> > "Hey, I'd like Lisp to be more popular.  Anyone agree with me?  What do you
> > think we should do?"  (Yes, I saw that you tried this once recently.  But
> > the amount of your effort spent here is dwarfed by your attempts to change
> > the standard.)
> 
> I've actually spent very little effort trying to change the standard. 
> (Certainly very little compared to the amount of effort it would actually
> take to change the standard.)  Most of my effort lately has been directed
> towards clarifying my position against misunderstandings and
> misrepresentation.


> 
> > Moreover, you seem to interpret resistance to standards change with not caring
> > that Lisp is unpopular.
> 
> Not at all.  There is a significant faction (perhaps even a majority) in
> the community who say outright that they don't want Lisp to be mainstream,
> that they want it to remain the province of an elite group of experts,
> that they want to "keep out the Ilias's".  I interpret most of the
> resistance to changing the standard to simply being a reasonable position
> in service of this goal.  I do not agree with this goal.  I think that the
> cost of "keeping out the Ilias's" is higher than the cost of inviting them
> in.

No, what the majority has said is that they just want to deal with
non-screwed-up individuals who want to use and/or learn CL.  People here
seem to want peers, not putzes, to associate with on CLL and elsewhere
in the CL community.

> 
> > Can't you see that others don't accept your
> > implied connection, and so might be wildly enthusiastic about your goal of
> > making Lisp more popular, while at the same time against your plan to change
> > the standard?  You're creating new opponents out of existing allies!
> 
> I do not and never did have a "plan" to change the standard.  I had a
> (half-baked) plan to try to lower what seemed to me to be a barrier to
> entry to help make Lisp more accessible to newcomers.  A small and
> uncritical part of that plan entailed a small change to the standard.  It
> was always intended to be exactly what I called it when I first presented
> it: a modest proposal.

It was a dangerous proposal because the only way to enact your changes
was to open the entire standard up for arbitrary revision, if the committee
felt like it.

marc
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-0106032315530001@192.168.1.51>
In article <··············@bogomips.optonline.net>, Marc Spitzer
<········@optonline.net> wrote:

> > True, and I do not claim that the standard being static is the cause of
> > Lisp's poor market penetration.  Nor do I claim that changing the standard
> > will necessarily help Lisp's market penetration.
> 
> That is not accurate, you did claim that making the changes you asked
> for and rebranding CL as "New CL" would improve the market share of CL.

No, I didn't.  I said that absent these kinds of changes there is no
reason to expect any dramatic shifts in the market, and that therefore
these kinds of changes were (IMO) necessary.  But I never said they would
be sufficient.

I'm getting very tired of having to defend myself against your
misrepresentations of my position.

> And you did imply, at least, that the static standard was a contributing
> factor to low market penetration.

Yes, I did say that.  And I stand by it too.

> If you have such faith in kent, why did you call his "substandards" proposal
> vaporware when he said he was working on it?

Why are you conflating two issues that have absolutely nothing to do with
one another?  The thing in which I have decided to place my faith is
Kent's assessment of the market.  That has nothing to do with
substandards.

Nonetheless, I did not intend the pejorative implication.  I just meant to
say that it (the substandards proposal) didn't exist yet.

> > That depends on the kind of change.  There are kinds of changes that don't
> > have these negative effects.
> 
> They are called enhancements not changes, they can be done by layered
> standards

Yes.  And your point would be?

> No what the majority has said is they just want to deal with non screwed up
> individuals who want to use and/or learn CL.  People here seem to want peers
> not putzes to associate with on CLL and else where in the CL community.

Who gets to decide who is a putz?

> It was a dangerous proposal because the only way to enact your changes
> was to open the entire standard up for arbitrary revision, if the committee
> felt like it.

No, that is not true.

E.
From: Pascal Costanza
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <costanza-CFDD32.03191102062003@news.netcologne.de>
In article <··············@bogomips.optonline.net>,
 Marc Spitzer <········@optonline.net> wrote:


> > True, and I do not claim that the standard being static is the cause of
> > Lisp's poor market penetration.  Nor do I claim that changing the standard
> > will necessarily help Lisp's market penetration.
> 
> That is not accurate, you did claim that making the changes you asked
> for and rebranding CL as "New CL" would improve the market share of CL.
> 
> And you did imply, at least, that the static standard was a contributing
> factor to low market penetration.

Even if this were true, do you know what it means to change one's mind?

> > All I am saying is that I have yet to see a convincing argument for why we
> > should expect Lisp's market penetration to change without some kind of
> > significant change.  These are not necessarily changes in the standard,
> > but something qualitatively different from what we have had in the last
> > decade.
> 
> People are coming to the language.  There exists a bunch of users groups,
> that did not exist last year.  The volume of posts appears to be up here,
> from 1 or 2 years ago.

The question is: Is the glass half empty or half full? That's up to 
everyone's own interpretation. (BTW, I don't agree with Erann either in 
this regard. But don't you think that we have beaten these arguments to 
death? They are not even arguments, they are just opinions...)

> > (Notwithstanding the above, I am at the moment deferring to Kent Pitman's
> > judgement on this, partly because I have a lot of faith in Kent, and
> > partly out of sheer exhaustion.)
> 
> If you have such faith in kent, why did you call his "substandards" proposal
> vaporware when he said he was working on it?

From FOLDOC:

"vaporware

<jargon> /vay'pr-weir/ Products announced far in advance of any release
(which may or may not actually take place).  The term came from Atari
users and was later applied by Infoworld to Microsoft's continuous
lying about Microsoft Windows."

There's a possibility that Erann wanted to use the term in its neutral 
meaning.

> > That depends on the kind of change.  There are kinds of changes that don't
> > have these negative effects.
> 
> They are called enhancements not changes, they can be done by layered 
> standards

Some changes could be backwards compatible while still requiring a 
change to the standard, because the standard explicitly disallows 
certain extensions. Some examples (well, at least one) have been 
mentioned in this thread, and I am sure there are more.

Why are you denying these?

> > Not at all.  There is a significant faction (perhaps even a majority) in
> > the community who say outright that they don't want Lisp to be mainstream,
> > that they want it to remain the province of an elite group of experts,
> > that they want to "keep out the Ilias's".  I interpret most of the
> > resistance to changing the standard to simply being a reasonable position
> > in service of this goal.  I do not agree with this goal.  I think that the
> > cost of "keeping out the Ilias's" is higher than the cost of inviting them
> > in.
> 
> No what the majority has said is they just want to deal with non screwed up
> individuals who want to use and/or learn CL.  People here seem to want peers
> not putzes to associate with on CLL and else where in the CL community.

I don't like the fact that Ilias is taken as a reference in this 
discussion. Ilias was an extreme case. It is just not helpful to argue 
with extreme cases unless one is not interested in a constructive 
discussion.

Yes, hell could break loose. But most of the time it doesn't.

> It was a dangerous proposal because the only way to enact your changes
> was to open the entire standard up for arbitrary revision, if the committee
> felt like it.

This was not Erann's intention. One could imagine a world in which the 
standard could be opened up but with the strict requirement that all 
changes need to be backwards compatible, together with a reasonable 
definition of the term "backwards compatible".  That ANSI doesn't 
seem to offer such a way of opening up the standard is really bad luck. 
You shouldn't make Erann responsible for this.



Pascal
From: Marc Spitzer
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <868yslp3g6.fsf@bogomips.optonline.net>
Pascal Costanza <········@web.de> writes:

> In article <··············@bogomips.optonline.net>,
>  Marc Spitzer <········@optonline.net> wrote:
> 
> 
> > > True, and I do not claim that the standard being static is the cause of
> > > Lisp's poor market penetration.  Nor do I claim that changing the standard
> > > will necessarily help Lisp's market penetration.
> > 
> > That is not accurate, you did claim that making the changes you asked
> > for and rebranding CL as "New CL" would improve the market share of CL.
> > 
> > And you did imply, at least, that the static standard was a contributing
> > factor to low market penetration.
> 
> Even if this were true, do you know what it means to change one's mind?

There is a difference between changing your mind and implying that
this was always your position.

> 
> > > All I am saying is that I have yet to see a convincing argument for why we
> > > should expect Lisp's market penetration to change without some kind of
> > > significant change.  These are not necessarily changes in the standard,
> > > but something qualitatively different from what we have had in the last
> > > decade.
> > 
> > People are coming to the language.  There exists a bunch of users groups,
> > that did not exist last year.  The volume of posts appears to be up here,
> > from 1 or 2 years ago.
> 
> The question is: Is the glass half empty or half full? That's up to 
> everyone's own interpretation. (BTW, I don't agree either with Erann in 
> this regard. But don't you think that we have beaten these arguments to 
> death? They are not even argments, they are just opinions...)

Well, mine are not: posting volume is up and there are a good number of
*new* user groups.  Both are measurable signs of improvement in the
CL community.

> 
> > > (Notwithstanding the above, I am at the moment deferring to Kent Pitman's
> > > judgement on this, partly because I have a lot of faith in Kent, and
> > > partly out of sheer exhaustion.)
> > 
> > If you have such faith in kent, why did you call his "substandards" proposal
> > vaporware when he said he was working on it?
> 
> From FOLDOC:
> 
> "vaporware 
> 
> <jargon > /vay'pr-weir/ Products announced far in advance of any release 
> (which may or may not actually take place).  The term came from Atari 
> users and was later applied by Infoworld to Microsoft 's continuous 
> lying about Microsoft Windows."
> 
> There's a possibility that Erann wanted to use the term in its neutral 
> meaning.

I do not see a neutral definition.  The definition states that the entity
who says the product will be delivered cannot be trusted.

> 
> > > That depends on the kind of change.  There are kinds of changes that don't
> > > have these negative effects.
> > 
> > They are called enhancements not changes, they can be done by layered 
> > standards
> 
> Some changes could be backwards compatible while still requiring a 
> change to the standard, because the standard explicitly disallows 
> certain extensions. Some examples (well, at least one) have been 
> mentioned in this thread, and I am sure there are more.
> 
> Why are you denying these?

Because anything that "opens the standard for revision" opens the 
entire standard for revision.  There is nothing so bad that it 
remotely warrants the risk.

> 
> > > Not at all.  There is a significant faction (perhaps even a majority) in
> > > the community who say outright that they don't want Lisp to be mainstream,
> > > that they want it to remain the province of an elite group of experts,
> > > that they want to "keep out the Ilias's".  I interpret most of the
> > > resistance to changing the standard to simply being a reasonable position
> > > in service of this goal.  I do not agree with this goal.  I think that the
> > > cost of "keeping out the Ilias's" is higher than the cost of inviting them
> > > in.
> > 
> > No what the majority has said is they just want to deal with non screwed up
> > individuals who want to use and/or learn CL.  People here seem to want peers
> > not putzes to associate with on CLL and else where in the CL community.
> 
> I don't like the fact that Ilias is taken as reference in this 
> discussion. Ilias was an extreme case. It is just not helpful to argue 
> with extreme cases unless one is not interested in a constructive 
> discussion.
> 
> Yes, hell could break loose. But most of the time it doesn't.

A major reason for that is that the twits are generally drummed out of
or ignored in the forum before they reach critical mass and become
self-sustaining.  If you have enough twits at one time, they set up a
positive feedback loop and then they do not go away.

> 
> > It was a dangerous proposal because the only way to enact your changes
> > was to open the entire standard up for arbitrary revision, if the committee
> > felt like it.
> 
> This was not Erann's intention. One could imagine a world in which the 
> standard could be opened up but with the strict requirement that all 
> changes need to be backwards compatible, together with a reasonable 
> definition of the term "backwards compatible". That the ANSI doesn't 
> seem to offer such a way of opening up the standard is really bad luck. 
> You shouldn't make Erann responsible for this.

Well, I can imagine any number of possible worlds; it is fun.  But I deal
with the one that is real.  And in the real world, making "minor" changes
to the standard is only possible by opening the entire standard up to
complete revision/replacement.  This is a remarkably bad idea and it
should be treated as such.  And what Erann can be held responsible for is
pushing this idea after it was explained that the only way to get his
changes in was to open the standard to any revision the committee wanted
to make.

marc
From: Pascal Costanza
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <costanza-07D9DB.09205002062003@news.netcologne.de>
In article <··············@bogomips.optonline.net>,
 Marc Spitzer <········@optonline.net> wrote:



> > The question is: Is the glass half empty or half full? That's up to 
> > everyone's own interpretation. (BTW, I don't agree either with Erann in 
> > this regard. But don't you think that we have beaten these arguments to 
> > death? They are not even argments, they are just opinions...)
> 
> Well mine are not, posting volume is up and there are a good number of
> *new* users groups.  Both are measurable signs of improvement in the
> CL community.

Of course you regard it as half full.

> > From FOLDOC:
> > 
> > "vaporware 
> > 
> > <jargon > /vay'pr-weir/ Products announced far in advance of any release 
> > (which may or may not actually take place).  The term came from Atari 
> > users and was later applied by Infoworld to Microsoft 's continuous 
> > lying about Microsoft Windows."
> > 
> > There's a possibility that Erann wanted to use the term in its neutral 
> > meaning.
> 
> I do not see a neutral definition.  The definition states that the entity
> who says the product will be delivered can not be trusted.

Then look closer.

> > Some changes could be backwards compatible while still requiring a 
> > change to the standard, because the standard explicitly disallows 
> > certain extensions. Some examples (well, at least one) have been 
> > mentioned in this thread, and I am sure there are more.
> > 
> > Why are you denying these?
> 
> Because anything that "opens the standard for revision" opens the 
> entire standard for revision.  There is nothing so bad that it 
> remotely warrants the risk.

This is a completely unrelated point.

> > Yes, hell could break loose. But most of the time it doesn't.
> 
> A major reason for that is that the twits are generally drummed out of,
> or ignored in, the forum before they reach critical mass and become
> self-sustaining.  If you have enough twits at one time, they set up a
> positive feedback loop and then they do not go away.

This sounds interesting - do you have empirical evidence for this?

> > > It was a dangerous proposal because the only way to enact your changes
> > > was to open the entire standard up for arbitrary revision, if the 
> > > committee
> > > felt like it.
> > 
> > This was not Erann's intention. One could imagine a world in which the 
> > standard could be opened up but with the strict requirement that all 
> > changes need to be backwards compatible, together with a reasonable 
> > definition of the term "backwards compatible". That the ANSI doesn't 
> > seem to offer such a way of opening up the standard is really bad luck. 
> > You shouldn't make Erann responsible for this.
> 
> Well I can imagine any number of possible worlds, it is fun.  But I deal
> with the one that is real.  And in the real world making "minor" changes
> to the standard is only possible by opening the entire standard up to
> complete revision/replacement.  This is a remarkably bad idea and it 
> should be treated as such.  And what Erann can be held responsible for is
> pushing this idea after it was explained that the only way to get his
> changes in was to open the standard to any revision the committee wanted
> to make.

This only happened in your mind - just read his recent statements about 
this issue.


Pascal
From: Marc Spitzer
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <86smqsjx0o.fsf@bogomips.optonline.net>
Pascal Costanza <········@web.de> writes:

> In article <··············@bogomips.optonline.net>,
>  Marc Spitzer <········@optonline.net> wrote:
> 
> 
> 
> > > The question is: Is the glass half empty or half full? That's up to 
> > > everyone's own interpretation. (BTW, I don't agree either with Erann in 
> > > this regard. But don't you think that we have beaten these arguments to 
> > > death? They are not even arguments, they are just opinions...)
> > 
> > Well mine are not, posting volume is up and there are a good number of
> > *new* users groups.  Both are measurable signs of improvement in the
> > CL community.
> 
> Of course you regard it as half full.

No, I think the cup is getting fuller; things are getting better.

> 
> > > From FOLDOC:
> > > 
> > > "vaporware 
> > > 
> > > <jargon> /vay'pr-weir/ Products announced far in advance of any release 
> > > (which may or may not actually take place).  The term came from Atari 
> > > users and was later applied by Infoworld to Microsoft's continuous 
> > > lying about Microsoft Windows."
> > > 
> > > There's a possibility that Erann wanted to use the term in its neutral 
> > > meaning.
> > 
> > I do not see a neutral definition.  The definition states that the entity
> > who says the product will be delivered can not be trusted.
> 
> Then look closer.

What closer?  The definition of vaporware is "I will believe it when I
see it".  The commitment, by Kent in this case, to deliver the item (the
"substandards" proposal) is not trusted to be accurate and true.  This
is borne out by the "may or may not" bit in the definition.  Where is
the neutral bit?  I do not see it.

> 
> > > Some changes could be backwards compatible while still requiring a 
> > > change to the standard, because the standard explicitly disallows 
> > > certain extensions. Some examples (well, at least one) have been 
> > > mentioned in this thread, and I am sure there are more.
> > > 
> > > Why are you denying these?
> > 
> > Because anything that "opens the standard for revision" opens the 
> > entire standard for revision.  There is nothing so bad that it 
> > remotely warrants the risk.
> 
> This is a completely unrelated point.

No, it absolutely is not unrelated.  Risk avoidance is a valid and
reasonable reason for not doing something.  This is one of the major
points in favor of layered standards: since you cannot change the base
standard, a major risk goes away.

> 
> > > Yes, hell could break loose. But most of the time it doesn't.
> > 
> > A major reason for that is that the twits are generally drummed out of,
> > or ignored in, the forum before they reach critical mass and become
> > self-sustaining.  If you have enough twits at one time, they set up a
> > positive feedback loop and then they do not go away.
> 
> This sounds interesting - do you have empirical evidence for this?

comp.os.linux.*?

> 
> > > > It was a dangerous proposal because the only way to enact your changes
> > > > was to open the entire standard up for arbitrary revision, if the 
> > > > committee
> > > > felt like it.
> > > 
> > > This was not Erann's intention. One could imagine a world in which the 
> > > standard could be opened up but with the strict requirement that all 
> > > changes need to be backwards compatible, together with a reasonable 
> > > definition of the term "backwards compatible". That the ANSI doesn't 
> > > seem to offer such a way of opening up the standard is really bad luck. 
> > > You shouldn't make Erann responsible for this.
> > 
> > Well I can imagine any number of possible worlds, it is fun.  But I deal
> > with the one that is real.  And in the real world making "minor" changes
> > to the standard is only possible by opening the entire standard up to
> > complete revision/replacement.  This is a remarkably bad idea and it 
> > should be treated as such.  And what Erann can be held responsible for is
> > pushing this idea after it was explained that the only way to get his
> > changes in was to open the standard to any revision the committee wanted
> > to make.
> 
> This only happened in your mind - just read his recent statements about 
> this issue.

I do not agree.

marc
From: Steven E. Harris
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <q67el2c447a.fsf@raytheon.com>
···@jpl.nasa.gov (Erann Gat) writes:

> All I am saying is that I have yet to see a convincing argument for
> why we should expect Lisp's market penetration to change without
> some kind of significant change.

More books about Lisp would help. Publishers are of course reluctant,
too chicken to be the egg. But consider: when a programmer visits a
bookstore and sees /new/ books about a language or system, curiosity
is piqued. "I didn't know about this, but maybe I should. Is this
something new? Is there a community growing around this?"

It has to start with the programmers. They have to be intrigued and
impressed enough by tinkering to sneak it onto the job.

-- 
Steven E. Harris        :: ········@raytheon.com
Raytheon                :: http://www.raytheon.com
From: Michael Sullivan
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <1fvxnzk.1xhq7vy1166gsxN%michael@bcect.com>
Erann Gat <···@jpl.nasa.gov> wrote:

> In article <··············@sidious.geddis.org>, Don Geddis
> <···@geddis.org> wrote:

> > Correlation does not imply causation.
> 
> True, and I do not claim that the standard being static is the cause of
> Lisp's poor market penetration.  Nor do I claim that changing the standard
> will necessarily help Lisp's market penetration.
 
> All I am saying is that I have yet to see a convincing argument for why we
> should expect Lisp's market penetration to change without some kind of
> significant change.  These are not necessarily changes in the standard,
> but something qualitatively different from what we have had in the last
> decade.

That change can happen in the outside world, as well as in Lisp, and
there is evidence that this is happening.

It's hard to imagine that any core language change will help much.  Lisp
has been vastly technically superior to the competition for a long time,
and has not gotten great market share.  It is already no harder to learn
than C, but it has not gotten great market share.  I fail to see how a
small improvement in its ease of learning, or technical superiority is
likely to make any difference in its popularity.  Clearly, lisp
popularity is not a core language technical issue.  It's either a matter
of pure marketing, or of first-impressions.  I.e. -- we need more slick
library packages for common tasks -- maybe some layered standards are
useful here, but any non-backwards compatible changes to the core
standard are unlikely to be necessary unless the standard somehow
*keeps* these slick packages from being written.

> > I wish you would spend more of your effort on a much more straightforward:
> > "Hey, I'd like Lisp to be more popular.  Anyone agree with me?  What do you
> > think we should do?"  (Yes, I saw that you tried this once recently.  But
> > the amount of your effort spent here is dwarfed by your attempts to change
> > the standard.)

> I've actually spent very little effort trying to change the standard.
> (Certainly very little compared to the amount of effort it would actually
> take to change the standard.)  Most of my effort lately has been directed
> towards clarifying my position against misunderstandings and
> misrepresentation.

I think this is accurate -- I almost responded as such in your defense.
I would only remind you that on usenet, mischaracterization is rampant,
usually from pure misunderstanding.  It's almost impossible to drop a
subject if you are unwilling to let an occasional misrepresentation go
uncommented.

> > Moreover, you seem to interpret resistance to standards change with not
> > caring that Lisp is unpopular.

> Not at all.  There is a significant faction (perhaps even a majority) in
> the community who say outright that they don't want Lisp to be mainstream,
> that they want it to remain the province of an elite group of experts,
> that they want to "keep out the Ilias's".  I interpret most of the
> resistance to changing the standard to simply being a reasonable position
> in service of this goal.  I do not agree with this goal.  I think that the
> cost of "keeping out the Ilias's" is higher than the cost of inviting them
> in.

From what I've read of the latest thread, I don't think that's accurate
at all.  Oh, I think there is a large group of people here who don't
ever want CL to become as popular as C++/Java, but I think there are few
who would be unhappy if CL market share went up by an order of
magnitude.  I also have read a lot of good arguments against any changes
to the standard that are not specifically quarantined into a "must be
backwards compatible with the original 'constitution'" layer, which have
nothing to do with this precept.  Everyone I've seen argue with your
original proposal did so on one of two grounds:  1) They did not agree
that it would make the language easier for newbies, or 2) they did not
wish to open up the standards process and risk breaking compatibility.

Until there is some kind of standards process that *doesn't* risk
breaking backwards compatibility, this #2 is a pretty good argument
against any change in the standard that doesn't represent a
super-majority desired watershed in the language.  Whether or not you
want CL to be more popular.  Breaking backwards compatibility in any
significant way is extremely likely to make it much *less* popular,
unless it is done to accommodate a major, clamored-for paradigm shift.
So far, all of the clamored-for paradigm shifts I'm familiar with are
coming from the communities of other languages, and most of them seem to
be heading in a lisp-like direction.  

A possible project occurs to me.  Figure out what kinds of buzzwords and
buzz-phrases are being used by field leaders who are getting a clue
about some aspect of language design where lisp would be a big win, and
get some serious lisp-advocacy pages to show up in people's browsers
when they search on those terms, along with Python, Ruby, etc.

 

Michael
From: Paolo Amoroso
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <uP=dPpkjuvvsHmLoqN7220NvMVUy@4ax.com>
On Tue, 3 Jun 2003 13:53:11 -0400, ·······@bcect.com (Michael Sullivan)
wrote:

> at all.  Oh, I think there is a large group of people here who don't
> ever want CL to become as popular as C++/Java, but I think there are few
> who would be unhappy if CL market share went up by an order of
> magnitude.  I also have read a lot of good arguments against any changes

Grow the Lisp community as large as possible, but not larger :)


Paolo
-- 
Paolo Amoroso <·······@mclink.it>
From: Kenny Tilton
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <3ED9E04F.6000507@nyc.rr.com>
In the very message to which you were responding...

Kenny Tilton wrote:
 > Ray De Lacaze is single-handedly changing the landscape by arranging
 > killer ILC conferences....

Erann Gat wrote:
> I had never heard of Ray before, so I looked him up to try to figure out
> what he had done that would move Kenny to cite him as a role model on a
> par with Ernest Shackleton.  The only thing I could find was that he'd
> attended some standards meetings. 

Ray is also president of the ALU. I'd let you look that up, too, but 
given your track record: "Association of Lisp Users".

> Since the topic at hand was recruiting
> new people to CL I thought I'd go straight to the source.  

No, going straight to the source would have been: "Kenny, who the hell 
is Ray?" I am starting to understand why JPL won't use Lisp. Much more 
of this and I might dump it myself, go look for some penguin eggs.

But first I have to go over to comp.lang.java and .c++ and ask how many 
people gave up on Lisp because they were too dumb to put *asterisks* 
around specials like they were told to.
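
For readers who haven't met the convention in question: Common Lisp special
(dynamically scoped) variables are by convention named with surrounding
asterisks ("earmuffs"), so that a LET which rebinds one is visibly a dynamic
rebinding rather than a new lexical variable. A minimal sketch (variable and
function names here are illustrative, not from any particular codebase):

```lisp
;; DEFVAR proclaims *indent* special, so LET bindings of it are dynamic.
(defvar *indent* 0)

(defun show-indent ()
  (print *indent*))     ; sees the current dynamic binding

(let ((*indent* 4))     ; dynamically rebinds *indent* ...
  (show-indent))        ; ... so this prints 4
(show-indent)           ; outer binding restored: prints 0
```

The hazard the earmuffs guard against: DEFVAR pervasively proclaims the name
special, so if someone wrote (defvar indent) without asterisks, any
innocent-looking (let ((indent ...)) ...) anywhere in the program would
silently become a dynamic rebinding.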

-- 

  kenny tilton
  clinisys, inc
  http://www.tilton-technology.com/
  ---------------------------------------------------------------
"Everything is a cell." -- Alan Kay
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-0106030907590001@192.168.1.51>
In article <················@nyc.rr.com>, Kenny Tilton
<·······@nyc.rr.com> wrote:

> > Since the topic at hand was recruiting
> > new people to CL I thought I'd go straight to the source.  
> 
> No, going straight to the source would have been: "Kenny, who the hell 
> is Ray?"

You're right, I'm sorry.  And I apologize for any aspersions I may have
cast on Ray.  Ray, if you're reading this, I'm sorry.

I'm going to go get myself an attitude adjustment.

E.
From: Kenny Tilton
Subject: RoboCup or Bust! [was Re: ANN: Lisp-2 vs Lisp-1 argument has been settled]
Date: 
Message-ID: <3EDA3BEA.6050703@nyc.rr.com>
Erann Gat wrote:
> In article <················@nyc.rr.com>, Kenny Tilton
> <·······@nyc.rr.com> wrote:
> 
> 
>>>Since the topic at hand was recruiting
>>>new people to CL I thought I'd go straight to the source.  
>>
>>No, going straight to the source would have been: "Kenny, who the hell 
>>is Ray?"
> 
> 
> You're right, I'm sorry.  And I apologize for any aspersions I may have
> cast on Ray.  Ray, if you're reading this, I'm sorry.

That is very gracious of you.

Ray is probably too busy with ILC2003, but me and Marc will fill him in 
at the next lisp-nyk meeting. As penance you can field a team in the 
RoboCup competition he persuaded the RoboCup people to endorse (or 
whatever the word is) for ILC2003:

    http://www.robocup.org/02.html

"RoboCup is an international joint project to promote AI, robotics, and 
related field. It is an attempt to foster AI and intelligent robotics 
research by providing a standard problem where wide range of 
technologies can be integrated and examined. RoboCup chose to use soccer 
game as a central topic of research, aiming at innovations to be applied 
for socially significant problems and industries."

The sick thing is that everyone is using C and Java. :(

ILC2003 will have a sim event, not actual doggy-bots. Here is the 
sourceforge for the sim:

    http://sserver.sourceforge.net/

Ray (if I have this right) figures a RoboCup competition will increase 
the visibility of ILC2003 beyond just the faithful and pull in... wait 
for it... newbies! :)

This being an FL conference, competition will be limited to weirdo 
functional languages, but if someday Lisp were to win it all... well, 
you also need to know how huge RoboCup is (I certainly did not). Check 
out the URL above.

Last I heard progress was being made on setting up a RoboCup server for 
ILC entrants ... go get 'em, Erann.

Oh, I just had the best idea. Let's contact all the RoboCup sim 
champions using C and Java and tell them about the FL RoboCup 
competition. They'll port their stuff to CL, kick all our asses for us, 
and never go back to C/Java! Next year: Lisp overall champion!


-- 

  kenny tilton
  clinisys, inc
  http://www.tilton-technology.com/
  ---------------------------------------------------------------
"Everything is a cell." -- Alan Kay
From: Erann Gat
Subject: Re: RoboCup or Bust! [was Re: ANN: Lisp-2 vs Lisp-1 argument has been settled]
Date: 
Message-ID: <gat-0106031317080001@192.168.1.51>
In article <················@nyc.rr.com>, Kenny Tilton
<·······@nyc.rr.com> wrote:

> Erann Gat wrote:
> > In article <················@nyc.rr.com>, Kenny Tilton
> > <·······@nyc.rr.com> wrote:
> > 
> > 
> >>>Since the topic at hand was recruiting
> >>>new people to CL I thought I'd go straight to the source.  
> >>
> >>No, going straight to the source would have been: "Kenny, who the hell 
> >>is Ray?"
> > 
> > 
> > You're right, I'm sorry.  And I apologize for any aspersions I may have
> > cast on Ray.  Ray, if you're reading this, I'm sorry.
> 
> That is very gracious of you.

I had a really bad day yesterday, but I had no right to take out my
frustrations by casting aspersions on Ray, whether or not I knew who he
was.  FWIW, that wasn't my intention at the time I wrote it.

Notwithstanding, your casting me as a complainer really pissed me off.  If
I were to cast myself in your Endurance metaphor it would be as a paying
passenger who has occasionally taken a crack at plying an oar (mostly to
discover that I wasn't very good at it).  On one occasion I arranged for
the boat to make an excursion to a rather exotic destination from which it
derived some bragging rights.  Occasionally I've said things like, "You
know, we might get more paying passengers if we replace this old rope
ladder with a nice gangway."  But I had no hand in building the ship nor
sailing her.  And despite the odd rope ladder and patch on her hull she is
indeed (still) a fine vessel.

The situation now is that I have fallen overboard, and there are a few
others treading water around me who, along with me, would like to get back
aboard.  I call out, "It would really help us if the ship could change
course a little" and I get back, "You idiot!  Drilling a hole in the boat
would be a disaster."  Well, yeah, but I never suggested drilling a hole. 
(The counter argument to this is that as soon as you turn the wheel all
hell breaks loose and anyone who wants to drill a hole in the hull can do
so.  I don't buy that.)

To push on the analogy, there are also a lot of people in the water around
me who have never seen a boat and who are therefore perfectly content in
swimming.  They've all gotten very strong and brawny from years of doing
the c++rawl stroke.  They have muscles in really weird places where people
shouldn't have muscles.  And they aren't much interested in hearing about
boats.

There are also people back on shore working on building new boats.  Most
of these are ugly monstrosities that belch smoke and steam and have no
sense of aesthetics whatsoever, but they nonetheless get you from A to B
even when the winds aren't favorable.  (I've taken a few short trips on
the S.S. Python.  The hull was full of dents and I got the feeling she'd
heel over in a squall, but she got me to my destination quickly.)

What I really want is a nice fiberglass yacht with a GPS navigation system
and a fully stocked bar.

E.
From: Kenny Tilton
Subject: Re: RoboCup or Bust! [was Re: ANN: Lisp-2 vs Lisp-1 argument has been settled]
Date: 
Message-ID: <3EDAC966.5040703@nyc.rr.com>
Erann Gat wrote:
> Notwithstanding, your casting me as a complainer really pissed me off.

Not sure this helps, but the fellow was not complaining, he was running 
calculations which led him to conclude they were all doomed. They were 
probably good calculations, tho he turned out to be wrong. Calculations 
are like that. And he surely was giving his best assessment with the 
best of intentions. Hell, as storekeeper it was his job. It just lacked 
that British stiff upper lip quality.


>  If I were to cast myself in your Endurance metaphor it would be as a paying
> passenger who has occasionally taken a crack at plying an oar...

I started this, didn't I? :)

> What I really want is a nice fiberglass yacht with a GPS navigation system
> and a fully stocked bar.

The bar sounds fine, but throw in radar. Wood is cozier and offers a 
smoother ride if you can afford the maintenance and don't mind going a 
little slower. Holds up better against the ice, as well. Anyway...

Elsewhere you said you had seen no convincing evidence that things were 
improving for Lisp. Plot the course swum so far by the madding crowd: 
from C to C++ to Java, with splinter groups headed for Perl and Python. 
That does not give you heart? How about those C++ gurus predicting they 
would all be using dynamic languages in ten years? And what about the 
backflips of delight early adopter newbies invariably turn once they try 
Lisp? Having used C/C++ and considered Java and maybe Smalltalk and 
found them wanting?

That's not a sea change in the making?


-- 

  kenny tilton
  clinisys, inc
  http://www.tilton-technology.com/
  ---------------------------------------------------------------
"Everything is a cell." -- Alan Kay
From: Erann Gat
Subject: Re: RoboCup or Bust! [was Re: ANN: Lisp-2 vs Lisp-1 argument has been settled]
Date: 
Message-ID: <gat-0106032324500001@192.168.1.51>
In article <················@nyc.rr.com>, Kenny Tilton
<·······@nyc.rr.com> wrote:

> That's not a sea change in the making?

I certainly hope so.

There is no doubt in my mind that something like what you describe will
happen.  However, I can see this going two different ways.  The first is
that the world finally arrives at Lisp.  The second is that the world
completely reinvents Lisp piecemeal without ever realizing that's what
it's doing, and ends up building a monstrosity that is to Common Lisp in
terms of design aesthetics what Common Lisp is to Scheme, but with an even
larger installed base so that fixing it will be even that much harder
(which is to say, impossible).

I would like to see the CL community exerting more influence in the coming
sea change while it is happening, rather than just waiting for the sea
change to bring the world at long last to Lisp.  There is a sea change
coming, I'm just not sure that the outcome will be what we hope for.

The recent testimonials from prominent C++ people about the benefits of
dynamic languages give me hope.  At the same time XML fills my heart with
dread.

E.
From: Kenny Tilton
Subject: Re: RoboCup or Bust! [was Re: ANN: Lisp-2 vs Lisp-1 argument has been settled]
Date: 
Message-ID: <3EDB4A7B.4080504@nyc.rr.com>
Erann Gat wrote:
> In article <················@nyc.rr.com>, Kenny Tilton
> <·······@nyc.rr.com> wrote:
> 
> 
>>That's not a sea change in the making?
> 
> 
> I certainly hope so.
> 
> There is no doubt in my mind that something like what you describe will
> happen.  However, I can see this going two different ways.  The first is
> that the world finally arrives at Lisp.  The second is that the world
> completely reinvents Lisp piecemeal without ever realizing that's what
> it's doing,...

Absolutely. That's how I see Java and Python. But one ineluctable 
conclusion to be drawn from the past is that the language upheaval will 
not end until either some already popular language becomes Lisp 
(including with native compilation and the parentheses, thank you very 
much) or Lisp has won. I don't care which happens, because either way I 
still get (chant after me) macros, sexprs, gc, specials....

One reason I say ineluctable is that we have already had languages reach 
dominance, only to be dumped for a Better Way. So I do not worry about 
the community getting stuck in a false minimum.

Another reason is that when Seekers of the Light try Lisp, they yell 
Eureka!, they don't say, "Well, OK, that's better, but...". And if you 
talk to them you also know it is for the right reasons.

So I do not worry about...

> ... and ends up building a monstrosity that is to Common Lisp in
> terms of design aesthetics what Common Lisp is to Scheme, but with an even
> larger installed base so that fixing it will be even that much harder
> (which is to say, impossible).

Correctimundo. Which is why I do not worry about even the bearable 
Java-becomes-Lisp possibility. It cannot, really.

> 
> I would like to see the CL community exerting more influence in the coming
> sea change while it is happening, rather than just waiting for the sea
> change to bring the world at long last to Lisp.

You can lead a horse to water, but you can't make it drink. People are 
not rational, no matter how hard we cling to that conceit. (See "JPL".) 
Even I flipped dismissively past MCL ads in APDA for several months when 
trying to find a Better Way to do version two of a C app prototyped in 
Logo (which I abandoned reluctantly for speed).

The good news is that early adopters are finding Lisp already, and they 
are not looking back. The other good news is that the madding crowd 
obediently follows the flag-bearers.

But the local lispnik groups have helped, and your RoboCup effort will 
help even more. As will ILC2003 overall.

>   At the same time XML fills my heart with
> dread.

Lessee. Symbolic, sexprs, reflexive... thanks, I forgot that one in my 
list of wildly popular fad Lisp-wannabe languages not good enough to 
last very long.

:)

-- 

  kenny tilton
  clinisys, inc
  http://www.tilton-technology.com/
  ---------------------------------------------------------------
"Everything is a cell." -- Alan Kay
From: Paolo Amoroso
Subject: Re: RoboCup or Bust! [was Re: ANN: Lisp-2 vs Lisp-1 argument has   been settled]
Date: 
Message-ID: <zgfbPnmxw=2ka3vQuEoHEeSChlcW@4ax.com>
On Mon, 02 Jun 2003 03:54:07 GMT, Kenny Tilton <·······@nyc.rr.com> wrote:

> Elsewhere you said you had seen no convincing evidence that things were 
> improving for Lisp. Plot the course swum so far by the madding crowd: 

He is apparently wearing thick sunglasses. In response to his requests to
know who uses Lisp, I posted the following links at least 2 or 3 times in
the past, and explicitly asked him for feedback:

  http://alu.cliki.net/Industry%20Application
  http://alu.cliki.net/Research%20Organizations
  http://alu.cliki.net/Success%20Stories
  http://alu.cliki.net/Evaluate%20Lisp
  http://alu.cliki.net/Consultant

But he did not even bother to say whether this is crap.


Paolo
-- 
Paolo Amoroso <·······@mclink.it>
From: Erann Gat
Subject: Re: RoboCup or Bust! [was Re: ANN: Lisp-2 vs Lisp-1 argument has   been settled]
Date: 
Message-ID: <gat-0206030836500001@192.168.1.51>
In article <····························@4ax.com>, Paolo Amoroso
<·······@mclink.it> wrote:

> On Mon, 02 Jun 2003 03:54:07 GMT, Kenny Tilton <·······@nyc.rr.com> wrote:
> 
> > Elsewhere you said you had seen no convincing evidence that things were 
> > improving for Lisp. Plot the course swum so far by the madding crowd: 
> 
> He is apparently wearing thick sunglasses. In response to his requests to
> know who uses Lisp, I posted the following links at least 2 or 3 times in
> the past, and explicitly asked him for feedback:
> 
>   http://alu.cliki.net/Industry%20Application
>   http://alu.cliki.net/Research%20Organizations
>   http://alu.cliki.net/Success%20Stories
>   http://alu.cliki.net/Evaluate%20Lisp
>   http://alu.cliki.net/Consultant
> 
> But he did not even bother to say whether this is crap.

Sorry, I've had my hands full dealing with other things lately.

It's definitely not crap.  On the other hand, it's not a slam dunk either.

The good: there's a lot of stuff there, more than I expected to find.  The
industry applications page in particular was a lot longer than I thought
it would be, and contained some surprises (like Brightware).

The bad: in many cases there's not enough information to tell the extent
to which Lisp is used, or even whether it's still being used or was just
used at some time in the past, whether it's in production or just being
used by one person in the research department, etc. etc.  ("Who uses
Lisp?"  "Brightware."  "What do they use it for?"  "I don't know."  "Well,
how do you know they use it?"  "I read it on the Web."  "On Brightware's
Web site?"  "Well, er, no, on the ALU site."  At that point I'd be shown
the door.)

ViaWeb, for example, has now been converted from Lisp to C++, and that
tends to undermine the argument.  It makes it appear that at the end of
the day they decided that Lisp was not a good idea after all.  Now, I know
the details of the story and I understand why things went the way they did
so we don't need to rehash all that.  The point is, the people who make
these decisions evaluate these things at a very superficial level (yes,
that's unfortunate, but that's the way it is.  As another example, RAX is
listed as a Lisp success story, and I certainly think it is, but it is
widely regarded at JPL as the poster child for why *not* to use Lisp.  See
http://www.flownet.com/gat/jpl-list.html if you want to know why.)

E.
From: Daniel Barlow
Subject: Re: RoboCup or Bust!
Date: 
Message-ID: <87y90kf5d3.fsf@noetbook.telent.net>
···@jpl.nasa.gov (Erann Gat) writes:

> that's unfortunate, but that's the way it is.  As another example, RAX is
> listed as a Lisp success story, and I certainly think it is, but it is
> widely regarded at JPL as the poster child for why *not* to use Lisp.  See
> http://www.flownet.com/gat/jpl-list.html if you want to know why.)

Can you please stop shifting the goalposts?  We've already determined
that you can't persuade JPL to use Lisp and so need a grassroots Lisp
movement to increase its visibility - so it really doesn't matter
whether JPL-internal people think RAX was successful or not, because
they won't be reading this anyway.  What matters is whether new Lisp
users or potential Lisp users in the rest of the world read this web
page and think they have similar problems that Lisp would help them
solve.


-dan

-- 

   http://www.cliki.net/ - Link farm for free CL-on-Unix resources 
From: Erann Gat
Subject: Re: RoboCup or Bust!
Date: 
Message-ID: <gat-0206031618430001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@noetbook.telent.net>, Daniel Barlow
<···@telent.net> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > that's unfortunate, but that's the way it is.  As another example, RAX is
> > listed as a Lisp success story, and I certainly think it is, but it is
> > widely regarded at JPL as the poster child for why *not* to use Lisp.  See
> > http://www.flownet.com/gat/jpl-list.html if you want to know why.)
> 
> Can you please stop shifting the goalposts?  We've already determined
> that you can't persuade JPL to use Lisp and so need a grassroots Lisp
> movement to increase its visibility - so it really doesn't matter
> whether JPL-internal people think RAX was successful or not, because
> they won't be reading this anyway.  What matters is whether new Lisp
> users or potential Lisp users in the rest of the world read this web
> page and think they have similar problems that Lisp would help them
> solve.

Hm, good point.

E.
From: Thomas F. Burdick
Subject: Re: RoboCup or Bust!
Date: 
Message-ID: <xcvn0gza8wz.fsf@apocalypse.OCF.Berkeley.EDU>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <··············@noetbook.telent.net>, Daniel Barlow
> <···@telent.net> wrote:
> 
> > ···@jpl.nasa.gov (Erann Gat) writes:
> > 
> > > that's unfortunate, but that's the way it is.  As another example, RAX is
> > > listed as a Lisp success story, and I certainly think it is, but it is
> > > widely regarded at JPL as the poster child for why *not* to use Lisp.  See
> > > http://www.flownet.com/gat/jpl-list.html if you want to know why.)
> > 
> > Can you please stop shifting the goalposts?  We've already determined
> > that you can't persuade JPL to use Lisp and so need a grassroots Lisp
> > movement to increase its visibility - so it really doesn't matter
> > whether JPL-internal people think RAX was successful or not, because
> > they won't be reading this anyway.  What matters is whether new Lisp
> > users or potential Lisp users in the rest of the world read this web
> > page and think they have similar problems that Lisp would help them
> > solve.
> 
> Hm, good point.

To follow up on that point even a bit more -- as a consultant, I've
used this as a success story.  "Your quote for Lisp is good -- much
lower than for C++".  "Yes."  "But, shouldn't we use Industry Standard
technology?"

For smaller organizations, a success story that was dismissed for
political reasons by a government-associated organization -- well,
that's a success story -- in fact, one with extra kick.  So,
although this might be a tragic paragraph for your potential use of
Lisp, it was a boon for mine:

  Now, you might expect that with a track record like that, with one
  technological success after another, that NASA would be rushing to
  embrace Lisp.  And you would, of course, be wrong.

I had it quoted back to me almost word for word.

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: Erann Gat
Subject: Re: RoboCup or Bust!
Date: 
Message-ID: <gat-0306030747310001@192.168.1.51>
In article <···············@apocalypse.OCF.Berkeley.EDU>,
···@apocalypse.OCF.Berkeley.EDU (Thomas F. Burdick) wrote:

> For smaller organizations, a success story that was dismissed for
> political reasons by a government-associated organization -- well,
> that's a success story -- in fact, one with extra kick.  In fact,
> although this might be a tragic paragraph for your potential use of
> Lisp, it was a boon for mine:

I'm glad to hear that.

E.
From: Joe Marshall
Subject: Re: RoboCup or Bust!
Date: 
Message-ID: <brxf8cq1.fsf@ccs.neu.edu>
···@apocalypse.OCF.Berkeley.EDU (Thomas F. Burdick) writes:

> "Your quote for Lisp is good -- much lower than for C++".  
> "Yes."
> "But, shouldn't we use Industry Standard technology?"


"um... Yes.  Yes, you should.  Let's do it Visual Basic."
From: Paolo Amoroso
Subject: Re: RoboCup or Bust!
Date: 
Message-ID: <ScjcPpfFPKQBku=uj9kx23J=aqH9@4ax.com>
On 03 Jun 2003 00:59:24 -0700, ···@apocalypse.OCF.Berkeley.EDU (Thomas F.
Burdick) wrote:

> To follow up on that point even a bit more -- as a consultant, I've
> used this as a success story.  "Your quote for Lisp is good -- much

You might consider adding an entry to:

  http://alu.cliki.net/Consultant


Paolo
-- 
Paolo Amoroso <·······@mclink.it>
From: Larry Elmore
Subject: Re: RoboCup or Bust!
Date: 
Message-ID: <1pUCa.553776$Si4.500188@rwcrnsc51.ops.asp.att.net>
Daniel Barlow wrote:
> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> 
>>that's unfortunate, but that's the way it is.  As another example, RAX is
>>listed as a Lisp success story, and I certainly think it is, but it is
>>widely regarded at JPL as the poster child for why *not* to use Lisp.  See
>>http://www.flownet.com/gat/jpl-list.html if you want to know why.)
> 
> 
> Can you please stop shifting the goalposts?  We've already determined
> that you can't persuade JPL to use Lisp and so need a grassroots Lisp
> movement to increase its visibility - so it really doesn't matter
> whether JPL-internal people think RAX was successful or not, because
> they won't be reading this anyway.  What matters is whether new Lisp
> users or potential Lisp users in the rest of the world read this web
> page and think they have similar problems that Lisp would help them
> solve.
> 
> 
> -dan
> 

Shifting goalposts? I can't even find the first set at the given URL:
"Not Found
The requested URL /gat/jpl-list.html was not found on this server."
From: Erann Gat
Subject: Re: RoboCup or Bust!
Date: 
Message-ID: <gat-0206032206110001@192.168.1.51>
In article <·······················@rwcrnsc51.ops.asp.att.net>, Larry
Elmore <········@attbi.com> wrote:

> Daniel Barlow wrote:
> > ···@jpl.nasa.gov (Erann Gat) writes:
> > 
> > 
> >>that's unfortunate, but that's the way it is.  As another example, RAX is
> >>listed as a Lisp success story, and I certainly think it is, but it is
> >>widely regarded at JPL as the poster child for why *not* to use Lisp.  See
> >>http://www.flownet.com/gat/jpl-list.html if you want to know why.)
> > 
> > 
> > Can you please stop shifting the goalposts?  We've already determined
> > that you can't persuade JPL to use Lisp and so need a grassroots Lisp
> > movement to increase its visibility - so it really doesn't matter
> > whether JPL-internal people think RAX was successful or not, because
> > they won't be reading this anyway.  What matters is whether new Lisp
> > users or potential Lisp users in the rest of the world read this web
> > page and think they have similar problems that Lisp would help them
> > solve.
> > 
> > 
> > -dan
> > 
> 
> Shifting goalposts? I can't even find the first set at the given URL:
> "Not Found
> The requested URL /gat/jpl-list.html was not found on this server."

Sorry, typo: it's jpl-lisp.html
                         ^

E.
From: Paolo Amoroso
Subject: Re: RoboCup or Bust! [was Re: ANN: Lisp-2 vs Lisp-1 argument has   been settled]
Date: 
Message-ID: <icncPgoKRnfA5cUrHodGfi1ORy0H@4ax.com>
On Mon, 02 Jun 2003 08:36:50 -0700, ···@jpl.nasa.gov (Erann Gat) wrote:

> In article <····························@4ax.com>, Paolo Amoroso
> <·······@mclink.it> wrote:
[...]
> >   http://alu.cliki.net/Industry%20Application
> >   http://alu.cliki.net/Research%20Organizations
> >   http://alu.cliki.net/Success%20Stories
> >   http://alu.cliki.net/Evaluate%20Lisp
> >   http://alu.cliki.net/Consultant
[...]
> It's definitely not crap.  On the other hand, it's not a slam dunk either.
                                                           ^^^^^^^^^
My English disassembler--currently in Beta--chokes on this. Can you help?


> The bad: in many cases there's not enough information to tell the extent
> to which Lisp is used, or even whether it's still being used or was just
> used at some time in the past, whether it's in production or just being
> used by one person in the research department, etc. etc.  ("Who uses

Most of this is a consequence of the way the core of these lists, the
industry applications page, got started.

When I started collecting most of this information, I explicitly decided to
concentrate on "recent" news, for appropriate values of recent--e.g. within
the previous 2-3 years. I explicitly chose to completely ignore the
industry applications page at the ALU site, from which I got inspiration,
because it was probably largely out of date.

I relied on all sorts of sources: newsgroup and mailing list postings,
press releases, personal mail, books, Lisp conference proceedings, journal
and magazine articles, web sites, announcements of job openings,
serendipity, you name it.

So, in short, the information should be reasonably current and up to date.
And, of course, I strongly encourage Lispers to edit those pages for
updates and corrections.

A few months ago, Dan Barlow commented in the industry applications
page--his comment should still be there--that it would have been cool to
learn more about all those Lisp uses. So, I started to retroactively add
"Additional info" sections wherever possible, providing links to the
sources I still kept.

Again, Lispers are encouraged to add more info and references. Note that I
am _not_ the only one who worked on those pages; there have been other
contributors.

As to how exactly Lisp is used within an organization, covering this was
not a major goal when I started working on this. When it comes to tracking
language success stories and industry usage, this kind of information may
be more or less equally difficult to collect for Lisp as it is for
mainstream languages. You may see truckloads of books about mainstream
languages on the shelves of your local computer bookshop, but you still
don't know where and how the language is used. Besides, even knowing that a
single guy uses Lisp within an organization may still provide valuable
information.

Did I mention that Lispers are encouraged to provide more info? :)

I was able to collect a lot of information from the province of the empire:
Milano, Italy. If I understand correctly, you live in California, where
there is possibly the largest concentration of computing and high tech
industries--and the offices of a major Lisp vendor. Can you contribute one
or two more entries? If you are already aware of someone using Lisp, this
takes very little time. Contacts with local Lispniks may help.


> ViaWeb, for example, has now been converted from Lisp to C++, and that
> tends to undermine the argument.  It makes it appear that at the end of

I don't have the list handy, but there should be a note in the success
stories page about the switch to C++.


> that's unfortunate, but that's the way it is.  As another example, RAX is
> listed as a Lisp success story, and I certainly think it is, but it is
> widely regarded at JPL as the poster child for why *not* to use Lisp.  See

You have already seen Dan's comment.


Paolo
-- 
Paolo Amoroso <·······@mclink.it>
From: Erann Gat
Subject: Re: RoboCup or Bust! [was Re: ANN: Lisp-2 vs Lisp-1 argument has   been settled]
Date: 
Message-ID: <gat-0306030954270001@k-137-79-50-101.jpl.nasa.gov>
In article <····························@4ax.com>, Paolo Amoroso
<·······@mclink.it> wrote:

> > It's definitely not crap.  On the other hand, it's not a slam dunk either.
>                                                            ^^^^^^^^^
> My English disassembler--currently in Beta--chokes on this. Can you help?

A slam dunk is a maneuver in the sport of basketball where the player
jumps high enough to propel the ball downwards through the basket (as
opposed to the more usual technique of having to rely on gravity to do part
of the work).  The probability of scoring using this maneuver (for those
players and under those circumstances where it is possible to execute it)
is for all intents and purposes 100%.  Idiomatically, "It's a slam dunk"
means that success is certain.  When referring to arguments it means that
the argument is unimpeachable.

> Again, Lispers are encouraged to add more info and references.

Let me add my voice to this request.  Detailed up-to-date information
about how you are using Lisp in your work, especially if you are using it
in a real industrial application, will be of great use to me (and I'm sure
others as well).  This is not just an empty exercise.  You can really do
some good here.

E.
From: Paolo Amoroso
Subject: Re: RoboCup or Bust! [was Re: ANN: Lisp-2 vs Lisp-1 argument has   been settled]
Date: 
Message-ID: <ff4743a5.0306040033.622e54f8@posting.google.com>
···@jpl.nasa.gov (Erann Gat) wrote in message news:<····················@k-137-79-50-101.jpl.nasa.gov>...

> A slam dunk is a maneuver in the sport of basketball where the player
> jumps high enough to propel the ball downwards through the basket (as
> opposed to the more usual technique of having to rely on gravity do part
> of the work).  The probability of scoring using this maneuver (for those

Given your background, you probably mean something like this :-)

  http://lifesci3.arc.nasa.gov/SpaceSettlement/Video/bb.qt


Paolo
From: Joe Marshall
Subject: Re: RoboCup or Bust! [was Re: ANN: Lisp-2 vs Lisp-1 argument has   been settled]
Date: 
Message-ID: <wug36r58.fsf@ccs.neu.edu>
Paolo Amoroso <·······@mclink.it> writes:

> On Mon, 02 Jun 2003 08:36:50 -0700, ···@jpl.nasa.gov (Erann Gat) wrote:
> 
> > In article <····························@4ax.com>, Paolo Amoroso
> > <·······@mclink.it> wrote:
> [...]
> > >   http://alu.cliki.net/Industry%20Application
> > >   http://alu.cliki.net/Research%20Organizations
> > >   http://alu.cliki.net/Success%20Stories
> > >   http://alu.cliki.net/Evaluate%20Lisp
> > >   http://alu.cliki.net/Consultant
> [...]
> > It's definitely not crap.  On the other hand, it's not a slam dunk either.
>                                                            ^^^^^^^^^
> My English disassembler--currently in Beta--chokes on this. Can you help?

It is a term that originally comes from basketball.  A `slam dunk' is
a basketball shot where the player jumps into the air and throws the
ball directly down into the hoop.  Clearly such a shot has no chance
whatsoever of missing.

Erann finds those pages compelling, but not so overwhelmingly
convincing as to sway the kind of people he's been up against.
From: Kenny Tilton
Subject: Re: RoboCup or Bust! [was Re: ANN: Lisp-2 vs Lisp-1 argument has   been settled]
Date: 
Message-ID: <3EDCD06D.10701@nyc.rr.com>
Paolo Amoroso wrote:
> On Mon, 02 Jun 2003 08:36:50 -0700, ···@jpl.nasa.gov (Erann Gat) wrote:
> 
> 
>>In article <····························@4ax.com>, Paolo Amoroso
>><·······@mclink.it> wrote:
> 
> [...]
> 
>>>  http://alu.cliki.net/Industry%20Application
>>>  http://alu.cliki.net/Research%20Organizations
>>>  http://alu.cliki.net/Success%20Stories
>>>  http://alu.cliki.net/Evaluate%20Lisp
>>>  http://alu.cliki.net/Consultant
>>
> [...]
> 
>>It's definitely not crap.  On the other hand, it's not a slam dunk either.
> 
>                                                            ^^^^^^^^^
> My English disassembler--currently in Beta--chokes on this. Can you help?

Run it thru your basketball pre-processor first. :)

    http://www.bartleby.com/61/57/D0425700.html


-- 

  kenny tilton
  clinisys, inc
  http://www.tilton-technology.com/
  ---------------------------------------------------------------
"Everything is a cell." -- Alan Kay
From: Paolo Amoroso
Subject: Re: RoboCup or Bust! [was Re: ANN: Lisp-2 vs Lisp-1 argument has     been settled]
Date: 
Message-ID: <ZOjcPsmO21m+HJIX+uOgK2Re9Xb2@4ax.com>
On Tue, 03 Jun 2003 16:44:00 GMT, Kenny Tilton <·······@nyc.rr.com> wrote:

> Paolo Amoroso wrote:
> > On Mon, 02 Jun 2003 08:36:50 -0700, ···@jpl.nasa.gov (Erann Gat) wrote:
[...]
> >>It's definitely not crap.  On the other hand, it's not a slam dunk either.
> > 
> >                                                            ^^^^^^^^^
> > My English disassembler--currently in Beta--chokes on this. Can you help?
> 
> Run it thru your basketball pre-processor first. :)

(disasm (play :off "article.asm"))


Paolo
-- 
Paolo Amoroso <·······@mclink.it>
From: Paolo Amoroso
Subject: Re: RoboCup or Bust! [was Re: ANN: Lisp-2 vs Lisp-1 argument has   been settled]
Date: 
Message-ID: <qObcPvz3jgsimmD3Z1p2x88NQzhx@4ax.com>
On Tue, 03 Jun 2003 18:17:25 +0200, Paolo Amoroso <·······@mclink.it>
wrote:

> So, in short, the information should be reasonably current and up to date.
> And, of course, I strongly encourage Lispers to edit those pages for
> updates and corrections.

In other words: those pages are meant to be current and up to date, and any
inaccuracy should be considered a problem to fix.


Paolo
-- 
Paolo Amoroso <·······@mclink.it>
From: Erann Gat
Subject: [offtopic] Yachts (was: Re: RoboCup or Bust! [was Re: ANN: Lisp-2 vs Lisp-1 argument has been settled])
Date: 
Message-ID: <gat-0206031459020001@k-137-79-50-101.jpl.nasa.gov>
In article <················@nyc.rr.com>, Kenny Tilton
<·······@nyc.rr.com> wrote:

> > What I really want is a nice fiberglass yacht with a GPS navigation system
> > and a fully stocked bar.
> 
> The bar sounds fine, but throw in radar. Wood is cozier and offers a 
> smoother ride if you can afford the maintenance and don't mind going a 
> little slower. Holds up better against the ice, as well. Anyway...

Here's something to shoot for:

http://www.ibiza-spotlight.com/net-news/lady_moura_020720_i.htm

Note the helicopter on the stern.

I actually saw Lady Moura berthed in Monaco once.  She is quite a sight.

E.
From: Paolo Amoroso
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <1BTaPjDa4QNkjHFgDZdDBMBfuynj@4ax.com>
On Sat, 31 May 2003 21:27:38 -0700, ···@jpl.nasa.gov (Erann Gat) wrote:

    [Erann:]
> > > Question for any newcomers who may still be following this thread: how
> > > many of you would say that you have been positively influenced towards
> > > Common Lisp by Ray de Lacaze?
[...]
> I had never heard of Ray before, so I looked him up to try to figure out
> what he had done that would move Kenny to cite him as a role model on a
> par with Ernest Shackleton.  The only thing I could find was that he'd
> attended some standards meetings.  Since the topic at hand was recruiting
> new people to CL I thought I'd go straight to the source.  Why do you have

This is a hint that in the Lisp world, a lot more is going on than you are
aware of.

Ray is the ALU president and one of the main driving forces behind ILC. He
is working hard on--human--networking, especially for addressing the
"newcomers" problem and bringing more business users to Lisp. And in case
you are also wondering about the "low profile" of ALU, Ray is working on
that too.


Paolo
-- 
Paolo Amoroso <·······@mclink.it>
From: Daniel Barlow
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <87brxilqy7.fsf@noetbook.telent.net>
···@jpl.nasa.gov (Erann Gat) writes:

> Question for any newcomers who may still be following this thread: how
> many of you would say that you have been positively influenced towards
> Common Lisp by Ray de Lacaze?

For what it's worth, I've had more than a few people ask me about Lisp
when I was wearing my ILC2002 T-shirt at local Linux user group events.

I don't know whether they decided to go on and learn the language, nor
how to correctly apportion credit between the conference organizer and
the clothes horse.  But we're all on the same side, right?


-dan

-- 

   http://www.cliki.net/ - Link farm for free CL-on-Unix resources 
From: Joe Marshall
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <rJ6Ca.1084789$S_4.1092633@rwcrnsc53>
"Kenny Tilton" <·······@nyc.rr.com> wrote in message ·····················@nyc.rr.com...
>
> While you are resting up, read up on the Shackleton voyage.
>
>     http://www.pbs.org/wgbh/nova/shackleton/
>
> Four of the men map nicely onto lispniks.
>
> I'll cast you as Thomas Orde-Lees.
>
> Ray De Lacaze gets the plum part, Ernest Shackleton, ...
>
> I guess my interminable cheerleading has me against all odds playing the
> part of seaman Timothy McCarthy....

So Erik would be Amundsen....
From: Timothy Moore
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <bbbp5l$2av$0@216.39.145.192>
Kenny Tilton <·······@nyc.rr.com> writes:
 
> I guess my interminable cheerleading has me against all odds playing
> the part of seaman Timothy McCarthy (nice last name, that):  "He is
> the most irrepressible optimist I've ever met," Worsley wrote. "When I
> relieve him at the helm, boat iced and seas pouring down your neck, he
> informs me with a happy grin, `It's a grand day, sir.'"
> 
> I'd actually rather play one of the sled dogs, but they get euthanized
> when the ice thaws and the crew has to hit the lifeboats.
> 

Actually, Kenny, I'd mix expedition metaphors and cast you as Apsley
Cherry-Garrard, cheerfully setting off for Cape Crozier in the dead of
the Antarctic winter to look for penguin eggs.

Tim
From: Wade Humeniuk
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <S6XBa.3268$MM4.128711@news0.telusplanet.net>
"Wade Humeniuk" <····@nospam.nowhere> wrote in message
·························@news2.telusplanet.net...
> 3) Put a big poster up in your office with a ton of Lisp (useful and
> not-useful) code for the occasional visitor to see out of the corner
> of their eye.  Just put it there, do not point it out to your visitors,
> no pressure sales technique.


Thinking about number 3 a bit more.  How about a nice poster, movie
size with pictures of the JPL hardware systems you worked on that have
Lisp code inside?  Overlay with snippets of your code that are in
each system.  Title it, hang it up on your wall.  I'll send you $20
if I can have a copy.  If you have trouble doing the
poster I am sure someone here with some artistic ability and a
graphics program can help out.  I would even be willing to give it a go.

Wade
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-3105030008080001@192.168.1.51>
In article <·····················@news0.telusplanet.net>, "Wade Humeniuk"
<····@nospam.nowhere> wrote:

> "Wade Humeniuk" <····@nospam.nowhere> wrote in message
> ·························@news2.telusplanet.net...
> > 3) Put a big poster up in your office with a ton of Lisp (useful and
> > not-useful) code for the occasional visitor to see out of the corner
> > of their eye.  Just put it there, do not point it out to your visitors,
> > no pressure sales technique.
> 
> 
> Thinking about number 3 a bit more.  How about a nice poster, movie
> size with pictures of the JPL hardware systems you worked on that have
> Lisp code inside?  Overlay with snippets of your code that are in
> each system.  Title it, hang it up on your wall.  I'll send you $20
> if I can have a copy.  If you have trouble doing the
> poster I am sure someone here with some artistic ability and a
> graphics program can help out.  I would be even willing to give it a go.

That's actually not a half-bad idea.  I in fact have no artistic ability
whatsoever, but if someone knows a graphics artist who is willing to step
up to the plate I'll be happy to work with them and make this happen.

Anyone know how much it costs to do a production run of posters?

E.
From: Wade Humeniuk
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <RHJCa.3887$QF2.979761@news2.telusplanet.net>
"Erann Gat" <···@jpl.nasa.gov> wrote in message ·························@192.168.1.51...
> That's actually not a half-bad idea.  I in fact have no artistic ability
> whatsoever, but if someone knows a graphics artist who is willing to step
> up to the plate I'll be happy to work with them and make this happen.
> 
> Anyone know how much it costs to do a production run of posters?


Any luck or takers yet?

Wade
From: Erann Gat
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <gat-0206030816190001@192.168.1.51>
In article <·····················@news2.telusplanet.net>, "Wade Humeniuk"
<····@nospam.nowhere> wrote:

> "Erann Gat" <···@jpl.nasa.gov> wrote in message
·························@192.168.1.51...
> > That's actually not a half-bad idea.  I in fact have no artistic ability
> > whatsoever, but if someone knows a graphics artist who is willing to step
> > up to the plate I'll be happy to work with them and make this happen.
> > 
> > Anyone know how much it costs to do a production run of posters?
> 
> 
> Any luck or takers yet?

Nope.

E.
From: Pascal Costanza
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <bbfu9a$pn2$1@f1node01.rhrz.uni-bonn.de>
Erann Gat wrote:
> In article <·····················@news2.telusplanet.net>, "Wade Humeniuk"
> <····@nospam.nowhere> wrote:
> 
>>"Erann Gat" <···@jpl.nasa.gov> wrote in message
> 
> ·························@192.168.1.51...
> 
>>>That's actually not a half-bad idea.  I in fact have no artistic ability
>>>whatsoever, but if someone knows a graphics artist who is willing to step
>>>up to the plate I'll be happy to work with them and make this happen.
>>>
>>>Anyone know how much it costs to do a production run of posters?
>>
>>Any luck or takers yet?
> 
> Nope.

There is probably a print office / print shop near your home or office. 
Just call and ask them; that's usually pretty simple.

The price of printing posters depends on many factors, from the number 
and quality of colors you want to use up to the size and quality 
of the posters themselves. The number of posters usually doesn't 
contribute much to the price. However, it's hard to predict the 
actual cost.

If this helps I would be willing to buy one or even a few, depending on 
the price. Perhaps you can find a sponsor - Franz, Xanalys, Digitool, 
Corman, the usual suspects.

Perhaps this could be a competition at the ILC - the attendees of the 
conference vote for a poster design and the one with the most votes gets 
printed and sent to each attendee. There should be people at some local 
groups that are able to design posters.


As usual, I am just brainstorming. ;-)

Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Rainer Joswig
Subject: LispWare
Date: 
Message-ID: <c366f098.0305311250.a26d171@posting.google.com>
"Wade Humeniuk" <····@nospam.nowhere> wrote in message news:<····················@news2.telusplanet.net>...

Some comments, Wade.

> A few low-cost (and maybe a little sneaky) suggestions.
> 
> 1) Leave Lisp literature sitting around.  Around your desk, in the
> coffee room, maybe even the washroom.  The idea that someone
> is using it, evidenced by the written word is very powerful.  If someone
> asks who it is, say nothing.  Just let natural human curiosity
> do its thing.  The one real thing that got me interested in trying Lisp
> was a ACM special magazine edition about Lisp. (about 12 years ago?)
> 
> As a corollary, leave literature around lamenting the state of the
> software development art.

Today it means you have to set up a webserver written in Lisp and put
some Lisp stuff on it.

> 2) Bring an old Lisp machine in, set it up somewhere and leave it on.
> Before I really knew Lisp a Co-worker brought a Symbolics LispM
> into his office.  I was intrigued that such a modern looking system
> existed so long ago.  He also hooked it up to DEC Net and interfaced
> with the systems.  My one thought was, wow, it was way ahead of its
> time.  Or maybe you have an old DEC alpha machine that has the
> Symbolics emulator on it.  In a place like JPL there must be surplus
> scrap like that sitting around.

You could even use the Lisp machine for some development. ;-)

Also, always have a running Lisp system on your laptop - in case
somebody looks.

Currently I often ride a very nice and comfortable train.
I travel in the office car, where you can use laptops.
Sometimes I do a bit of recreational hacking. Recently some guy watched
me doing some Lisp stuff. A few weeks later he asked me on the
same train how he could become a professional software developer. ;-)
(Now I would have to explain to him that Objective C is the wrong
tool.)

> 3) Put a big poster up in your office with a ton of Lisp (useful and
> not-useful) code for the occasional visitor to see out of the corner
> of their eye.  Just put it there, do not point it out to your visitors,
> no pressure sales technique.

I was thinking a little bit about something related lately.
A larger poster with a nice overview of the ANSI CL
functionality would be nice to have. Plus, say, one or two
posters about some of the usual libraries. I have a Reference
Card for the Symbolics stuff - now, it would be nice to have some
reference posters for CL and related libraries (say, the implementation
libs of MCL, LispWorks, ACL, ... maybe CLIM, ...).

Btw., the Franz Store is a really good idea: https://secure.franz.com/store/home
From: Joe Marshall
Subject: Re: LispWare
Date: 
Message-ID: <brxgbl2i.fsf@ccs.neu.edu>
······@corporate-world.lisp.de (Rainer Joswig) writes:

> Also, always have a running Lisp system on your laptop - in case
> somebody looks.

That's a good idea.  Wait.  I already do that.  
More than one, in fact.
From: Bob Bane
Subject: Re: LispWare
Date: 
Message-ID: <3EDB745B.9090102@removeme.gst.com>
Rainer Joswig wrote:

> "Wade Humeniuk" <····@nospam.nowhere> wrote in message news:<····················@news2.telusplanet.net>...
> 
>>
>>As a corollary, leave literature around lamenting the state of the
>>software development art.
>>
> 
> Today it means you have to set up a Lisp written webserver and have
> some Lisp stuff on it.
> 
> ...
> 
> Also, always have a running Lisp system on your laptop - in case
> somebody looks.
> 


(Opens iBook, sees screen completely obscured by multiple Carbon EMACS 
windows, ILISP buffers connected to OpenMCL, running 
PortableAllegroServe...)

Check.
From: Rainer Joswig
Subject: Re: LispWare
Date: 
Message-ID: <joswig-6210EE.19212802062003@news.fu-berlin.de>
In article <················@removeme.gst.com>,
 Bob Bane <····@removeme.gst.com> wrote:

> Rainer Joswig wrote:
> 
> > "Wade Humeniuk" <····@nospam.nowhere> wrote in message 
> > news:<····················@news2.telusplanet.net>...
> > 
> >>
> >>As a corollary, leave literature around lamenting the state of the
> >>software development art.
> >>
> > 
> > Today it means you have to set up a Lisp written webserver and have
> > some Lisp stuff on it.
> > 
> > ...
> > 
> > Also, always have a running Lisp system on your laptop - in case
> > somebody looks.
> > 
> 
> 
> (Opens iBook, sees screen completely obscured by multiple Carbon EMACS 
> windows, ILISP buffers connected to OpenMCL, running 
> PortableAllegroServe...)
> 
> Check.
> 
> 
> 
> 

Hmm, probably similar to this:
http://lemonodor.com/archives/images/openmcl-ide-rainer.png
;-) (OpenMCL, Emacs, Mac OS X Interface Builder)



Also, make sure you have a 3D graphics demo at hand:

http://lemonodor.com/archives/images/mcl-opengl-bild-6.jpg

(my Powerbook with MCL and Alexander Repenning's cool
OpenGL lib).
From: Paolo Amoroso
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <RqfYPhZPAowuGXn0W+lSz+WWmILA@4ax.com>
On Sat, 31 May 2003 01:56:51 GMT, "Wade Humeniuk" <····@nospam.nowhere>
wrote:

> 1) Leave Lisp literature sitting around.  Around your desk, in the
> coffee room, maybe even the washroom.  The idea that someone

Something similar to what the Bookcrossing community does:

  http://www.bookcrossing.com


> do its thing.  The one real thing that got me interested in trying Lisp
> was an ACM special magazine edition about Lisp. (about 12 years ago?)

It was probably the September 2001 issue.


Paolo
-- 
Paolo Amoroso <·······@mclink.it>
From: Paolo Amoroso
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <0cnYPoTDBfd5XARsUefvb2tSMru5@4ax.com>
On Sat, 31 May 2003 15:00:28 +0200, Paolo Amoroso <·······@mclink.it>
wrote:

> On Sat, 31 May 2003 01:56:51 GMT, "Wade Humeniuk" <····@nospam.nowhere>
> wrote:
> 
> > 1) Leave Lisp literature sitting around.  Around your desk, in the
> > coffee room, maybe even the washroom.  The idea that someone
> 
> Something similar to what the Bookcrossing community does:
> 
>   http://www.bookcrossing.com

Hey, they have even liberated 3 Lisp books!

http://www.bookcrossing.com/search/?title=lisp&author=&category=&isbn=&bcid=&=Search


Paolo
-- 
Paolo Amoroso <·······@mclink.it>
From: Gareth McCaughan
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <slrnbdjljd.1i9.Gareth.McCaughan@g.local>
Paolo Amoroso wrote:

[Wade Humeniuk:]
> > do its thing.  The one real thing that got me interested in trying Lisp
> > was a ACM special magazine edition about Lisp. (about 12 years ago?)
>
> It was probably the September 2001 issue.

Gosh, how time flies. Was 2001 really about 12 years ago?
There was a Lisp edition of CACM in September *1991*. :-)

-- 
Gareth McCaughan  ················@pobox.com
.sig under construc
From: Paolo Amoroso
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <yPbZPjcz4y14Mp2x2ly+W2DFq56r@4ax.com>
On Sun, 1 Jun 2003 11:33:17 +0100, Gareth McCaughan
<················@pobox.com> wrote:

> Paolo Amoroso wrote:
> 
> [Wade Humeniuk:]
> > > do its thing.  The one real thing that got me interested in trying Lisp
> > > was a ACM special magazine edition about Lisp. (about 12 years ago?)
> >
> > It was probably the September 2001 issue.
> 
> Gosh, how time flies. Was 2001 really about 12 years ago?
> There was a Lisp edition of CACM in September *1991*. :-)

Ooops... you are right. I have got my own millennium bug.


Paolo
-- 
Paolo Amoroso <·······@mclink.it>
From: Nikodemus Siivola
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <bb8lp4$5jch6$1@midnight.cs.hut.fi>
Erann Gat <···@jpl.nasa.gov> wrote:

>> If you care about the popularity, why don't you spend more of your
>> effort on the non-technical issues that more directly affect
>> popularity?

> Like what?  I'm always open to suggestions.

Focusing on the positive? I don't mean "see no evil, hear no evil", just a
shift of focus in your postings. 

You are a vocal and experienced lisp user, but a great deal of your
postings focus on improving CL in non-ANSI ways: it is only natural that
this will be reflected in the opinions of newcomers.

Cheers,

  -- Nikodemus
From: Paolo Amoroso
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <B6PYPgONTNoqy2ffkm+aOUBcZVEs@4ax.com>
On Fri, 30 May 2003 14:26:21 -0700, ···@jpl.nasa.gov (Erann Gat) wrote:

> In article <··············@sidious.geddis.org>, Don Geddis
> <···@geddis.org> wrote:
[...]
> > Hence I wonder, since your real concern seems to be Lisp's popularity, why
> > you are so worried about technical features in the core of the language?
> 
> Because I don't have millions of dollars to put into a marketing campaign,
> so I do what I can with what I have.  One of the most effective marketing

What about adding a few more entries to the following pages?

  http://alu.cliki.net/Industry%20Application
  http://alu.cliki.net/Research%20Organizations
  http://alu.cliki.net/Success%20Stories
  http://alu.cliki.net/Evaluate%20Lisp
  http://alu.cliki.net/Consultant

It will cost you a few bucks less.


> >  If you care about
> > the popularity, why don't you spend more of your effort on the non-technical
> > issues that more directly affect popularity?
> 
> Like what?  I'm always open to suggestions.

See above.


Paolo
-- 
Paolo Amoroso <·······@mclink.it>
From: Jeff Massung
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <vdfn9il2enl54b@corp.supernews.com>
Don Geddis <···@geddis.org> wrote in
···················@sidious.geddis.org: 


Sorry to interject here, this thread is huge and finding a good spot to 
begin is not easy :)

As a newcomer to Lisp, perhaps a fresh [newbie] mind would help.

> 
> 2. You believe that parts of CL design are complicated/confusing, and
> this is a barrier to new people learning and embracing the language.

One could argue that C is nothing more than a syntax to call and manipulate 
functions written in assembly. In said environment, to learn C, all one 
needs to do is learn the syntax and a few keywords. Then it is a simple 
matter of learning new libraries of functions for the platform/processor 
you want to program. The same with FORTH, Pascal, BASIC and most other 
mainstream languages.

CL (and Smalltalk) seem to suffer from the concept that the language isn't 
the syntax, but instead the libraries. For me, sitting down to learn Lisp 
is not a matter of learning where to put ()'s, but rather CAR, CDR, 
NUMBERP, etc. And with the sheer size of CL, this is just plain daunting. 
Not to mention that in the CL spec there is nothing (that I know of) for 
interfacing with hardware or the OS. So, as a beginner, once I have CL, 
what do I do with it?

That last question is a serious one. Perhaps I really need a project that 
could benefit from using CL -- in which case, CL suffers [debatably] from 
being a niche language (like FORTH on embedded processors).

>
> 3. Thus you have proposed "fixes" to the CL standard, which would make
> it more accessible to newbies.  In addition, you have a meta-complaint
> that the CL community appears to be against any change to the ANSI
> standard.  Thus you wonder how to ever fix the problems you see in #2.

Changing the standard would "break" numerous programs. This is not good. 
But standards are bad (IMHO) anyways. They suddenly limit creativity and 
the evolution of a language.

-- 
Best regards,
 Jeff                          ··········@mfire.com
                               http://www.simforth.com
From: Duane Rettig
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <4he7bbzrn.fsf@beta.franz.com>
Jeff Massung <···@NOSPAM.mfire.com> writes:

 [ ... ]

> Not to mention that in the CL spec there is nothing (that I know of) for 
> interfacing with hardware or the OS. So, as a beginner, once I have CL, 
> what do I do with it?

I don't quite understand this position.  A language with no ties to its
surroundings is probably pretty useless.  However, CL has functionality
such as get-universal-time, open, and write, so I disagree with your
contention.

But perhaps what you really mean is that the CL spec makes no provision
for external connection with _libraries_ of other languages.  That is
true (as it tends to be with many languages), but most CL implementations
define a way to access non-lisp functionality and data by way of a foreign
function interface (FFI), and in fact it becomes pseudo-standard if you
program in the UFFI style (http://uffi.b9.com/).

Or perhaps you mean that the CL definition is not well integrated with
window systems.  Perhaps this is a feature; the commercial lisps tend to
provide their own IDEs through which you can have such interaction.  If
CL were to specify such 

> That last question is a serious one. Perhaps I really need a project that 
> could benefit from using CL -- in which case, CL suffers [debateably] from 
> being a niche language (like FORTH on embedded processors).

Grab our Trial version and run through the tutorials.  Maybe that will give
you some ideas.

> > 3. Thus you have proposed "fixes" to the CL standard, which would make
> > it more accessible to newbies.  In addition, you have a meta-complaint
> > that the CL community appears to be against any change to the ANSI
> > standard.  Thus you wonder how to ever fix the problems you see in #2.
> 
> Changing the standard would "break" numerous programs. This is not good. 
> But standards are bad (IMHO) anyways. They suddenly limit creativity and 
> the evolution of a language.

The CL standard is hardly limiting.  In fact, the major concept that
is carried through the CL spec (which follows from the Lisp mindset) is
that Lisp is extensible, and thus a good tool for creating new languages.
Many of our more successful customers are successful in this very way,
by knowing what language they want for their problem domain, and by then
extending the CL language to that domain.  Lisp is indeed a programmable
programming language, and the CL standard does not limit a programmer from
creating his own domain-specific language.
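
A tiny sketch of that kind of extension: WHILE is not part of the CL
standard, but a few lines of DEFMACRO add it to the language in place.

```lisp
;; WHILE is not a standard CL operator; this defines it in
;; terms of the standard DO macro, extending the language.
(defmacro while (test &body body)
  `(do () ((not ,test))
     ,@body))

;; Usage: prints 0, 1, 2.
(let ((n 0))
  (while (< n 3)
    (print n)
    (incf n)))
```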

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Jeff Massung
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <vdhhalsgvb8f52@corp.supernews.com>
Duane Rettig <·····@franz.com> wrote in
··················@beta.franz.com: 

> Jeff Massung <···@NOSPAM.mfire.com> writes:
> 
>  [ ... ]
> 
>> Not to mention that in the CL spec there is nothing (that I know of)
>> for interfacing with hardware or the OS. So, as a beginner, once I
>> have CL, what do I do with it?
> 
> I don't quite understand this position.  A language with no ties to
> its surroundings is probably pretty useless.  However, CL has
> functionality such as get-universal-time, open, and write, so I
> disagree with your contention.
> 

Perhaps I am simply showing off my ignorance of the language (as I said, 
I'm still a newbie). :)

My position is in no way geared toward knocking CL, but rather to give an 
outsider's view as to why it isn't "mainstream" or "easy to learn" -- the 
topic I was replying to.

As I read PG's "ANSI Common Lisp" (a great book, btw) and compare the 
knowledge inside it with my knowledge of other languages, the main 
difference I come across is libraries.

When I learned C [a very long time ago], the entire language was built 
around some very simple concepts. Once those concepts were learned, 
"porting" that information to numerous systems was simply a matter of 
learning new include files, function names and what they did.

However, CL appears to differ in this regard. When I read about "Pure 
Lisp", the same seems true: a core set of simple rules, from which numerous 
Lisp dialects and systems could be generated (Scheme, etc). CL is not that 
way. CL /is/ the libraries and macros that make it what it is. And it is 
_huge_. CL doesn't appear to be a matter of learning a few simple rules, 
but rather, memorizing an entire dictionary.

CL's ability to handle macros is phenomenal! Nothing like it! It's grand! 
But the fact that some of the most basic keywords (i.e. DEFUN) are macros is 
both a testament to its greatness, and a setback to those wanting to learn. 
Imagine, instead of Latin being a core set of rules to create words, which 
are used to create sentences, someone sat down and tried to come up with 
every possible sentence you would ever want to say, and made those the 
language. Suddenly it becomes nearly impossible to just "pick up".

Perhaps this concept would be squashed by simply breaking up CL into 
manageable pieces (or libraries). I currently feel that when I start up a 
lisp implementation, every single library and function available is loaded 
up already -- as if each one is important for the system as a whole, 
whether I use it or not. This most likely isn't the way it is, but it 
definitely appears that way to a beginner.

I reiterate, perhaps I am displaying ignorance :)

IMHO, Lisp, itself, /should be/ and /is/ nothing but a set of small, simple 
rules for processing lists and symbols. Perhaps this is "Pure Lisp", 
perhaps it is something more [useful]. But CL shouldn't be the standard. 
Pure Lisp should be. CL should be nothing more than a package or dialect 
built off the standard. This would make Lisp easier to learn, more portable 
and much more manageable. As an example, COMPLEXP is being discussed in an 
offshoot of this thread. Complex numbers should _not_ be part of the 
standard, but an add-on for those who want or need that support. All it 
does is add complexity to a topic that at its core can be complex for many 
already.

> The CL standard is hardly limiting.  In fact, the major concept that
> is carried through the CL spec (which follows from the Lisp mindset)
> is that Lisp is extensible, and thus a good tool for creating new
> languages. 

Again, I proudly claim ignorance :)


-- 
Best regards,
 Jeff                          ··········@mfire.com
                               http://www.simforth.com
From: Duane Rettig
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <4he7bkqnb.fsf@beta.franz.com>
Jeff Massung <···@NOSPAM.mfire.com> writes:

> Duane Rettig <·····@franz.com> wrote in
> ··················@beta.franz.com: 
> 
> > Jeff Massung <···@NOSPAM.mfire.com> writes:
> > 
> >  [ ... ]
> > 
> >> Not to mention that in the CL spec there is nothing (that I know of)
> >> for interfacing with hardware or the OS. So, as a beginner, once I
> >> have CL, what do I do with it?
> > 
> > I don't quite understand this position.  A language with no ties to
> > its surroundings is probably pretty useless.  However, CL has
> > functionality such as get-universal-time, open, and write, so I
> > disagree with your contention.
> > 
> 
> Perhaps I am simply showing off my ignorance of the language (as I said, 
> I'm still a newbie). :)
> 
> My position is in no way geared toward knocking CL, but rather to give an 
> outsider's view as to why it isn't "mainstream" or "easy to learn" -- the 
> topic I was replying to.

Understood.

> As I read PG's "ANSI Common Lisp" (a great book, btw) and compare the 
> knowledge inside it with my knowledge of other languages, the main 
> difference I come across is libraries.
> 
> When I learned C [a very long time ago], the entire language was built 
> around some very simple concepts. Once those concepts were learned, 
> "porting" that information to numerous systems was simply a matter of 
> learning new include files, function names and what they did.

Well, this is true if you're only a user of libraries.  Part of the
C gestalt is to learn to create your own libraries, and as part of this
training, you learn not only how to use header files, but how to create
them.

> However, CL appears do differ in this regard. When I read about "Pure 
> Lisp", the same seems true: a core set of simple rules, from which numerous 
> Lisp dialects and systems could be generated (Scheme, etc). CL is not that 
> way. CL /is/ the libraries and macros that make it what it is. And it is 
> _huge_. CL doesn't appear to be a matter of learning a few simple rules, 
> but rather, memorizing an entire dictionary.

It is very similar to learning English.  I'm not sure in what context you
are reading about "Pure Lisp" - that term tends to be used only by purists,
and CL is not a pure language (doesn't claim to be, doesn't want to be).
It has a set of simple rules, with perhaps some exceptions, and a large
dictionary.  English is similar, with a core set of rules, along with a
large set of exceptions, and a huge dictionary.  Yet English's size, and
its impurity, have never stopped people from learning it, although I
hear people complain about how much harder it is to learn English as a
second language than other languages.  In the end, though, it tends to
become much more powerful, precisely because of its size and impurity.

> CL's ability to handle macros is phenomenal! Nothing like it! It's grand! 
> But the fact that some of the most basic keywords (ie DEFUN) are macros, is 
> both a testimate to its greatness, and a setback to those wanting to learn. 
> Imagine, instead of Latin being a core set of rules to create words, which 
> are used to create sentences, someone sat down and tried to come up with 
> every possible sentence you would ever want to say, and made those the 
> language. Suddenly it become near impossible to just "pick up".

I would characterize it differently.  In Latin, and Latin-based languages,
you can break down words by construction if you know the etymology.
Predefined sentences would be called "slang", and you might have to learn
those in order not to get yourself into trouble.

But CL has a way to break down slang (i.e. the macros) - you can always
macroexpand them to see precisely what they do.  I always recommend to
newbies and experienced CL programmers alike that

(pprint (macroexpand <defining-form>))

is their friend, in determining just what is going on with that macro
and when things are to be done.
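
For example (the exact output varies by implementation, since DEFUN's
expansion is implementation-specific):

```lisp
;; Peek under a defining macro; the expansion differs between
;; implementations, but it is always inspectable this way.
(pprint (macroexpand '(defun double (x) (* 2 x))))
```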

> Perhaps this concept would be squashed by simply breaking up CL into 
> manageable pieces (or libraries). I currently feel that when I start up a 
> lisp implementation, every single library and function available is loaded 
> up already -- as if each one is important for the system as a whole, 
> whether I use it or not. This most likely isn't the way it is, but it 
> definitely appears that way to a beginner.
> 
> I reiterate, perhaps I am displaying ignorance :)

Not ignorance, but a predisposition toward thinking about everything in
terms of libraries.  So OK, let's think in terms of libraries: when you
fired up your first Unix or Windows system, you probably didn't say "Wow,
I have all the functionality I need, here, in all of these libraries!
Where do I start?"  Instead, you just started, and perhaps you reinvented
some things that were already available in libraries.  Those libraries you
knew about, you started using, and as you grew into it, you started realizing
how to figure out whether or not some functionality you needed was already
available in a library.  Sometimes, you may have found, just asking about it
was all that was necessary.

Am I close?

> IMHO, Lisp, itself, /should be/ and /is/ nothing but a set of small, simple 
> rules for processing lists and symbols. Perhaps this is "Pure Lisp", 
> perhaps it is something more [useful]. But CL shouldn't be the standard. 
> Pure Lisp should be. CL should be nothing more than a package or dialect 
> built off the standard. This would make Lisp easier to learn, more portable 
> and much more manageable. As an example, in an offshoot of this thread is 
> being discussed COMPLEXP. Comlpex numbers should _not_ be part of the 
> standard, but an add-on for those who want or need that support. All it 
> does is add complexity to a topic that at its core can be complex for many 
> already.

This is where you will get yourself into trouble.  "Lisp" is huge - much,
much huger than CL is or ever will be.  But part of the history of CL
is that indeed CL was the standard that everyone involved in that process
decided upon.  It does indeed capture the essence of Lisp (although not
of "Pure Lisp", which is something else entirely, and probably
unattainable).  It has, as competitors in the Lisp universe, those lisps
which did not later become absorbed by it - emacs-lisp, Scheme, autolisp,
etc.  It is the case, in all of this, that "Pure Lisp" is not reconcilable;
in the Lisp universe there are fundamental differences about what Lisp
should be in its purest form, as perhaps you've seen in a few other threads
here.  So "Pure Lisp" is not a good goal, in and of itself.  If you value
purity highest, you might want Scheme, which purports to be purer than other
lisps.  It also purports to be simpler to learn, because it doesn't have such
a large dictionary, and it has libraries, like you may be used to.  One
caveat, though - Scheme tends to be more concentrated in academic settings,
and CL tends to give more emphasis on practical and commercializable
applications.  So your direction might be guided by that emphasis.

> > The CL standard is hardly limiting.  In fact, the major concept that
> > is carried through the CL spec (which follows from the Lisp mindset)
> > is that Lisp is extensible, and thus a good tool for creating new
> > languages. 
> 
> Again, I proudly claim ignorance :)

Ignorance need never be a permanent thing.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Jeff Massung
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <vdign7a4uj6oc1@corp.supernews.com>
Duane Rettig <·····@franz.com> wrote in
··················@beta.franz.com: 

> Jeff Massung <···@NOSPAM.mfire.com> writes:
>> When I learned C [a very long time ago], the entire language was
>> built around some very simple concepts. Once those concepts were
>> learned, "porting" that information to numerous systems was simply a
>> matter of learning new include files, function names and what they
>> did. 
> 
> Well, this is true if you're only a user of libraries.  Part of the
> C gestalt is to learn to create your own libraries, and as part of
> this training, you learn not only how to use header files, but how to
> crate them.
> 

Definitely.
 
>> CL's ability to handle macros is phenomenal! Nothing like it! It's
>> grand! But the fact that some of the most basic keywords (ie DEFUN)
>> are macros, is both a testimate to its greatness, and a setback to
>> those wanting to learn. Imagine, instead of Latin being a core set of
>> rules to create words, which are used to create sentences, someone
>> sat down and tried to come up with every possible sentence you would
>> ever want to say, and made those the language. Suddenly it become
>> near impossible to just "pick up". 
> 
> I would characterize it differently.  In Latin, and Latin-based
> languages, you can break down words by construction if you know the
> etymology. Predefined sentences would be called "slang", and you might
> have to learn those in order not to get yourself into trouble.
> 
> But CL has a way to break down slang (i.e. the macros) - you can
> always macroexpand them to se precisely what they do.  I always
> recommend to newbies and experienced CL programmers alike that
> 

Learning more each time I revisit this newsgroup :) I'll have to look at 
the macro expansion in more detail for certain.

> 
>> Perhaps this concept would be squashed by simply breaking up CL into 
>> manageable pieces (or libraries). I currently feel that when I start
>> up a lisp implementation, every single library and function available
>> is loaded up already -- as if each one is important for the system as
>> a whole, whether I use it or not. This most likely isn't the way it
>> is, but it definitely appears that way to a beginner.
>> 
>> I reiterate, perhaps I am displaying ignorance :)
> 
> Not ignorance, but a predisposition toward thinking about everything
> in terms of libraries.  So OK, let's think in terms of libraries: when
> you fired up your first Unix or Windows system, you probably didn't
> say "Wow, I have all the functionality I need, here, in all of these
> libraries! Where do I start?"  Instead, you just started, and perhaps
> you reinvented some things that were already available in libraries. 
> Those libraries you knew about, you started using, and as you grew
> into it, you started realizing how to figure out whether or not some
> functionality you needed was already available in a library. 
> Sometimes, you may have found, just Asking about it was all that was
> necessary. 
> 
> Am I close?

Yes. Perhaps the trouble I am having is that I am a minimalist at heart. 
And trusting libraries is something I don't like to do :)

But taking that approach a step further. How many people think that 
Windows is too bloated (even Linux now), simply because it is attempting 
two things: backwards compatibility and a be-all for everyone. Both are 
hideously bad reasons to do anything. Backwards compatibility is good, 
but only to a point. There comes a time where a trashcan and rewrite are 
in order.

My perspective so far has been that Lisp is an interpreter that keeps 
everything in an "image" - much like Smalltalk. When you are done 
programming, you type (compile) or some other command that will take the 
current image and create an executable or library. However, is the 
entire image compiled? or only what it used?

> This is where you will get yourself into trouble.  "Lisp" is huge -
> much, much huger than CL is or ever will be.  

Perhaps this is something I'm not understanding yet. All my research 
(thus far) has been that Lisp consists of 7 functions and a couple 
macros (i.e. DEFUN).

> Scheme tends to be more concentrated in academic
> settings, and CL tends to give more emphasis on practical and
> commercializable applications.  So your direction might be guided by
> that emphasis. 

I would definitely be pointed to the CL implementation. Thank you for 
your replies. I'm learning a lot - quickly :)

-- 
Best regards,
 Jeff                          ··········@mfire.com
                               http://www.simforth.com
From: Duane Rettig
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <4y90m9t09.fsf@beta.franz.com>
Jeff Massung <···@NOSPAM.mfire.com> writes:

> My perspective so far has been that Lisp is an interpreter that keeps 
> everything in an "image" - much like Smalltalk.

Well, sort of.  I'd prefer to think of it as an "interactive environment"
rather than an interpreted one.  Whether and how much it is interpreted
entirely depends on the system and on the user's coding style.

> When you are done 
> programming, you type (compile) so some other command that will take the 
> current image and create an executable or library. However, is the 
> entire image compiled? or only what it used?

In CL, there are two compiler entry points; compile, and compile-file.
Compile compiles a function, and compile-file compiles a file to a
binary file that can be loaded just as a source file can be loaded.

Many times on this newsgroup you might see something like

 (compile (defun foo (x) (bar x)))

which ensures that the function being defined is compiled.
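
And the file-oriented entry point looks something like this (the filename
is hypothetical):

```lisp
;; COMPILE-FILE writes a binary ("fasl") file, which LOAD
;; accepts just as it would the source file itself.
(load (compile-file "foo.lisp"))   ; "foo.lisp" is hypothetical
```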

> > This is where you will get yourself into trouble.  "Lisp" is huge -
> > much, much huger than CL is or ever will be.  
> 
> Perhaps this is something I'm not understanding yet. All my research 
> (thus far) has been that Lisp consists of 7 functions and a couple 
> macros (ie DEFUN)

Part of the history of Lisp is that it had diverged into many different
dialects.  In another part of this thread, you identified yourself
as a Forth expert.  Well, as a user of a highly extensible language, I'm
sure you've seen the trouble that can occur when different Forth gurus
extend their system in different ways.  So a user has just invented this
new whiz-bang word, or even a set of words that make up a module, and he
is really proud of it.  But other Forth users find that they have a better
way of doing things, and so divergence occurs.  It's been a _long_ time
since I've used Forth implementations (Figforth and Forth-78 (?) were the
ones I had used, and I got away from it before hearing that they had added
compilers as well), so I don't know what the state of the Forth world is
in today.  But I imagine that it must be hard to imagine Forth _not_
diverging, if only even a little, from the simple, hundred or so words that
were defined by Figforth.

Well, Lisp diverged quite a bit, and it got to the point where source was
not compatible between two different dialects, and porting was hard.
So the CL concept was developed in the early 80s, and standardized in the
early 90s.  Believe it or not, the CL standard was smaller than the sum of
its parts, although it did include many redundant pieces and techniques
from different dialects.  But the fact that it enclosed many different and
divergent dialects into one language, is why I say that Lisp is much huger
than CL; any slice in time before 1984 will show you a different set of
functions, macros, concepts, and styles that were in use then, which dwarf
CL by comparison.

> > Scheme tends to be more concentrated in academic
> > settings, and CL tends to give more emphasis on practical and
> > commercializable applications.  So your direction might be guided by
> > that emphasis. 
> 
> I would definitely be pointed to the CL implementation. Thank you for 
> your replies. I'm learning a lot - quickly :)

No problem, and welcome.


-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Jeff Massung
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <vdmplb7lb6e635@corp.supernews.com>
Duane Rettig <·····@franz.com> wrote in
··················@beta.franz.com: 

> 
> Part of the history of Lisp is that it had diverged into many
> different dialects.  In another part of this thread, you identified
> yourself as a Forth expert.  Well, as a user of a highly extensible
> language, I'm sure you've seen the trouble that can occur when
> different Forth gurus extend their system in different ways.  So a
> user has just invented this new whiz-bang word, or even a set of words
> that make up a module, and he is really proud of it.  But other Forth
> users find that they have a better way of doing things, and so
> divergence occurs.  It's been a _long_ time since I've used Forth
> implementations (Figforth and Forth-78 (?) were the ones I had used,
> and I got away from it before hearing that they had added compilers as
> well), so I don't know what the state of the Forth world is in today. 
> But I imaging that it must be hard to imagine Forth _not_ diverging,
> if only even a little, from the simple, hundred or so words that were
> defined by Figforth. 

IMHO, it is better to standardize a small set of words/functions and then let 
individual implementations grow to fit their own needs. In my mind this is 
important because the application is different (less bloat) and the environment 
is different (can make better use of the processor). The ones that are good 
then happen to become reused libraries of code.

One area that I'm still having trouble with is the abstraction that Lisp seems 
to place between the programmer and the hardware. C and Forth (especially) are 
always very closely related to the hardware. In fact, as I'm sure you know, 
Forth is nothing but an interpreted (in threaded cases) macro assembler. This 
makes it easy to understand, create and modify. Once the concepts of Forth are 
understood, creating an implementation for any processor is only semantics. 

I'm hoping that Lisp (what y'all would consider the CORE Lisp) is the same way. 
Common Lisp, however, definitely is not :)

I can understand the necessity to bring many of the diverges of Lisp together 
to make Common Lisp. And with everyone to please, a great job was done doing 
this. However, how much of CL is duplicated within itself?


-- 
Best regards,
 Jeff                          ··········@mfire.com
                               http://www.simforth.com
From: Duane Rettig
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <4add0pg7v.fsf@beta.franz.com>
Jeff Massung <···@NOSPAM.mfire.com> writes:

> Duane Rettig <·····@franz.com> wrote in
> ··················@beta.franz.com: 
> 
> > 
> > Part of the history of Lisp is that it had diverged into many
> > different dialects.  In another part of this thread, you identified
> > yourself as a Forth expert.  Well, as a user of a highly extensible
> > language, I'm sure you've seen the trouble that can occur when
> > different Forth gurus extend their system in different ways.  So a
> > user has just invented this new whiz-bang word, or even a set of words
> > that make up a module, and he is really proud of it.  But other Forth
> > users find that they have a better way of doing things, and so
> > divergence occurs.  It's been a _long_ time since I've used Forth
> > implementations (Figforth and Forth-78 (?) were the ones I had used,
> > and I got away from it before hearing that they had added compilers as
> > well), so I don't know what the state of the Forth world is in today. 
> > But I imagine that it must be hard for Forth _not_ to diverge,
> > if only even a little, from the simple hundred or so words that were
> > defined by Figforth. 
> 
> IMHO, it is better to standardize a small set of words/functions and then let 
> individual implementations grow to fit their own needs. In my mind this is 
> important because the application is different (less bloat) and the environment 
> is different (can make better use of the processor). The ones that are good 
> then happen to become reused libraries of code.

I think you're making some assumptions here, based on sound (but, as it
happens, incorrect) logic.  It is true that there are smaller systems than
CL systems.  However, there are also much larger ones, written in languages
not accused of having bloat.  I tend to see 4-8 Mb CL systems, and perhaps
with gui and modest application software this might go up to as much as
20 Mb, whereas some equivalent "smaller" languages might require 60-100 Mb
for the same functionality.  We keep versions of our Lisp in a development
environment that are built with less than a megabyte of heap, and whose total
memory consumption (including shared libraries) is less than 2 megabytes.

> One area that I'm still having trouble with is the abstraction that Lisp seems 
> to place between the programmer and the hardware. C and Forth (especially) are 
> always very closely related to the hardware. In fact, as I'm sure you know, 
> Forth is nothing but an interpreted (in threaded cases) macro assembler. This 
> makes it easy to understand, create and modify. Once the concepts of Forth are 
> understood, creating an implementation for any processor is only semantics. 

Forth has the dictionary as an abstraction.  Lisp has tagged pointers as an
abstraction.  However, most Lisp implementations give you access to the metal,
including shoot-yourself-in-the-foot weaponry for raw access to bits, bytes,
and words in memory without tags.  In Allegro CL, these functions are sys:memref
and sys:memref-int (and their setf-inverses).

> I'm hoping that Lisp (what y'all would consider the CORE Lisp) is the same way. 
> Common Lisp, however, definitely is not :)

I think you've changed your vocabulary, and I think I recognize the word CORE
as what you were earlier calling "Pure",  but you'll still have to define what
you mean by "the CORE Lisp" to me.

> I can understand the necessity to bring many of the divergent dialects of Lisp 
> together to make Common Lisp. And with everyone to please, a great job was done 
> doing this. However, how much of CL is duplicated within itself?

It's not such a big deal - For example, the CAR, CADR, CADDR, NTH, FIRST, SECOND,
THIRD (etc.) functions are all related and there is some redundancy there, but
to have left any of these techniques for naming list access out of the CL
definition would have been a mistake, because those names represent programmer
intent as well as pure functionality.
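For example, each of the following forms names the same access; the
redundancy is in the names, not the machinery:

```lisp
;; All of these return the third element of a list.
(defparameter *x* '(a b c d))

(caddr *x*)   ; compositional: (car (cdr (cdr *x*)))
(third *x*)   ; ordinal: states programmer intent directly
(nth 2 *x*)   ; indexed: useful when the position is computed
;; all three evaluate to C
```

Which one a programmer reaches for says something about what the list
_means_ in that piece of code, which is the point about intent.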

I would turn the question around and ask you: how much of Forth is duplicated
within itself?  When I first read the August '78 Byte magazine, I had never
seen Forth; it intrigued me, and the issue had many related articles, including
the Breakforth game (a version of Breakout).  I wrote a Forth implementation in
a couple of weeks, and then took another couple of weeks to enhance my
implementation to be able to interpret and play Breakforth on the Computer-
Automation mini I was working on (it was a little challenging, because I
only had a tty terminal, while the implementation in the article assumed a
memory-mapped terminal, so I had to implement a sort of backing store
for my tty terminal).
implementation and the game, as well as presenting a little ad-hoc Forth
tutorial, at a CA conference in Minnesota.  A couple of years later, I saw
the source to a Figforth implementation on some HP-1000 minis, and was
surprised at how many words were defined trivially in terms of other words
(e.g. why is this word defined? I could have just used the word inline,
instead of taking up a dictionary entry).  The Figforth implementation was
so much larger than mine had been - such a waste of space and so much
redundancy...

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Jeff Massung
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <vdn21kq0arjp9a@corp.supernews.com>
Duane Rettig <·····@franz.com> wrote in
··················@beta.franz.com: 

> Jeff Massung <···@NOSPAM.mfire.com> writes:

> 
>> One area that I'm still having trouble with is the abstraction that
>> Lisp seems to place between the programmer and the hardware. C and
>> Forth (especially) are always very closely related to the hardware.
> 
> Forth has the dictionary as an abstraction.  Lisp has tagged pointers
> as an abstraction.  However, most Lisp implementations give you access
> to the metal, including shoot-yourself-in-the-foot weaponry for raw
> access to bits, bytes, and words in memory without tags.  In Allegro
> CL, these functions are sys:memref and sys:memref-int (and their
> setf-inverses). 

Okay, I'm getting ahead of myself. I need to shut-up, read, and come back 
with questions instead of blind comments :)

> 
>> I'm hoping that Lisp (what y'all would consider the CORE Lisp) is the
>> same way. Common Lisp, however, definitely is not :)
> 
> I think you've changed your vocabulary, and I think I recognize the
> word CORE as what you were earlier calling "Pure",  but you'll still
> have to define what you mean by "the CORE Lisp" to me.

I switched, because I began getting the impression that I was using "pure 
lisp" incorrectly. 

What I mean by both "pure" and "core" would be these 2 questions: 

What would be the most basic functionality needed in a Lisp environment 
to be called Lisp? 

From which all other definitions and functions could be created?

When those questions are answered, that answer is "pure lisp" (in my 
mind). Obviously, that system would be practically useless in any 
environment other than educational.

> 
>> However, how much of CL is duplicated within itself? 
> 
> It's not such a big deal - For example, the CAR, CADR, CADDR, NTH,
> FIRST, SECOND, THIRD (etc.) functions are all related and there is
> some redundancy there, but to have left any of these techniques for
> naming list access out of the CL definition would have been a mistake,
> because those names represent programmer intent as well as pure
> functionality. 
> 
> I would turn the question around and ask you: how much of Forth is
> duplicated within itself?  [...]

No argument here. The ANSI Forth standard suffers from the same problem 
(if you consider it a problem).

In this area, I would think that Lisp actually has an advantage over 
Forth in many ways. In Lisp, one simply needs to write (I hope I get 
this right):

  (defmacro third (a) `(caddr ,a))

And you have THIRD defined with no additional overhead. Where, in Forth, 
a whole new function is needed:

  : TUCK SWAP OVER ;

The only reason I can think of to make TUCK an ANSI word in Forth is 
simply that it can be coded trivially in assembly, avoiding the 
overhead of a function call wrapping two subroutine calls. I'm sure many 
of the functions in Lisp fall into that category as well. My only concern 
(based on the original post I was replying to) is whether THIRD is needed 
as part of the CL standard, or whether it just adds another piece to an 
already large puzzle for a beginner; just throw it in a package that 
someone can load if they want.

The more I play with ACL, and the more I read (PG's books) the more I'm 
seeing and experiencing that Lisp is a grand tool. Sure, there may be 
differences of opinion on implementation, but those are semantics. I 
think in the end, our goals are one and the same...

BTW, my curiosity is getting the better of me for far too long on this 
thread to hold back more ignorance... What is Lisp-2? What is Lisp-1? Are 
they purely theoretical implementations?

-- 
Best regards,
 Jeff                          ··········@mfire.com
                               http://www.simforth.com
From: Duane Rettig
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <41xycpcws.fsf@beta.franz.com>
Jeff Massung <···@NOSPAM.mfire.com> writes:

> Duane Rettig <·····@franz.com> wrote in
> ··················@beta.franz.com: 
> 
> > Jeff Massung <···@NOSPAM.mfire.com> writes:
> 
> > 
> >> One area that I'm still having trouble with is the abstraction that
> >> Lisp seems to place between the programmer and the hardware. C and
> >> Forth (especially) are always very closely related to the hardware.
> > 
> > Forth has the dictionary as an abstraction.  Lisp has tagged pointers
> > as an abstraction.  However, most Lisp implementations give you access
> > to the metal, including shoot-yourself-in-the-foot weaponry for raw
> > access to bits, bytes, and words in memory without tags.  In Allegro
> > CL, these functions are sys:memref and sys:memref-int (and their
> > setf-inverses). 
> 
> Okay, I'm getting ahead of myself. I need to shut-up, read, and come back 
> with questions instead of blind comments :)
> 
> > 
> >> I'm hoping that Lisp (what y'all would consider the CORE Lisp) is the
> >> same way. Common Lisp, however, definitely is not :)
> > 
> > I think you've changed your vocabulary, and I think I recognize the
> > word CORE as what you were earlier calling "Pure",  but you'll still
> > have to define what you mean by "the CORE Lisp" to me.
> 
> I switched, because I began getting the impression that I was using "pure 
> lisp" incorrectly. 

I actually encourage the switch; "core" lisp states your intention, in the
Lisp context, much better than "pure" lisp, which has prior meaning.

> What I mean by both "pure" and "core" would be these 2 questions: 
> 
> What would be the most basic functionality needed in a Lisp environment 
> to be called Lisp? 

Ah yes, the eternal philosophical question.  There has been much discussion
about this, and what makes a Lisp a Lisp.  You might want to start with
http://www.paulgraham.com/diff.html, and there are threads in this
newsgroup that you could google for that reference this article (it is
not uncontroversial).  I'm sure there are other discussions about what
constitutes the core of lisp, including individual operators, that you
could find.

> From which all other definitions and functions could be created?

Since Lisp is a lot like Forth in its extensibility, I will go back to
it once again.  However, since the Forth I knew always had a tradition
of being tree-based (every word is either defined based on a lower-level
word, or on assembler code defining it), there is never any question as to
what the core is; just expand the definitions (as I recall, the disassembler
for Forth was the "source" word?) as far as you can go until they don't
expand in terms of lower-level words.

However, Lisp has a feature that changes everything; as I recall, Forth
words were always defined based on other words that had to already be
defined.  Lisp, on the other hand, has the tradition of allowing functions
to be defined to call other functions that have not yet been defined.
This causes the "core" concept to become clouded, because instead of
a simple tree graph, Lisp is a directed graph with circularities.  In fact,
it can be a pretty tight knot, and that makes it completely up to the
implementor as to what functions are written in terms of the others,
and which compile to primitive code.
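A small sketch of the contrast (all names here are hypothetical, not from
any implementation): the first pair is strictly tree-shaped, each definition
resting only on already-defined primitives, while the second pair uses the
forward references that tangle the graph:

```lisp
;; Tree-shaped layer: everything bottoms out in CAR and CDR,
;; so "the core" is unambiguous -- just expand the definitions.
(defun my-second (list) (car (cdr list)))
(defun my-third  (list) (car (cdr (cdr list))))

;; Lisp also permits calling functions not yet defined:
;; MY-EVENP refers to MY-ODDP before MY-ODDP exists.  Mutual
;; references like this are what turn the tree into a graph
;; with circularities.
(defun my-evenp (n) (if (zerop n) t (my-oddp (1- n))))
(defun my-oddp  (n) (if (zerop n) nil (my-evenp (1- n))))
```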

> When those questions are answered, that answer is "pure lisp" (in my 
> mind). Obviously, that system would be practically useless in any 
> environment other than educational.

Now that I know what it is you're referring to, I'd prefer to use "core"
instead of "pure".

> >> However, how much of CL is duplicated within itself? 
> > 
> > It's not such a big deal - For example, the CAR, CADR, CADDR, NTH,
> > FIRST, SECOND, THIRD (etc.) functions are all related and there is
> > some redundancy there, but to have left any of these techniques for
> > naming list access out of the CL definition would have been a mistake,
> > because those names represent programmer intent as well as pure
> > functionality. 
> > 
> > I would turn the question around and ask you: how much of Forth is
> > duplicated within itself?  [...]
> 
> No argument here. The ANSI Forth standard suffers from the same problem 
> (if you consider it a problem).

Not at all.  The only time it is a problem is when the implementor is
counting storage down to the bits used (something I've been accused of
on occasion :-) but more and more, memory usage is expanding much faster
in other languages than in CL.

> In this area, I would think that Lisp actually has an advantage over 
> Forth in many ways. In Lisp, one simply needs to write (I hope I get 
> this right):
> 
>   (defmacro third (a) `(caddr ,a))
> 
> And you have THIRD defined with no additional overhead. Where, in Forth, 
> a whole new function is needed:
> 
>   : TUCK SWAP OVER ;

Two issues here:

 1. The macro third would indeed have space overhead in the lisp, because it
is a definition.  True, it would be compiled away, but it does, in Forth
terms, need a Dictionary entry.

 2. Since in Ansi CL THIRD is defined to be a function, it is understood
by programmers that one can funcall or apply it:

 (defun foo (x y)
    (funcall x y))

 (foo #'third a)

As a macro, this is not possible, because macros can't be funcalled or
applied.

However...  with the sacrifice of a little more dictionary space:-),
one can write THIRD as both a function and a compiler macro:

 (defun my-third (a) (caddr a))

 (define-compiler-macro my-third (a) `(caddr ,a))

and get the best of both worlds - #'my-third can now be funcalled, and
also direct calls/funcalls to my-third in compiled code can be expanded
away to calls to caddr.
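To make that concrete, here is a quick REPL sketch (restating the two
definitions so it stands alone):

```lisp
;; MY-THIRD as both an ordinary function and a compiler macro.
(defun my-third (a) (caddr a))
(define-compiler-macro my-third (a) `(caddr ,a))

;; As a function object it can be passed around and funcalled:
(funcall #'my-third '(1 2 3))          ; => 3
(mapcar #'my-third '((a b c) (x y z))) ; => (C Z)

;; In compiled code, a direct call (my-third foo) is open-coded
;; by the compiler macro into (caddr foo) -- no call overhead.
```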

> The only reason I can think of to make TUCK an ANSI word in Forth, is 
> simply that it can be coded trivially in assembly, losing all the 
> overhead of a function call, with two subroutines inside.

If Forth had compiler macros, then TUCK would not even have to be _called_;
it could just be expanded in the code that "called" it...

> I'm sure many 
> of the functions in Lisp fall into that category as well. My only concern 
> (based on the original post I was replying to) is whether THIRD is needed 
> as part of the CL standard? or does it just add an additional piece to an 
> already large puzzle for a beginner? just throw it in a package that 
> someone can load if they want.

We put some CL functions into autoloadable modules in this manner.
But it is hard to know what functions are most used, and we usually
have it wrong.  Also, the amount of space saved for such exclusions
does not tend to be earth-shattering.  Many of our users just include
everything, because it is easier than trying to figure out what to
leave out.

> The more I play with ACL, and the more I read (PG's books) the more I'm 
=======================^^^  Ansi CL, here?   Just say CL, instead.
> seeing and experiencing that Lisp is a grand tool. Sure, there may be 
> differences of opinion on implementation, but those are semantics. I 
> think in the end, our goals are one and the same...
> 
> BTW, my curiosity is getting the better of me for far too long on this 
> thread to hold back more ignorance... What is Lisp-2? What is Lisp-1? Are 
> they purely theoretical implementations?

Lisp-1 places the function definition into the same place in a symbol as
the value definition (thus a name can only have a functional or value
definition, but not both).  A Lisp-2 (or more accurately, a Lisp-N)
has at least two namespaces - a value namespace and a function namespace,
so you can call a function and a variable by the same name without
conflict.  You saw an example of a third [sic] namespace above, with the
compiler-macro definition being placed into a different namespace than
the functional namespace.
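A minimal sketch of the two namespaces in action (the names are mine,
chosen for illustration):

```lisp
;; One symbol, two meanings: FLY names both a function and,
;; below, a lexical variable.  A Lisp-2 keeps them apart by
;; position: the operator slot of a form looks up the function
;; namespace, everywhere else looks up the value namespace.
(defun fly (x) (cons 'flies x))

(let ((fly '(the fly)))   ; FLY the variable
  (fly fly))              ; FLY the function applied to FLY the variable
;; => (FLIES THE FLY)
```

In a Lisp-1, the LET binding would shadow the function, and the call
would fail.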


-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Joe Marshall
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <znl0a4g5.fsf@ccs.neu.edu>
Jeff Massung <···@NOSPAM.mfire.com> writes:

> One area that I'm still having trouble with is the abstraction that
> Lisp seems to place between the programmer and the hardware.

There is less abstraction there than you might think.

Most lisp systems have mechanisms for getting very close to the
hardware.  Usually, you can do it with the right declarations.
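For instance (a sketch; the exact code generated varies by
implementation), type and optimization declarations let the compiler
open-code fixnum arithmetic:

```lisp
(defun sum-fixnums (a b)
  ;; Promise the compiler unboxed machine integers and ask it to
  ;; drop runtime type checks; most native-code CLs then compile
  ;; this to a handful of machine instructions, much like C.
  (declare (type fixnum a b)
           (optimize (speed 3) (safety 0)))
  (the fixnum (+ a b)))

;; (disassemble 'sum-fixnums) shows what the compiler produced.
```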
From: Nils Goesche
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <878ysm646w.fsf@darkstar.cartan>
Jeff Massung <···@NOSPAM.mfire.com> writes:

> Duane Rettig <·····@franz.com> wrote in
> ··················@beta.franz.com: 
> 
> > This is where you will get yourself into trouble.  "Lisp" is
> > huge - much, much huger than CL is or ever will be.
> 
> Perhaps this is something I'm not understanding yet. All my
> research (thus far) has been that Lisp consists of 7 functions
> and a couple macros (ie DEFUN)

You can ask Lisp:

(defun query-symbols ()
  (loop with specials = 0
        with macros = 0
        with functions = 0
        with variables = 0
        with others = 0
        for s being the external-symbols of "CL" do
        (cond ((special-operator-p s) (incf specials))
              ((macro-function s) (incf macros))
              ((fboundp s) (incf functions))
              ((boundp s) (incf variables))
              (t (incf others)))
        finally (format t "~&Found ~D specials, ~D macros, ~
                           ~D functions, ~D variables and ~D others."
                        specials macros functions variables others)))

JUNK 13 > (query-symbols)
Found 25 specials, 91 macros, 633 functions, 111 variables and 118 others.

(Some symbols should be counted several times, perhaps)

Regards,
-- 
Nils Gösche
Ask not for whom the <CONTROL-G> tolls.

PGP key ID #xD26EF2A0
From: Martin Rubey
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <9qn0h3jfp3.fsf@radon.mat.univie.ac.at>
Jeff Massung <···@NOSPAM.mfire.com> writes:

> Perhaps this concept would be squashed by simply breaking up CL into 
> manageable pieces (or libraries). I currently feel that when I start up a 
> lisp implementation, every single library and function available is loaded 
> up already -- as if each one is important for the system as a whole, 
> whether I use it or not. This most likely isn't the way it is, but it 
> definitely appears that way to a beginner.
> 
> I reiterate, perhaps I am displaying ignorance :)
> 
> IMHO, Lisp, itself, /should be/ and /is/ nothing but a set of small, simple 
> rules for processing lists and symbols. Perhaps this is "Pure Lisp", 
> perhaps it is something more [useful]. But CL shouldn't be the standard. 
> Pure Lisp should be. CL should be nothing more than a package or dialect 
> built off the standard. This would make Lisp easier to learn, more portable 
> and much more manageable. As an example, in an offshoot of this thread 
> COMPLEXP is being discussed. Complex numbers should _not_ be part of the 
> standard, but an add-on for those who want or need that support. All it 
> does is add complexity to a topic that at its core can be complex for many 
> already.

Maybe you want to confine yourself to a manageable subset of lisp at the
beginning. I don't think that CL will be in your way, and it seems to me that
this is quite what Paul Graham does. (I have to admit though that I didn't
*read* the book, only some sections of it)

Maybe a list of "essential" CL functions/macros would be good. Off the top of
my head:

Just start with car, cdr, cons, mapcar, defun, apply, do, let, and all those I
forgot...

However, I'm a mathematician, and I use Lisp to do maths, so my Lisp knowledge
is quite superficial. Hence, disclaimer: this was off the top of my head.

Martin
From: Kenny Tilton
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <3ED8BD2C.5010200@nyc.rr.com>
Jeff Massung wrote:
> Don Geddis <···@geddis.org> wrote in
> ···················@sidious.geddis.org: 
> 
> 
> Sorry to interject here, this thread is huge and finding a good spot to 
> begin is not easy :)
> 
> As a newcomer to Lisp, perhaps a fresh [newbie] mind would help.
> 
> 
>>2. You believe that parts of CL design are complicated/confusing, and
>>this is a barrier to new people learning and embracing the language.
> 
> 
> One could argue that C is nothing more than a syntax to call and manipulate 
> functions written in assembly. In said environment, to learn C, all one 
> needs to do is learn the syntax and a few keywords. Then it is a simple 
> matter of learning new libraries of functions for the platform/processor 
> you want to program. The same with FORTH, Pascal, BASIC and most other 
> mainstream languages.
> 
> CL (and Smalltalk) seem to suffer from the concept that the language isn't 
> the syntax, but instead the libraries. For me, sitting down to learn Lisp 
> is not a matter of learning where to put ()'s, but rather CAR, CDR, 
> NUMBERP, etc. And with the sheer size of CL, this is just plain daunting. 

Indeed. I think the trick is never to learn any new language. Just pick 
a fun little problem and try to solve it with a new language. I wager 
you know this already, but this way I do not read an entire language 
reference, I do the introductory tutorial in the first chapter and then 
attack the problem. I know what I want to do in my algorithm, I just 
have to find out how the new language does it. Lots more fun, except for 
the paper cuts from spending most of the first week tearing around the 
reference looking for the functionality I am after.

Of course this means I end up doing, say, C-style programming in Prolog 
for a week or so until the light bulb goes on, but it's worth it for the 
belly laughs I get reading my own week-old "C"rolog code. And that would 
likely happen anyway if I read the whole book in one go.

> Not to mention that in the CL spec there is nothing (that I know of) for 
> interfacing with hardware or the OS. So, as a beginner, once I have CL, 
> what do I do with it?

Sounds like the first thing you need to do is master the FFI so you 
can comfortably interface with the OS and any other C-friendly library. 
But if you get the right CL implementation, a lot of the FFI work will 
have been done for you. MCL for OS9 on the Mac, trial CLs or CormanCL on 
win32. Not sure what OS bindings are like in CLisp or Linux CLs, but 
others here will know.

> 
> That last question is a serious one. Perhaps I really need a project that 
> could benefit from using CL -- in which case, CL suffers [debatably] from 
> being a niche language (like FORTH on embedded processors).

I am curious. As a newbie, aren't you grooving on Lisp just as a 
language? I understand you also want to get some work done <g>, but 
hasn't the power of Lisp more than made up for, say, the need to muddle 
thru FFI documentation?

I think if the project involves programming, it will benefit from using 
CL. I've never done a lick of AI in my life, hope never to use any other 
language again. Chant after me: interactivity, GC, dynamism, 
reflexivity, macros, specials, closures, lexical scoping, multi-methods, 
multiple-inheritance, auto-indenting, mature/stable, compiled, ANSI std, 
untyped variables, strong run-time typing, that daunting huge library 
and set of built-in, nifty data structures... that stuff lets me build 
applications fast, whatever the application.


-- 

  kenny tilton
  clinisys, inc
  http://www.tilton-technology.com/
  ---------------------------------------------------------------
"Everything is a cell." -- Alan Kay
From: Jeff Massung
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <vdhie7b9r6q52@corp.supernews.com>
Kenny Tilton <·······@nyc.rr.com> wrote in
·····················@nyc.rr.com: 

> 
> 
> Jeff Massung wrote:
>> Don Geddis <···@geddis.org> wrote in
>> ···················@sidious.geddis.org: 
>> 

[...]

>> CL (and Smalltalk) seem to suffer from the concept that the language
>> isn't the syntax, but instead the libraries. For me, sitting down to
>> learn Lisp is not a matter of learning where to put ()'s, but rather
>> CAR, CDR, NUMBERP, etc. And with the sheer size of CL, this is just
>> plain daunting. 
> 
> Indeed. I think the trick is never to learn any new language. Just
> pick a fun little problem and try to solve it with a new language. I
> wager you know this already, but this way I do not read an entire
> language reference, 

Agreed. Definitely. The simple problem that I seem to be having is 
envisioning how Lisp can help me do program 'A' better than [say] C. 
Most likely, this is for two reasons: I don't know Lisp :) and I'm not 
thinking functionally. Both will be rectified with time.

> 
>> Not to mention that in the CL spec there is nothing (that I know of)
>> for interfacing with hardware or the OS. So, as a beginner, once I
>> have CL, what do I do with it?
> 
> Sounds like the first thing you need to do is master the FFI so you
> can comfortably interface with the OS and any other C-friendly
> library. But if you get the right CL implementation, a lot of the FFI
> work will have been done for you. 

Again, this is an area for which I have yet to find any information. I 
have to hunt for it. And for beginners, this is bad (IMHO). Is each FFI 
the same across CL? or different for each implementation? They would 
almost have to be different across operating systems (unless you wanted 
Tk/TCL or something like it), but what about for the same OS? Is Corman 
Lisp the same as Allegro CL? I personally don't know. If they are 
different, is this bad? perhaps, maybe not. Maybe one is significantly 
better than the other. Another topic to be sure :)

> I am curious. As a newbie, aren't you grooving on Lisp just as a 
> language? I understand you also want to get some work done <g>, but 
> hasn't the power of Lisp more than made up for, say, the need to
> muddle thru FFI documentation?

The power of Lisp is still eluding me. I still boil down to writing C-
like functions, and I have yet to write a single macro. By profession I 
am an embedded programmer and compiler writer. So while I look at the 
macros as being very cool, I also am very self-conscious about what they 
are doing "under the hood". Are they bloating my code unnecessarily? 
What kind of overhead do macros take?

> 
> I think if the project involves programming, it will benefit from
> using CL. I've never done a lick of AI in my life, hope never to use
> any other language again. Chant after me: [...]

I think I will agree with you whole-heartedly the more I learn :)



-- 
Best regards,
 Jeff                          ··········@mfire.com
                               http://www.simforth.com
From: Duane Rettig
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <4d6hzkpas.fsf@beta.franz.com>
Jeff Massung <···@NOSPAM.mfire.com> writes:

> Kenny Tilton <·······@nyc.rr.com> wrote in
> ·····················@nyc.rr.com: 
> 
> > 
> > 
> > Jeff Massung wrote:
> >> Don Geddis <···@geddis.org> wrote in
> >> ···················@sidious.geddis.org: 
> >> 
> 
> [...]
> 
> >> CL (and Smalltalk) seem to suffer from the concept that the language
> >> isn't the syntax, but instead the libraries. For me, sitting down to
> >> learn Lisp is not a matter of learning where to put ()'s, but rather
> >> CAR, CDR, NUMBERP, etc. And with the sheer size of CL, this is just
> >> plain daunting. 
> > 
> > Indeed. I think the trick is never to learn any new language. Just
> > pick a fun little problem and try to solve it with a new language. I
> > wager you know this already, but this way I do not read an entire
> > language reference, 
> 
> Agreed. Definitely. The simple problem that I seem to be having is 
> envisioning how Lisp can help me do program 'A' better than [say] C. 
> Most likely, this is for two reasons: I don't know Lisp :) and I'm not 
> thinking functionally. Both will be rectified with time.

Since you've been mentioning CL here, _please_ understand that CL does _not_
dictate functional programming, and thus it is not necessary to learn
functional programming style in order to learn CL.  CL _can_ be used in
a purely functional style, but it is not mandated, because CL is not
"pure" - it is "general".

> >> Not to mention that in the CL spec there is nothing (that I know of)
> >> for interfacing with hardware or the OS. So, as a beginner, once I
> >> have CL, what do I do with it?
> > 
> > Sounds like the first thing you need to do is master the FFI so you
> > can comfortably interface with the OS and any other C-friendly
> > library. But if you get the right CL implementation, a lot of the FFI
> > work will have been done for you. 
> 
> Again, this is an area for which I have yet to find any information. I 
> have to hunt for it. And for beginners, this is bad (IMHO). Is each FFI 
> the same across CL?

No, it is not specified by CL.

>  or different for each implementation?

Yes, unfortunately.  On the bright side, however, they tend to have mostly
similar components, and in fact there is a universal ffi package that
you can use on most CLs.

> They would 
> almost have to be different across operating systems (unless you wanted 
> Tk/TCL or something like it),

Not at all.  The whole purpose for an FFI is to hide system differences
from the user.  So when you write foreign functions in Allegro CL, therefore,
you don't need to worry about syntax and operation as per which operating
system you are on - a foreign call to read() is done the same way across
all platforms, and although you may need something like TCL/Tk or GTK to
unify _libraries_ across operating systems, this is precisely the same
problem you would have in calling out to C-based libraries from C, as
well.

See:

http://www.franz.com/support/documentation/6.2/doc/foreign-functions.htm

> but what about for the same OS? Is Corman 
> Lisp the same as Allegro CL? I personally don't know. If they are 
> different, is this bad? perhaps, maybe not. Maybe one is significantly 
> better than the other. Another topic to be sure :)

They are different, but it's not so bad; you can use UFFI
(http://uffi.b9.com/) if you want a level of portability between CL
implementations.
 
> > I am curious. As a newbie, aren't you grooving on Lisp just as a 
> > language? I understand you also want to get some work done <g>, but 
> > hasn't the power of Lisp more than made up for, say, the need to
> > muddle thru FFI documentation?
> 
> The power of Lisp is still eluding me. I still boil down to writing C-
> like functions, and I have yet to write a single macro. By profession I 
> am an embedded programmer and compiler writer. So while I look at the 
> macros as being very cool, I also am very self-conscious about what they 
> are doing "under the hood". Are they bloating my code unnecessarily? 
> What kind of overhead do macros take?

I think the term "macro _expansion_", which is a somewhat universal term
(at least in both C and CL), paints an incorrect picture.  Usually in
C, macros are indeed used only to stuff a lot of functionality into one
line, and so macro substitution truly becomes an "expansion".
Actually, though, macros can tend to _reduce_ overhead.  For example,
defstruct is a huge macro, which funnels structural information into simpler
access rules.  If the speed and safety settings are conducive to it, an
access can be reduced down to a single instruction in machine code, with
no loss of generality in the source.
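
The defstruct point is easy to see at a listener.  A minimal sketch
(the single-instruction claim depends on your implementation and
platform, so treat the disassembly note as a hypothesis to check):

```lisp
;; A typed structure: with speed high and safety low, the slot
;; accessor can be open-coded down to a raw memory load.
(defstruct point
  (x 0.0 :type single-float)
  (y 0.0 :type single-float))

(defun fast-x (p)
  (declare (optimize (speed 3) (safety 0))
           (type point p))
  (point-x p))

;; (disassemble 'fast-x) -- on many implementations the accessor is
;; inlined into a load instruction, not a function call.
```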

One of the most disastrous things C did to mess up macros in the minds
of its users was to make it gutless and non-C.  Think about it;
a _different_ compiler, cpp, is used to expand macros than is used to
process the C code.  There is no interaction with C constructs, no
decision-making ability at macro-expansion time, no ability to call
C user functions during the course of the macro-expansion.  It almost
dictates a stupid, expanding-code-only style.

Do you have to be concerned about macro expansions getting out of hand
in CL?  Of course.  But part of starting to write macros in CL is
getting used to the idea that you _can_ use the whole power of CL
in your macro, and that the resultant substitution form can be
powerfully simple.
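
For instance, here is a small sketch of a macro (REPEAT is a made-up
name, not a standard operator) that makes a decision at
macro-expansion time, using ordinary CL -- exactly what cpp cannot do:

```lisp
;; If the count is a small literal integer, unroll the body inline;
;; otherwise fall back to an ordinary DOTIMES loop.
(defmacro repeat (n &body body)
  (if (and (integerp n) (<= n 8))
      `(progn ,@(loop repeat n append body))   ; decision made while expanding
      (let ((counter (gensym)))                ; gensym avoids variable capture
        `(dotimes (,counter ,n) ,@body))))

;; (repeat 3 (emit))  expands to three EMIT forms -- no loop at all.
;; (repeat x (emit))  expands to a plain DOTIMES.
```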

> > I think if the project involves programming, it will benefit from
> > using CL. I've never done a lick of AI in my life, hope never to use
> > any other language again. Chant after me: [...]
> 
> I think I will agree with you whole-heartedly the more I learn :)

I agree with this.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Greg Menke
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <m3d6hydh22.fsf@europa.pienet>
Jeff Massung <···@NOSPAM.mfire.com> writes:

> Kenny Tilton <·······@nyc.rr.com> wrote in
> ·····················@nyc.rr.com: 
> 
> > 
> >> Not to mention that in the CL spec there is nothing (that I know of)
> >> for interfacing with hardware or the OS. So, as a beginner, once I
> >> have CL, what do I do with it?
> > 
> > Sounds like the first thing you need to do is master is the FFI so you
> > can comfortably interface with the OS and any other C-friendly
> > library. But if you get the right CL implementation, a lot of the FFI
> > work will have been done for you. 
> 
> Again, this is an area for which I have yet to find any information. I 
> have to hunt for it. And for beginners, this is bad (IMHO). Is each FFI 
> the same across CL? or different for each implementation? They would 
> almost have to be different across operating systems (unless you wanted 
> Tk/TCL or something like it), but what about for the same OS? Is Corman 
> Lisp the same as Allegro CL? I personally don't know. If they are 
> different, is this bad? perhaps, maybe not. Maybe one is significantly 
> better than the other. Another topic to be sure :)

FFI isn't for beginners, in the same way that using objdump to figure
out why some C code isn't setting registers correctly for a call into
assembly isn't for beginners either.  FFI isn't part of the language
spec, so implementations vary, though they do correspond with one
another to varying degrees.  For given implementations that have
versions for different operating systems, the FFI will be very
similar, only including variations to handle the idiosyncrasies.


> 
> > I am curious. As a newbie, aren't you grooving on Lisp just as a 
> > language? I understand you also want to get some work done <g>, but 
> > hasn't the power of Lisp more than made up for, say, the need to
> > muddle thru FFI documentation?
> 
> The power of Lisp is still eluding me. I still boil down to writing C-
> like functions, and I have yet to write a single macro. By profession I 
> am an embedded programmer and compiler writer. So while I look at the 
> macros as being very cool, I also am very self-concious about what they 
> are doing "under the hood". Are they bloating my code unnecessarily? 
> What kind of overhead do macros take?

If you're trying to write C style code in Lisp then you're not
"grokking" Lisp yet.  It will come with use.  The only runtime
overhead a macro imposes is in the code that was generated at
compile-time.  To put it in a C-like perspective, it's similar to the
overhead you would experience by inlining a C function and is subject
to the same tradeoffs.  I think your code will have to reach a certain
complexity before macros become meaningful and at that point your
effectiveness with Lisp will shift gears, so to speak.

Gregm
From: Jeff Massung
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <vdmr2ta7fikp2b@corp.supernews.com>
Greg Menke <··········@toadmail.com> wrote in 
···················@europa.pienet:

> 
> If you're trying to write C style code in Lisp then you're not
> "grokking" Lisp yet.  It will come with use.  The only runtime
> overhead a macro imposes is in the code that was generated at
> compile-time.  To put in a C-like perspective, its similar to the
> overhead you would experience by inlining a C function and is subject
> to the same tradeoffs. 

I think I'm understanding this now. The macros are executed at compile-
time and not run-time? This would make a huge difference.

Thanks.


-- 
Best regards,
 Jeff                          ··········@mfire.com
                               http://www.simforth.com
From: Joe Marshall
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <isroa3x6.fsf@ccs.neu.edu>
Jeff Massung <···@NOSPAM.mfire.com> writes:

> Greg Menke <··········@toadmail.com> wrote in 
> ···················@europa.pienet:
> 
> > 
> > If you're trying to write C style code in Lisp then you're not
> > "grokking" Lisp yet.  It will come with use.  The only runtime
> > overhead a macro imposes is in the code that was generated at
> > compile-time.  To put in a C-like perspective, its similar to the
> > overhead you would experience by inlining a C function and is subject
> > to the same tradeoffs. 
> 
> I think I'm understanding this now. The macros, are executed at compile-
> time and not run-time? This would make a huge difference.

The macros are *expanded* at compile time.  (Excuse me for being
overly pedantic.)  A macro is a lisp program that generates some lisp
code.  When the compiler sees the macro, it invokes it, gets the
generated code, and compiles *that*.  The generated code is run at
run-time, but the thing that generated it is run at compile time.
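
Sketched at a listener (SQUARE here is just a toy macro for
illustration):

```lisp
;; SQUARE is a tiny Lisp program that builds a list; the compiler
;; runs it once, at compile time, and compiles only what it returns.
(defmacro square (x) `(* ,x ,x))

(macroexpand-1 '(square 5))   ; => (* 5 5)

(defun f (a) (square a))      ; the compiler sees (* a a); no trace
                              ; of SQUARE remains at run time
```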
From: Duane Rettig
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <4el2cphun.fsf@beta.franz.com>
Joe Marshall <···@ccs.neu.edu> writes:

> Jeff Massung <···@NOSPAM.mfire.com> writes:
> 
> > Greg Menke <··········@toadmail.com> wrote in 
> > ···················@europa.pienet:
> > 
> > > 
> > > If you're trying to write C style code in Lisp then you're not
> > > "grokking" Lisp yet.  It will come with use.  The only runtime
> > > overhead a macro imposes is in the code that was generated at
> > > compile-time.  To put in a C-like perspective, its similar to the
> > > overhead you would experience by inlining a C function and is subject
> > > to the same tradeoffs. 
> > 
> > I think I'm understanding this now. The macros, are executed at compile-
> > time and not run-time? This would make a huge difference.
> 
> The macros are *expanded* at compile time.  (Excuse me for being
> overly pendantic.)  A macro is a lisp program that generates some lisp
> code.  When the compiler sees the macro, it invokes it, gets the
> generated code, and compiles *that*.  The generated code is run at
> run-time, but the thing that generated it is run at compile time.

A nit; macros are expanded at _macroexpand_ time.  When code is compiled,
macroexpand time occurs during the compilation process.  When interpreted
code is evaluated, macroexpand time is part of evaluation, and occurs
for each form just before the evaluation of the result form.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Pekka P. Pirinen
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <ix3cigwpu7.fsf@ocoee.cam.harlequin.co.uk>
Duane Rettig <·····@franz.com> writes:
> A nit; macros are expanded at _macroexpand_ time.  When code is compiled,
> macroexpand time occurs during the compilation process.  When interpreted
> code is evaluated, macroexpand time is part of evaluation, and occurs
> for each form just before the evaluation of the result form.

An even smaller nit: The distinction between interpreted and compiled
is blurry; some implementations have an interpreter that preprocesses
the code before executing it.  In such a situation, function bodies
are macroexpanded when they are defined.  Furthermore - and Duane
knows this well - complex macros sometimes examine their arguments,
macroexpanding them, in order to work some transformation on the code.
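A tiny sketch of that last point (EXPANSION-OF is an illustrative
name, not a standard operator): a macro can expand its own argument,
passing along the lexical environment, before deciding what to emit.

```lisp
(defmacro square (x) `(* ,x ,x))

;; Expand the argument form in the current lexical environment and
;; return the expansion as a quoted list.
(defmacro expansion-of (form &environment env)
  `',(macroexpand form env))

(expansion-of (square 5))   ; evaluates to the list (* 5 5)
```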
-- 
Pekka P. Pirinen
Es ist nicht gesagt das es besser wird wenn es anders wird.  Wenn es aber
besser werden soll muss es anders werden.  -- G. Ch. Lichtenberg
From: Kenny Tilton
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <3ED8ECCE.5070503@nyc.rr.com>
Jeff Massung wrote:
> Kenny Tilton <·······@nyc.rr.com> wrote in
>>I am curious. As a newbie, aren't you grooving on Lisp just as a 
>>language? I understand you also want to get some work done <g>, but 
>>hasn't the power of Lisp more than made up for, say, the need to
>>muddle thru FFI documentation?
> 
> 
> The power of Lisp is still eluding me. I still boil down to writing C-
> like functions, and I have yet to write a single macro.

Yeah, it took me a few weeks to get to my first tentative macro, tho 
like you I grokked the potential straightaway.

> By profession I 
> am an embedded programmer and compiler writer.

Pardon the flattery, but you may have become so strong at what you do 
that it is harder for you to "let go". I should have switched to 
snowboarding years before I did, but I never wanted to take the hit and 
give up even my modest skiing proficiency.[1]

> So while I look at the 
> macros as being very cool, I also am very self-concious about what they 
> are doing "under the hood". Are they bloating my code unnecessarily? 

They go the other way, returning to the compiler the best code possible 
given the arguments of each invocation.

> What kind of overhead do macros take?

Whatever it is, it is at compile-time. :) But I know what you mean, it 
is unnerving having someone else handling the low-level stuff. I imagine 
you examine C compiler output to see what kind of ASM is being genned. 
But I had this happen to me doing Mac programming in C. I blithely used 
diff fonts to highlight diff bits of a math expression, and made this 
change to an educational app just before a trade show. Three days of 
apologizing for the app's slowness. Who knew SetFont was a pig? :)


> I think I will agree with you whole-heartedly the more I learn :)

I'm doing an informal survey: what got an old dawg like you to try the 
new trick of Lisp? It sounds like this is a pretty serious exploration, 
not just a toe dip.

[1] Everyone is wrong! Ski technique /does/ carryover to snowboarding.

-- 

  kenny tilton
  clinisys, inc
  http://www.tilton-technology.com/
  ---------------------------------------------------------------
"Everything is a cell." -- Alan Kay
From: Thomas F. Burdick
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <xcvisrrndd5.fsf@famine.OCF.Berkeley.EDU>
Kenny Tilton <·······@nyc.rr.com> writes:

> Jeff Massung wrote:
>
> > What kind of overhead do macros take?
> 
> Whatever it is, it is at compile-time. :) But I know what you mean, it
> is unnerving having someone else handling the low-level stuff. I
> imagine you examine C compiler output to see what kind of ASM is being
> genned.

Even for those of us that sometimes get obsessive about what exactly
is going on under the hood, CL can be quite wonderful:

  - Expand one level of macros (macroexpand-1).  Read, see what
    happened.

  - Expand the form completely (macroexpand).  A little more
    enlightening.

  - If your implementation has a code-walker, do a macroexpand-all.

  - If you're using CMUCL, compile and create a trace file.  Look at
    that.  Other implementations may have something similar for this.

  - Compile and disassemble your function.

You get so many steps between the source and the machine code, you can
see *exactly* what's going on.
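
A worked run of the first and last steps, using a throwaway macro
(WITH-DOUBLED is invented for the example):

```lisp
(defmacro with-doubled (var &body body)
  `(let ((,var (* 2 ,var))) ,@body))

;; One level of expansion:
(macroexpand-1 '(with-doubled x (print x)))
;; => (LET ((X (* 2 X))) (PRINT X))

;; Full expansion is the same here, since LET is a special operator:
(macroexpand '(with-doubled x (print x)))

;; Compile and inspect the machine code:
(defun demo (x) (with-doubled x (+ x 1)))
(compile 'demo)
(disassemble 'demo)
```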

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: Jeff Massung
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <vdig3b2l058rf9@corp.supernews.com>
Kenny Tilton <·······@nyc.rr.com> wrote in
·····················@nyc.rr.com: 

> 
> 
> Jeff Massung wrote:
>> I think I will agree with you whole-heartedly the more I learn :)
> 
> I'm doing an informal survey: what got an old dawg like you to try the
> new trick of Lisp? It sounds like this is a pretty serious
> exploration, not just a toe dip.

To be honest, FORTH. I'm a big Forth programmer (embedded) and have
written a few Forth compilers. C works, but I hate it - especially C++.
However, in the desktop realm, it appears to be the "way". I remembered
reading a while ago that Chuck Moore had based much of his design of
Forth around many of the concepts in Lisp. This intrigued me. And after
a deeper look, it appeared that Lisp was the "way of the warrior". 

I had begun a new hobby project (compiler for the GameBoy Advance) and
thought I would check out Lisp as something new with a project in mind.
I was blown away. I had a difficult time (and still do) finding a Lisp
environment that wasn't text-based that I could use to work in, so that
was a first setback. Next was just the sheer size. 

My thought process was to break it into manageable pieces. I thought
that I didn't need everything CL had to offer, but I liked the concepts.
I had a "burst of inspiration". For anyone that has ever programmed
Forth it boils down to: Forth is nothing but a super macro assembler. CL
seems to be designed around macros, so if I designed a Lisp
implementation that was nothing but an assembler, it should be one hell
of an assembler. However, finding information on the basics of a Lisp
compiler is nearly non-existent. And after many searches on Google, it
appears few if any people have attempted. 

Recently I found a book online: Lisp In Small Pieces. I have my
bookstore ordering a copy. Any reviews? 

So here I am. A programmer in search of Lisp :) I like what I see, want
to add a contribution, and learn what I can. My life has always been
geared around two concepts: Change and Challenge. This is my new
challenge so to speak. 

BTW, I thank all who have been replying. It's done nothing but
encourage. 

-- 
Best regards,
 Jeff                          ··········@mfire.com
                               http://www.simforth.com
From: Duane Rettig
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <4u1ba9sk3.fsf@beta.franz.com>
Jeff Massung <···@NOSPAM.mfire.com> writes:

> Kenny Tilton <·······@nyc.rr.com> wrote in
> ·····················@nyc.rr.com: 
> 
> > 
> > 
> > Jeff Massung wrote:
> >> I think I will agree with you whole-heartedly the more I learn :)
> > 
> > I'm doing an informal survey: what got an old dawg like you to try the
> > new trick of Lisp? It sounds like this is a pretty serious
> > exploration, not just a toe dip.
> 
> To be honest, FORTH. I'm a big Forth programmer (embedded) and have
> written a few Forth compilers. C works, but I hate it - especially C++.
> However, in the desktop realm, it appears to be the "way". I remembered
> reading a while ago that Chuck Moore had based much of his design of
> Forth around many of the concepts in Lisp. This intrigued me. And after
> a deeper look, it appeared that Lisp was the "way of the warrior". 

Yes, for example, the <builds does> construct seems like a very lispy
concept.  I got my first inkling of class behaviors from that concept
while learning Forth via the Aug, 1978 Byte magazine (and, like any
good hacker, writing my own Forth interpreter for the Computer Automation
mini I was working on at the time).

> I had begun a new hobby project (compiler for the GameBoy Advance) and
> thought I would check out Lisp as something new with a project in mind.
> I was blown away. I had a difficult time (and still do) finding a Lisp
> environment that wasn't text-based that I could use to work in, so that
> was a first setback. Next was just the sheer size. 

Size is sometimes a problem, but less and less so nowadays.

> My thought process was to break it into manageable pieces. I thought
> that I didn't need everything CL had to offer, but I liked the concepts.
> I had a "burst of inspiration". For anyone that has every programmed
> Forth it boils down to: Forth is nothing but a super macro assembler. CL
> seems to be designed around macros, so if I designed a Lisp
> implementation that was nothing but an assembler, it should be one hell
> of an assembler. However, finding information on the basics of a Lisp
> compiler is near non-existant. And after many searches on Google, it
> appears few if any people have attempted. 

Check out the Naughtydog story:

http://www.franz.com/success/customer_apps/animation_graphics/naughtydog.lhtml

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Nils Goesche
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <87d6hy64uy.fsf@darkstar.cartan>
Jeff Massung <···@NOSPAM.mfire.com> writes:

> However, finding information on the basics of a Lisp compiler
> is near non-existant. And after many searches on Google, it
> appears few if any people have attempted.
> 
> Recently I found a book online: Lisp In Small Pieces. I have my
> bookstore ordering a copy. Any reviews?

A great book.  You have found what you were looking for.

Regards,
-- 
Nils Gösche
Ask not for whom the <CONTROL-G> tolls.

PGP key ID #xD26EF2A0
From: Jens Axel Søgaard
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <3ed9b339$0$97270$edfadb0f@dread12.news.tele.dk>
Nils Goesche wrote:
> Jeff Massung <···@NOSPAM.mfire.com> writes:
> 
> 
>>However, finding information on the basics of a Lisp compiler
>>is near non-existant. And after many searches on Google, it
>>appears few if any people have attempted.
>>
>>Recently I found a book online: Lisp In Small Pieces. I have my
>>bookstore ordering a copy. Any reviews?
> 
> 
> A great book.  You have found what you were looking for.

Yes. "Lisp In Small Pieces" is excellent.

For papers on this and that related to compiler implementation
of Lisp-like languages see also:

     <http://library.readscheme.org/page8.html>

-- 
Jens Axel Søgaard
From: Paolo Amoroso
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <6xTaPrrJTShvPJ8Y51pn3gT8Sg8q@4ax.com>
On Sat, 31 May 2003 23:53:15 -0000, Jeff Massung <···@NOSPAM.mfire.com>
wrote:

> Recently I found a book online: Lisp In Small Pieces. I have my
> bookstore ordering a copy. Any reviews? 

"Probably the best book ever on how to write Lisp compilers and
interpreters is Christian Queinnec's Lisp in Small Pieces".

                                   -- Peter Norvig


Paolo
-- 
Paolo Amoroso <·······@mclink.it>
From: Kenny Tilton
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <3EDA33CD.1030404@nyc.rr.com>
Jeff Massung wrote:
> Kenny Tilton <·······@nyc.rr.com> wrote in
> ·····················@nyc.rr.com: 
> 
> 
>>
>>Jeff Massung wrote:
>>
>>>I think I will agree with you whole-heartedly the more I learn :)
>>
>>I'm doing an informal survey: what got an old dawg like you to try the
>>new trick of Lisp? It sounds like this is a pretty serious
>>exploration, not just a toe dip.
> 
> 
> To be honest, FORTH. I'm a big Forth programmer (embedded) and have
> written a few Forth compilers. C works, but I hate it - especially C++.
> However, in the desktop realm, it appears to be the "way". I remembered
> reading a while ago that Chuck Moore had based much of his design of
> Forth around many of the concepts in Lisp. This intrigued me. 

You prompted me to pull out my HOPL II book. Cool! Another one-person 
language. I never knew that. I was puzzled because the Forth paper which 
listed Moore as an author does not mention Lisp, but this (a rejected 
paper) does:

    http://www.colorforth.com/HOPL.html

> And after
> a deeper look, it appeared that Lisp was the "way of the warrior". 
> 
> I had begun a new hobby project (compiler for the GameBoy Advance) and
> thought I would check out Lisp as something new with a project in mind.
> I was blown away. 

That happens a lot. :) But I am puzzled, earlier you said you had yet to 
grok the power of Lisp.

> I had a difficult time (and still do) finding a Lisp
> environment that wasn't text-based that I could use to work in, so that
> was a first setback. 

That is a big problem for Lisp. ACL has it right.


> Recently I found a book online: Lisp In Small Pieces. I have my
> bookstore ordering a copy. Any reviews? 

I'm just an apps guy, but thems that know better groove on it big time. 
You also motivated me to read a little more (I buy these things, I just 
don't read them.) So you plan to write a Lisp which compiles into, say, 
GameBoy machine language? And since you control the compiler, you can 
leave out the big CL runtime. Cool. Will your Lisp have GC?

> 
> My life has always been
> geared around two concepts: Change and Challenge. 

I find it's the only way I can get out of bed in the afternoon.

> BTW, I thank all who have been replying. It's done nothing but
> encourage. 

I don't know about other CLLers, but newbie stories like yours are 
encouraging me, which is why I pester people for them. It seems there 
are a lot of different things that lead folks to take a look at Lisp, 
but the commonality is:

  -- early adopter (ok, duh)
  -- dissatisfaction with alternatives, entrenched or nouveau
  -- the eureka! reaction once they try it

But don't all those parentheses just drive you nuts? :)


-- 

  kenny tilton
  clinisys, inc
  http://www.tilton-technology.com/
  ---------------------------------------------------------------
"Everything is a cell." -- Alan Kay
From: Jeff Massung
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <vdmqu47gdkn1d3@corp.supernews.com>
Kenny Tilton <·······@nyc.rr.com> wrote in
·····················@nyc.rr.com: 

> You prompted me to pull out my HOPL II book. Cool! Another one-person 
> language. I never knew that. I was puzzled because the Forth paper
> which 
>   listed Moore as an author does not mention Lisp, but this (a
>   rejected 
> paper) does:
> 
>     http://www.colorforth.com/HOPL.html

Hardly a "rejected paper" :) ColorForth is Moore's latest Forth
evolution or discovery as he would say. Most people today respect Moore,
but think of him as a "crazy old hermit" so to speak. 

> 
> That happens a lot. :) But I am puzzled, earlier you said you had yet
> to grok the power of Lisp.

Sorry, all, I have yet to understand what "grok" means :)

I see possibilities, if that is what you mean. But I have yet to make
use of them. 

> 
> That is a big problem for Lisp. ACL has it right.

I assume ACL is Allegro Common Lisp. I just downloaded it, installed it,
and am trying it out. It seems quite nice, but I hesitate to begin using it
simply because of the price tag *shiver*. 

> 
> 
> I'm just an apps guy, but thems that know better groove on it big
> time. You also motivated me to read a little more (I buy these things,
> I just don't read them.) So you plan to write a Lisp which compiles
> into, say, GameBoy machine language? And since you control the
> compiler, you can leave out the big CL runtime. Cool. Will your Lisp
> have GC? 

GC? Probably not. Eventually, maybe, but not in the beginning. For now,
I'm just trying to make a macro assembler with Lisp-ish syntax and
compile-time functions. For example: 

;; assembles a list of instructions
(defun assemble (instructions)
    (if (null instructions)
        nil
        (progn
            ;; renamed from COMPILE so it doesn't shadow CL:COMPILE;
            ;; emits a single instruction
            (compile-instruction (car instructions))
            (assemble (cdr instructions)))))

;; assembles a function - inline
(defun peek (dest-reg src-reg)
    ;; backquote, so the instruction list is data rather than a call
    (assemble `((ldrh ,dest-reg ,src-reg))))

(peek 'r0 'r1)

(peek r0 r1)

I can't find anything on the web where someone has attempted this. So I
think I'm testing new waters. Of course, the example above is
ridiculous, being nothing more than an exchange of symbols, but I think
you get the idea. 

Has this been done before? URL? What does everyone think?


> I don't know about other CLLers, but newbie stories like yours are 
> encouraging me, which is why I pester people for them. 

Glad to know I'm contributing!


-- 
Best regards,
 Jeff                          ··········@mfire.com
                               http://www.simforth.com
From: Duane Rettig
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <465nopezr.fsf@beta.franz.com>
Jeff Massung <···@NOSPAM.mfire.com> writes:

> > That is a big problem for Lisp. ACL has it right.
> 
> I assume ACL is Allegro Common Lisp.

Many on this newsgroup use the acronym ACL to mean different things,
including Allegro CL, or ANSI Common Lisp.  You have to go by context.
I had assumed that Kenny had meant ANSI Common Lisp in this context,
but he'll have to verify or deny that.

> I just downloaded it, install and
> trying it out. It seems quite nice, but I hesitate to begin using it
> simply because of the price tag *shiver*. 

You have to have downloaded the Trial version.  What about Free
makes you shiver?

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Jeff Massung
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <vdn32a192s7p6d@corp.supernews.com>
Duane Rettig <·····@franz.com> wrote in ··················@beta.franz.com:

> You have to have downloaded the Trial version.  What about Free
> makes you shiver?
> 

I was under the impression from the site that the license I was emailed is 
only good for 60 days, after which time, it ceases to work without 
purchasing.

-- 
Best regards,
 Jeff                          ··········@mfire.com
                               http://www.simforth.com
From: Duane Rettig
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <4wug4ny97.fsf@beta.franz.com>
Jeff Massung <···@NOSPAM.mfire.com> writes:

> Duane Rettig <·····@franz.com> wrote in ··················@beta.franz.com:
> 
> > You have to have downloaded the Trial version.  What about Free
> > makes you shiver?
> > 
> 
> I was under the impression from the site that the license I was emailed is 
> only good for 60 days, after which time, it ceases to work without 
> purchasing.

Nope.  You can renew as often as you want.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Kenny Tilton
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <3EDBB37B.2080601@nyc.rr.com>
Jeff Massung wrote:
> Duane Rettig <·····@franz.com> wrote in ··················@beta.franz.com:
> 
> 
>>You have to have downloaded the Trial version.  What about Free
>>makes you shiver?
>>
> 
> 
> I was under the impression from the site that the license I was emailed is 
> only good for 60 days, after which time, it ceases to work without 
> purchasing.
> 

The good news is: "The Trial Edition may be used for 60 days.  After 60 
days, you will need to run the "newlicense" program found in your 
Allegro directory to obtain a new license file."

The bad news is if you want to go commercial with something called, say, 
Dragon Lisp <g>: http://www.franz.com/downloads/license.lhtml
Then you need the even more shivery Enterprise edition /and/ you need to 
negotiate runtime licensing. Brrrrr!

Well, talk to Franz. Or develop under ACL and deliver with LW. Something 
like that.

Nice web site, btw. DragonBasic looks like a hit.

-- 

  kenny tilton
  clinisys, inc
  http://www.tilton-technology.com/
  ---------------------------------------------------------------
"Everything is a cell." -- Alan Kay
From: Jeff Massung
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <vdne0f30bd8e60@corp.supernews.com>
Kenny Tilton <·······@nyc.rr.com> wrote in
·····················@nyc.rr.com: 


> 
> Nice web site, btw. DragonBasic looks like a hit.
> 

Thanks. So far so good... My current dilemma is that I was coding for
non-low-level programmers. They love it, but as they learn more about the 
system now want more low-level access.

My solution so far (in theory, I have yet to code any of it) is to 
develop what I call an "api assembler". Something that can be used at a 
slightly higher level than assembly, but low-level enough to see register 
usage and memory access. 

The end goal is to have DragonBASIC be able to include the api assembler 
code and vice-versa. Letting the end-user make plugins for the language, 
etc.

Still working out the details, though.

-- 
Best regards,
 Jeff                          ··········@mfire.com
                               http://www.simforth.com
From: Kenny Tilton
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <3EDBAD1D.3060404@nyc.rr.com>
Duane Rettig wrote:
> Jeff Massung <···@NOSPAM.mfire.com> writes:
> 
> 
>>>That is a big problem for Lisp. ACL has it right.
>>
>>I assume ACL is Allegro Common Lisp.
> 
> 
> Many on this newsgroup use the acronym ACL to mean different things,
> including Allegro CL, or Ansi Commpon Lisp.  You have to go by context.
> I had assumed that Kenny had meant Ansi Commoon Lisp in this context,
> but he'll have to verify or deny that.

No, I thought I heard Jeff saying he was not crazy about the IDEs he was 
finding, and in that context I recommended Allegro CL (for win32, at 
least -- btw, jes curious, does ACL for other platforms have the same 
IDE?). After doing porting work to LW, CormanCL, and Clisp, and after 
consultation with other lispniks about those products as well as CMUCL, 
I think ACL is the only IDE amongst those that has it Just Right. MCL 
comes in a respectable second.

btw, I was not aware ACL was used by some to refer to Ansi Common Lisp. 
Not that this changes the fact of such usage, but is that not redundant? 
Doesn't CL convey the Ansi thing? No big.


-- 

  kenny tilton
  clinisys, inc
  http://www.tilton-technology.com/
  ---------------------------------------------------------------
"Everything is a cell." -- Alan Kay
From: Duane Rettig
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <4smqsnrsj.fsf@beta.franz.com>
Kenny Tilton <·······@nyc.rr.com> writes:

> Duane Rettig wrote:
> > Jeff Massung <···@NOSPAM.mfire.com> writes:
> >
> 
> >>>That is a big problem for Lisp. ACL has it right.
> >>
> >>I assume ACL is Allegro Common Lisp.
> > Many on this newsgroup use the acronym ACL to mean different things,
> 
> > including Allegro CL, or Ansi Commpon Lisp.  You have to go by context.
> > I had assumed that Kenny had meant Ansi Commoon Lisp in this context,
> > but he'll have to verify or deny that.
> 
> No, I thought I heard Jeff saying he was not crazy about the IDEs he
> was finding, and in that context I recommended Allegro CL (for win32,
> at least -- btw, jes curious, does ACL for other platforms have the
> same IDE?).

Not yet :-)

>  After doing porting work to LW, CormanCL, and Clisp, and
> after consultation with other lispniks about those products as well as
> CMUCL, I think ACL is the only IDE amongst those that has it Just
> Right. MCL comes in a respectable second.

Thanks!

> btw, I was not aware ACL was used by some to refer to Ansi Common
> Lisp. Not that this changes the fact of such usage, but is that not
> redundant? Doesn't CL convey the Ansi thing? No big.

Yes, but there are those who use ACL for either usage (as an old 
SysV-er, to me ACL actually means Access Control List) and since
it is faster to type, I doubt that usage of the acronym ACL will ever
go away completely, just as some newbies still use "clisp" as an
abbreviation for "Common Lisp", rather than the specific implementation.

CL is indeed unambiguous, and I always use it unless I am spelling it
out for emphasis.  On the other hand, though I try to take ACL in
context whenever used (even by Franz personnel) I try never to use
it myself, and always spell out or shorten it when writing about
whatever the context is suggesting.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Joe Marshall
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <he788al8.fsf@ccs.neu.edu>
Duane Rettig <·····@franz.com> writes:

> Yes, but there are those who use ACL for either usage (as an old 
> SysV-er, to me ACL actually means Access Control List) and since
> it is faster to type, I doubt that usage of the acronym ACL will ever
> go away completely, just as some newbies still use "clisp" as an
> abbreviation for "Common Lisp", rather than the specific implementation.

Allegro Common Lisp?!  I thought we were discussing 
``Arbortext Command Language''.

Damn.  Ok, just forget everything I ever posted about lisp.


--------

Access Control List    
Anterior Cruciate Ligament (connective tissue of the knee; common injury)    
Accelerator Control Listing    
Access Compatibility Layer    
Active Control List    
Administrative Control Limit    
Advanced CMOS Logic    
Advanced Computing Laboratory    
Aeronautical Computer Laboratory    
Agent Communication Language    
Agent Control Language    
Air Cadet League (of Canada)    
Aircraft Logbook    
Akumiitti Connectivity Library (SMSC connectivity software)    
Allowable Cabin Load    
Allowable Cargo Load    
Allowable Combat Load    
Allowance Components List    
Alternate Concentration Limit    
American Classical League  
American Consultants League    
Analog Configuration Line    
Analytical Chemistry Laboratory    
Anti Communist League    
Application Compatibility List    
Application Control Language    
Applied Color Label    
Arbortext Command Language
ASCII Control Language    
Associated California Loggers    
Association for Computational Linguistics    
Association of Christian Librarians    
Asynchronous Connection-less Link (Bluetooth)    
Atlantic Container Line    
Audio Cleaning Lab (audio noise reduction software)    
Audit Command Language    
Authorized Consumption List    
Auto Correct List (filetype extension)    
Automated Comprehensive Layout    
Automatic Carrier Landing  
Automatic Clutter Mapping    
Available Cabin Load    
Average Contaminant Level  
From: Joe Marshall
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <of1ga41y.fsf@ccs.neu.edu>
Jeff Massung <···@NOSPAM.mfire.com> writes:

> I assume ACL is Allegro Common Lisp.  I just downloaded it, installed it,
> and am trying it out.  It seems quite nice, but I hesitate to begin using it
> simply because of the price tag *shiver*. 

There are other lisps with more affordable price tags.

> > I'm just an apps guy, but thems that know better groove on it big
> > time. You also motivated me to read a little more (I buy these things,
> > I just don't read them.) So you plan to write a Lisp which compiles
> > into, say, GameBoy machine language? And since you control the
> > compiler, you can leave out the big CL runtime. Cool. Will your Lisp
> > have GC? 
> 
> GC? Probably not. Eventually, maybe, but not in the beginning. For now,
> I'm just trying to make a macro assembler with Lisp-ish syntax and
> compile-time functions. For example: 
> 
> ;; assembles a list of instructions
> (defun assemble (instructions)
>     (if (null instructions) 
>         nil
>         (progn
>             (compile (car instructions))
>             (assemble (cdr instructions)))))
> 
> ;; assembles a function - inline
> (defun peek (dest-reg src-reg)
>     (assemble (
>             (ldrh dest-reg src-reg))))
> 
> (peek r0 r1)
> 
> I can't find anything on the web where someone has attempted this. So I
> think I'm testing new waters. Of course, the example above is
> ridiculous, being nothing more than an exchange of symbols, but I think
> you get the idea. 

Take a look at Corman Common Lisp
  http://www.cormanlisp.com/

It is Windows-only, but it supports embedded assembly code.
From: Kenny Tilton
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <3EDB842D.6050409@nyc.rr.com>
Jeff Massung wrote:
> Kenny Tilton <·······@nyc.rr.com> wrote in
> ·····················@nyc.rr.com: 
> 
> 
>>You prompted me to pull out my HOPL II book. Cool! Another one-person 
>>language. I never knew that. I was puzzled because the Forth paper
>>which 
>>  listed Moore as an author does not mention Lisp, but this (a
>>  rejected 
>>paper) does:
>>
>>    http://www.colorforth.com/HOPL.html
> 
> 
> Hardly a "rejected paper" :)  

?? From that page:  "This paper was written for the HOPL II (History of 
programming languages) conference. It was summarily rejected, apparently 
because of its style. Much of the content was included in the accepted 
paper [Rather 1993]."

> 
> 
>>That happens a lot. :) But I am puzzled, earlier you said you had yet
>>to grok the power of Lisp.
> 
> 
> Sorry, all, I have yet to understand what "grok" means :)

You must be younger than I thought. :)

From: http://whatis.techtarget.com/definition/0,,sid9_gci212216,00.html

"grok: To grok (pronounced GRAHK) something is to understand something 
so well that it is fully absorbed into oneself. In Robert Heinlein's 
science-fiction novel of 1961, Stranger in a Strange Land, the word is 
Martian and literally means "to drink" but metaphorically means "to take 
it all in," to understand fully, or to "be at one with." Today, grok 
sometimes is used to include acceptance as well as comprehension - to 
"dig" or appreciate as well as to know."

> I assume ACL is Allegro Common Lisp. I just downloaded it, installed it,
> and am trying it out. It seems quite nice, but I hesitate to begin using it
> simply because of the price tag *shiver*. 

:) But c'mon, how much did you spend on your hardware? How often do you 
spend that much to keep up? What are credit cards for!?

> I can't find anything on the web where someone has attempted this. So I
> think I'm testing new waters. Of course, the example above is
> ridiculous, being nothing more than an exchange of symbols, but I think
> you get the idea. 
> 
> Has this been done before? URL? What does everyone think?

Hey, I'm just an apps guy, but the $99 MCL has 68k LAP and the $500 beta 
has PowerPC LAP. What's LAP?

    http://green.iis.nsk.su/~vp/doc/lisp1.5/node39.html


> Glad to know I'm contributing!

Make sure you add your name to:

    http://www.cliki.net/YoungLispers

Then go see if you can cheer up Erann.

:)


-- 

  kenny tilton
  clinisys, inc
  http://www.tilton-technology.com/
  ---------------------------------------------------------------
"Everything is a cell." -- Alan Kay
From: Jeff Massung
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <vdn2kecklmn120@corp.supernews.com>
Kenny Tilton <·······@nyc.rr.com> wrote in news:3EDB842D.6050409
@nyc.rr.com:

> 
> 
> Jeff Massung wrote:
>> Sorry, all, I have yet to understand what "grok" means :)
> 
> You must be younger than I thought. :)

26. Probably very young compared to everyone here.

> [def. of "grok"]

Thanks. I was thinking something similar based on context.

>> I assume ACL is Allegro Common Lisp. I just downloaded it, installed it,
>> and am trying it out. It seems quite nice, but I hesitate to begin using it
>> simply because of the price tag *shiver*. 
> 
>:) But c'mon, how much did you spend on your hardware? How often do you 
> spend that much to keep up? What are credit cards for!?

Too much :) - and still saving up to get my Mac again (god I miss OS X).  
Although, I have a couple of old PowerMacs lying around. I was thinking 
of hooking up a monitor and installing Yellow Dog Linux on them (then 
downloading CMUCL). 

And since we're throwing around definitions:

Credit Card: a tool used to help one bend over and get ·@&@ up the @$$. 
They are the bane of my existence (I'm a terrible financial person - my 
wife holds the cards and check book).

> 
> Make sure you add your name to:
> 
>     http://www.cliki.net/YoungLispers
> 
> Then go see if you can cheer up Erann.
> 


Sure thing :)


-- 
Best regards,
 Jeff                          ··········@mfire.com
                               http://www.simforth.com
From: Daniel Barlow
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <874r38i9sd.fsf@noetbook.telent.net>
Jeff Massung <···@NOSPAM.mfire.com> writes:
> Too much :) - and still saving up to get my Mac again (god I miss OS X).  
> Although, I have a couple of old PowerMacs lying around. I was thinking 
> of hooking up a monitor and installing Yellow Dog Linux on them (then 
> downloading CMUCL). 

Won't help - current CMUCL versions have no PPC target.  Either SBCL
or OpenMCL will do the trick, though.  


-dan

-- 

   http://www.cliki.net/ - Link farm for free CL-on-Unix resources 
From: Ivan Boldyrev
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <9fp0rxr78.ln2@elaleph.borges.cgitftp.uiggm.nsc.ru>
Jeff Massung <···@NOSPAM.mfire.com> writes:

>> Jeff Massung wrote:
>>> Sorry, all, I have yet to understand what "grok" means :)
>> 
>> You must be younger than I thought. :)
>
> 26. Probably very young compared to everyone here.

I will be 23 on 5th of June :)

-- 
Ivan Boldyrev

Violets are red, Roses are blue. //
I'm schizophrenic, And so am I.
From: Jens Axel Søgaard
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <3edb6e10$0$97199$edfadb0f@dread12.news.tele.dk>
Jeff Massung wrote:

>  For now,
> I'm just trying to make a macro assembler with Lisp-ish syntax and
> compile-time functions. For example: 
> 
> ;; assembles a list of instructions
> (defun assemble (instructions)
>     (if (null instructions) 
>         nil
>         (progn
>             (compile (car instructions))
>             (assemble (cdr instructions)))))
> 
> ;; assembles a function - inline
> (defun peek (dest-reg src-reg)
>     (assemble (
>             (ldrh dest-reg src-reg))))
> 
> (peek r0 r1)
> 
> I can't find anything on the web where someone has attempted this. So I
> think I'm testing new waters. Of course, the example above is
> ridiculous, being nothing more than an exchange of symbols, but I think
> you get the idea. 

I'm not quite sure what you are looking for, but take a look at

     <http://www.cs.indiana.edu/eip/compile/code.html>

Is it the right ball park?

-- 
Jens Axel Søgaard
From: Jeff Massung
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <vdmsmph2defr51@corp.supernews.com>
Jens Axel Søgaard <······@jasoegaard.dk> wrote in news:3edb6e10$0$97199
·········@dread12.news.tele.dk:

> I'm not quite sure what you are looking for, but take a look at
> 
>      <http://www.cs.indiana.edu/eip/compile/code.html>
> 
> Is it the right ball park?
> 

Close, but not quite.

I'm imagining an interactive assembler (so to speak). All lisp code would 
execute at compile time to generate an executable file. A kind of "poor 
man's compiler".

The advantages (that I see) to doing this, would be the ability to do 
things like importing files that need to be converted to another format 
(ie importing a bitmap image, but changing it to pcx format at compile 
time, then importing it into the assembled file as binary).

In the back of my mind, something tells me that I'm wasting effort (CL 
can already do this is what I'm thinking...), but reinventing the wheel 
when you don't really know what a wheel looks like can sometimes lead to 
fresh, new ideas that those with the shape of the wheel stuck in their 
heads wouldn't come up with (or perhaps a fall, landing flat on my face) 
:)
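[For what it's worth, the compile-time-assembler idea above can be sketched in portable Common Lisp. Everything here -- *OUTPUT*, EMIT, and the list "encoding" of instructions -- is hypothetical scaffolding invented for illustration, not anyone's actual code:]

```lisp
;; Toy compile-time "assembler": macros expand into EMIT calls that
;; push encoded instructions onto an output vector.  The "encoding"
;; here is just a list; a real assembler would emit machine words.
(defvar *output* (make-array 0 :adjustable t :fill-pointer t)
  "Vector of assembled instructions.")

(defun emit (instruction)
  "Append one encoded instruction to *OUTPUT*."
  (vector-push-extend instruction *output*))

(defmacro assemble (&body instructions)
  "Expand literal (MNEMONIC OPERAND...) forms into EMIT calls."
  `(progn ,@(mapcar (lambda (ins) `(emit ',ins)) instructions)))

;; An "inlined function" is then just a macro over ASSEMBLE: the
;; register names are substituted at macro-expansion time.
(defmacro peek (dest-reg src-reg)
  `(assemble (ldrh ,dest-reg ,src-reg)))

(peek r0 r1)   ; emits (LDRH R0 R1)
```

[Because PEEK is a macro, the registers are substituted purely at expansion time -- exactly the "exchange of symbols" described above.]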

-- 
Best regards,
 Jeff                          ··········@mfire.com
                               http://www.simforth.com
From: Raymond Wiker
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <868ysktrmb.fsf@raw.grenland.fast.no>
Jeff Massung <···@NOSPAM.mfire.com> writes:

> GC? Probably not. Eventually, maybe, but not in the beginning. For now,
> I'm just trying to make a macro assembler with Lisp-ish syntax and
> compile-time functions. For example: 

        Maybe not *quite* what you want, but Henry Baker describes an
assembler implemented in Lisp at

http://home.pipeline.com/~hbaker1/sigplannotices/COMFY.TXT

        There's also a TeX version of this file at

http://home.pipeline.com/~hbaker1/sigplannotices/column04.tex.gz 

--- although this file appears to require a set of TeX macros.

        The rest of Henry Baker's papers are also quite interesting;
see 

http://home.pipeline.com/~hbaker1/
        
-- 
Raymond Wiker                        Mail:  ·············@fast.no
Senior Software Engineer             Web:   http://www.fast.no/
Fast Search & Transfer ASA           Phone: +47 23 01 11 60
P.O. Box 1677 Vika                   Fax:   +47 35 54 87 99
NO-0120 Oslo, NORWAY                 Mob:   +47 48 01 11 60

Try FAST Search: http://alltheweb.com/
From: Jeff Massung
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <vdmtedlgnf1c6a@corp.supernews.com>
Raymond Wiker <·············@fast.no> wrote in 
···················@raw.grenland.fast.no:

> http://home.pipeline.com/~hbaker1/

Great link! Thanks!

-- 
Best regards,
 Jeff                          ··········@mfire.com
                               http://www.simforth.com
From: Rainer Joswig
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <joswig-FF9E67.19282102062003@news.fu-berlin.de>
In article <··············@corp.supernews.com>,
 Jeff Massung <···@NOSPAM.mfire.com> wrote:

> GC? Probably not. Eventually, maybe, but not in the beginning. For now,
> I'm just trying to make a macro assembler with Lisp-ish syntax and
> compile-time functions. For example: 
> 
> ;; assembles a list of instructions
> (defun assemble (instructions)
>     (if (null instructions) 
>         nil
>         (progn
>             (compile (car instructions))
>             (assemble (cdr instructions)))))
> 
> ;; assembles a function - inline
> (defun peek (dest-reg src-reg)
>     (assemble (
>             (ldrh dest-reg src-reg))))
> 
> (peek r0 r1)
> 
> I can't find anything on the web where someone has attempted this. So I
> think I'm testing new waters. Of course, the example above is
> ridiculous, being nothing more than an exchange of symbols, but I think
> you get the idea. 
> 
> Has this been done before? URL? What does everyone think?

Several Lisp systems have inline assemblers. Like MCL and OpenMCL.
From: Paolo Amoroso
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <=EzWPubKft5rke8afz7Zrh1LblP+@4ax.com>
On Wed, 28 May 2003 11:23:29 -0700, ···@jpl.nasa.gov (Erann Gat) wrote:

> mentality, especially when it comes to software.  When I say I want to use
> Common Lisp the first question I get asked is, "Who else uses it?" 
> Without a good answer to that question I can't use it.  So I would like to

The beginning of an answer to this question is available at:

  http://alu.cliki.net/Industry%20Application
  http://alu.cliki.net/Research%20Organizations
  http://alu.cliki.net/Success%20Stories
  http://alu.cliki.net/Evaluate%20Lisp
  http://alu.cliki.net/Consultant

More additions/corrections welcome and encouraged. Do you think
this--including all external links--is the order of magnitude of what you
are looking for? If not, how far are we?


Paolo
-- 
Paolo Amoroso <·······@mclink.it>
From: Pascal Costanza
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <bb2s93$s3m$1@f1node01.rhrz.uni-bonn.de>
Nils Goesche wrote:

>>>Oh, and none of my macros is going to work anymore because people
>>>will change the macro system to use ``hygienic'' macros
>>>exclusively, because..., well, ``hygienic'' is good and clean,
>>>isn't it?  No, that's not paradise.  It is hell.
>>
>>It might be hell, but it's the reality. The things you describe
>>happen all the time. They just don't happen in the Common Lisp
>>world. And you're right, and this is definitely good from a certain
>>perspective.
> 
> No this does /not/ happen all the time.  Maybe you think so because
> you apparently did a lot of stuff with Java before.  But Java is a
> language /without/ an ANSI standard.  Java /indeed/ changes all the
> time and its users curse it for that.

What I wanted to say is that new languages are invented all the time, on 
a broader scale (Python, Ruby, and so on). We have to live with this to 
a certain degree.

> Stuff like that does /not/ happen with languages that have a real
> standard behind them, OTOH.  Like C, for instance.  Sure, there is
> C99, but it didn't break anything much and there /wasn't/ a C98, C97,
> C96 or anything like that.  The C++ standard isn't going to change any
> time soon, either.  There are thousands of standards on a file server
> in my LAN here, and they /don't/ change all the time.  We use them to
> build things and we know our products will continue to work for years.
> Because standards are stable.  At most, there will be a careful,
> backward compatible addendum every few years.

OK, I get it. ;)

BTW, C++ is currently being reworked. (But this doesn't change your 
argument.)

>>But as with all things in life, this has both advantages and
>>disadvantages. The stability of Common Lisp is also a disadvantage
>>because even the parts of the language that are acknowledged as
>>flaws cannot be changed.
> 
> So far I haven't seen anything like a ``flaw'' in it that seriously
> obstructs my using it, actually.

Sure. But there are flaws in the language that keep people from taking a 
closer look.

(Many people here in c.l.l think that this is those people's fault. I 
disagree. I think there are unnecessary barriers in the Common Lisp 
standard that could be changed and wouldn't overly affect your use of 
CL, but could actually raise the acceptance of CL.)

>>>The only people who like this are theorists who do not write
>>>programs for a living but are sitting in a university being paid
>>>for conducting such experiments.
> 
>>1) I am one of those guys enjoying language experiments. And I
>>highly prefer Common Lisp over Scheme. (Please keep this always in
>>mind: I am a big fan of Common Lisp. It is the best language
>>available at the moment IMHO. Period.)
> 
> Yes, yes, I know.  But that doesn't imply you have to use the CL
> /standard/ as a playground for language experiments, and that's exactly
> what we're talking about here. 

OK.

>>2) The dichotomy between theorists and practitioners is a very bad
>>notion IMHO. Several advances in computer science wouldn't have been
>>made if theorists hadn't decided to play around with wild ideas. Of
>>course, the same holds for practitioners.
>>
>>Good ideas come from bad ideas
>>(http://www.uiweb.com/issues/issue08.htm). Only if you are willing
>>to experiment you will eventually find something useful.
> 
> Sure, but where did you get the idea that the ANSI CL standard is the
> right place for this kind of experimentation?  Once anybody comes up
> with a really great idea, which, I repeat, hasn't happened so far, he
> will first prove that it works /without/ any changes in the standard;
> instead perhaps by hacking SBCL or something, without asking anybody
> for approval.  Commercial vendors will adopt it if it's really so
> great, probably switchable if it is really incompatible and breaks
> code.  Then it's time to think about a new standard.  Many years will
> pass.  Finally, certainly more than ten years from now, a new standard
> will be adopted.

OK, agreed.

> /This/ is how things work, and not only in the Lisp world (think about
> IPv6, for just one example).
> 
> For some strange reason Erann seems to believe that if only some
> things in CL's core are changed, his managers and masses of C++
> programmers will be so impressed that they will flock to the Lisp
> vendors and all become new Lispers.  But that is a rather silly idea:
> Both his managers (I suspect) and all those masses of C++ programmers
> (I know) are totally ignorant about CL internals.  They do not even
> know if CL is a Lisp-1 or Lisp-2.  So, if we make any design changes
> there, these changes will go absolutely unnoticed in those circles.
> Thus, it wouldn't help him at all.

Now, you are a little bit unfair. Erann didn't suggest using the ANSI 
standard as a playground for language experiments. His proposal for the 
inclusion of an alternative way for dealing with special variables was 
meant to be an improvement based on experience. I really believe he 
acted in good faith in this regard.

A proposal to open up ANSI CL for language experiments would be silly - 
you're right about that. But a proposal for improving the language 
isn't. The fact that a discussion of a concrete proposal results in an 
agreement that it would have negative effects doesn't matter in this regard.

> And because changes need such a /long/ time until they'll make it into
> a new standard, these changes won't help /you/ as a language
> experimenter, either, because: ``publish or perish'', remember? :-)

:-)


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Nils Goesche
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <lyk7cb7ydz.fsf@cartan.de>
Pascal Costanza <········@web.de> writes:

> Nils Goesche wrote:
> 
> > Thus, it wouldn't help him at all.
> 
> Now, you are a little bit unfair. Erann didn't suggest using the
> ANSI standard as a playground for language experiments.

I didn't mean /Erann/ did :-)

> His proposal for the inclusion of an alternative way for dealing
> with special variables was meant to be an improvement based on
> experience. I really believe he acted in good faith in this regard.

Of course.  I just think he is wrong :-)

Regards,
-- 
Nils Gösche
"Don't ask for whom the <CTRL-G> tolls."

PGP key ID 0x0655CFA0
From: Kent M Pitman
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <sfw8yssjj0z.fsf@shell01.TheWorld.com>
[ replying to comp.lang.lisp only
  http://www.nhplace.com/kent/PFAQ/cross-posting.html ]

···········@whoever.com (Michael Park) writes:

> I'm sorry, but I think your logic is faulty.

If your message and its selection of recipients did not look so much
like flamebait, I might have responded in more detail.  But as it is,
I think I'll pass.

This is especially egregious because the subject line of this thread
is so bogus to start with.  The "ANN" label appears to give it a
special "look here for an important announcement" quality.  Abusing
that to escalate a private spat to an unprepared community that has
not been following this discussion seems quite inappropriate.

> P.S. Who voted for the guys who voted for Lisp2?

The ANSI voting processes are open to anyone in the world to read
about, and membership is in fact open to anyone in the world.  (The
only thing non-US people cannot vote on are matters of US voting
position for the US International Representative.)  This is all no
doubt adequately documented at www.ansi.org or www.itic.org.  It was
possible for you to have participated, but I guess you "didnt_bother".
ANSI members are normally companies, but (as long as not employed by
other companies already represented) individuals can participate and
several did.

It was possible for you to have responded to the relatively well-advertised
public review. In addition to the normal ANSI notification lists, notice of
the public review also went to comp.lang.lisp, comp.lang.scheme, comp.ai,
comp.object, comp.lang.clos, comp.std.misc

 http://www.google.com/groups?selm=19qij5INN8q4%40early-bird.think.com

Also, after ANSI CL became available, no one was forced to use it.
People do not use it under duress--almost the opposite.  The committee
made decisions it thought the community could live with, and so far
the community has basically agreed.  When the language came up for 
reconfirmation in 1999, it was reconfirmed "as is".

It's not like the Scheme community or any other language community enjoys
any greater degree of consensus.  Ultimately, every language makes a bunch
of decisions and then sits back and sees if anyone wants it.

If there's been commercial opposition to CL, you can be quite sure it's
not over anything as silly as the namespace issue.
From: Jeff Caldwell
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <bNVAa.522$cp6.337591@news1.news.adelphia.net>
My best belly laugh of the day! How true.

Kent M Pitman wrote:
...

> Also, after ANSI CL became available, no one was forced to use it.
> People do not use it under duress--almost the opposite.
...
From: Daniel Barlow
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <87el2ktfj5.fsf@noetbook.telent.net>
[ comp.lang.scheme removed from newsgroups line, because I don't read
that group so this message is unlikely to be relevant to it ]

···········@whoever.com (Michael Park) writes:

> I'm sorry, but I think your logic is faulty. This is like saying "We
> voted for Bush, now you can not criticize him or his party. We need to
> stay united. You are welcome to create your own country, however". As
> a matter of fact, it's worse. Since Lisp <> ANSI CL, what you wrote is
> logically akin to demanding that GOP should not be criticized on the
> whole American continent.

Nonsense.  You are free to criticise who and what you want to.  Kent
is simply explaining why the comp.lang.lisp community is on the whole
not interested in your criticism.

It's generally considered rude on Usenet to post questions to a
newsgroup that have been repeatedly answered in the past.  Once upon a
time, many newsgroups had FAQ lists so that people could see what had
gone before.  These days that's less often the case than it used to
be, but on the other hand we have Google so that you can see _exactly_
what's gone before.  And Lisp-1 vs Lisp-2 is definitely one of those
things.

Perhaps if you had _constructive_ criticism you'd receive a less
hostile reaction.  What outcome would you like to see happen, and what
do you intend to offer to facilitate it?


-dan

-- 

   http://www.cliki.net/ - Link farm for free CL-on-Unix resources 
From: William D Clinger
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <b84e9a9f.0305271709.7752dab8@posting.google.com>
Kent M Pitman wrote, among other things:
> We could spend time talking about...why
> we allow left to right order of evaluation instead of leaving it 
> unspecified;

More accurately, about why CL requires the arguments to be evaluated
from left to right but does not specify whether the operator position
is evaluated before or after the arguments.  (Come to think of it, it
might be legal to evaluate the operator position _during_ evaluation
of the arguments, but I don't remember.  Kent probably does.)
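[The argument half of that rule is easy to observe at a REPL; a minimal sketch in portable Common Lisp, with made-up names *ORDER* and NOTE just recording side effects:]

```lisp
;; CL requires the arguments of a function call to be evaluated
;; left to right; we record the order as a side effect.
(defvar *order* '())

(defun note (x)
  "Record X as evaluated, then return it."
  (push x *order*)
  x)

(+ (note 1) (note 2) (note 3))   ; arguments evaluated 1, 2, 3 in order

(reverse *order*)                ; => (1 2 3)
```

[Whether the operator position of a compound form is evaluated before, after, or amid the arguments is the part the standard leaves open.]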

Lest someone think I am criticizing CL here, let me explain that I am
reminding Kent of a semi-private technical joke that became part of
ANSI CL.  You had to have been there.

> I mean, how cool
> would it have been to get to be Newton (at least, the grade school
> version of him that you learn before you realize he had to know some
> math to do what he did) and to sit there and see an apple fall and say
> "Wow. There must be gravity."

Actually, before you realize that Newton had to invent calculus to do
what he did.

Will
From: Kalle Olavi Niemitalo
Subject: (COMPLEXP real) (was: ANN: Lisp-2 vs Lisp-1 argument has been settled)
Date: 
Message-ID: <87of1naqa8.fsf_-_@Astalo.kon.iki.fi>
Kent M Pitman <······@world.std.com> writes:

> We could spend time talking [...] about why (COMPLEXP real)
> returns true instead of false;

Does it really?  On Debian's CMUCL 18e-1:

(loop for number in '(1   #C(1   0)   #C(0   1)   #C(1   1)
                      1/2 #C(1/2 0)   #C(0   1/2) #C(1/2 1/2)
                      0.5 #C(0.5 0.0) #C(0.0 0.5) #C(0.5 0.5))
      collect (list number (complexp number) (realp number)))

=> ((1   NIL T) (1           NIL T) (#C(0   1)   T NIL) (#C(1   1)   T NIL)
    (1/2 NIL T) (1/2         NIL T) (#C(0   1/2) T NIL) (#C(1/2 1/2) T NIL)
    (0.5 NIL T) (#C(0.5 0.0) T NIL) (#C(0.0 0.5) T NIL) (#C(0.5 0.5) T NIL))
From: Kent M Pitman
Subject: Re: (COMPLEXP real) (was: ANN: Lisp-2 vs Lisp-1 argument has been settled)
Date: 
Message-ID: <sfwisruu946.fsf@shell01.TheWorld.com>
Kalle Olavi Niemitalo <···@iki.fi> writes:

> Kent M Pitman <······@world.std.com> writes:
> 
> > We could spend time talking [...] about why (COMPLEXP real)
> > returns true instead of false;
> 
> Does it really?  On Debian's CMUCL 18e-1:
> 
> (loop for number in '(1   #C(1   0)   #C(0   1)   #C(1   1)
>                       1/2 #C(1/2 0)   #C(0   1/2) #C(1/2 1/2)
>                       0.5 #C(0.5 0.0) #C(0.0 0.5) #C(0.5 0.5))
>       collect (list number (complexp number) (realp number)))
> 
> => ((1   NIL T) (1           NIL T) (#C(0   1)   T NIL) (#C(1   1)   T NIL)
>     (1/2 NIL T) (1/2         NIL T) (#C(0   1/2) T NIL) (#C(1/2 1/2) T NIL)
>     (0.5 NIL T) (#C(0.5 0.0) T NIL) (#C(0.0 0.5) T NIL) (#C(0.5 0.5) T NIL))

Sorry, I said it backwards.  Didn't mean to cause you this trouble.

The point is that in mathematics, a real is a complex.  In most
computer languages, COMPLEX is a representation type, not a set,
and in Lisp it's even stranger because we require rational complexes 
with an exact zero imaginary part to be reduced to reals.
This means that (complexp (complex 3 0)) and (complexp #C(3 0)) 
both yield false.

Seems there's always someone in the crowd that wants to "fix" that,
so I figured I'd just offer it as one of many things that might get
diddled with.

Fixing it would cause havoc to heavily type-declared numerical code.
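[The canonicalization rule described above is easy to check at a REPL; a short sketch of the behavior the standard requires of the COMPLEX constructor:]

```lisp
;; A complex with rational parts and an exact zero imaginary part is
;; reduced to its real part, so it is no longer of type COMPLEX.
(complex 3 0)                ; => 3, a plain integer
(complexp (complex 3 0))     ; => NIL
(realp (complex 3 0))        ; => T

;; Floats are inexact, so a float zero imaginary part is NOT reduced.
(complex 3.0 0.0)            ; => #C(3.0 0.0)
(complexp (complex 3.0 0.0)) ; => T
```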
From: Pascal Bourguignon
Subject: Re: (COMPLEXP real) (was: ANN: Lisp-2 vs Lisp-1 argument has been settled)
Date: 
Message-ID: <874r3divo0.fsf@thalassa.informatimago.com>
Kent M Pitman <······@world.std.com> writes:

> Kalle Olavi Niemitalo <···@iki.fi> writes:
> 
> > Kent M Pitman <······@world.std.com> writes:
> > 
> > > We could spend time talking [...] about why (COMPLEXP real)
> > > returns true instead of false;
> > 
> > Does it really?  On Debian's CMUCL 18e-1:
> > 
> > (loop for number in '(1   #C(1   0)   #C(0   1)   #C(1   1)
> >                       1/2 #C(1/2 0)   #C(0   1/2) #C(1/2 1/2)
> >                       0.5 #C(0.5 0.0) #C(0.0 0.5) #C(0.5 0.5))
> >       collect (list number (complexp number) (realp number)))
> > 
> > => ((1   NIL T) (1           NIL T) (#C(0   1)   T NIL) (#C(1   1)   T NIL)
> >     (1/2 NIL T) (1/2         NIL T) (#C(0   1/2) T NIL) (#C(1/2 1/2) T NIL)
> >     (0.5 NIL T) (#C(0.5 0.0) T NIL) (#C(0.0 0.5) T NIL) (#C(0.5 0.5) T NIL))
> 
> Sorry, I said it backwards.  Didn't mean to cause you this trouble.
> 
> The point is that in mathematics, a real is a complex. 

Well, perhaps in your mathematics, but in the mathematics I learnt
back in France, complex is very different from real.  Not the same
thing at all.

The set of Reals is the set of equivalence classes of convergent
sequences of rationals.

The set of Complexes is the Cartesian product of the set of Reals
with the set of Reals.

So complexes are couples, while reals are equivalence classes.  I
don't see that they're the same thing.


Now, it's true that there exist bijections between the set of reals
and some subsets of the set of complexes, and what's more, there even
exist isomorphisms translating the "corps"* (R,+,×) to a stable
sub-"corps" of C.  That's very convenient, but it should not make
you confound the set R with the subset of C:
                                      {c | c in C and imagpart(c)=0}.


I find it unfortunate that (realp #C(1 0)) may return T in
Common Lisp, because it can do so consistently only for a subset of
the complexes, namely those isomorphic to the subsets of the reals
isomorphic to the rationals or to the integers.  Since the floating
point numbers, which are a subset of the representable decimal
numbers, are used as an approximation of the reals, no such
inference is made as soon as the imaginary part of a complex is a
floating-point zero instead of the natural 0, the integer 0, or the
rational 0/1.


[10]> (realp #c(0.5 0))
T
[11]> (realp #c(0.5 0.0))
NIL

More  exactly,  I  don't  mind  realp  returning  whatever  it  wants,
following its definition.  What's  unfortunate is Lisp doing automatic
but arbitrary conversions:

[12]> #c(0.12 0)
0.12

[13]> #c(0.12 0.0)
#C(0.12 0.0)


(But I can agree that it's convenient to get [12] though.  Just
realize that it's doing a complex mathematical inference, involving
quite heavy mathematical objects and relations, encapsulated in
theorems that took a couple of weeks or months to study back in
school.)



* I don't remember what it's called in English:
    (R,+,×) is a commutative "corps" 
        <=> (R,+) is a commutative group with neutral 0, and
            (R-{0},×) is a commutative group with neutral 1, and
            for all x,y,z in R, (x+y)×z = (x×z)+(y×z)

-- 
__Pascal_Bourguignon__                   http://www.informatimago.com/
----------------------------------------------------------------------
Do not adjust your mind, there is a fault in reality.
From: Karl A. Krueger
Subject: Re: (COMPLEXP real)
Date: 
Message-ID: <bb5alk$j31$1@baldur.whoi.edu>
Pascal Bourguignon <····@thalassa.informatimago.com> wrote:
> 
> Now, it's true that there exist bijections between the set of real and
> some  subsets of the  set of  complexes, and  what's more,  there even
> exist  isomorphisms  translating  the  "corps"*  (R,+,×)  to  a  stable
> sub-"corps" of  C.  That's very  convenient, but that should  not make
> you  confound  the set  R  and  the  subset of  C:
>                                      {c | c in C and imagpart(c)=0}.

I think many people would say that for each real x there exists a
complex c such that x = c; or in other words that x+0i = x.

The question then is whether "=" can be taken to mean "is the same as"
as well as "is mathematically-equivalent under a certain construction".
This is really a metaphysics question, not a mathematics one.

If x+0i = x means that the complex (x, 0) is "the same as" the real x,
then we must take it as true that the reals are a proper subset of the
complexes, and that every real "is" a complex, not merely "maps to" a
complex.

In programming languages, this may be considered a problem of types vs.
values.  In some flavors of strictly and statically typed language it
may make sense for this to be an error:

	real x := 5.0
	real y := imagpart(x)

... since the imagpart function might accept only variables of the type
"complex" not "real" as arguments.  In such a language you might have to
cast or convert a real to get a non-error, even if the sensible value is
the same.

Do mathematical values carry around type in and of themselves, or is
type something which we impose upon them in order to write programming
languages which use them?  If mathematical values do carry around type,
then the fact that x = x+0i does not mean that x+0i is x.  If they do
not, and if type is merely a programmers' convention, then we may
validly declare *either* that reals are a subtype of complexes, or that
they are not.

It makes sense to me to say that reals are a subtype of complexes from a
user standpoint, since I want to be able to use a real anywhere I can
use a complex:  imagpart(x) above should return 0, not error.  However I
can see that from the standpoint of some implementations, "subtype"
might imply inherited implementation, which might give reals more
baggage than they need to carry around.

However I didn't think that was the case in Lisp -- correct me if I am
wrong, I am pretty new to Lisp. :)

-- 
Karl A. Krueger <········@example.edu>
Woods Hole Oceanographic Institution
Email address is spamtrapped.  s/example/whoi/
"Outlook not so good." -- Magic 8-Ball Software Reviews
From: Barry Margolin
Subject: Re: (COMPLEXP real)
Date: 
Message-ID: <AurBa.14$y02.455@paloalto-snr1.gtei.net>
In article <············@baldur.whoi.edu>,
Karl A. Krueger <········@example.edu> wrote:
>Do mathematical values carry around type in and of themselves, or is

In mathematics, types are sets, not properties of values.

>type something which we impose upon them in order to write programming
>languages which use them?  

In programming languages, types are often used to indicate representation
methods.  In Common Lisp, we actually use types in multiple ways: some
types indicate representation (e.g. FIXNUM and all the xxx-FLOAT types),
while others are abstract and approximate the mathematical equivalents
(e.g. INTEGER and REAL).

>			    If mathematical values do carry around type,
>then the fact that x = x+0i does not mean that x+0i is x.  If they do
>not, and if type is merely a programmers' convention, then we may
>validly declare *either* that reals are a subtype of complexes, or that
>they are not.
>
>It makes sense to me to say that reals are a subtype of complexes from a
>user standpoint, since I want to be able to use a real anywhere I can
>use a complex:  imagpart(x) above should return 0, not error.  However I
>can see that from the standpoint of some implementations, "subtype"
>might imply inherited implementation, which might give reals more
>baggage than they need to carry around.

In mathematics, real numbers are a subset of complex numbers -- the set of
real numbers is all the complex numbers whose imaginary part is 0.

>However I didn't think that was the case in Lisp -- correct me if I am
>wrong, I am pretty new to Lisp. :)

The standard specifically says that REAL and COMPLEX are pairwise disjoint,
so you're correct.  In CL we've chosen to use COMPLEX as the name for a
representation type (the form that requires two slots, for both real and
imaginary parts) rather than the abstract number type.  If you want a type
that corresponds to the mathematical complex numbers, use NUMBER -- it
encompasses both of them (REAL and COMPLEX don't necessarily form an
exhaustive partition of it -- we wanted to leave room for extensions that
provide other types of numbers).
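
[A quick illustration of that partition, as any conforming
implementation should behave:]

```lisp
;; REAL and COMPLEX are pairwise disjoint; NUMBER is the abstract
;; type that encompasses both.
(typep 3       'real)    ;; => T
(typep 3       'complex) ;; => NIL
(typep #C(0 1) 'complex) ;; => T
(typep #C(0 1) 'real)    ;; => NIL
(typep 3       'number)  ;; => T
(typep #C(0 1) 'number)  ;; => T

;; REAL is not a subtype of COMPLEX, but both are subtypes of NUMBER:
(subtypep 'real    'complex) ;; => NIL, T
(subtypep 'real    'number)  ;; => T, T
(subtypep 'complex 'number)  ;; => T, T
```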

-- 
Barry Margolin, ··············@level3.com
Level(3), Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Karl A. Krueger
Subject: Re: (COMPLEXP real)
Date: 
Message-ID: <bb5kvj$mgn$1@baldur.whoi.edu>
Barry Margolin <··············@level3.com> wrote:
> In article <············@baldur.whoi.edu>,
> Karl A. Krueger <········@example.edu> wrote:
>>Do mathematical values carry around type in and of themselves, or is
> 
> In mathematics, types are sets, not properties of values.

That's fair, and it's a distinction from M. Bourguignon's evident
position.


>>It makes sense to me to say that reals are a subtype of complexes from a
>>user standpoint, since I want to be able to use a real anywhere I can
>>use a complex:  imagpart(x) above should return 0, not error.  However I
>>can see that from the standpoint of some implementations, "subtype"
>>might imply inherited implementation, which might give reals more
>>baggage than they need to carry around.
> 
> In mathematics, real numbers are a subset of complex numbers -- the set of
> real numbers is all the complex numbers whose imaginary part is 0.

It seemed to me that it was M. Bourguignon's contention that no given
real number is a complex number, even though it might be equal to a
complex number.  Perhaps I was overanalyzing.

-- 
Karl A. Krueger <········@example.edu>
Woods Hole Oceanographic Institution
Email address is spamtrapped.  s/example/whoi/
"Outlook not so good." -- Magic 8-Ball Software Reviews
From: Coby Beck
Subject: Re: (COMPLEXP real)
Date: 
Message-ID: <Y9ABa.46022$1s1.615441@newsfeeds.bigpond.com>
"Karl A. Krueger" <········@example.edu> wrote in message
·················@baldur.whoi.edu...
> It seemed to me that it was M. Bourguignon's contention that no given
> real number is a complex number,

That depends on what the meaning of "is" is.  (I always wanted to find an
excuse to use that sentence ;)

> even though it might be equal to a
> complex number.

and the meaning of equal...

-- 
Coby Beck
(remove #\Space "coby 101 @ bigpond . com")
From: Karl A. Krueger
Subject: Re: (COMPLEXP real)
Date: 
Message-ID: <bb7p2j$e5q$1@baldur.whoi.edu>
Coby Beck <·····@mercury.bc.ca> wrote:
> "Karl A. Krueger" <········@example.edu> wrote in message
> ·················@baldur.whoi.edu...
>> It seemed to me that it was M. Bourguignon's contention that no given
>> real number is a complex number,
> 
> That depends on what the meaning of "is" is.  (I always wanted to find an
> excuse to use that sentence ;)

A good deal of the branch of philosophy known as metaphysics is about
what the meaning of "is" is ... :)


>> even though it might be equal to a complex number.
> 
> and the meaning of equal...

=, eq, eql, equal, equalp -- I think that is enough for me to keep track
of, thanks!

-- 
Karl A. Krueger <········@example.edu>
Woods Hole Oceanographic Institution
Email address is spamtrapped.  s/example/whoi/
"Outlook not so good." -- Magic 8-Ball Software Reviews
From: Pascal Bourguignon
Subject: Re: (COMPLEXP real)
Date: 
Message-ID: <87isrqs1mk.fsf@thalassa.informatimago.com>
"Karl A. Krueger" <········@example.edu> writes:

> Barry Margolin <··············@level3.com> wrote:
> > In article <············@baldur.whoi.edu>,
> > Karl A. Krueger <········@example.edu> wrote:
> >>Do mathematical values carry around type in and of themselves, or is
> > 
> > In mathematics, types are sets, not properties of values.
> 
> That's fair, and it's a distinction from M. Bourguignon's evident
> position.

I think  it's both.  A value alone  has no property per  se.  But once
you consider  a value as an element  of a given structure  (a set with
some operations, that is, a type), then you have this relation between
the value and  the structure and it can be construed  as a property of
the value.  What's more, other  properties can then be attached to the
value, and  different properties  depending on the  type to  which you
attach the value!

For example, if you consider the values 2 and -2, they have the
property that 2 has a square root and -2 has no square root when you
consider them to be of the type (R,+,*).  But when you consider them
to be 2+0i and -2+0i, that is, to be of the type (C,+,*), then both
have square roots!

On the other hand, you could say that -2 has no square root in (R,+,*)
(1st property) and has 2 square roots in (C,+,*) (2nd property),
without attaching any type a priori.  I would say that you're
attaching the type implicitly by citing properties relative to these
types...

 
> >>It makes sense to me to say that reals are a subtype of complexes from a
> >>user standpoint, since I want to be able to use a real anywhere I can
> >>use a complex:  imagpart(x) above should return 0, not error.  However I
> >>can see that from the standpoint of some implementations, "subtype"
> >>might imply inherited implementation, which might give reals more
> >>baggage than they need to carry around.
> > 
> > In mathematics, real numbers are a subset of complex numbers -- the set of
> > real numbers is all the complex numbers whose imaginary part is 0.
> 
> It seemed to me that it was M. Bourguignon's contention that no given
> real number is a complex number, even though it might be equal to a
> complex number.  Perhaps I was overanalyzing.

Well, since I've been reminded that there exist several constructions
of the various classes of numbers, I'm rejoining the mainstream,
building equivalence classes of these constructions, so I accept that
there's no difference between these different classes of numbers
(they're effectively subsets one of the other, without the need of
isomorphisms).  What remains is the difference of representation.

-- 
__Pascal_Bourguignon__                   http://www.informatimago.com/
----------------------------------------------------------------------
Do not adjust your mind, there is a fault in reality.
From: Michael Sullivan
Subject: Re: (COMPLEXP real)
Date: 
Message-ID: <1fvzkyp.lxszyv1obee67N%michael@bcect.com>
Pascal Bourguignon <····@thalassa.informatimago.com> wrote:

> For  example, if  you  consider the  values  2 and  -2,  they have  as
> property that 2 has  a square root and -2 has no  square root when you
> consider them to  be of the type (R,+,*).  But  when you consider them
> to be 2+0i  and -2+0i, that is,  to be of the type  (C,+,*), then both
> have square roots!

No.

If you say unqualified that the real number -2 has no square roots, you
are wrong.  It does -- those roots are not real numbers, but they
certainly exist.  Whether -2 has square roots does not depend on whether
you consider it as real number, or as a complex number with a zero
i-coefficient (these are either defined or proven to be equal under all
derivations of real and complex with which I am familiar).  It depends
solely on whether you are considering only real numbers as solutions. So
you can say correctly that "-2 has no *real* square roots", or you can
say "in the domain of real numbers, -2 has no square roots."  You can
even say, "Assume real numbers [...] -2 has no square roots."

Now, I grant that when you say "no square roots", most people will
*assume* that you mean "no real square roots", but it's not formally
true.

This is the meaning of the mathematical abstraction.


Michael
From: Pascal Bourguignon
Subject: Re: (COMPLEXP real)
Date: 
Message-ID: <878ysphca3.fsf@thalassa.informatimago.com>
"Karl A. Krueger" <········@example.edu> writes:

> Do mathematical values carry around type in and of themselves, or is
> type something which we impose upon them in order to write programming
> languages which use them?  If mathematical values do carry around type,
> then the fact that x = x+0i does not mean that x+0i is x.  If they do
> not, and if type is merely a programmers' convention, then we may
> validly declare *either* that reals are a subtype of complexes, or that
> they are not.

The way I learnt maths, yes, mathematical values carry their type.
They're instances of a set with a given structure, constructed in a
precise way.

I think this is justified, in the context of computers, since the
types are implemented with different representations, different bit
patterns.  0, 0/1, 0.0 and #c(0 0) (even #c(0.0 0.0)) are not stored
in memory with the same bit pattern.  We, with our mathematical mind,
knowing some theorems about the existence of isomorphisms, may choose
to consider them one thing, but actually we are "unconsciously" and
constantly going through the isomorphism.  I agree that we may want
the computer to do the same, hence the automatic type conversions.
Too bad that they're not consistent, viz. the OP's question and the
difference between #c(1 0) and #c(1 0.0).
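
[The inconsistency can be made concrete with EQL, which distinguishes
representations, versus =, which compares through the isomorphism; a
sketch per the standard:]

```lisp
;; EQL distinguishes representations; = compares mathematical value.
(eql 0 0.0)           ;; => NIL  (different types, different bit patterns)
(=   0 0.0)           ;; => T

;; #c(0 0) is reduced to the integer 0 by the reader, but
;; #c(0.0 0.0) stays a complex:
(eql #C(0 0)     0)   ;; => T
(eql #C(0.0 0.0) 0.0) ;; => NIL
(=   #C(0.0 0.0) 0.0) ;; => T
```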


> I think many people would say that for each real x there exists a
> complex c such that x = c; or in other words that x+0i = x.

So, the correct notation would be:
    Let f be an isomorphism between the field (R,+,×) 
        and the subfield ({c | c in C, imag(c)=0},+,×)

    f(x) = x+0i  and  f^-1(x+0i) = x

When you write (x+0i) = x, you're invoking the isomorphism,
unconsciously and automatically.

When you write #c(1 0), Lisp does the same: it has 1+0i, notices that
imagpart(1+0i) is zero, and then applies the isomorphism, returning
realpart(1+0i) = 1.  (Don't ask me if it does that unconsciously; at
least it's automatic.)


But why should it prefer the reals to the complexes?  Why not do
the reverse:

[19]>  1
#C(1 0)

(Because we're lazy and don't want to write +0i too often!)


> It makes sense to me to say that reals are a subtype of complexes from a
> user standpoint, since I want to be able to use a real anywhere I can
> use a complex:  imagpart(x) above should return 0, not error. 

Subtype is not subset!


> However I can see that from the standpoint of some implementations,
> "subtype" might imply inherited implementation, which might give
> reals more baggage than they need to carry around.
> 
> However I didn't think that was the case in Lisp -- correct me if I am
> wrong, I am pretty new to Lisp. :)

-- 
__Pascal_Bourguignon__                   http://www.informatimago.com/
----------------------------------------------------------------------
Do not adjust your mind, there is a fault in reality.
From: Joe Marshall
Subject: Re: (COMPLEXP real)
Date: 
Message-ID: <y90pboui.fsf@ccs.neu.edu>
Pascal Bourguignon <····@thalassa.informatimago.com> writes:

> The way I learnt maths, yes, mathematical values carry their type.

I think it depends on whether you prefer Russell to Wittgenstein.
From: Pascal Bourguignon
Subject: Re: (COMPLEXP real)
Date: 
Message-ID: <87el2es1ju.fsf@thalassa.informatimago.com>
Joe Marshall <···@ccs.neu.edu> writes:
>
> Pascal Bourguignon <····@thalassa.informatimago.com> writes:
> 
> > The way I learnt maths, yes, mathematical values carry their type.
> 
> I think it depends on whether you prefer Russell to Wittgenstein.

I believe one point of view allows one to construct the structure
leading to the other point of view!

-- 
__Pascal_Bourguignon__                   http://www.informatimago.com/
----------------------------------------------------------------------
Do not adjust your mind, there is a fault in reality.
From: Karl A. Krueger
Subject: Re: (COMPLEXP real)
Date: 
Message-ID: <bb5mfq$mtr$2@baldur.whoi.edu>
Pascal Bourguignon <····@thalassa.informatimago.com> wrote:
> "Karl A. Krueger" <········@example.edu> writes:
>> Do mathematical values carry around type in and of themselves, or is
>> type something which we impose upon them in order to write programming
>> languages which use them?  If mathematical values do carry around type,
>> then the fact that x = x+0i does not mean that x+0i is x.  If they do
>> not, and if type is merely a programmers' convention, then we may
>> validly declare *either* that reals are a subtype of complexes, or that
>> they are not.
> 
> The way  I learnt  maths, yes, mathematical  values carry  their type.
> They're instances  of a set with  a given structure,  constructed in a
> precise way.

Ah.  I would say that values belong to sets, and that all the following
are notations for the same value, which point out its membership in
different sets:

        3               { a natural number }
        +3              { an integer }
        3.0             { a real number }
        3 + 0i          { a complex number where i^2 = -1 }
        3 + 0e          { a dual number where e^2 = 0, e =/= 0 }

... and arguably, so might be ...
        
        0x^2 + 0x + 3   { a polynomial }
        [ 3 ]           { a 1x1 matrix }

Would it reduce your position to absurdity to say that you're claiming
that although 3 and 3.0 are isomorphic they are not the same, in that
one is a member of the integers and the other of the reals?

-- 
Karl A. Krueger <········@example.edu>
Woods Hole Oceanographic Institution
Email address is spamtrapped.  s/example/whoi/
"Outlook not so good." -- Magic 8-Ball Software Reviews
From: Pascal Bourguignon
Subject: Re: (COMPLEXP real)
Date: 
Message-ID: <87add2s1bi.fsf@thalassa.informatimago.com>
"Karl A. Krueger" <········@example.edu> writes:

> Pascal Bourguignon <····@thalassa.informatimago.com> wrote:
> > "Karl A. Krueger" <········@example.edu> writes:
> >> Do mathematical values carry around type in and of themselves, or is
> >> type something which we impose upon them in order to write programming
> >> languages which use them?  If mathematical values do carry around type,
> >> then the fact that x = x+0i does not mean that x+0i is x.  If they do
> >> not, and if type is merely a programmers' convention, then we may
> >> validly declare *either* that reals are a subtype of complexes, or that
> >> they are not.
> > 
> > The way  I learnt  maths, yes, mathematical  values carry  their type.
> > They're instances  of a set with  a given structure,  constructed in a
> > precise way.
> 
> Ah.  I would say that values belong to sets, and that all the following
> are notations for the same value, which point out its membership in
> different sets:
> 
>         3               { a natural number }
>         +3              { an integer }
>         3.0             { a real number }
>         3 + 0i          { a complex number where i^2 = -1 }
>         3 + 0e          { a dual number where e^2 = 0, e =/= 0 }
> 
> ... and arguably, so might be ...
>         
>         0x^2 + 0x + 3   { a polynomial }
>         [ 3 ]           { a 1x1 matrix }
> 
> Would it reduce your position to absurdity to say that you're claiming
> that although 3 and 3.0 are isomorphic they are not the same, in that
> one is a member of the integers and the other of the reals?

It's effectively my position.  I don't think that it would reduce it
to absurdity.  I agree that we can build an equivalence class of
equivalence classes and that we could call all these different objects
representations of the same equivalence class named 3.  (I agree on a
common mathematical point of view.)  But I don't forget that when you
have numbers in computers, you have to choose a representation, and
then they are all different.  Just try to do:

    (+ (make-array '(1) :initial-element 3)  1)

or even:

    (+ (make-array '(1) :initial-element 3) 
       (make-array '(1) :initial-element 1))


Do you know of one language specification that would require all the
representations to be usable equivalently?  (Perhaps APL?)

-- 
__Pascal_Bourguignon__                   http://www.informatimago.com/
----------------------------------------------------------------------
Do not adjust your mind, there is a fault in reality.
From: Karl A. Krueger
Subject: Re: (COMPLEXP real)
Date: 
Message-ID: <bbcvjb$7rg$1@baldur.whoi.edu>
Pascal Bourguignon <····@thalassa.informatimago.com> wrote:
> It's effectively my position. I don't think that it would reduce it to
> absurdity.   I  agree  that  we  can build  an  equivalence  class  of
> equivalence classes and that we could name all these different objects
> representations of the same equivalence class named 3.  (I agree on a
> common mathematical point  of view). But I don't  forget that when you
> have numbers  in computers, you  have to choose a  representation, and
> then they are all different.  Just try to do:
> 
>    (+ (make-array '(1) :initial-element 3)  1)

Oh, sure, but Lisp arrays aren't mathematical matrices and are not
advertised as such.  (Nor for that matter do many matrix implementations
permit adding a matrix to a scalar, even if the matrix is a multiple of
the identity and thus equivalent to a scalar for some purposes -- it's
type checking; doing that is too likely to be an error.)


> or even:
>    (+ (make-array '(1) :initial-element 3) 
>       (make-array '(1) :initial-element 1))

No, but I would expect this to work in a Lisp which defined matrices as
a built-in type:

(+ (make-matrix :rank 1 :initial-element 1)
   (make-matrix :rank 1 :initial-element 1))

As it stands, CL:+ is defined over numbers not arrays though, and there
are no matrices in the core language.  Phooey.  :)


IIRC, the original question had to do with whether a real is a complex,
and the mingling of abstract types and representation types.  I think
the distinction between RATIO and RATIONAL is informative:  RATIO is a
representation type while RATIONAL is abstract, so 3 is the latter but
not the former -- it is a rational; it is not implemented as a ratio.
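
[The parallel can be sketched at the REPL of any conforming
implementation:]

```lisp
;; RATIONAL is the abstract type; RATIO is the representation type
;; for non-integral rationals.  A ratio that reduces to an integer
;; is canonicalized, just as #c(3 0) is canonicalized to 3.
(typep 3   'rational) ;; => T
(typep 3   'ratio)    ;; => NIL
(typep 1/2 'rational) ;; => T
(typep 1/2 'ratio)    ;; => T
(eql 4/2 2)           ;; => T  (4/2 is read as the integer 2)
```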

In a language with built-in matrices it might make sense to implement
complexes as their 2x2 matrix representation rather than as ordered
pairs.  :)


> Do you know of one language specification that would require all the
> representations to be usable equivalently? (Perhaps APL?)

All?  I don't think that would be possible, mathematicians have come up
with too many types.  Sure, it would be amusing to have tensors and dual
numbers and all ... :)

-- 
Karl A. Krueger <········@example.edu>
Woods Hole Oceanographic Institution
Email address is spamtrapped.  s/example/whoi/
"Outlook not so good." -- Magic 8-Ball Software Reviews
From: Chris Riesbeck
Subject: Re: (COMPLEXP real)
Date: 
Message-ID: <riesbeck-ECCDB1.12411902062003@news.it.northwestern.edu>
In article <············@baldur.whoi.edu>,
 "Karl A. Krueger" <········@example.edu> wrote:

>... but Lisp arrays aren't mathematical matrices and are not
>advertised as such.  (Nor for that matter do many matrix implementations
>permit adding a matrix to a scalar, even if the matrix is a multiple of
>the identity and thus equivalent to a scalar for some purposes -- it's
>type checking; doing that is too likely to be an error.)

In *a* Lisp, namely XlispStat, this works:

> (+ 3 #(1 2 3))
#(4 5 6)

> (* 3 #(1 2 3))
#(3 6 9)

> (+ #(1 2 4) #(1 2 3))
#(2 4 7)

Same thing for lists. Not Common Lisp, though.
From: Nils Goesche
Subject: Re: (COMPLEXP real) (was: ANN: Lisp-2 vs Lisp-1 argument has been settled)
Date: 
Message-ID: <87u1bd67uu.fsf@darkstar.cartan>
Pascal Bourguignon <····@thalassa.informatimago.com> writes:

> Kent M Pitman <······@world.std.com> writes:
> 
> > The point is that in mathematics, a real is a complex. 
> 
> Well, perhaps in your mathematics, but in the mathematics I
> learnt back in France, complex is very different to real. Not
> the same thing at all.
> 
> The set of Reals is the set of the classes of equivalences of
> converging series or rationals.

That's one way of defining them.  Or actually it isn't, because a
real is an equivalence class of Cauchy sequences of rationals.
If they were all convergent, the whole construction would be
unnecessary.  But you can also define them in totally different
ways; with Dedekind cuts, for instance.  What counts is only that
the fields you define in such ways are all isomorphic.

> The set of Complexes is the product set of the set of Reals by
> the set of Reals.
> 
> So complexes are couples, while reals are classes of
> equivalences. I don't see that they're the same thing.

Reals are equivalence classes of Cauchy sequences, or they are
Dedekind cuts.  The same thing?

> Now, it's true that there exist bijections between the set of
> real and some subsets of the set of complexes, and what's more,
> there even exist isomorphisms translating the "corps"* (R,+,×)
> to a stable sub-"corps" of C.  That's very convenient, but that
> should not make you confound the set R and the subset of C: {c
> | c in C and imagpart(c)=0}.

The word you are looking for is ``field''.  When we say
``reals'', we mean any one of a number of isomorphic
constructions.  As R can be isomorphically embedded into C, we
can also regard it as a subset of C, and this is in fact commonly
done whenever we deal with complex numbers, for instance when we
say ``We are going to prove now that the complex number r is in
fact real...''.  When we are done with the proof, we do not
suddenly think that r is not a complex number anymore.

Regards,
-- 
Nils Gösche
Ask not for whom the <CONTROL-G> tolls.

PGP key ID #xD26EF2A0
From: Pascal Bourguignon
Subject: Re: (COMPLEXP real) (was: ANN: Lisp-2 vs Lisp-1 argument has been settled)
Date: 
Message-ID: <87d6i1hd8q.fsf@thalassa.informatimago.com>
Nils Goesche <···@cartan.de> writes:

> Pascal Bourguignon <····@thalassa.informatimago.com> writes:
> 
> > Kent M Pitman <······@world.std.com> writes:
> > 
> > > The point is that in mathematics, a real is a complex. 
> > 
> > Well, perhaps in your mathematics, but in the mathematics I
> > learnt back in France, complex is very different to real. Not
> > the same thing at all.
> > 
> > The set of Reals is the set of the classes of equivalences of
> > converging series or rationals.
> 
> That's one way of defining them.  Or actually it isn't because a
> real is an equivalence class of Cauchy-sequences of rationals.
> If they were all convergent, the whole construction would be
> unnecessary.  But you can also define them in totally different
> ways; with Dedekind cuts, for instance.  What counts is only that
> the fields you define in such ways are all isomorphic.
> 
> > The set of Complexes is the product set of the set of Reals by
> > the set of Reals.
> > 
> > So complexes are couples, while reals are classes of
> > equivalences. I don't see that they're the same thing.
> 
> Reals are equivalence classes of Cauchy sequences, or they are
> Dedekind cuts.  The same thing?

No.  But perhaps we could construct an equivalence class of all the
possible representations/constructions of the reals.  There would
remain the distinction between the equivalence class and the
instances, but that would be more nearly the same thing than two sets
related by an isomorphism...

But in any case, two different instances would still be two different
representations, perhaps widely different, and that could justify that
no automatic conversion is done.
 
> > Now, it's true that there exist bijections between the set of
> > real and some subsets of the set of complexes, and what's more,
> > there even exist isomorphisms translating the "corps"* (R,+,×)
> > to a stable sub-"corps" of C.  That's very convenient, but that
> > should not make you confound the set R and the subset of C: {c
> > | c in C and imagpart(c)=0}.
> 
> The word you are looking for is ``field''. 
Thank you.

>  When we say
> ``reals'', we mean any one of a number of isomorphic
> constructions.  As R can be isomorphically embedded into C, we
> can also regard it as a subset of C, and this is in fact commonly
> done whenever we deal with complex numbers, for instance when we
> say ``We are going to prove now that the complex number r is in
> fact real...''.  When we are done with the proof, we do not
> suddenly think that r is not a complex number anymore.

In maths, no.  Because we have the wetware to keep the isomorphism in
mind.  But in the computer the differences are always nagging us,
because they're implemented as different types.  The problem occurs
when the isomorphism can be applied in some cases and not in others:
(eq #c(1 0) 1) but (not (eq #c(1 0.0) 1))


-- 
__Pascal_Bourguignon__                   http://www.informatimago.com/
----------------------------------------------------------------------
Do not adjust your mind, there is a fault in reality.
From: Adam Warner
Subject: Re: (COMPLEXP real) (was: ANN: Lisp-2 vs Lisp-1 argument has been settled)
Date: 
Message-ID: <pan.2003.05.29.22.43.16.679926@consulting.net.nz>
Hi Pascal Bourguignon,

> In  maths, no.   Because  we have  the  wetware to  keep  in mind  the
> isomorphism.  But  in computer the  differences are always  nagging us
> because  they're  implemented  differently  as different  types.   The
> problem occurs  when the isomorphism can  be applied in  some case and
> not in others. (eq #c(1 0) 1) but (not (eq #c(1 0.0) 1))

You chose a bad example because EQ should not be used to compare integers:
(eq 1234567890 1234567890) => true or false (probably false on 32-bit
hardware)

Integer fixnums and bignums are also implemented differently. Do you
consider the differences are always nagging or do you simply use the
appropriate operator to compare their equivalence? If just checking
whether two integers are equal, EQL can be used. When comparing whether a
complex and an integer are equal, = should be used.

If you want to ignore differences your only solution is to use the
appropriate abstraction. Are the differences still nagging in this case:
(= #c(1 0) 1) and (= #c(1 0.0) 1)

Regards,
Adam
From: Pascal Bourguignon
Subject: Re: (COMPLEXP real) (was: ANN: Lisp-2 vs Lisp-1 argument has been settled)
Date: 
Message-ID: <87r86es2zx.fsf@thalassa.informatimago.com>
"Adam Warner" <······@consulting.net.nz> writes:

> Hi Pascal Bourguignon,
> 
> > In  maths, no.   Because  we have  the  wetware to  keep  in mind  the
> > isomorphism.  But  in computer the  differences are always  nagging us
> > because  they're  implemented  differently  as different  types.   The
> > problem occurs  when the isomorphism can  be applied in  some case and
> > not in others. (eq #c(1 0) 1) but (not (eq #c(1 0.0) 1))
> 
> You chose a bad example because EQ should not be used to compare integers:
> (eq 1234567890 1234567890) => true or false (probably false on 32-bit
> hardware)

You're right, but it does not matter. I meant "they have the same
representation" rather than "they're the same object". Even with
equal, they're different, because (not (equal 0 0.0)) (at least in
some implementations).

> Integer fixnums and bignums are also implemented differently. Do you
> consider the differences are always nagging or do you simply use the
> appropriate operator to compare their equivalence? If just checking
> whether two integers are equal, EQL can be used. When comparing whether a
> complex and an integer are equal, = should be used.

Note that when you compare different integer representations, as in
say short and long, or even signed char and signed long, you cannot
compare them directly, but have to promote the smaller one. This is
done automatically in Lisp, even for bignums, while in C it's done
automatically only over a smaller range.
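For example (a sketch; the exact fixnum limit varies by
implementation):

```lisp
;; Crossing the fixnum limit promotes to a bignum transparently:
(typep most-positive-fixnum 'fixnum)        ; => T
(typep (+ most-positive-fixnum 1) 'bignum)  ; => T
;; Arithmetic and = keep working across the representation change,
;; whereas in C the promotion chain stops at the widest integer type.
(= (+ most-positive-fixnum 1) (1+ most-positive-fixnum)) ; => T
```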
 

> If you want to ignore differences your only solution is to use the
> appropriate abstraction. Are the differences still nagging in this case:
> (= #c(1 0) 1) and (= #c(1 0.0) 1)
> 
> Regards,
> Adam

What I'm saying is that Lisp goes through the isomorphism to get the
wanted result. Compare:

[28]> (equal #c(1 0.0) 1)
NIL

and:

[29]> (= #c(1 0.0) 1)
T

equal says that  #c(1 0.0) and 1  are not the same thing  while = says
that there is an isomorphism f between their sets and that:

     (equal (f #c(1 0.0)) 1).

(or f(1+0i) = 1).


-- 
__Pascal_Bourguignon__                   http://www.informatimago.com/
----------------------------------------------------------------------
Do not adjust your mind, there is a fault in reality.
From: Jens Axel Søgaard
Subject: Re: (COMPLEXP real) (was: ANN: Lisp-2 vs Lisp-1 argument has been settled)
Date: 
Message-ID: <3ed633d7$0$97201$edfadb0f@dread12.news.tele.dk>
Pascal Bourguignon wrote:
> Kent M Pitman <······@world.std.com> writes:

>>The point is that in mathematics, a real is a complex. 

> Well, perhaps  in your  mathematics, but in  the mathematics  I learnt
> back in France, complex is very  different to real. Not the same thing
> at all.
> 
> The  set  of Reals  is  the  set of  the  classes  of equivalences  of
> converging series or rationals.

That's one implementation (read construction).

The cuts of Dedekind is another.

So if you think "a real is a complex" should have been
"R is isomorphic to a subset of C", then you ought to
write "R is isomorphic to the set of equivalence classes of ...".

> The set of Complexes is the product set of the set of Reals by the set
> of Reals.

In algebra one uses R[x]/(x^2+1). Thus your "is" really is "is 
isomorphic to".

> So complexes are  couples, while reals are classes  of equivalences. I
> don't see that they're the same thing.

Even in math we say that R is a subset of C without losing any sleep.

-- 
Jens Axel Søgaard
From: Gareth McCaughan
Subject: Re: (COMPLEXP real) (was: ANN: Lisp-2 vs Lisp-1 argument has been settled)
Date: 
Message-ID: <slrnbdct0o.2eb1.Gareth.McCaughan@g.local>
Pascal Bourguignon wrote:

[Kent Pitman:]
> > The point is that in mathematics, a real is a complex. 

[Pascal:]
> Well, perhaps  in your  mathematics, but in  the mathematics  I learnt
> back in France, complex is very  different to real. Not the same thing
> at all.
> 
> The  set  of Reals  is  the  set of  the  classes  of equivalences  of
> converging series or rationals.
> 
> The set of Complexes is the product set of the set of Reals by the set
> of Reals.
> 
> So complexes are  couples, while reals are classes  of equivalences. I
> don't see that they're the same thing.

You're suffering from foundationitis, also known as
implementationosis. Mathematicians were doing perfectly
good work with complex numbers, and thinking of C as a
superset of R, long before it occurred to anyone
that maybe we could define R in terms of Q by taking
equivalence classes of Cauchy sequences or Dedekind cuts.
Even in these enlightened times, it's commonplace for
a mathematician to write something like "Q c R c C"
(where the "c" is the nearest I can get in ASCII to
the usual symbol for the subset relation).

Do you consider the non-negative integers to be a subset
of the integers? The usual construction of the numeric
tower begins by building non-negative integers out of
sets, then arbitrary integers out of non-negative ones
(typically as equivalence classes of pairs where (a,b)
is meant to mean a-b).

Should (RATIONALP 1) return NIL? The usual construction
of the numeric tower builds Q from Z, typically using
equivalence classes of pairs where (a,b) is meant to mean
a/b.

We should distinguish interface from implementation,
meaning from construction. You can build C out of R
as RxR or R[x]/(x^2+1) or the algebraic completion
of R (all three of these lead to different sets, so
maybe 1+i isn't the same thing as 1+i), but it's
essentially *always* best to think of R as a subset
of C. Unless you're working on the set-theoretical
foundations of analysis, I suppose. Most mathematicians
aren't.

-- 
Gareth McCaughan  ················@pobox.com
.sig under construc
From: Pascal Bourguignon
Subject: Re: (COMPLEXP real) (was: ANN: Lisp-2 vs Lisp-1 argument has been settled)
Date: 
Message-ID: <87n0h2s2ae.fsf@thalassa.informatimago.com>
Gareth McCaughan <················@pobox.com> writes:

> Pascal Bourguignon wrote:
> 
> [Kent Pitman:]
> > > The point is that in mathematics, a real is a complex. 
> 
> [Pascal:]
> > Well, perhaps  in your  mathematics, but in  the mathematics  I learnt
> > back in France, complex is very  different to real. Not the same thing
> > at all.
> > 
> > The  set  of Reals  is  the  set of  the  classes  of equivalences  of
> > converging series or rationals.
> > 
> > The set of Complexes is the product set of the set of Reals by the set
> > of Reals.
> > 
> > So complexes are  couples, while reals are classes  of equivalences. I
> > don't see that they're the same thing.
> 
> You're suffering from foundationitis, also known as
> implementationosis.

Yes, I admit it. The fact is that these differences matter in
computers, because they correspond to differences of implementation:
unsigned int is not implemented the same way as signed int; the bit
patterns are not interpreted the same way.


> Should (RATIONALP 1) return NIL? 

It all depends on whether we want to know how it's implemented, what
it means, or what mathematical properties it has (what set it belongs
to).


From the various constructions of N, Z, Q, (D), R and C, to be able to
say N c Z c Q c D  c R c C you'd have to introduce equivalence classes
between the different constructions (I've never seen this done
formally), and then you'd have built the numbers as they're used by
mathematicians. So I can agree that in mathematics, a real is a
complex. But in computers, there remains the problem of the
representations. The primitives of the language should be sorted into
primitives about the representation and primitives about the abstract,
mathematical object. We can say that = is abstract, mathematical
equality, while eq, eql and equal are representational equalities.
Other parts of the language are not so precise about what they mean.
For example, the fact that:

     (not (equal #c(1 0) #c(1 0.0)))

leads me to think that they try to talk about the mathematical real
0.0, while in the computer we have only a subset of the rational
numbers, not even of the decimals! (the floating points). So when you
do a computation with this representation, you should not compare two
floating points for equality but should test whether abs(a-b)<epsilon,
because the computations involve errors and rounding at every stage.
So when you get 0.0 that should actually mean ]-epsilon,+epsilon[ and
you should not say that it's equal to 0.

In that sense, it's prudent to have: (not (equal #c(1 0) #c(1 0.0)))

but I don't think that it's entirely deliberate and formally specified...
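To illustrate the epsilon test (a sketch; APPROX= is a hypothetical
helper, and the right epsilon is problem-dependent):

```lisp
(defun approx= (a b &optional (epsilon 1e-6))
  "True when A and B differ by less than EPSILON (absolute tolerance)."
  (< (abs (- a b)) epsilon))

(approx= 0.0 1e-9)    ; => T   -- "0.0" read as ]-epsilon,+epsilon[
(approx= 0.0 0.001)   ; => NIL with the default epsilon of 1e-6
```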


-- 
__Pascal_Bourguignon__                   http://www.informatimago.com/
----------------------------------------------------------------------
Do not adjust your mind, there is a fault in reality.
From: Gareth McCaughan
Subject: Re: (COMPLEXP real) (was: ANN: Lisp-2 vs Lisp-1 argument has been settled)
Date: 
Message-ID: <slrnbdjodm.1i9.Gareth.McCaughan@g.local>
Pascal Bourguignon wrote:

> Gareth McCaughan <················@pobox.com> writes:
>
> > You're suffering from foundationitis, also known as
> > implementationosis.
> 
> Yes I  admit it. Fact is  that these differences  matter in computers,
> because they correspond to differences of implementation. unsigned int
> is not  implemented the same way  as signed int. The  bit patterns are
> not interpreted the same way.

Feh. When I want to think about bit patterns, I program in
C or C++. This was comp.lang.lisp last time I checked :-).
Anyway: in some CL systems, odd integers and even integers
have different type codes. Should we regard *those* as
different types?



> > Should (RATIONALP 1) return NIL? 
> 
> It all depends whether we want to know how it's implemented or what it
> means, or  what mathematical  properties it has  (what set  it belongs
> to).

For the most part I don't care how it's implemented. I care
what it means -- and, at least as I see it, the natural number
1, the rational number 1, the real number 1, the complex
number 1, and the quaternion 1 all mean the same thing.
I care what mathematical properties it has -- and, at least
as I see it, they all have the same mathematical properties
too. As for "what set it belongs to", it belongs to exactly
as many sets as there are sets (subject to various mild
set-theoretic axioms). If I want to consider 1 "as a member
of such-and-such a set", then I should do that by making
the set explicit, not by pretending I've got several different
1s of different types.

Think about set theory again. One early attempt at building
a set-theoretic foundation for mathematics was that of
Russell and Whitehead. Their "theory of types" has lots
of different empty sets. No one uses the theory of types
any more, and that's one of the reasons: things that are
obviously "the same" having to be regarded as different
because they're in different sets.

> From the various constructions of N, Z, Q, (D), R and C, to be able to
> say N c Z c Q c D  c R c C you'd have to introduce equivalence classes
> between  the  different  constructions  (I've  never  seen  this  done
> formally), and  they you'd have built  the numbers as  they're used by
> the mathematicians.

What's "D"? It can't be the dyadic rationals because that would
come between Z and Q. You used the word "decimals" later, but
if you mean terminating decimals that would be between Z and Q,
if you mean terminating-or-repeating decimals that would be
the same as Q, and if you mean possibly-non-terminating-decimals
then that would be the same as R (or, if you don't quotient out
by ...x999999999... = ...(x+1)000000000..., it would be a superset
of R not contained in C). So I'm confused.

I'm not sure whether I've seen the necessary identifications
done formally. If not, I think I know why not: because they're
easy and obvious, so instead of wasting time on it one might
as well say "and now we can just take all those canonical
injections for granted".

>                      So  I can agree that in mathematics,  a real is a
> complex.   But  in  computers,   there  remains  the  problem  of  the
> representations.   The primitives  of the  languages should  be sorted
> between  primitives about  the representation  or primitive  about the
> abstract,  mathematical  object.
[SNIP: in CL, = is abstract and eq* are representational]

Sure. But this has nothing to do with any mathematical
construction of the various objects involved.

> leads me to  think that they try to talk  about mathematical real 0.0,
> while we have  in the computer only a subset  of rational numbers, not
> even of decimals! (the floating points).  So when you do a computation
> with this  representation, you should compare two  floating points for
> equality  but   you  should  test  if   abs(a-b)<epsilon  because  the
> computations involved  errors and rounding  at all stage. So  when you
> get 0.0  that should mean actually ]-epsilon,+epsilon[  and you should
> not say that it's equal to 0.

Well, yes, kinda. (There are applications in which you can
use floating-point numbers and *know* there are no errors.
And when there are errors, the value of epsilon you should
use can vary from one problem to another. And even when
there are errors in the calculation, it can sometimes be
correct to test for exact equality, and when it isn't
it's often correct to test for something much broader than
the possible accumulated errors would suggest.)

> In that sense, it's prudent to have: (not (equal #c(1 0) #c(1 0.0)))
> 
> but I don't think that it's entirely deliberate and formaly specified...

It's certainly quite formally specified in the spec.
I think it's deliberate too, but you'd need to ask
the people involved in making the standard.

-- 
Gareth McCaughan  ················@pobox.com
.sig under construc
From: Joe Marshall
Subject: Re: (COMPLEXP real) (was: ANN: Lisp-2 vs Lisp-1 argument has been settled)
Date: 
Message-ID: <TvpCa.536756$Si4.464315@rwcrnsc51.ops.asp.att.net>
"Gareth McCaughan" <················@pobox.com> wrote in message
····································@g.local...
>
> Think about set theory again. One early attempt at building
> a set-theoretic foundation for mathematics was that of
> Russell and Whitehead. Their "theory of types" has lots
> of different empty sets. No one uses the theory of types
> any more, and that's one of the reasons: things that are
> obviously "the same" having to be regarded as different
> because they're in different sets.

Programming languages seem to.  The empty string, a vector with
no elements, and an empty list are all different empty sets.
From: Gareth McCaughan
Subject: Re: (COMPLEXP real) (was: ANN: Lisp-2 vs Lisp-1 argument has been settled)
Date: 
Message-ID: <slrnbdl576.1i9.Gareth.McCaughan@g.local>
Joe Marshall wrote:

[I said:]
> > Think about set theory again. One early attempt at building
> > a set-theoretic foundation for mathematics was that of
> > Russell and Whitehead. Their "theory of types" has lots
> > of different empty sets. No one uses the theory of types
> > any more, and that's one of the reasons: things that are
> > obviously "the same" having to be regarded as different
> > because they're in different sets.
> 
> Programming languages seem to.  The empty string, a vector with
> no elements, and an empty list are all different empty sets.

Different empty sequences, anyway. Good point, though.
On the other hand, it feels (to me) right in a programming
language and wrong in mathematics, so let's try to work out
why.

The key difference for me is that the various types
of set all support the same operations, modulo changes
of type; whereas different kinds of sequence support
different operations. This is especially true in Lisp,
where lists aren't *really* a kind of sequence :-).
And it's especially true for mutable objects. When you
stick an #\x onto the end of "" you get "x"; when you
stick one onto the end of (an adjustable version of)
#() you get #(#\x). Therefore they aren't the same
object. Or else the "stick-this-onto-the-end-of-that"
operations are different, in which case it's no longer
true that an empty string and an empty vector are
just different incarnations of "the empty sequence".
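In Common Lisp terms the experiment looks like this (a sketch using
adjustable vectors with fill pointers):

```lisp
(let ((s (make-array 0 :element-type 'character
                       :adjustable t :fill-pointer 0))  ; an adjustable ""
      (v (make-array 0 :adjustable t :fill-pointer 0))) ; an adjustable #()
  (vector-push-extend #\x s)
  (vector-push-extend #\x v)
  (values s v))   ; => "x" and #(#\x) -- same operation, different results
```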

Another way to look at it: In the theory of types,
the complexity of the type system doesn't actually
buy you anything much. (It was supposed to avoid
paradoxes, but it turns out that there are other
better approaches that avoid them just as well.)
In programming languages, having lists and vectors
and strings as separate kinds of thing *does* buy
you something: each is useful for different purposes.

-- 
Gareth McCaughan  ················@pobox.com
.sig under construc
From: Pascal Bourguignon
Subject: Re: (COMPLEXP real) (was: ANN: Lisp-2 vs Lisp-1 argument has been settled)
Date: 
Message-ID: <87u1b9j6zq.fsf@thalassa.informatimago.com>
Gareth McCaughan <················@pobox.com> writes:

> Joe Marshall wrote:
> 
> [I said:]
> > > Think about set theory again. One early attempt at building
> > > a set-theoretic foundation for mathematics was that of
> > > Russell and Whitehead. Their "theory of types" has lots
> > > of different empty sets. No one uses the theory of types
> > > any more, and that's one of the reasons: things that are
> > > obviously "the same" having to be regarded as different
> > > because they're in different sets.
> > 
> > Programming languages seem to.  The empty string, a vector with
> > no elements, and an empty list are all different empty sets.
> 
> Different empty sequences, anyway. Good point, though.
> On the other hand, it feels (to me) right in a programming
> language and wrong in mathematics, so let's try to work out
> why.
> 
> The key difference for me is that the various types
> of set all support the same operations, modulo changes
> of type; whereas different kinds of sequence support
> different operations. This is especially true in Lisp,
> where lists aren't *really* a kind of sequence :-).
> And it's especially true for mutable objects. When you
> stick an #\x onto the end of "" you get "x"; when you
> stick one onto the end of (an adjustable version of)
> #() you get #(#\x). Therefore they aren't the same
> object. 

They were, in Lisps where strings were represented as lists of
characters.

Hence the importance of the representation. Common-Lisp is not "pure"
in the sense that the objects it offers us are not "pure" mathematical
objects; that is, there are no super-high-level mathematical
constructions with automatic isomorphisms in every direction. The
objects Common-Lisp offers us are elements of constructed sets where,
when some isomorphism exists, it is sometimes invoked automatically
and sometimes not (=: yes, equal: no, concatenate: yes, append: no):

[5]> (concatenate 'string '(#\x #\y) '(#\a #\b))
"xyab"
[8]> (concatenate 'string "xy" "ab")
"xyab"
[9]> (append '(#\x #\y) '(#\a #\b))
(#\x #\y #\a #\b)
[10]> (append "xy" "ab")

*** - APPEND: "xy" is not a list
1. Break [11]> 

[12]> (equal 0 0.0)
NIL
[13]> (= 0 0.0)
T

> Or else the "stick-this-onto-the-end-of-that"
> operations are different, in which case it's no longer
> true that an empty string and an empty vector are
> just different incarnations of "the empty sequence".
> 
> Another way to look at it: In the theory of types,
> the complexity of the type system doesn't actually
> buy you anything much. (It was supposed to avoid
> paradoxes, but it turns out that there are other
> better approaches that avoid them just as well.)
> In programming languages, having lists and vectors
> and strings as separate kinds of thing *does* buy
> you something: each is useful for different purposes.

That is, each has different performance coefficients.

If we had smarter compilers, they could choose themselves what
representation to use, and we could stay at a high mathematical level.

-- 
__Pascal_Bourguignon__                   http://www.informatimago.com/
----------------------------------------------------------------------
Do not adjust your mind, there is a fault in reality.
From: Gareth McCaughan
Subject: Re: (COMPLEXP real) (was: ANN: Lisp-2 vs Lisp-1 argument has been settled)
Date: 
Message-ID: <slrnbdsprq.1i9.Gareth.McCaughan@g.local>
Pascal Bourguignon wrote:

[I said, about treating 0, 0/1, 0.0 as the same but
"", (), #() as different:]

> > The key difference for me is that the various types
> > of set all support the same operations, modulo changes
> > of type; whereas different kinds of sequence support
> > different operations. This is especially true in Lisp,
> > where lists aren't *really* a kind of sequence :-).
> > And it's especially true for mutable objects. When you
> > stick an #\x onto the end of "" you get "x"; when you
> > stick one onto the end of (an adjustable version of)
> > #() you get #(#\x). Therefore they aren't the same
> > object. 

[Pascal:]
> They  were in  lisp  where the  strings  were represented  as list  of
> characters.

In a dialect of Lisp in which strings are lists of characters,
indeed "" and () would be the same object, and there would be
no issue here. In Common Lisp they are different objects, and
they also have different behaviour.

> Hence the importance of the representation.  Common-Lisp is not "pure"
> in the sense that the objects it offers us are not "pure" mathematical
> objects,  that  is,  there  are  not  super  high  level  mathematical
> constructions  with automatic  isomorphims in  every  directions.  The
> objects Common-Lisp offers us  are elements of constructed sets where,
> when  some isomorphims  exists, sometimes  it's  automatically invoked
> sometimes not. (=: yes, equal: no, concatenate: yes, append: no)

Yes. Though most of these things aren't really isomorphisms.

> > Another way to look at it: In the theory of types,
> > the complexity of the type system doesn't actually
> > buy you anything much. (It was supposed to avoid
> > paradoxes, but it turns out that there are other
> > better approaches that avoid them just as well.)
> > In programming languages, having lists and vectors
> > and strings as separate kinds of thing *does* buy
> > you something: each is useful for different purposes.
> 
> That is, each has different performance coefficients.

Not just that. Lisp's lists are *not* just arrays with
different performance characteristics. To what array
does (1 . 2) correspond? If your answer is #(1 2) then
(1 2 3) is no longer a sequence :-).

> If  we  had  smarter  compilers,  they  could  choose  themselve  what
> representation to use and we could stay at a mathematical high level.

The programming language of my dreams has this feature,
kind of, as part of a different approach to object
orientation. Like most dreams, it isn't very coherent. :-)

-- 
Gareth McCaughan  ················@pobox.com
.sig under construc
From: Pascal Bourguignon
Subject: Re: (COMPLEXP real) (was: ANN: Lisp-2 vs Lisp-1 argument has been settled)
Date: 
Message-ID: <87y90lj7gq.fsf@thalassa.informatimago.com>
Gareth McCaughan <················@pobox.com> writes:
> > From the various constructions of N, Z, Q, (D), R and C, to be able to
> > say N c Z c Q c D  c R c C you'd have to introduce equivalence classes
> > between  the  different  constructions  (I've  never  seen  this  done
> > formally), and  they you'd have built  the numbers as  they're used by
> > the mathematicians.
> 
> What's "D"? It can't be the dyadic rationals because that would
> come between Z and Q. You used the word "decimals" later, but
> if you mean terminating decimals that would be between Z and Q,
> if you mean terminating-or-repeating decimals that would be
> the same as Q, and if you mean possibly-non-terminating-decimals
> then that would be the same as R (or, if you don't quotient out
> by ...x999999999... = ...(x+1)000000000..., it would be a superset
> of R not contained in C). So I'm confused.

Sorry, I meant the terminating decimals: N c Z c D c Q c R c C 

-- 
__Pascal_Bourguignon__                   http://www.informatimago.com/
----------------------------------------------------------------------
Do not adjust your mind, there is a fault in reality.
From: Jens Axel Søgaard
Subject: Re: (COMPLEXP real) (was: ANN: Lisp-2 vs Lisp-1 argument has been settled)
Date: 
Message-ID: <3edb360d$0$97241$edfadb0f@dread12.news.tele.dk>
Pascal Bourguignon wrote:

> Sorry, I meant the terminating decimals: N c Z c D c Q c R c C 

But the terminating decimals D is not a field.

-- 
Jens Axel Søgaard
From: Jens Axel Søgaard
Subject: Re: (COMPLEXP real) (was: ANN: Lisp-2 vs Lisp-1 argument has been settled)
Date: 
Message-ID: <3edb37b9$0$97240$edfadb0f@dread12.news.tele.dk>
Jens Axel Søgaard wrote:
> Pascal Bourguignon wrote:
> 
>> Sorry, I meant the terminating decimals: N c Z c D c Q c R c C 
> 
> 
> But the terminating decimals D is not a field.

Forget that. My comment is completely irrelevant.

-- 
Jens Axel Søgaard
From: Joe Marshall
Subject: Re: (COMPLEXP real) (was: ANN: Lisp-2 vs Lisp-1 argument has been settled)
Date: 
Message-ID: <hqpCa.798786$OV.727344@rwcrnsc54>
"Pascal Bourguignon" <····@thalassa.informatimago.com> wrote in message
···················@thalassa.informatimago.com...

> So when you do a computation
> with this [floating point] representation, you should [not] compare two
> floating points for equality but you should test if abs(a-b)< epsilon
> because the computations involved errors and rounding at all stage.

Not necessarily.  Floating point is *exact* when it can be.  For instance,
if you do integer arithmetic on double-precision floating point numbers
less than 2^53, there will be no error or rounding at all.  (Yes, if you
divide 2 by 3 you get a rounded answer, but it ain't an integer.)

> So  when you get 0.0  that should mean actually ]-epsilon,+epsilon[
> and you should not say that it's equal to 0.

This is wrong.  Floating point arithmetic is definitely *not*
interval arithmetic.  If you get 0.0, the answer is either exactly
0.0, or the `inexact' flag will be set to indicate that 0.0 is the
closest representable value.

>
> In that sense, it's prudent to have: (not (equal #c(1 0) #c(1 0.0)))
>
> but I don't think that it's entirely deliberate and formaly specified...

It is deliberately and formally specified.
EQUAL on numbers is the same as EQL on numbers, and EQL on
numbers is only true if they are the same type (representation)
and the same value.  If EQUAL doesn't do what you want vis-a-vis
complex numbers, then don't use it.
From: Johan Kullstam
Subject: Re: (COMPLEXP real) (was: ANN: Lisp-2 vs Lisp-1 argument has been settled)
Date: 
Message-ID: <87llwkp7zx.fsf@sysengr.res.ray.com>
Gareth McCaughan <················@pobox.com> writes:

> Should (RATIONALP 1) return NIL? The usual construction
> of the numeric tower builds Q from Z, typically using
> equivalence classes of pairs where (a,b) is meant to mean
> a/b.

[3] (rationalp 1)
T

Sometimes you want to handle something differently depending on
whether it is complex or not.

Check this brain damage

[4]> (complexp #c(1.0 3.0))
T
[5]> (complexp 1.0)
NIL
[6]> (complexp #c(1 0))
NIL

If complex is a superset of the reals then [6] doesn't make sense.  If
you like this behaviour, why shouldn't (rationalp 1) return nil?

So now I can add two complex numbers and get something which is not
deemed complex:

[7]> (complexp (+ #c(2 1) #c(2 -1)))
NIL

For a system which can handle #c(1 0) as a complex number, if I wanted
#c(1 0) to be a real, I could always coerce it.  However, Common Lisp
*removes* the option as there is no way as far as I can tell to have a
complex #c(1 0).
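For the record, the rule of canonical representation for complex
rationals is what forbids it (a sketch; the behaviour of the COERCE
form may vary by implementation):

```lisp
(complex 1 0)    ; => 1            -- rational parts collapse to the real
(complex 1 0.0)  ; => #C(1.0 0.0)  -- float parts keep the complex type
(coerce 1 '(complex single-float)) ; => #C(1.0 0.0) in most implementations
```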

-- 
Johan KULLSTAM <··········@attbi.com> sysengr
From: Michael Sullivan
Subject: Re: (COMPLEXP real)
Date: 
Message-ID: <1fvzlzi.1pysemrk6uun9N%michael@bcect.com>
Johan Kullstam <··········@attbi.com> wrote:

> Check this brain damage
> 
> [4]> (complexp #c(1.0 3.0))
> T
> [5]> (complexp 1.0)
> NIL
> [6]> (complexp #c(1 0))
> NIL
> 
> If complex is a superset of the reals then [6] doesn't make sense.  If
> you like this behaviour, why shouldn't (rationalp 1) return nil?

I don't know who actually likes this behavior.  Mathematically, it's
abhorrent, which (I think) is why Kent brought it up as something of a
niggling thing one might want changed.

I can guess why it might be useful in numerically intensive computation
where speed is a major factor. In those cases, it might be more
important to use "complexp" as an implementational query than as a
mathematical one.  Personally, I'd have opted for two separate
functions, but fortunately, I can always write my own mathematical
version mcomplexp if I want.  In practice, there are probably relatively
few cases where numberp would not suffice, so I'm starting to see the
point of the standard's decision...

 
Michael
From: Kaz Kylheku
Subject: Re: (COMPLEXP real) (was: ANN: Lisp-2 vs Lisp-1 argument has been settled)
Date: 
Message-ID: <cf333042.0306032108.32ccc81c@posting.google.com>
Johan Kullstam <··········@attbi.com> wrote in message news:<··············@sysengr.res.ray.com>...
> Gareth McCaughan <················@pobox.com> writes:
> 
> > Should (RATIONALP 1) return NIL? The usual construction
> > of the numeric tower builds Q from Z, typically using
> > equivalence classes of pairs where (a,b) is meant to mean
> > a/b.
> 
> [3] (rationalp 1)
> T

However: (typep 1 'ratio) -> NIL

There is a set of rational numbers which is exhaustively partitioned
into ratios and integers.

Similarly, numbers are partitioned into complex and real. Real numbers
have an imaginary part of zero.
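TYPEP shows the parallel partitions directly (a sketch):

```lisp
(typep 1   'rational)    ; => T
(typep 1   'ratio)       ; => NIL -- RATIONAL = INTEGER u RATIO, disjointly
(typep 1/2 'ratio)       ; => T
(typep 1.0     'complex) ; => NIL -- REAL and COMPLEX partition NUMBER likewise
(typep #c(1 1) 'complex) ; => T
```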

> Sometimes you want to handle something differently depending upon if
> it is complex or not.
> 
> Check this brain damage
> 
> [4]> (complexp #c(1.0 3.0))
> T
> [5]> (complexp 1.0)
> NIL
> [6]> (complexp #c(1 0))
> NIL

The damage is in your own brain.

> If complex is a superset of the reals then [6] doesn't make sense.  If
> you like this behaviour, why shouldn't (rationalp 1) return nil?

See above. The complex type is like ratio, rather than like rational.

> So now i can add two complex numbers and get something which is not deemed
> complex

So what? You can add two ratios to get something which is not a ratio.
Or for that matter, divide two integers to get something which is not
an integer.
 
> [7]> (complexp (+ #c(2 1) #c(2 -1)))
> NIL

(typep (+ 1/2 1/2) 'ratio) -> NIL
 
> For a system which can handle #c(1 0) as a complex number, if I wanted
> #c(1 0) to be a real, I could always coerce it.  However, Common Lisp
> *removes* the option as there is no way as far as I can tell to have a
> complex #c(1 0).

But you have a type which represents the union of the complex and real
numbers, namely the type NUMBER. If you want to treat #c(1 0) and #c(1
1) as the same type, then work with that type.

You seem to want COMPLEX to be a synonym for NUMBER, and some
additional type, say, NON-REAL for numbers not on the real number
line.
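The hypothetical NON-REAL type is easy to express with DEFTYPE (a sketch; NON-REAL is the suggested name above, not a standard type):

```lisp
(deftype non-real ()
  "Numbers off the real line: NUMBER minus REAL."
  '(and number (not real)))

(typep #c(1 1) 'non-real)     ; => T
(typep #c(1.0 0.0) 'non-real) ; => T   a float complex, even with a zero imaginary part
(typep 1.0 'non-real)         ; => NIL
```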
There is some ambiguity in how mathematicians use the term complex.
For example, a polynomial may be said to have two complex roots and a
real root. In such a sentence, the semantics are that the complex and
real sets are mutually exclusive. When is it ever said this way: the
polynomial has three complex roots, of which one is real? :)
From: Johan Kullstam
Subject: Re: (COMPLEXP real) (was: ANN: Lisp-2 vs Lisp-1 argument has been settled)
Date: 
Message-ID: <87el29prrx.fsf@sysengr.res.ray.com>
···@ashi.footprints.net (Kaz Kylheku) writes:

> Johan Kullstam <··········@attbi.com> wrote in message news:<··············@sysengr.res.ray.com>...
> > Gareth McCaughan <················@pobox.com> writes:
> > 
> > > Should (RATIONALP 1) return NIL? The usual construction
> > > of the numeric tower builds Q from Z, typically using
> > > equivalence classes of pairs where (a,b) is meant to mean
> > > a/b.
> > 
> > [3] (rationalp 1)
> > T
> 
> However: (ratio 1) -> NIL
> 
> There is a set of rational numbers which is exhaustively partitioned
> into ratios and integers.
> 
> Similarly, numbers are partitioned into complex and real. Real numbers
> have an imaginary part of zero.
> 
> > Sometimes you want to handle something differently depending upon if
> > it is complex or not.
> > 
> > Check this brain damage
> > 
> > [4]> (complexp #c(1.0 3.0))
> > T
> > [5]> (complexp 1.0)
> > NIL
> > [6]> (complexp #c(1 0))
> > NIL
> 
> The damage is in your own brain.
> 
> > If complex is a superset of the reals then [6] doesn't make sense.  If
> > you like this behaviour, why shouldn't (rationalp 1) return nil?
> 
> See above. The complex type is like ratio, rather than like rational.
> 
> > So now i can add two complex numbers and get something which is not deemed
> > complex
> 
> So what? You can add two ratios to get something which is not a ratio.
> Or for that matter, divide two integers to get something which is not
> an integer.

Complex numbers are a field.  This means (among other things) that it
is closed under addition and multiplication.

The integers are not a field, but a ring.  Division is therefore not
expected to be closed.
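The closure properties show up directly in CL's rational arithmetic; a quick illustration:

```lisp
(+ 1/2 1/3)  ; => 5/6  Q is closed under addition
(* 2/3 3/4)  ; => 1/2  and under multiplication
(* 2 3)      ; => 6    Z is closed under multiplication as well
(/ 7 2)      ; => 7/2  but not under division: the quotient leaves the integers
```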

-- 
Johan KULLSTAM <··········@attbi.com> sysengr
From: Alexander Schmolck
Subject: Re: (COMPLEXP real) (was: ANN: Lisp-2 vs Lisp-1 argument has been settled)
Date: 
Message-ID: <yfsznkxvcw7.fsf@black132.ex.ac.uk>
Johan Kullstam <··········@attbi.com> writes:
> > So what? You can add two ratios to get something which is not a ratio.
> > Or for that matter, divide two integers to get something which is not
> > an integer.
> 
> Complex numbers are a field.  This means (among other things) that it
> is closed under addition and multiplication.
> 
> The integers are not a field, but a ring.  Division is therefore not
> expected to be closed.

This suggestion might reveal my ignorance, but does your problem go away if
you pretend that the type NUMBER is really called COMPLEX and the type COMPLEX
is actually called COMPLEX-WITH-NONZERO-IMAGINARY-PART?

'as
From: Ray Blaak
Subject: Re: (COMPLEXP real) (was: ANN: Lisp-2 vs Lisp-1 argument has been settled)
Date: 
Message-ID: <ullwhlh8x.fsf@STRIPCAPStelus.net>
Alexander Schmolck <··········@gmx.net> writes:
> This suggestion might reveal my ignorance, but does your problem go away if
> you pretend that the type NUMBER is really called COMPLEX and the type COMPLEX
> is actually called COMPLEX-WITH-NONZERO-IMAGINARY-PART?

I would suggest COMPLEX-WITH-EXPLICIT-IMAGINARY-FIELD instead. After all, the
imaginary part could well have a 0.0 value.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Kaz Kylheku
Subject: Re: (COMPLEXP real) (was: ANN: Lisp-2 vs Lisp-1 argument has been settled)
Date: 
Message-ID: <cf333042.0306051240.4ba4b91b@posting.google.com>
Ray Blaak <········@STRIPCAPStelus.net> wrote in message news:<·············@STRIPCAPStelus.net>...
> Alexander Schmolck <··········@gmx.net> writes:
> > This suggestion might reveal my ignorance, but does your problem go away if
> > you pretend that the type NUMBER is really called COMPLEX and the type COMPLEX
> > is actually called COMPLEX-WITH-NONZERO-IMAGINARY-PART?
> 
> I would suggest COMPLEX-WITH-EXPLICIT-IMAGINARY-FIELD instead. After all, the
> imaginary part could well have a 0.0 value.

Then it wouldn't be a NONZERO-IMAGINARY-PART. :)
From: Kaz Kylheku
Subject: Re: (COMPLEXP real) (was: ANN: Lisp-2 vs Lisp-1 argument has been settled)
Date: 
Message-ID: <cf333042.0306051239.38a0038@posting.google.com>
Johan Kullstam <··········@attbi.com> wrote in message news:<··············@sysengr.res.ray.com>...
> ···@ashi.footprints.net (Kaz Kylheku) writes:
> 
> > So what? You can add two ratios to get something which is not a ratio.
> > Or for that matter, divide two integers to get something which is not
> > an integer.
> 
> Complex numbers are a field.  This means (among other things) that it
> is closed under addition and multiplication.

I agree that there is a field of numbers that are like 2-vectors, but
the word ``complex'' is just a word. We can use it to denote that
whole field, or a partition thereof. It's purely a game of assigning
semantics to words.

We could just as well say that the union of complex and real numbers
form a field.

Clearly when a mathematician says that a cubic polynomial has one real
root and two complex ones, the word complex is being used to denote
just that partition of the field which contains imaginary components.
At other times, it may mean the whole thing, as in ``function w(z) is
a mapping from the complex plane to the complex plane''.

> The integers are not a field, but a ring.  Division is therefore not
> expected to be closed.

This kind of observation is completely irrelevant to the design of a
programming language and its function library, particularly when that
language has a flexible type system.  Lisp functions don't have
type-restricted return values; they return an object whose type can
depend on any properties of the input arguments, or even some dynamic
state.
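A few standard examples of value-dependent, not merely type-dependent, results (a sketch; exact float formats vary by implementation):

```lisp
(/ 4 2)               ; => 2            integers in, INTEGER out
(/ 3 2)               ; => 3/2          same argument types, RATIO out
(sqrt -4)             ; => #C(0.0 2.0)  real in, complex out
(+ #c(2 1) #c(2 -1))  ; => 4            complexes in, INTEGER out
```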
From: pentaside asleep
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <7d98828b.0305312347.5c6cf2b4@posting.google.com>
Kent M Pitman <······@world.std.com> wrote in message news:<···············@shell01.TheWorld.com>...
> But the spin on the message I see above is that we are being elitist in not
> wanting to reconsider old decisions.  We are not.  We are simply (re)applying
> the design guidelines that brought us the language at all.

If you are referring to my post, you're reading spin into it, as if
I'm someone who came down from the mountain to pronounce judgement on
the entire lisp community.  No, I simply notice from the trolling
(which started this thread) that people make their points a little
rougher and more irrational than otherwise, when going up against
certain political opposition.  After all, users who stare at code all
day will have strong feelings about how it looks.

I don't feel particularly biased because I do not care about the
issue.  Though I'm a "lisp user" for small things, I think totally
differently about CL than I do Scheme.  On one hand, Scheme seems
somehow egalitarian and enlightening with its lisp-1.  But there are
obvious constraints you run into much sooner.  And if I'm already
treating functions differently by their position in a list, is it
surprising when there's special syntax involved for handling these
creatures?
From: Paolo Amoroso
Subject: Re: ANN: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <Ka3TPof9s4zx1yojNj7ZQrEnXRB0@4ax.com>
On 27 May 2003 00:28:19 -0700, ················@yahoo.com (pentaside
asleep) wrote:

> Paolo Amoroso <·······@mclink.it> wrote in message news:<····························@4ax.com>...
> > Everybody who doesn't like the outcome of this standardization process, or
> > who doesn't accept any kind of process for deciding which features should
> > go in a language, is welcome not to use ANSI Common Lisp, and maybe
> > design/use his favorite Lisp-1 dialect.
> 
> To an outside observer, it appears that if flamewars concerning how
> Lisp "should be" are unwanted, perhaps this newsgroup shouldn't be
> named comp.lang.lisp.  No doubt this has come up before, but a quick
> search turns up little.
[...]
> with lexical scoping, influencing CL.  Looks like there is a group of
> people who are interested in discussing a lisp that builds on CL's
> power, but with a modified syntax.

I wouldn't personally mind if that group of people discussed a new Lisp-1
dialect in comp.lang.lisp--and I didn't imply it in my article.

But that group of people shouldn't be surprised by the fact that a number
of users have settled, for whatever reason, for a Lisp-2 dialect such as
ANSI Common Lisp, and are productively using that language to solve actual
problems. It's just a different set of goals, tradeoffs, priorities,
and--why not?--preferences.


Paolo
-- 
Paolo Amoroso <·······@mclink.it>
From: Wade Humeniuk
Subject: Re: Lisp-2 vs Lisp-1 argument has been settled
Date: 
Message-ID: <aRLAa.18$R02.36395@news2.telusplanet.net>
"Michael Park" <···········@whoever.com> wrote in message
·································@posting.google.com...
> Kent M Pitman <······@world.std.com> wrote:
>
> > Fights over lisp1/lisp2 are common although this is a long-settled issue.
>
>
> Saeed al-Shahhhaf <·····@information.gov.iq> wrote
>
> > We chased them... and they ran away...
>
> I've been following this discussion very closely, but I haven't seen a
> single valid argument in favor of Lisp2. Maybe I'm thick or something.
> The only arguments I've seen were in the "it's not as bad as everyone
> thinks", "you can get used to it if you really try" and "he made me do
> it" categories.

As Kent has said, the CL committee did not make the decision on purely
technical grounds.  The important point is that by choosing a Lisp-N,
the committee allows users of CL to use CL as a Lisp-1, IF THEY WANT
TO.  All they have to do is apply a little self-discipline and a few
macros, and they have a Lisp-1.  If the committee had chosen a Lisp-1,
then those people who wanted to do Lisp-N (I am among them) would have
great difficulty in changing the language from Lisp-1 to Lisp-N.  The
choice of Lisp-N increased the freedom of expression of the
programmer.
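The "few macros" remark can be made concrete; here is a sketch of one such macro (LET1 is a hypothetical name, nothing standard):

```lisp
(defmacro let1 (bindings &body body)
  "Like LET, but each (NAME VALUE) binding is mirrored into the
function namespace, so the bound function values are callable
directly in operator position -- a local Lisp-1 illusion."
  `(let ,bindings
     (flet ,(mapcar (lambda (b)
                      `(,(first b) (&rest args)
                         ;; the inner reference reads the variable,
                         ;; while the FLET name serves operator position
                         (apply ,(first b) args)))
                    bindings)
       ,@body)))

(let1 ((add #'+)
       (id  #'identity))
  (add (id 1) 2))   ; => 3, no FUNCALL in sight
```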

Wade

Just the other day I coded something full of Lisp-2ishness

Just a snippet

(defpseudo uri-reference
       (:documentation "URI-reference = [ absoluteURI | relativeURI ] [ \"#\" fragment ]")
  (and (~optional~ (or (absoluteuri) (relativeuri)))
       (~optional~ (and (~match~ #\#) (fragment)))))

(defun parse-uri (uri-string)
  (let (uri-properties)
    (pseudo-parse ((copy-seq uri-string)
                   (uri-reference scheme authority net-path abs-path
                                  opaque-part path-segments
                                  rel-path query fragment host port
                                  absoluteuri relativeuri)
                   :collector (lambda (pseudo value)
                                (setf (getf uri-properties pseudo) value))
                   :return
                   (if (equal (getf uri-properties 'uri-reference) uri-string)
                       (apply #'make-instance 'uri 'string uri-string uri-properties)
                     (error 'uri-parsing-error :uri-string uri-string
                            :parser-info uri-properties)))
      (uri-reference))))
From: Adrian Kubala
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <Pine.LNX.4.44.0305261304370.13996-100000@gwen.sixfingeredman.net>
On 25 May 2003, Kent M Pitman wrote:
> So when people using natural languages go and create several or even
> dozens of meanings of the same word (according to context), especially
> without a resultant "mass [webster.com entry #4, defintion 1b] public
> [entry #1, def 1a] outcry [def 1b]", one is led to believe that this
> is on the intuitively "approved list" of language building tools.

I'm one of those scheme users who finds lisp2 confusing, but I think the
source of my (and others') confusion is misrepresented here, so I'll try
and clarify it.

Separate namespaces for /names/, such as one finds in Java, where methods
and fields are different, is not confusing. What I find confusing is
separate namespaces for /values/. I find it non-intuitive that if I pass
one kind of value in a variable, I have to use it differently than if I
passed another kind of value.

The correct analogy in English would be illustrated by the following:

Flying is fun. I wish I could do *it*.
   as opposed to
That camera is nice. I wish I had *it*.

In English, pronouns must take a noun-ified version of a verb, like the
gerund, so we have to add a "do" when we want to use it in a verb-like
way. You might call this closer to lisp2 than lisp1, though it's clearly
quite different from either. I suppose it's like Elisp, where you pass the
symbol name of the function as a value and then funcall it.
From: Matthew Danish
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <20030526210342.GI17564@lain.cheme.cmu.edu>
On Mon, May 26, 2003 at 01:19:30PM -0500, Adrian Kubala wrote:
> I'm one of those scheme users who finds lisp2 confusing, but I think the
> source of my (and others') confusion is misrepresented here, so I'll try
> and clarify it.
> 
> Separate namespaces for /names/, such as one finds in Java, where methods
> and fields are different, is not confusing. What I find confusing is
> seperate namespaces for /values/. I find it non-intuitive, that if I pass
> one kind of value in a variable, I have to use it differently than if I
> passed another kind of value.

This confusion smells of what I have long suspected is the root of
Scheme users' confusion with Lisp-2.

In Scheme, when a list is evaluated, every element of the list is first
evaluated and then the value of the first element is applied with the
values of the rest of the elements.

The key to eliminating the confusion is to understand that in Common
Lisp this is not the case.  In Common Lisp, when a list is evaluated,
the first element is not evaluated, but taken to be a special operator
name, macro name, function name or lambda expression.  

If it is a function name or lambda expression, the rest of the elements
are evaluated in left-to-right order, and the function designated by
the first element is applied to those values.

The argument to the FUNCTION special operator is also expected to be a
function name or lambda expression.

Thus there is nothing `different' about function values.  It is just
that the first element of a list to be evaluated is not treated the same
as it is in Scheme.

Examples:

(let ((f 1))
  (flet ((f (x) x))
    (f f)))

The first element of (f f) is considered to be a function designator and
is looked up in the function namespace.  The second element of (f f) is
evaluated in the `usual' manner and looked up in the lexical variable
namespace.

(defun f (g)
  (funcall g))

When F is called with a function value, FUNCALL is looked up in the
function namespace, and G is looked up in the lexical variable
namespace, thus allowing you to call functions which are stored in
variables.

((lambda (x) x) 1)

Lambda expressions are also permitted.
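Tracing the first example under these rules (the annotations are my own reading):

```lisp
(let ((f 1))         ; F -> 1 in the variable namespace
  (flet ((f (x) x))  ; F -> an identity function in the function namespace
    (f f)))          ; operator F is the FLET function; argument F is the variable
;; => 1
```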



Chapter 3.1.2.1.2 of the Hyperspec has all the details.


[1] Actually, it may be a special operator name, macro name, function
name, or lambda expression.

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Pascal Costanza
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <bavdjg$p9g$1@f1node01.rhrz.uni-bonn.de>
Matthew Danish wrote:

> In Scheme, when a list is evaluated, every element of the list is first
> evaluated and then the value of the first element is applied with the
> values of the rest of the elements.

BTW, how does Scheme handle macros? Are they an exception to this rule, 
or do they somehow fit in this model?

Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Jens Axel Søgaard
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <3ed35041$0$97274$edfadb0f@dread12.news.tele.dk>
Pascal Costanza wrote:
> Matthew Danish wrote:
> 
>> In Scheme, when a list is evaluated, every element of the list is first
>> evaluated and then the value of the first element is applied with the
>> values of the rest of the elements.
> 
> 
> BTW, how does Scheme handle macros? Are they an exception to this rule, 
> or do they somehow fit in this model?

I have this simple mental model of the evaluation of (f e1 e2)

If f is bound to a macro transformer named transform-f:

   (eval (f e1 e2)) = (eval (transform-f e1 e2))

If f is bound to a function:

   (eval (f e1 e2)) = (apply (eval f) (eval e1) (eval e2))

In other words, to evaluate a form (f e1 e2) rewrite
it using macrotransformers until the first element is
bound to a function, then apply that function to the
evaluated arguments.

It isn't entirely correct, but it works for me.


For the more rigorous I have have found relevant pieces
of the R5RS:

   The macro definition facility consists of two parts:

      * A set of expressions used to establish that certain identifiers
        are macro keywords, associate them with macro transformers, and
        control the scope within which a macro is defined, and

      * a pattern language for specifying macro transformers.

   The syntactic keyword of a macro may shadow variable bindings, and
   local variable bindings may shadow keyword bindings.

And here is the description for one of the binding forms of

   let-syntax <bindings> <body> syntax
   Syntax: <Bindings> should have the form

     ((<keyword> <transformer spec>) ...)

   Each <keyword> is an identifier, each <transformer spec> is an
   instance of syntax-rules, ...

   Semantics: The <body> is expanded in the syntactic environment
   obtained by extending the syntactic environment of the let-syntax
   expression with macros whose keywords are the <keyword>s, bound to the
   specified transformers. Each binding of a <keyword> has <body> as its
   region.

One way to make a <transformer spec> is to use syntax-rules.
You can specify the rewriting as a pair of a pattern and
a template.

I'll cut straight to the semantics:

   Semantics: An instance of syntax-rules produces a new macro
   transformer by specifying a sequence of hygienic rewrite rules. A use
   of a macro whose keyword is associated with a transformer specified by
   syntax-rules is matched against the patterns contained in the <syntax
   rule>s, beginning with the leftmost <syntax rule>. When a match is
   found, the macro use is transcribed hygienically according to the
   template.

Note that the de facto standard syntax-case system, as opposed to the R5RS
syntax-rules, is more flexible (and allows non-hygienic rules too :-) ).

An excellent introduction is:

   * Dybvig. "Writing Hygienic Macros in Scheme with Syntax-Case".
   <http://www.cs.utah.edu/plt/publications/macromod.pdf>

Another interesting read is:

   * Matthew Flatt.
     "Composable and Compilable Macros: You Want it    When?".
     International Conference on Functional Programming (ICFP'2002).
     2002.
     <http://www.cs.utah.edu/plt/publications/macromod.pdf>

A comprehensive bibliography on the research of macros is found at:

   <http://library.readscheme.org/page3.html>

-- 
Jens Axel Søgaard
From: Pascal Costanza
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <bavls6$jfk$1@f1node01.rhrz.uni-bonn.de>
Jens Axel Søgaard wrote:
> Pascal Costanza wrote:
> 
>> Matthew Danish wrote:
>>
>>> In Scheme, when a list is evaluated, every element of the list is first
>>> evaluated and then the value of the first element is applied with the
>>> values of the rest of the elements.
>>
>> BTW, how does Scheme handle macros? Are they an exception to this 
>> rule, or do they somehow fit in this model?
> 
> I have this simple mental model of the evaluation of (f e1 e2)
> 
> If f is bound to a macro transformer named transform-f:
> 
>   (eval (f e1 e2)) = (eval (transform-f e1 e2))
> 
> If f is bound to a function:
> 
>   (eval (f e1 e2)) = (apply (eval f) (eval e1) (eval e2))
> 
> In other words, to evaluate a form (f e1 e2) rewrite
> it using macrotransformers until the first element is
> bound to a function, then apply that function to the
> evaluated arguments.
> 
> It isn't entirely correct but it's works for me.

So this means that the first position of an s-expression is also treated 
specially in Scheme, before any evaluation of subexpressions take place, 
right? The model that all subexpressions are treated the same is only an 
approximation of what really goes on in Scheme. Or am I missing 
something here?


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Jens Axel Søgaard
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <3ed3671b$0$97272$edfadb0f@dread12.news.tele.dk>
Pascal Costanza wrote:
> Jens Axel Søgaard wrote:
>> Pascal Costanza wrote:
>>> Matthew Danish wrote:

>>>> In Scheme, when a list is evaluated, every element of the list is first
>>>> evaluated and then the value of the first element is applied with the
>>>> values of the rest of the elements.

>>> BTW, how does Scheme handle macros? Are they an exception to this 
>>> rule, or do they somehow fit in this model?
>>
>>
>> I have this simple mental model of the evaluation of (f e1 e2)
>>
>> If f is bound to a macro transformer named transform-f:
>>
>>   (eval (f e1 e2)) = (eval (transform-f e1 e2))
>>
>> If f is bound to a function:
>>
>>   (eval (f e1 e2)) = (apply (eval f) (eval e1) (eval e2))

Oops [ears becoming red]. I have a bug.

    (eval (f e1 e2)) = (apply (eval f)
                              (list (eval e1) (eval e2)))

> So this means that the first position of an s-expression is also treated 
> specially in Scheme, before any evaluation of subexpressions take place, 
> right? The model that all subexpressions are treated the same is only an
> approximation of what really goes on in Scheme. Or am I missing 
> something here?

I think Matthew Danish is thinking about function application,
which was kind of implicit in that part of the thread.

I read his prose as:

   (eval (f e1 e2)) = (apply (eval f) (list (eval e1) (eval e2)))

It is clear that this rule alone is not enough to take care of,
for example, if-expressions like

      (if #t 42 (/ 1 0)) .


Let me try to state the first rule slightly differently

    (eval (syntax (f e1 e2)))
  = (if (name-bound-to-transformer? (syntax f))
        (eval (apply (eval f) (list (syntax e1) (syntax e2))))
        (apply (eval f) (list (eval e1) (eval e2))))

where (syntax e) is a value representing the syntax of
the expression e.

Again, this is just how I think, which means that's
not necessarily correct. William Clinger is the person
to ask, if you want a definite answer.


-- 
Jens Axel Søgaard
From: Bruce Hoult
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <bruce-674581.19462228052003@copper.ipg.tsnz.net>
In article <············@f1node01.rhrz.uni-bonn.de>,
 Pascal Costanza <········@web.de> wrote:

> So this means that the first position of an s-expression is also treated 
> specially in Scheme, before any evaluation of subexpressions take place, 
> right? The model that all subexpressions are treated the same is only an 
> approximation of what really goes on in Scheme. Or am I missing 
> something here?

Well of course.  You've always had to classify the first subexpression 
as a special form or not a special form before you knew how to treat the 
rest of the expression.  Adding macros doesn't change that, just adds 
one more possibility.

-- Bruce
From: Peter Seibel
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <m3llwtk05y.fsf@javamonkey.com>
Adrian Kubala <······@sixfingeredman.net> writes:

> On 25 May 2003, Kent M Pitman wrote:
> > So when people using natural languages go and create several or even
> > dozens of meanings of the same word (according to context), especially
> > without a resultant "mass [webster.com entry #4, defintion 1b] public
> > [entry #1, def 1a] outcry [def 1b]", one is led to believe that this
> > is on the intuitively "approved list" of language building tools.
> 
> I'm one of those scheme users who finds lisp2 confusing, but I think the
> source of my (and others') confusion is misrepresented here, so I'll try
> and clarify it.
> 
> Separate namespaces for /names/, such as one finds in Java, where methods
> and fields are different, is not confusing. What I find confusing is
> seperate namespaces for /values/. I find it non-intuitive, that if I pass
> one kind of value in a variable, I have to use it differently than if I
> passed another kind of value.

Do you find it non-intuitive that we have to use the value of x
differently in the following two functions?

  (defun foo (x) (car x))

  (defun foo (x) (+ x 10))

If not, then why is this so different?

  (defun foo (x) (funcall x))

Just a bit of food for thought.

-Peter 

-- 
Peter Seibel                                      ·····@javamonkey.com

         Lisp is the red pill. -- John Fraser, comp.lang.lisp
From: Kent M Pitman
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sfw1xyl2q42.fsf@shell01.TheWorld.com>
Adrian Kubala <······@sixfingeredman.net> writes:

> I'm one of those scheme users who finds lisp2 confusing, but I think the
> source of my (and others') confusion is misrepresented here, so I'll try
> and clarify it.

This is ok.  And I'm happy to discuss this on such terms.  But what
I'm getting at is that this is something that confuses newcomers who
are initially pointed in the wrong direction.  That's common with any
of a number of well-formed concepts.  _Deep_ problems in a language
(or any domain) are better illustrated by the inability of an expert
to ever quite get things right, because of an architectural mismatch
that must be continually overcome, even in the steady state.

> Separate namespaces for /names/, such as one finds in Java, where methods
> and fields are different, is not confusing. What I find confusing is
> seperate namespaces for /values/. I find it non-intuitive, that if I pass
> one kind of value in a variable, I have to use it differently than if I
> passed another kind of value.

I think this is a point of view thing.  I think your real problem is
not the separate namespace, which in your own terms is not used differently
at all, but the fact that function calling is not directly accessible on a
value using only primitive syntax.

> The correct analogy in English would be illustrated by the following:
> 
> Flying is fun. I wish I could do *it*.
>    as opposed to
> That camera is nice. I wish I had *it*.
> 

Sure.  Or sometimes the infinitive, too.  For example, 
 "I want that camera."
 "I want to fly."

> In English, pronouns must take a noun-ified version of a verb, like the
> gerund, so we have to add a "do" when we want to use it in a verb-like
> way. 

Yes.

 "That's something I want." (the camera)
 "That's something I want do do." (flying / to fly)

> You might call this closer to lisp2 than lisp1, though it's clearly
> quite different from either.

The crisper case is probably:

 "That fly is flying."

and

 "I want that." (the fly)
 "I want to do that." (the act of flying)

I'd certainly agree that here it's just an issue of naming.  That is,
the notion of the unadorned "that" is acting at the syntax level to
refer to the previous noun, while the phrase "to do that" is a 
discriminator that says to search among references in the function
namespace.  And you could make the case that 'want' is a simple higher
order function operating on either of these, independent of their notation.
(I'm not sure I'd believe that, but this example would push that direction.)

There's also a whole big question of what is a 'reference' (syntactic
entity) and what's a 'value' (domain entity) in English since the two are
blurred; a pronoun can be used to disambiguate using information that will
not be known until domain resolution time (runtime).  But it doesn't matter
to us, because those are not the parts of natural language we have borrowed.

What we have observed and borrowed is not the blurring of compile time and
runtime information, but the fact that the person writing the text is well
capable of managing context.  There can really be no doubt of this fact.

In T (Yale Scheme), we discussed my idea that the whole language should have
been designed as special forms, but I don't recall if we implemented it.
That is, the idea that (f x) should really be shorthand for 
 (CALL (VAR f) (VAR x))
and that _all_ forms should be compound forms whose car told you how to do
the internal language dispatch.  Understood in this way, (f x) is just a
shorthand that acknowledges a primitive kind of function calling and provides
a data flow path in which all arguments are evaluated equivalently.  If
both Scheme and CL had a CALL special form, they could still have multiple
namespaces while having a compatible CALL form.  It would simply be the
case that (f x) in Scheme meant (CALL (VAR f) ...) and in CL it meant
(CALL (FN f) ...).  You would see then that FUNCALL's job is not what you
think it is.  FUNCALL is just a primitive operation that takes variable
'f' instead of constant 'f'.  That is, (FUNCALL f x) would mean
(CALL (VAR f) ...) instead of (FUNCALL (FUNCTION f) ...).  This is not 
unlike the way Java does x.m(a) to mean use method "m" from x's method set 
on object a, while it uses x.(m)(a) to mean use method 
[value of m] from x's method set
on object a.  The less common operation (computing a method name) is given
a special notation (extra parens), the analog of FUNCALL, to remind the
reader that something funny is going on.  In the MOO programming language,
this happens as well.  In MOO, f.x means "get slot x of the value of f" but
f.(x) means "get from the value of f the slot whose name is the value of x".
The extra parens tell you something funny is going on.   One might argue that
Java should have done  f."x"(x) for the normal case and then f.x(x) for the
variable case, and that it would be more "regular".  One might even argue that
Java should have made methods be objects, and make "open" be a method name
that evaluates to an open message so that window.open(x) really meant
window.#<openmsg>(...) and then window.x(x) would not require a special
syntax.  But they didn't.  And, incidentally, Steele [who designed Scheme]
was right in there on the design and could have said something.

FUNCALL is not treating the value differently.  The real case is that the
Lisp notation provides only one way to call a function:  to have it live
in the function namespace.  Not because a value type is treated differently,
but because the syntax type (f x) in CL always means (CALL (FUNCTION f) x)
and (FUNCTION x) quotes the name X.  There is no FUNCTION* operation that
takes X unquoted, other than SYMBOL-FUNCTION [which only accesses the global
environment] nor is there access to the primitive operation 'CALL'.  Instead,
we have APPLY and FUNCALL [which we couldn't implement from user code because
we don't have those operations].
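
To make the two lookups concrete, here is a small Common Lisp sketch
(CALL, VAR, and FN above are imaginary operators from the post, not real
CL; only FUNCALL and #' actually exist):

```lisp
(defun f (x) (* 2 x))              ; binds F in the function namespace
(let ((f (lambda (x) (+ x 1))))    ; binds F in the variable namespace
  (list (f 10)                     ; (CALL (FN f) ...)  -> 20
        (funcall f 10)             ; (CALL (VAR f) ...) -> 11
        (funcall #'f 10)))         ; #'f reads the function namespace -> 20
```

The LET shadows only the variable namespace, so (f 10) and #'f still
reach the DEFUNed function, while plain FUNCALL on f sees the lambda.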

I'm sorry this is somewhat jumbled but I don't have time to do better today.
Hopefully there's something in here that will help you get the issues in
a new light.

> I suppose it's like Elisp, where you pass the
> symbol name of the function as a value and then funcall it.

No, this is different.  That's a case that is really about values.
FUNCALL is not doing coercion.  You could just as well do 
 (funcall #'f x)
as
 (funcall 'f x)
in elisp.  Funcall supplies uniform treatment of any representation of a
function and calls that function.  

The 'discriminatory behavior' (pardon the pun) you're complaining about is
not in the place you think it is.

If you really want to get nitpicky, Scheme, IMO, is just as ill-founded as
CL.  It has only one primitive way to call a function and that's to put it
at the start of a list.  There is no "operator" for calling a function, and
that makes it hard to have this conversation.  CL commits the equal sin.
Likewise,  there is no operator for accessing the value of variables in either
language--only the shorthand.  As such, people get caught up discussing
favorite shorthands for underlying things that have no manifest notation,
and I think this makes the argument remarkably more difficult than it should
be because each person makes up their own fabric of internals to substitute
for these missing terms.  If there's one thing we've learned about symbolic
computing, it's that it helps to have names for things.  And "the place after
the open paren" is a poor name.  Likewise, the term "a variable position" is
a poor name because each language defines this position arbitrarily, and
each language's users forget that things which are contingent on variable
position are therefore also arbitrary as a dependent consequence.  That's why
I like to have this discussion in terms of imaginary operators CALL and VAR,
and with the (f x) understood to just be macro syntax that goes away in the
underlying semantics.
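
For concreteness, those imaginary operators can be approximated as
macros over plain CL.  The names %CALL, %FN, and %VAR below are
hypothetical, chosen here only to mirror the post's notation:

```lisp
;; Hypothetical macros approximating the imaginary CALL, FN, and VAR.
;; They expand into the only primitives CL actually exposes.
(defmacro %call (op &rest args) `(funcall ,op ,@args)) ; the underlying call
(defmacro %fn (name) `(function ,name))   ; constant name, function namespace
(defmacro %var (name) name)               ; variable name, value namespace

(defun square (x) (* x x))

(%call (%fn square) 4)             ; like CL's (square 4)     -> 16
(let ((square (lambda (x) (1+ x))))
  (%call (%var square) 4))         ; like Scheme's (square 4) -> 5
```

In this notation, CL's (f x) is shorthand for (%call (%fn f) x) while
Scheme's (f x) is shorthand for (%call (%var f) x), which is exactly the
contrast described above.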
From: Adrian Kubala
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <Pine.LNX.4.44.0305262204390.15392-100000@gwen.sixfingeredman.net>
On 26 May 2003, Kent M Pitman wrote:
> I think this is a point of view thing.  I think your real problem is not
> the separate namespace, which in your own terms is not used differently
> at all, but the fact that function calling is not directly accessible on
> a value using only primitive syntax.

Actually, after more thought I realized that I too was confused about why
I was confused, so let me try again. The confusing thing about lisp2 is
when you have to decide which namespace you mean to use. Java doesn't have
this problem because you don't have a choice which to use. Scheme, because
it's just one namespace. Whereas in CL, you can have the function value in
the function namespace or the variable namespace, so you have to know how
to use it accordingly.

In that light, I can see that if I used CL regularly, I would get used to
it pretty quickly, so now my only argument against lisp2 is that I'm not
sure I agree with the arguments /for/ it.

> It would simply be the case that (f x) in Scheme meant (CALL (VAR f)
> ...) and in CL it meant (CALL (FN f) ...).

And of course CL then has to have "FN" and "VAR" versions of lots of forms
where Scheme only has "VAR" ones -- which is the thing I find confusing
and distasteful. If I understand correctly what you mean by "VAR".

> As such, people get caught up discussing favorite shorthands for
> underlying things that have no manifest notation, and I think this makes
> the argument remarkably more difficult than it should be because each
> person makes up their own fabric of internals to substitute for these
> missing terms.

Amen to that.
From: Kent M Pitman
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sfw7k8cftsq.fsf@shell01.TheWorld.com>
Adrian Kubala <······@sixfingeredman.net> writes:

> Actually, after more thought I realized that I too was confused about why
> I was confused, so let me try again.

I appreciate your going through the exercise.  I think you're not the only
one who has confronted this, and I think people often find it politically
expedient to push their own agenda into these gaps in understanding, rather
than to fairly assess the underlying issues as it seems like you're trying
to do.

> The confusing thing about lisp2 is
> when you have to decide which namespace you mean to use. Java doesn't have
> this problem because you don't have a choice which to use. Scheme, because
> it's just one namespace. Whereas in CL, you can have the function value in
> the function namespace or the variable namespace, so you have to know how
> to use it accordingly.

Yes, I agree that this takes a little getting used to.  But I think it will
eventually feel quite natural once you see what's really going on.  The trick
is not to start down the blind alley in the first place, because the 
consequent misunderstandings that result from having mischaracterized the
difference are the real problem; and those misunderstandings are not so much
a property of the language, but more a property of people rushing too fast to
"fix" the problem in their mind without asking someone who knows what's going
on.

> In that light, I can see that if I used CL regularly, I would get used to
> it pretty quickly, so now my only argument against lisp2 is that I'm not
> sure I agree with the arguments /for/ it.

That's a fair assessment.  The issues are detailed, btw, in
 http://www.nhplace.com/kent/Papers/Technical-Issues.html 
in case you haven't read them.  This document is the essential
technical content of a slightly longer document that Dick Gabriel and
I wrote to X3J13 discussing the matter.  The document has a bit of MPD
(multiple personality disorder) to it because Gabriel and I had
opposite opinions on the matter. If you read it carefully, you'll see it's
more like a dialog between people of opposing points of view and not just
an explanation of a single point of view.  Like any debate, people often 
come away thinking it was a paper supporting their own point of view because
they read only the parts they like and they ignore the "other half". :)

> > It would simply be the case that (f x) in Scheme meant (CALL (VAR f)
> > ...) and in CL it meant (CALL (FN f) ...).
> 
> And of course CL then has to have "FN" and "VAR" versions of lots of forms
> where Scheme only has "VAR" ones -- which is the thing I find confusing
> and distasteful. If I understand correctly what you mean by "VAR".

Yes.  I've worked this out in a lot more detail than I showed here but
never completed it.  I had an idea as a result of this discussion for how
to share the partial work I've done.  Ping me in a few weeks if you've not
seen followup on this tiny subthread and still care for elaboration.
 
> > As such, people get caught up discussing favorite shorthands for
> > underlying things that have no manifest notation, and I think this makes
> > the argument remarkably more difficult than it should be because each
> > person makes up their own fabric of internals to substitute for these
> > missing terms.
> 
> Amen to that.

Hopefully, if I published at least the notations and partial attempts I've
used, it would help to address this a little.  Maybe the communities could
evolve, over time, a common interchange notation...
From: Pascal Costanza
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <bavts2$u9k$1@f1node01.rhrz.uni-bonn.de>
Kent M Pitman wrote:

> The issues are detailed, btw, in
>  http://www.nhplace.com/kent/Papers/Technical-Issues.html 
> in case you haven't read them.  This document is the essential
> technical content of a slightly longer document that Dick Gabriel and
> I wrote to X3J13 discussing the matter.  The document has a bit of MPD
> (multiple personality disorder) to it because Gabriel and I had
> opposite opinions on the matter. If you read it carefully, you'll see it's
> more like a dialog between people of opposing points of view and not just
> an explanation of a single point of view.

So who of you was in favor of Lisp-1 and who was in favor of Lisp-2? ;)


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Kent M Pitman
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sfwsmr02yk3.fsf@shell01.TheWorld.com>
Pascal Costanza <········@web.de> writes:

> Kent M Pitman wrote:
> 
> > The issues are detailed, btw, in
> >  http://www.nhplace.com/kent/Papers/Technical-Issues.html in case
> > you haven't read them.  This document is the essential
> > technical content of a slightly longer document that Dick Gabriel and
> > I wrote to X3J13 discussing the matter.  The document has a bit of MPD
> > (multiple personality disorder) to it because Gabriel and I had
> > opposite opinions on the matter. If you read it carefully, you'll see it's
> > more like a dialog between people of opposing points of view and not just
> > an explanation of a single point of view.
> 
> So who of you was in favor of Lisp-1 and who was in favor of Lisp-2? ;)

RPG wrote the Lisp1 side.  Was that not clear?  :)
From: Kent M Pitman
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sfw65nsb8e1.fsf@shell01.TheWorld.com>
Kent M Pitman <······@world.std.com> writes:

> Adrian Kubala <······@sixfingeredman.net> writes:
...
> > > It would simply be the case that (f x) in Scheme meant (CALL (VAR f)
> > > ...) and in CL it meant (CALL (FN f) ...).
> > 
> > And of course CL then has to have "FN" and "VAR" versions of lots of forms
> > where Scheme only has "VAR" ones -- which is the thing I find confusing
> > and distasteful. If I understand correctly what you mean by "VAR".
> 
> Yes.  I've worked this out in a lot more detail than I showed here but
> never completed it.  I had an idea as a result of this discussion for how
> to share the partial work I've done.  Ping me in a few weeks if you've not
> seen followup on this tiny subthread and still care for elaboration.
>  
> > > As such, people get caught up discussing favorite shorthands for
> > > underlying things that have no manifest notation, and I think this makes
> > > the argument remarkably more difficult than it should be because each
> > > person makes up their own fabric of internals to substitute for these
> > > missing terms.
> > 
> > Amen to that.
> 
> Hopefully, if I published at least the notations and partial attempts I've
> used, it would help to address this a little.  Maybe the communities could
> evolve, over time, a common interchange notation...

Hmmm.  I looked back at the stuff I could find on my local disks and
am not sure if it was in a coherent enough state to have bothered, but
I put it on the web anyway because I'm just not going to have the
time to think about this for quite a while and maybe someone else will
get something out of it.  Or maybe it will just hopelessly confuse or
irritate people.  Most people publish "finished works" so it's an
experiment on my part that I'm quite uncertain about to just go ahead
and publish "work in progress at no particular state of success", but
what the hell--nothing ventured nothing gained.  At least now I have a
place on the web to put more such stuff if it turns out I want to.
Note that I have made this information available in a "frame" so that
it's hard to ignore the massive number of disclaimers about how it's
not something coherent, etc.

 http://www.nhplace.com/kent/Half-Baked/

PLEASE direct any followups about the content of this page per se to me
personally.  It's not polished enough to warrant a big group discussion.
From: Ray Blaak
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <uisrwzx24.fsf@STRIPCAPStelus.net>
Kent M Pitman <······@world.std.com> writes:
> This is not unlike the way Java does x.m(a) to mean use method "m" from x's
> method set on object a, while it uses x.(m)(a) to mean use method [value of
> m] from x's method set on object a.

Are you sure? Is this new?

In JDK 1.4 this is a syntax error.

Java has no method values, unless this is something new in 1.5, but I know of
only generics, enums, static imports, and a few other bits as new in 1.5.

Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Kent M Pitman
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sfw3cj0ftp5.fsf@shell01.TheWorld.com>
Ray Blaak <········@STRIPCAPStelus.net> writes:

> Kent M Pitman <······@world.std.com> writes:
> > This is not unlike the way Java does x.m(a) to mean use method "m" from x's
> > method set on object a, while it uses x.(m)(a) to mean use method [value of
> > m] from x's method set on object a.
> 
> Are you sure? Is this new?
> 
> In JDK 1.4 this is a syntax error.
> 
> Java has no method values, unless this is something new in 1.5, but I know of
> only generics, enums, static imports, and a few other bits as new in 1.5.

Sorry.  MOO does this and I hadn't used Java's laborious reflective
layer in long enough that my brain helpfully covered over the pain
with the more friendly memory of MOO syntax.  [I think Java _could_
do this, of course.  But oh well.]   Thanks for flagging this so I could
correct myself; sorry for the misinformation.
From: Harald Hanche-Olsen
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <pcohe7h4cij.fsf@thoth.math.ntnu.no>
+ Adrian Kubala <······@sixfingeredman.net>:

| I'm one of those scheme users who finds lisp2 confusing, but I think the
| source of my (and others') confusion is misrepresented here, so I'll try
| and clarify it.
| 
| Separate namespaces for /names/, such as one finds in Java, where
| methods and fields are different, is not confusing. What I find
| confusing is seperate namespaces for /values/. I find it
| non-intuitive, that if I pass one kind of value in a variable, I
| have to use it differently than if I passed another kind of value.

I agree that it can be confusing if you think of them as values.  The
proper use of metaphors can go a long way to relieve confusion:  Think
instead of verbs versus nouns.

| The correct analogy in English would be illustrated by the following:
| 
| Flying is fun. I wish I could do *it*.
|    as opposed to
| That camera is nice. I wish I had *it*.

I think a better analogy is this:

  Time flies like an arrow.
  Fruit flies like a banana.

In the first sentence "flies" is a verb.  In the second, it is a
noun.  The example is confusing because the sentence structure is
superficially the same in both sentences.  Normally, in English, word
order and sentence structure will disambiguate (what a wonderful
phrase!) such words easily.  The same thing happens in lisp, where the
typical "sentence" looks like (verb noun noun ...).

In English any noun can be verbed.  This cannot be done in Lisp
(storing a non-function in a function slot), but instead verbs can be
nouned, i.e., a function can be stored in a value slot.  It is in
these exceptional circumstances that you need a special mechanism
(funcall or apply) in order to use the value as a function, instead of
just passing it around like any other value.

BTW, I don't recommend that you adopt this terminology of verbs and
nouns when talking about Lisp programs - it will just confuse the heck
out of fellow programmers. But it can be a useful way to talk to
yourself until you have internalized the way the concepts work.

This is like other unorthodox aspects of Lisp in that it seems quite
confusing in theory, but is much less so in practice.  You typically
have to come up with some pretty contrived example in order to manage
to get really confused by it.

-- 
* Harald Hanche-Olsen     <URL:http://www.math.ntnu.no/~hanche/>
- Debating gives most of us much more psychological satisfaction
  than thinking does: but it deprives us of whatever chance there is
  of getting closer to the truth.  -- C.P. Snow
From: Adrian Kubala
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <Pine.LNX.4.44.0305262234300.15392-100000@gwen.sixfingeredman.net>
On 26 May 2003, Harald Hanche-Olsen wrote:
> | Flying is fun. I wish I could do *it*.
> |    as opposed to
> | That camera is nice. I wish I had *it*.
>
> I think a better analogy is this:
>   Time flies like an arrow.
>   Fruit flies like a banana.
>
> In the first sentence "flies" is a verb.  In the second, it is a
> noun.

I was trying to say that this "better analogy" is not. You see the
differences between Lisp1 and Lisp2 when writing a higher order function,
and I believe that "pronouns" correspond better to the parameters in
higher order functions than do "words".

So it's:
; Flying -- do it.
(let ((it #'fly))
  (funcall it))
; The fly -- swat it.
(let ((it fly))
  (swat it))

So yes, English is more like lisp2 in this small respect -- though I'm not
convinced this makes it somehow more intuitive.
From: Raffael Cavallaro
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <aeb7ff58.0305252020.7269be36@posting.google.com>
Tim Bradshaw <···@cley.com> wrote in message news:<···············@cley.com>...

> It's interesting that in domains where precision in language is
> required, languages which are more-or-less artificial are used, even
> when they are not interpreted by machines.

You're confusing a specialized written notation with an artificial
language. Listen to a conversation between two accountants - they do
*not* speak an artificial language to each other, just a natural
language with some domain specific terminology.
From: Jeff Caldwell
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <AfqAa.2729$H84.1526943@news1.news.adelphia.net>
Raffael,

Perhaps if I saw an example of what you are driving at, I could begin to 
understand it. Would you please translate the following Lisp code into 
the kind of natural language program you are describing? I realize we 
don't have the compiler you desire but perhaps one day the natural 
language program you post here could be the first one compiled by that 
compiler.

I often listen to conversations between programmers. My observation is 
that the ideas expressed in code are initiated and organized by the 
conversations but are not the same as the ideas expressed in the 
conversations. Two different tools for two different purposes. I would 
hate to have to use Lisp to talk with my co-workers and I don't yet see 
how it would be good for me to specify programs in natural language.  I 
do agree there is a place for what you describe but I don't think it 
will replace, for example, Lisp.

This code is from c.l.l. Fred Gilham posted it on 2003-05-09 and said it 
came from Christian Jullien in 2000.

(defconstant Y-combinator
    (lambda (f)
       (funcall (lambda (g) (funcall g g))
         (lambda (x) (funcall f (lambda () (funcall x x)))))))

(defun yfib (n)
    (let ((fib (funcall Y-combinator
                 (lambda (fg)
                   (lambda (n)
                     (cond ((= n 1) 1)
                           ((= n 2) 1)
                           (t (+ (funcall (funcall fg) (- n 1))
                                 (funcall (funcall fg) (- n 2))))))))))
       (funcall fib n)))



Raffael Cavallaro wrote:
> Tim Bradshaw <···@cley.com> wrote in message news:<···············@cley.com>...
> 
> 
>>It's interesting that in domains where precision in language is
>>required, languages which are more-or-less artificial are used, even
>>when they are not interpreted by machines.
> 
> 
> You're confusing a specialized written notation with an artificial
> language. Listen to a conversation between two accountants - they do
> *not* speak an artificial language to each other, just a natural
> language with some domain specific terminology.
From: Coby Beck
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <bau7m7$2q6d$1@otis.netspace.net.au>
"Jeff Caldwell" <·····@yahoo.com> wrote in message
···························@news1.news.adelphia.net...
> Raffael,
>
> Perhaps if I saw an example of what you are driving at, I could begin to
> understand it. Would you please translate the following Lisp code into
> the kind of natural language program you are describing? I realize we
> don't have the compiler you desire but perhaps one day the natural
> language program you post here could be the first one compiled by that
> compiler.

LOL!  Such a friendly tone, you really had me going!  Then I see your
code...

Just to chime in with a high level overview:  I think it is as always a
question of balance.  We need specialized language and syntactic structures
to communicate with a computer for sure, but we should use to full
advantage all the instinctual or genetic abilities that allow us to
communicate to each other.  Don't forget this whole thing started with the
still present subject line "Lisp-2 or Lisp-1".  I don't think anyone claims
CL is modelled on natural language but I and others claim separate
namespaces are good because they reflect a property of natural language.

> This code is from c.l.l. Fred Gilham posted it on 2003-05-09 and said it
> came from Christian Jullien in 2000.
>
> (defconstant Y-combinator
>     (lambda (f)
>        (funcall (lambda (g) (funcall g g))
>          (lambda (x) (funcall f (lambda () (funcall x x)))))))
>
> (defun yfib (n)
>     (let ((fib (funcall Y-combinator
>                  (lambda (fg)
>                    (lambda (n)
>                      (cond ((= n 1) 1)
>                            ((= n 2) 1)
>                            (t (+ (funcall (funcall fg) (- n 1))
>                                  (funcall (funcall fg) (- n 2))))))))))
>        (funcall fib n)))
>

Though I believe the above is mostly rhetorical, I will take a stab:

define function fib on n
   if n is 1 or 0
      then return 1
      otherwise return fib of (n - 2) + fib of (n - 1).

which can be nicely abbreviated as:
(defun fib (n)
    (if (or (= 0 n) (= 1 n))
        1
       (+ (fib (1- n)) (fib (- n 2)))))

If you really want it in the form you present above, well, write it
yourself.  Most programming tasks in The Real World are only about getting
something done, not specifically how to do it.

So what's my point?  Such a simple and purely mathematical example does not
tell us much about where we need to go from here.

A better sample might be:

Make an interface to the customer table that allows updates only on public
details and read privileges on all attributes.  Use all of the default
color and widget schemes.  Make it accessible to members of the Data Entry
group.  Link to main menu.

Want something more succinct?  Very easy, I won't even bother to rewrite it.

Why can't this be part of a proper program?  This is the kind of "crank it
out" code that is reinvented thousands of times per day that can and will be
replaced by higher and higher level constructs.  These are the constructs
that will benefit from natural language properties like disambiguation from
context.

If you need more control, just drop down to a lower level of code and embed
it.  Just like you can embed assembler in some Lisps.  Language design is
all a question of balance.

-- 
Coby Beck
(remove #\Space "coby 101 @ bigpond . com")
From: Michael Sullivan
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <1fvi8b4.1u9vz1h13p4v2aN%mes@panix.com>
Raffael Cavallaro <·······@mediaone.net> wrote:

> What we need are general purpose computer languages that are also user
> friendly. When it comes to languages, "user friendly" means
> natural-language-like.

I agree with the first sentence.  I'm not sure I agree with the second.
In the very, very long term, it's probably true.  But to make a
natural-language-like general purpose programming language that isn't
just dumbed down for non-programmers, but actually helps good
programmers more than it hurts them, is probably a strong-AI problem.  I
don't think this is a really bad thing to work towards, but I think you
need to be careful of putting the cart before the horse.  You need a
really strong base of natural language processing before something
that's a real improvement over existing computer languages can be
possible.

COBOL is what you get when you try to do this with *zero* natural language
expertise and a willingness to dumb things down to the nth degree.

Applescript would be the latest attempt, and it would actually be a
decent general purpose language if it weren't for some artificial
implementation limitations.  In some respects, we can consider
applescript to be remarkably successful.  Lots of people use it who
don't think of themselves as programmers.  But it's still a reasonably
good language for programmers (as long as they aren't running into the
stupid implementation restrictions -- I'm talking solely about the
language design here).  No, it doesn't compare to lisp, but, with a
decent standard library, a few key non-architectural improvements (like
native hashing, stack space limited only by available RAM, etc.) it
would be much more pleasurable to work in than many far more popular
languages.

But does this mean that AS represents a win for your philosophy?  I
don't think so.  AS popularity among "non-programmers" is attributable
primarily (IMO) to the fact that it *is* the scripting language of the
mac, and to the support built into popular applications on that
platform.  If QuarkXpress and a few other programs had not come out
early with very good apple event scripting support, then applescript
would have gone nowhere fast, and probably no one at all would be using
it today.

If anything, applescript's *syntax* gets a lot of complaints from people
new to using it, even if they have never used any "real" programming
languages.  It tempts you into believing that it is english, when it is
really its own bizarre form of legalese.  Things that you would expect
to mean one thing, often mean something completely different.

Once you get over that hump, and begin to understand it as a computer
language, it's really not that much different from other computer
languages, but it *is* more verbose.  

I contend that if apple had chosen scheme or caml, or csh, or pretty
much any other language as its extension language, and the same support
had been built, and the same runtime characteristics (or better) were
exhibited, that whatever this language was would have had the same kind
of adoption that applescript has now.   Look at Visual Basic.  It's
atrocious.  Applescript makes it look like old fortran.  It's the kind
of language that inspired Dijkstra's famous comment about brain damage.
But it's arguably the most used computer language in the world, and by
far the most used by people who would not identify as programmers.  Why?
Because it's the standard scripting interface to the monopoly OS.  

Power users want to automate, and they will wrap their heads around
anything which will let them do this with a minimum of architecture
building.  They will not learn lisp or C++ in order to write a whole
structure, when they can plug away at existing applications and get
60-70% of what they need.  Give them a decent scripting environment to
go with those applications and suddenly they can get 95%+ of what they
need.  Why reinvent all those wheels for that last 5%?

So power users will always gravitate toward whatever the scripting
language is of the applications they use?  Give them lisp as that
scripting language, and they will learn lisp.  Give them TECO, and they
will learn TECO, give them unlambda, and well...  Okay, maybe not.

But the point is -- power users get into programming because of good
libraries, and hooks into existing software.  They do *NOT* need an
easier language.  They are already using languages that are *far less
easy* than most of the lisp and ML family.  The fact that they are also
less powerful is just another sad truth about the computing landscape.

I'll state here because it should affect your reading of this -- *I* am
among your target audience.  Maybe not the exemplar because I love math
and CS, and studied those in college and still keep up to some extent.
But as a professional -- I am primarily a domain expert who scripts and
programs to save me and my company time.   I don't have the time or
money to build large applications from the ground up.  

I am in a large community of similar folks who use applescript every
day.  I certainly can't speak for them all, but I speak for a lot of
them when I say that trying to make a computer language be more like a
natural language was one of the *mistakes* of applescript, and we will
be better off without attempting this until we are *much* further along
into natural language processing than we currently are.

Libraries and hooks.  Libraries and hooks.  Libraries and hooks.  That's
what power users need.

> You're thinking in a domain-specific-solution way. This is bound to
> fail, because each new domain will require its own unique, mutually
> incompatible, domain specific language. 

No, it requires its own unique domain specific libraries -- and it
requires domain specific software that is relatively standard and has
great scripting hooks.  Once you have that, power users will do the
rest.

The problem is that writing the whole software project for a domain is
hard, no matter how good your language is.  And it's hard in a software
architecture sense, as well as a domain expertise sense.  Most domain
experts don't have time to deal with such a project in its entirety
unless it is their primary function.  Economically, that only happens
when they are working for a company whose core product is that software.
And when domain experts take on the project anyway because nothing does
anything close to what they need -- they don't do the software
architecture part as well as a real software expert unless they are
lucky enough to be both.  I still believe this would be true even if a
true AI-complete style natural-programming language existed.

The real problem today is that software producers won't generally assign
domain experts who are not also software experts to do the fundamental
design, and much of most fundamental design remains about constricting
capabilities of the users rather than extending them.  Existing
technology is plenty powerful enough to empower scripting friendly
users.  If only it were implemented in powerful, standard and
well-documented ways in every domain for which we have standard
software, there would be no domain expertise crisis in the software
industry.

> Unless your needs fall
> precisely into that particular realm, and do not extend beyond it in
> any way, you lose. Better to craft a general purpose,
> natural-language-like computer language, that all the specific domains
> can use. As new application domains arise, a general purpose languge
> can be turned to those tasks, but domain specific solutions are
> unlikely to be flexible enough to be useful.

The problem is that any true general purpose language without a lot of
domain specific technology is insufficient to get people who aren't
software architects building huge applications akin to CAD programs or
general ledger accounting systems.   At least not until your compiler
becomes an AI software engineer.

OTOH, down here in the real world of 2003, if your domain specific
solutions are as extensible as possible, then it will often be possible
to adapt solutions appropriate for one domain to related domains saving
much work.  Witness all the ways that power users use spreadsheets to do
things that have little or nothing to do with accountancy.

The key here is -- when building domain specific solutions, don't ever
assume that they will never be used for anything outside the domain.
When you can provide extensibility at very little cost, *DO IT*.



Michael
From: Raffael Cavallaro
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <aeb7ff58.0305281516.1bd0231d@posting.google.com>
···@panix.com (Michael Sullivan) wrote in message news:<···························@panix.com>...
> Raffael Cavallaro <·······@mediaone.net> wrote:
> 
> > What we need are general purpose computer languages that are also user
> > friendly. When it comes to languages, "user friendly" means
> > natural-language-like.
> 
> I agree with the first sentence.  I'm not sure I agree with the second.
> In the very, very long term, it's probably true.  But to make a
> natural-language-like general purpose programming language that isn't
> just dumbed down for non-programmers, but actually helps good
> programmers more than it hurts them, is probably a strong-AI problem.

I was actually talking about a natural-language-like general purpose
programming language that *is* "dumbed down" for non programmers. My
point was that more programming needs to be done by people who are now
non-programmers. These people, the overwhelming majority of computer
users, simply will not get the software they want or need any other
way. They can't afford to hire a development team, and even if they
did, it would be - and often is - a crap shoot as to whether said
development team would ever finish the project to spec, under budget,
and on time.

It's obvious to me, and should be to most programmers, that there exist
a number of perfectly satisfactory programming languages for
*programmers*, Common Lisp being foremost among them IMHO. However,
this still doesn't solve the much larger problem - most users would be
much more productive with more customized software, and they have no
way of getting it.

A user friendly language would solve this problem. For most users,
this would have to mean natural-language-like, since that's the only
language paradigm we can assume that all users have mastery of. If one
argues that they should learn the artificial constructs of any of a
number of existing families of computer languages, we've already
defeated the purpose - i.e., this is tantamount to saying "they should
just learn how to program."


  
> I don't think this is a really bad thing to work towards, but I think you
> need to be careful of putting the cart before the horse.  You need a
> really strong base of natural language processing before something
> that's a real improvement over existing computer languages can be
> possible.

I think this is where we part company. I think if a language were
developed that started, from scratch, with the natural language model
(subject, verb, object, modifiers, prepositions), and mapped those
familiar constructs onto the necessary elements of computer
programming, we would have a language that was the next step beyond
AppleScript, to use your example.

> I contend that if apple had chosen scheme or caml, or csh, or pretty
> much any other language as its extension language, and the same support
> had been built, and the same runtime characteristics (or better) were
> exhibited, that whatever this language was would have had the same kind
> of adoption that applescript has now.

I would have to disagree very strongly with this contention. Many of
the people who use Applescript would run screaming from caml -
actually, they wouldn't even get that far - you have to begin to
understand how something works before you can recoil in horror from
it, and many Applescript users wouldn't even get to the stage of
understanding how caml works in the first place.

   
> Look at Visual Basic.  It's atrocious.

Only from a computer language snob perspective - no insult intended
here, I count myself as one too. From a user perspective though, it
gets things done in an understandable, if often inelegant fashion. I
think you greatly underestimate the extent to which Basic, in any
form, is easier to learn and use than, for example, caml. There's a
reason why they chose the acronym BASIC, after all.




> Power users want to automate, and they will wrap their heads around
> anything which will let them do this with a minimum of architecture
> building.

Not if that anything is caml, or common lisp. In other words, by
choosing a sufficiently difficult scripting language, you greatly
reduce the proportion of users who will use it.

>   They will not learn lisp or C++ in order to write a whole
> structure, when they can plug away at existing applications and get
> 60-70% of what they need.  give them a decent scripting environment to
> go with those applications and suddenly they can get 95%+ of what they
> need.  Why reinvent all those wheels for that last 5%?


Because your numbers are only true for 10% of users. The other 90% do
not even get the 60-70% from existing applications. Needless to say,
they won't realize the advantages of scripting languages either,
unless they are much more like Applescript than caml. Even more like
Applescript than Applescript would be even better.


> But the point is -- power users get into programming because of good
> libraries, and hooks into existing software.  They do *NOT* need an
> easier language.  They are already using languages that are *far less
> easy* than most of the lisp and ML family.

You've been using high powered languages too long. Whatever one may
say in favor of lisp and ML, they are definitely not easier for non
computer scientists to learn. Basic may suck from a CS standpoint, but
from the naive user standpoint, Basic is great, because it actually
makes sense without a CS degree.


> The fact that they are also
> less powerful is just another sad truth about the computing landscape.


It's a natural consequence of the fact that they put ease of learning
above power in their designs.

In other words, to follow up on your point, for ordinary users, power
comes from libraries, not from language features. So a language that
was easy to learn - meaning, having a natural-language-like syntax -
and had powerful libraries, would be the ideal solution for the
overwhelming majority of computer users. This leaves out lisp and ML,
of course, because they are far too unlike natural language to be easy
for *most users* to learn.


> I am in a large community of similar folks who use applescript every
> day.  I certainly can't speak for them all, but I speak for a lot of
> them when I say that trying to make a computer language be more like a
> natural language was one of the *mistakes* of applescript, and we will
> be better off without attempting this until we are *much* further along
> into natural language processing than we currently are.


The mistake of Applescript was to give a superficial appearance of
natural language, without the underlying structure of natural
language. Starting with basic grammatical structures first, then
coming up with the language syntax, then how this maps to
computational functionality, in that order, would yield a much better
language.



Raf
From: Adrian Kubala
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <Pine.LNX.4.44.0305282213340.26504-100000@gwen.sixfingeredman.net>
On 28 May 2003, Raffael Cavallaro wrote:
> I was actually talking about a natural-language-like general purpose
> programming language that *is* "dumbed down" for non programmers.

"Programming is not hard, it's the world that is hard."
 	-- I forgot who said this -- can anyone tell me?

In other words, the reason programming is difficult is that the world
is difficult to describe unambiguously, which is the goal of
programming. If all you want is the ability to customize some software
with a few library functions, you're not programming. That's not to say
there's not some need for a middle-ground between abstract programming and
dialogue boxes, but I don't think just having this will make
non-programmers as much more productive as you seem to think.
From: Raffael Cavallaro
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <aeb7ff58.0305291346.4524df40@posting.google.com>
Adrian Kubala <······@sixfingeredman.net> wrote in message news:<········································@gwen.sixfingeredman.net>...

> That's not to say
> there's not some need for a middle-ground between abstract programming and
> dialogue boxes, but I don't think just having this will make
> non-programmers as much more productive as you seem to think.

I don't think you've seen how unproductive most users are given the
incredible computational power at their disposal.

For example, they'll rename scores of files manually, because they
either don't know that scripting tools exist that can do this in a
line of code, or, what is more usually the case, because perl, python,
ruby, etc. are completely beyond them. Even scripting languages are
too unlike natural language for most users to learn them easily.
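The batch rename the post has in mind really is only a few lines of script. A minimal sketch in Python (the filenames and the date prefix here are made up purely for illustration):

```python
# Rename every .txt file in a directory by adding a prefix -- the sort of
# chore the post says users do by hand, one file at a time.
import os
import tempfile

# Scratch directory with a few sample files to rename.
workdir = tempfile.mkdtemp()
for i in range(3):
    open(os.path.join(workdir, "report%d.txt" % i), "w").close()

# The actual rename: snapshot the listing first, then prefix each .txt file.
for name in sorted(os.listdir(workdir)):
    if name.endswith(".txt"):
        os.rename(os.path.join(workdir, name),
                  os.path.join(workdir, "2003-" + name))
```

Trivial for anyone who scripts; the point of the thread is that it is utterly opaque to the users who need it most.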

Now, there exist GUI programs with dialog boxes that tackle this
specific problem, but there do not exist GUI programs that tackle
every problem like this that most computer users face. Being able to
use a decent scripting language makes power users about an order of
magnitude more productive with their machines than ordinary users -
i.e., they spend about a tenth the time, or less, on certain tasks as
ordinary users.

Maybe it's that programmers don't realize how completely beyond most
users even scripting languages are. But giving ordinary users the
power of, say, Python, in an easily learnable form, would greatly
enhance their productivity with computers.

Raf
From: Adrian Kubala
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <Pine.LNX.4.44.0305291926130.966-100000@gwen.sixfingeredman.net>
On 29 May 2003, Raffael Cavallaro wrote:
> For example, they'll rename scores of files manually, because they
> either don't know that scripting tools exist that can do this in a line
> of code, or, what is more usually the case, because perl, python, ruby,
> etc. are completely beyond them. Even scripting languages are too unlike
> natural language for most users to learn them easily.

I submit that anyone incapable of using python is also incapable of
learning the precision of thought necessary in instructing a computer to
do almost anything worth automating. Give me an example of just one
command in your proposed "natural language", and I'll show you how it
either: 1. provides no more abstraction than current GUI tools, or 2.
requires the user to think just as much about framing the command as they
would for any computer language.

The problem isn't that languages are hard, it's that thinking clearly is
hard, and most people, while capable of it, would rather not bother.

What you want is an AI agent which takes vague requests by users and uses
finely-honed logical thinking skills to turn these into precise
specifications which are then executed. Currently these agents are called
"programmers" and command decent salaries -- I can put you in touch with a
few, if you're interested.

Adrian
From: Raffael Cavallaro
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <aeb7ff58.0305301320.2c15c792@posting.google.com>
Adrian Kubala <······@sixfingeredman.net> wrote in message news:<······································@gwen.sixfingeredman.net>...
> What you want is an AI agent which takes vague requests by users and uses
> finely-honed logical thinking skills to turn these into precise
> specifications which are then executed. Currently these agents are called
> "programmers" and command decent salaries -- I can put you in touch with a
> few, if you're interested.

And I submit that much of what programmers get paid to do over and
over again can be packaged in a user friendly way, much as a
spreadsheet works, but with a broader range of functionality.

I further submit that one reason this has not happened is that many
programmers *like* reinventing the wheel over and over in an arcane
syntax - it makes them feel useful and special. So there has been
relatively little pressure from the programming community to make
languages more user friendly. Add to this the fact that virtually all
computer languages have been designed with programmers, not general
users, in mind, and you have a situation where the average user has no
desire to learn how to program.

How many fewer programmers would there be if programming still meant
entering raw op-codes in order to specify a program? I know that I for
one wouldn't be programming. But compiler writers and language
designers have abstracted enough functionality that this is no longer
necessary in order to write functional software.

The next step from today's languages is clear, because the history of
computer use is clear - a broader and broader user base, composed of
less and less computer-savvy individuals. Precisely parallel to this
broadening of the user base has been an increasing distance in
semantics, and syntax, from machine code.

The next step in this progression is a language that essentially
"cans" most of the common functionality that programmers write over,
and over in each application as a set of libraries (or frameworks).
These frameworks would then be accessed by commands in a syntax that
much more closely approaches natural language than is true of any
existing computer language.

In addition, the IDE for such a language would resolve ambiguities by
presenting the user with a set of possible interpretations of their
ambiguous code, from which the user would select the meaning she
intended. This would, as should be obvious, be an exercise as much in
natural language parsing as in compiler and library design.

Raf
From: Karl A. Krueger
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <bb8nne$oce$1@baldur.whoi.edu>
Raffael Cavallaro <·······@mediaone.net> wrote:
> 
> In addition, the IDE for such a language would resolve ambiguities by
> presenting the user with a set of possible interpretations of their
> ambiguous code, from which the user would select the meaning she
> intended. This would, as should be obvious, be an exercise as much in
> natural language parsing as in compiler and library design.

Clippy for programmers.

	"It looks like you're writing a conditional!  Do you want your
	 program to:

		(*) Take the first branch which matches at all;
		( ) Take the branch which matches closest, using Joe's
		    Discount Fuzzy Logic and Bait Shop Algorithm, for an
		    extra 50 cent surcharge per invocation;
		( ) Take every branch which matches, in arbitrary order;
		( ) Take every branch which matches, in the given order;
		( ) Take the first branch which matches and every
		    subsequent branch until a 'break' statement;
		( ) Or didn't you mean to type 'if' ?"

Fear.

-- 
Karl A. Krueger <········@example.edu>
Woods Hole Oceanographic Institution
Email address is spamtrapped.  s/example/whoi/
"Outlook not so good." -- Magic 8-Ball Software Reviews
From: Raffael Cavallaro
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <aeb7ff58.0305302125.1713b338@posting.google.com>
"Karl A. Krueger" <········@example.edu> wrote in message news:<············@baldur.whoi.edu>...

> Clippy for programmers.
> 
> 	"It looks like you're writing a conditional!  Do you want your
> 	 program to:
> 
> 		(*) Take the first branch which matches at all;
> 		( ) Take the branch which matches closest, using Joe's
> 		    Discount Fuzzy Logic and Bait Shop Algorithm, for an
> 		    extra 50 cent surcharge per invocation;
... [more lame mockery snipped]

And it is precisely this sort of superior attitude which will have
"real programmers" scratching their heads and wondering what they did
wrong when they have been made redundant by software that can ...
write software.

What programmers do is, for the most part, really not all that
special. Most of the hard work is in the libraries - that's why
clearly inferior languages like Java do so well - because much of the
work has already been done for you.

Make these sort of libraries accessible to ordinary users, and you
don't even need Java programmers for most tasks.

So let's keep programming languages as arcane as possible - that way,
we ensure that "real programmers" will always have a job.

For the record, I never suggested fuzzy logic as a means of
disambiguating user input - simply allow users to choose among valid
rewrites of the code they actually wrote. Neither did I suggest that
the mere typing of the word "if" would cause a prompt to disambiguate
input.

But what's even more telling is the truly frightened and visceral
reaction to these ideas, even though any one with any sense of
historical perspective can see that they are inevitable. Libraries
will get more powerful; parsers and compilers will get smarter;
surface syntax will become more natural-language-like; error and
warning messages will become more helpful, and allow users to choose
rewrites. The language that puts these features together first will
become the language that most people write software in.
From: Russell McManus
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <87n0h3uu9l.fsf@thelonious.dyndns.org>
·······@mediaone.net (Raffael Cavallaro) writes:

> But what's even more telling is the truly frightened and visceral
> reaction to these ideas, even though any one with any sense of
> historical perspective can see that they are inevitable. Libraries
> will get more powerful; parsers and compilers will get smarter;
> surface syntax will become more natural-language-like; error and
> warning messages will become more helpful, and allow users to choose
> rewrites. The language to put these features together first, will
> become the language that most people write software in.

The history of computing argues against this thesis, or at least
suggests that the progress you expect will take a very long time.

Programming languages in wide use have hardly advanced since the late
sixties.  There have been important advances in programming languages
since then, but for the most part these advances have not been adopted
by industry.  Instead we're mostly programming in some variant of
Simula or Algol to this day.

-russ
From: Karl A. Krueger
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <bbb3uo$iur$1@baldur.whoi.edu>
Raffael Cavallaro <·······@mediaone.net> wrote:
> And it is precisely this sort of superior attitude which will have
> "real programmers" scratching their heads and wondering what they did
> wrong

Oh, that happened.  "Real programmers" seem to wonder why a language
written by and for system administrators -- that would be Perl -- is
far more popular for many general-purpose tasks than many better-
designed general languages written by and for CS and math types.

Even though cl-ppcre does regex faster than Perl.  :)


> For the record, I never suggested fuzzy logic as a means of
> disambiguating user input - simply allow users to choose among valid
> rewrites of the code they actually wrote. Neither did I suggest that
> the mere typing of the word "if" would cause a prompt to disambiguate
> input.

It's called "humor":  a flame only to the paranoid.


> But what's even more telling is the truly frightened and visceral
> reaction to these ideas, even though any one with any sense of
> historical perspective can see that they are inevitable.

That claim sounds a lot like some claims that have been made before,
about "historical inevitability" ... the subject was economic systems
and not computing systems ... hmm, what could it have been?  :)

From: Raffael Cavallaro
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <aeb7ff58.0305312135.2b2c100@posting.google.com>
"Karl A. Krueger" <········@example.edu> wrote in message news:<············@baldur.whoi.edu>...

> Oh, that happened.  "Real programmers" seem to wonder why a language
> written by and for system administrators -- that would be Perl -- is
> far more popular for many general-purpose tasks than many better-
> designed general languages written by and for CS and math types.

And it will happen again, only the successor to Perl's
natural-language-like claims will be a language that is actually
natural-language-like.

> > For the record, I never suggested fuzzy logic as a means of
> > disambiguating user input - simply allow users to choose among valid
> > rewrites of the code they actually wrote. Neither did I suggest that
> > the mere typing of the word "if" would cause a prompt to disambiguate
> > input.
> 
> It's called "humor":  a flame only to the paranoid.

But I never called it a flame - and as a description of your attempts
at humor, that would be one letter too many - lame is more like it.

> That claim sounds a lot like some claims that have been made before,
> about "historical inevitability" ... the subject was economic systems
> and not computing systems ... hmm, what could it have been?  :)

Isn't this an invocation of a corollary to Godwin's law? After all,
what's the rhetorical difference between comparing someone to
communists and comparing someone to those goose-stepping guys from
Germany? They're both specious ad hominem arguments.

Needless to say, just because communists spoke of historical
inevitability doesn't mean that there's no such thing - they were just
wrong about what was inevitable. But only a fool would believe that
computer languages will acquire more arcane syntax over time, or
become less like natural languages.
From: Karl A. Krueger
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <bbd0hb$87b$1@baldur.whoi.edu>
Raffael Cavallaro <·······@mediaone.net> wrote:
> "Karl A. Krueger" <········@example.edu> wrote in message news:<············@baldur.whoi.edu>...
>> Oh, that happened.  "Real programmers" seem to wonder why a language
>> written by and for system administrators -- that would be Perl -- is
>> far more popular for many general-purpose tasks than many better-
>> designed general languages written by and for CS and math types.
> 
> And it will happen again, only the successor to Perl's
> natural-language-like claims will be a language is is actually
> natural-language-like.

Perhaps.  It could not be much more notation-intensive than Perl.  :)

My point though is even in the case of a language designed for someone
other than "real programmers", the activity of those who write in the
language is still identifiably "programming" and not some natural-
language activity like, say, "discussion".

Or to put it another way:  No well-specified programming language is any
more vague than any other.  Perl, Lisp, Fortran, are all precise in a
way that natural languages are not ... yes, even though there is no
formal specification written down for Perl.  :)


> Needless to say, just because communists spoke of historical
> inevitability doesn't mean that there's no such thing - they were just
> wrong about what was inevitable. But only a fool would believe that
> computer languages will acquire more arcane syntax over time, or
> become less like natural languages.

Ah, now you are saying something different though.  Before you seemed to
be saying that a particular end-point (programmers becoming obsolete)
was "historically inevitable", now you are saying that a particular
trend that you perceive can be expected to continue.  One claim is
teleological (and hence reminiscent of Marxist fantasies) and the other
is empirical.

However you seem to be still conflating "not having arcane syntax" with
"being like a natural language".  Even programming languages that have a
minimum of arcana (here let's think of Python, or maybe AppleScript) are
still unlike natural languages in that they are precise and algorithmic
rather than vague and blurred by social context.

From: Raffael Cavallaro
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <aeb7ff58.0306010938.7707c11b@posting.google.com>
"Karl A. Krueger" <········@example.edu> wrote in message news:<············@baldur.whoi.edu>...

> My point though is even in the case of a language designed for someone
> other than "real programmers", the activity of those who write in the
> language is still identifiably "programming" and not some natural-
> language activity like, say, "discussion".

So what? What does it matter what you call it, as long as it is
accessible to many more users? Call it programming, scripting,
software crafting, who cares?

The point is, it will be significantly different from programming now
because it will not require the user to have the slightest idea what
is going on under the hood. All the user will need to know is the
domain specific specifications. How many spreadsheet users have the
faintest clue what's really going on behind the scenes?

> 
> Or to put it another way:  No well-specified programming language is any
> more vague than any other.  Perl, Lisp, Fortran, are all precise in a
> way that natural languages are not ... yes, even though there is no
> formal specification written down for Perl.  :)


That's simply not true. Perl is decidedly more vague than Scheme. The
same token means different things in different contexts in Perl. This
is not true in Scheme - remember the thread subject - Lisp-2 or
Lisp-1. The point is, natural language varies from computer languages
in the following ways (at least):

1. Natural language makes extensive use of context to determine
meaning. Computer languages do this only rarely, and in very limited
ways.

2. Natural language speakers resolve ambiguity by asking for
clarification of specific terms, referents, or utterances. Often, and
most helpfully, speakers provide their counterparts with choices of
disambiguation - "did you mean this book here, or the red one here?"
Computer languages only indicate what it was that they failed to parse
correctly. It often isn't even the real error, but something
downstream.
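The "disambiguation through interrogation" idea can be made concrete with a toy sketch. No such language exists; the command and its candidate readings below are entirely hypothetical, and a real system would generate the readings by parsing rather than table lookup:

```python
# Toy sketch: given an ambiguous command, offer the user the valid precise
# readings and apply whichever one they pick -- the "did you mean this book
# here, or the red one here?" move, done by the language system.

def interpretations(command):
    """Return the plausible precise readings of a command (hypothetical table)."""
    if command == "sort the files by date":
        return ["sort by modification date, newest first",
                "sort by modification date, oldest first",
                "sort by creation date"]
    return [command]  # unambiguous: exactly one reading

def disambiguate(command, choose):
    """If the command is ambiguous, ask the user (via `choose`) to pick."""
    options = interpretations(command)
    if len(options) == 1:
        return options[0]
    return choose(options)

# A "user" who always picks the first reading offered.
picked = disambiguate("sort the files by date", lambda opts: opts[0])
```

The interesting part is not the lookup but the interaction pattern: the system proposes, the human selects, and only the selected precise reading is executed.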


> Ah, now you are saying something different though.  Before you seemed to
> be saying that a particular end-point (programmers becoming obsolete)
> was "historically inevitable", now you are saying that a particular
> trend that you perceive can be expected to continue.  One claim is
> teleological (and hence reminiscent of Marxist fantasies) and the other
> is empirical.


In case it hasn't dawned on you yet, all valid teleological arguments
*are* empirical. You can always spot the bogus ones because they rely
on some unsubstantiated, reified causal principle (such as dialectical
materialism), rather than reference to real historical events.

I had this discussion with some marxist colleagues about 20 years ago.
I pointed out that Marx and Engels couldn't possibly be right if
so-called Primitive Communism had never existed. I then outlined the
evidence that social status and wealth stratification had always
existed in human societies, so there was no ideal past condition of
social relations to return to. They claimed that the historical facts
were irrelevant! Dialectical materialism would take care of
everything. It was then I knew I was dealing with a religious faith,
and not a real sociology, and I stopped even engaging marxists in
conversation about it.

> 
> However you seem to be still conflating "not having arcane syntax" with
> "being like a natural language".  Even programming languages that have a
> minimum of arcana (here let's think of Python, or maybe AppleScript) are
> still unlike natural languages in that they are precise and algorithmic
> rather than vague and blurred by social context.

No, you're the one doing the conflating. Not having arcane syntax is
just one necessary feature of being a natural language - i.e., all
natural languages have a structurally similar syntax. However, natural
languages have lots of other features as well - redundancy, context
determined meaning, disambiguation through interrogation, etc.

Computer languages would need to emulate most of these to be
natural-language-like. Getting rid of the arcane syntax is just a
start. I have consistently said that other features, such as
disambiguation through interrogation, would be necessary. In fact, it
was precisely this feature that you mocked as "Clippy for
Programmers."

To be natural-language-like, a computer language would need a surface
syntax that fit the natural language pattern. This syntax would have
to map to libraries that did most of the heavy lifting. This syntax
would need to support context sensitive meaning of tokens. And
ambiguous statements would need to be resolved by querying the user,
preferably with choices, as to what was meant more precisely.
From: Karl A. Krueger
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <bbfpdc$7bn$1@baldur.whoi.edu>
Raffael Cavallaro <·······@mediaone.net> wrote:
> "Karl A. Krueger" <········@example.edu> wrote in message news:<············@baldur.whoi.edu>...
>> My point though is even in the case of a language designed for someone
>> other than "real programmers", the activity of those who write in the
>> language is still identifiably "programming" and not some natural-
>> language activity like, say, "discussion".
> 
> So what? What does it matter what you call it, as long as it is
> accessible to many more users? Call it programming, scripting,
> software crafting, who cares?

Some people used to claim that writing in Perl was "scripting, not
programming".  Nobody believes them.  :)


> The point is, it will be significantly different from programming now
> because it will not require the user to have the slightest idea what
> is going on under the hood. All the user will need to know is the
> domain specific specifications. How many spreadsheet users have the
> faintest clue what's really going on behind the scenes.

Currently, programmers do not need to know what goes on "under the hood"
of their libraries:  one can write in Perl without knowing how regex
matchers work, or in Lisp without knowing how to implement gc.

However, that is not at all the same as being able to write general-
purpose programs without some kind of understanding of algorithms.


>> Or to put it another way:  No well-specified programming language is any
>> more vague than any other.  Perl, Lisp, Fortran, are all precise in a
>> way that natural languages are not ... yes, even though there is no
>> formal specification written down for Perl.  :)
> 
> That's simply not true. Perl is decidedly more vague than scheme. The
> same token means different things in different contexts in Perl. This
> is not true in Scheme - remember the thread subject - Lisp-2 or
> Lisp-1.

That does not make Perl more vague; it makes Perl more contextual, which
is not at all the same thing.  However, if "vague" meant "the same token
means different things in different contexts" as you have it, then Lisps
would also be vague, since a symbol can mean different things in
different lexical contours.
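The point about lexical contours holds in any lexically scoped language, not just Lisp; a minimal illustration in Python (names chosen arbitrarily):

```python
# The same name bound to different things in nested lexical scopes --
# the analogue of reusing a symbol in nested `let`s.
x = "outer"

def inner():
    x = "inner"      # shadows the outer binding within this scope only
    return x

shadowed = inner()   # sees the inner binding
unchanged = x        # the outer binding is untouched
```

Each occurrence of `x` still has exactly one meaning, fixed by its scope; nothing is vague, merely contextual.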

If Perl were vague, then there would be two or more different correct
interpretations of the same well-formed Perl program, as there can be
different interpretations of the same grammatical English paragraph.
There are not; there is only one correct interpretation of a well-formed
Perl program, just as with a Scheme or Common Lisp program.


> The point is, natural language varies from computer languages
> in the following ways (at least):
> 
> 1. Natural language makes extensive use of context to determine
> meaning. Computer languages do this only rarely, and in very limited
> ways.

... and those ways are not universally recognized as good.  Notably,
people criticize Perl for its tendency to do things implicitly.  For
instance, doing a regex match implicitly sets a number of variables
(which might be thought of as "pronouns" in the natural language
analogy).

So there are variables defined which never received an explicit
assignment or binding, leaving the person doing debugging searching for
where a particular variable was set ...
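A minimal sketch of the behavior, using Ruby's Perl-inherited match
variables (the same point applies to Perl's own $1, $` and $&):

```ruby
# A regex match implicitly sets "pronoun" variables that were
# never explicitly assigned anywhere in the program.
"the fly flies" =~ /(\w+) (\w+)\z/

puts $1   # "fly"       -- first capture group
puts $2   # "flies"     -- second capture group
puts $`   # "the "      -- everything before the match
puts $&   # "fly flies" -- the match itself
```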


> 2. Natural language speakers resolve ambiguity by asking for
> clarification of specific terms, referents, or utterances. Often, and
> most helpfully, speakers provide their counterparts with choices of
> disambiguation - "did you mean this book here, or the red one here?"
> Computer languages only indicate what it was that they failed to parse
> correctly. It often isn't even the real error, but something
> downstream.

If what you really want is better debugging for a precise language, that
is a very different thing from asking for a vague language.  :)

-- 
Karl A. Krueger <········@example.edu>
Woods Hole Oceanographic Institution
Email address is spamtrapped.  s/example/whoi/
"Outlook not so good." -- Magic 8-Ball Software Reviews
From: Raffael Cavallaro
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <aeb7ff58.0306021954.4febb51d@posting.google.com>
"Karl A. Krueger" <········@example.edu> wrote in message news:<············@baldur.whoi.edu>...

> However, if "vague" meant "the same token
> means different things in different contexts" as you have it, then Lisps
> would also be vague, since a symbol can mean different things in
> different lexical contours.

Yes, but you have to work to make this so, by foolishly using the same
symbol in different lexical contours. One sees this, of course: the
same cryptic, three-letter symbol names used in nested LETs. But this
is a sign of brain damage, not cleverness.


> If Perl were vague, then there would be two or more different correct
> interpretations of the same well-formed Perl program, as there can be
> different interpretations of the same grammatical English paragraph.
> There are not; there is only one correct interpretation of a well-formed
> Perl program, just as with a Scheme or Common Lisp program.

This rests on the false premise that, because it is possible to make
grammatically correct utterances that are factually false, or
grammatically correct utterances that can have two or more different
meanings, natural language is incapable of precision.

However, as I've stated a number of times in this thread, domain
experts are perfectly capable of using natural language precisely.
I've never claimed that computer languages should be able to magically
make imprecise language precise. Only that computer languages ought to
be able to handle precise statements in a syntax that more closely
resembles natural languages.

Perl is vague in precisely the same way that natural languages are -
certain tokens/words have meanings that are not self-contained. One has
to look beyond them to a greater context to determine their meaning.
See my reply to Ray Blaak for more on this.


> ... and those ways are not universally recognized as good.  Notably,
> people criticize Perl for its tendency to do things implicitly.

I don't. This is one of the virtues of natural language, and people
grow up relying on the fact that their listener will do a whole lot of
implicit interpretation of their utterances, without their having to
spell everything out.

> For
> instance, doing a regex match implicitly sets a number of variables
> (which might be thought of as "pronouns" in the natural language
> analogy).

Yes, very much so, and I believe Larry Wall was thinking of pronouns
when he put these into Perl. The problem with Perl is that it's
neither fish nor fowl. It isn't completely natural language based, so
its model doesn't really conform to what you'd expect from a natural
language. It also diverges from one's expectations of a computer
language by doing certain natural language like things, such as
silently setting the value of $`, $&, and $' (should you use them
anywhere in your program).


> If what you really want is better debugging for a precise language, that
> is a very different thing from asking for a vague language.  :)

I want a language that is vague just as natural languages are - i.e.,
with context-dependent meaning - with a natural-language-like syntax,
and with better debugging through suggested rewrites. I don't believe
that these are mutually exclusive.
From: Gavin
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <bctsjs$gjj$1@nobel.pacific.net.sg>
"Karl A. Krueger" <········@example.edu> wrote in message
·················@baldur.whoi.edu...
> Raffael Cavallaro <·······@mediaone.net> wrote:
> > "Karl A. Krueger" <········@example.edu> wrote in message
news:<············@baldur.whoi.edu>...
> >> My point though is even in the case of a language designed for someone
> >> other than "real programmers", the activity of those who write in the
> >> language is still identifiably "programming" and not some natural-
> >> language activity like, say, "discussion".
> >
> > So what? What does it matter what you call it, as long as it is
> > accessible to many more users? Call it programming, scripting,
> > software crafting, who cares?
>
> Some people used to claim that writing in Perl was "scripting, not
> programming".  Nobody believes them.  :)
>

Different languages were implemented for solving problems in different
domains. Dialects and hybrids of those languages were implemented for
solving specialised cases within the various domain problems, or
across different domains.

Programming in Perl is still programming, even though it may not be as
Turing-complete a language as Lisp. Most modern-day languages, e.g.
C++, Java, etc., are Turing equivalent. What program a person can code
in Java, one can code in C++ or Lisp, etc.

However, on the other hand, some tasks are more easily done in some
languages than others. Languages are just tools; you don't have to use
one language to solve all your programming problems. You can use
different tools for varying programming problems, or use a
combination-hybrid solution, e.g. Perl, Python, or Lisp (frontend) and
C/C++ or Java (backend), etc.

> > The point is, it will be significantly different from programming now
> > because it will not require the user to have the slightest idea what
> > is going on under the hood. All the user will need to know is the
> > domain specific specifications. How many spreadsheet users have the
> > faintest clue what's really going on behind the scenes.


> Currently, programmers do not need to know what goes on "under the hood"
> of their libraries:  one can write in Perl without knowing how regex
> matchers work, or in Lisp without knowing how to implement gc.

As programming domains grew larger, the complexities of the design,
implementation and testing of software systems needed to be abstracted
away so that the normal user could use languages to solve their
problems. So you don't really need to know how Java or C++ or Lisp does
garbage collection to write code that solves your problem; just turn
garbage collection on, or make it run more frequently, in your compiler
or interpreter. Of course you must know what regex or gc is for, but
one does not need to know the underlying implementation to use it. One
need not know what algorithm the gc uses, or whether regex is
implemented in the same language or in some C library.

> However, that is not at all the same as being able to write general-
> purpose programs without some kind of understanding of algorithm.
>

Most scripting languages, and/or systems languages like C++ or Java or
Lisp, have already encapsulated their algorithms in their vast standard
(or non-standard) libraries. For example, for a person to use a library
routine in the C++ STL, he just needs to know the signature of the
function, its input and output, and some inkling of how the algorithm
works. He or she does not have to bother with exactly how a list or
vector is implemented behind the scenes. The same goes for Lisp.


> >> Or to put it another way:  No well-specified programming language is any
> >> more vague than any other.  Perl, Lisp, Fortran, are all precise in a
> >> way that natural languages are not ... yes, even though there is no
> >> formal specification written down for Perl.  :)
> >
> > That's simply not true. Perl is decidedly more vague than scheme. The
> > same token means different things in different contexts in Perl. This
> > is not true in Scheme - remember the thread subject - Lisp-2 or
> > Lisp-1.

Common Lisp is Lisp  1 and Scheme, Python  are Lisp 2. Perl is more raw,
closer
to the machine architecture, its underlying implementation is in 'C'.

> That does not make Perl more vague; it makes Perl more contextual, which
> is not at all the same thing.  However, if "vague" meant "the same token
> means different things in different contexts" as you have it, then Lisps
> would also be vague, since a symbol can mean different things in
> different lexical contours.
>
> If Perl were vague, then there would be two or more different correct
> interpretations of the same well-formed Perl program, as there can be
> different interpretations of the same grammatical English paragraph.
> There are not; there is only one correct interpretation of a well-formed
> Perl program, just as with a Scheme or Common Lisp program.
>
>
> > The point is, natural language varies from computer languages
> > in the following ways (at least):
> >
> > 1. Natural language makes extensive use of context to determine
> > meaning. Computer languages do this only rarely, and in very limited
> > ways.
>
> ... and those ways are not universally recognized as good.  Notably,
> people criticize Perl for its tendency to do things implicitly.  For
> instance, doing a regex match implicitly sets a number of variables
> (which might be thought of as "pronouns" in the natural language
> analogy).

Perl also supports references to subroutines, and the powerful
construct called closures. Lisp programmers know a closure as an
unnamed subroutine that carries its environment with it.
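A minimal sketch of the idea, in Ruby (whose closures behave the same
way as Perl's and Lisp's):

```ruby
# A closure: an anonymous subroutine that carries its defining
# environment (here, the local variable `count`) along with it.
def make_counter
  count = 0
  -> { count += 1 }   # the lambda closes over `count`
end

counter = make_counter
counter.call   # => 1
counter.call   # => 2  -- the captured `count` persists between calls
```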

> So there are variables defined which never received an explicit
> assignment or binding, leaving the person doing debugging searching for
> where a particular variable was set ...
>
>

Perl also has typeglobs and symbol tables. Typeglobs are a hidden data
type that can be used to obtain information about the state of the
interpreter (meta-data). Perl uses a symbol table (implemented
internally as a hash table) to map identifier names to the appropriate
values.

> > 2. Natural language speakers resolve ambiguity by asking for
> > clarification of specific terms, referents, or utterances. Often, and
> > most helpfully, speakers provide their counterparts with choices of
> > disambiguation - "did you mean this book here, or the red one here?"
> > Computer languages only indicate what it was that they failed to parse
> > correctly. It often isn't even the real error, but something
> > downstream.

Most natural language processing systems use Lisp, Prolog or similar
symbolic languages. You can also use Perl to do natural language
processing, by extending Perl with your own hand-coded 'C' modules.
You have to remember: C, C++, Java, Lisp and Scheme are Turing
equivalent. A person can write the parts Perl is missing when compared
to Prolog or Lisp, or simply use Python.

The syntax and semantics of Perl would make it just a bit more awkward
to express the algorithms and data structures of natural language
processing, but it has great text-handling capabilities, and it is
fast at them (because of the underlying C implementation).

> If what you really want is better debugging for a precise language, that
> is a very different thing from asking for a vague language.  :)

Every language is what you want it to be, or how you want to use it.
Every language evolved and was created by its inventors because a
purpose for it arose along the way, in the past 40-50 years of
computing.
Thank you for your thread.

> --
> Karl A. Krueger <········@example.edu>
> Woods Hole Oceanographic Institution
> Email address is spamtrapped.  s/example/whoi/
> "Outlook not so good." -- Magic 8-Ball Software Reviews

Warmest Regards,
Gavin
From: Bruce Hoult
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <bruce-B3D5B8.19165020062003@copper.ipg.tsnz.net>
In article <············@nobel.pacific.net.sg>,
 "Gavin" <······@pacific.net.sg> wrote:

> Programming in Perl is still programming, even though it may not be as 
> Turing-complete a language as Lisp. Most modern-day languages, e.g. 
> C++, Java, etc., are Turing equivalent. What program a person can code 
> in Java, one can code in C++ or Lisp, etc.

*All* programming languages in general use[1] are Turing-equivalent.  
Hell, even vi editor macros are Turing-equivalent!



> Common Lisp is Lisp  1 and Scheme, Python  are Lisp 2. Perl is more 
> raw, closer to the machine architecture, its underlying 
> implementation is in 'C'.

I'm sorry but that's totally incorrect.

Common Lisp is a Lisp 2.  Scheme is a Lisp 1.  Carrying the 
classification over to totally different languages doesn't necessarily 
make a lot of sense, but Perl is something like a Lisp 5, if you count 
Perl's globs as being the same sort of thing as CL's symbols -- you 
don't just have a function and data value for each identifier, but a 
scalar, a vector, a hash, a format, and a filehandle.

-- Bruce

[1] OK, except early FORTRAN and COBOL and other languages without a 
heap, which are only finite state machines.  And to be really picky, our 
computers are also only finite state machines, which means that our 
programming language *implementations* can not in actual fact be 
Turing-equivalent.
From: Kent M Pitman
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sfwr85ohi2p.fsf@shell01.TheWorld.com>
Bruce Hoult <·····@hoult.org> writes:

> Common Lisp is a Lisp 2.  Scheme is a Lisp 1.  Carrying the 
> classification over to totally different languages doesn't necessarily 
> make a lot of sense, but Perl is something like a Lisp 5, if you count 
> Perl's globs as being the same sort of thing as CL's symbols -- you 
> don't just have a function and data value for each identifier, but a 
> scalar, a vector, a hash, a format, and a filehandle.

It's worth noting as long as you're counting namespaces that we had
the whole debate about namespaces in the context of a question about
whether variables and functions should collapse their namespaces, and
so I grabbed Lisp2 as the name to use in the debate.  In fact, though,
CL is at least a Lisp4 in the sense that it has 4 legitimate lexically
closable namespaces: variables, functions, tagbody/go tags, and
block/return-from tags.  (There is some vaguery as to whether SETF
names like (SETF CAR) are a separate namespace or an extension of the
function namespace, but I prefer to model them as just extra names 
for things in the function namespace that are usable in some contexts
and not in others.)  

Some have talked about class definitions as being other namespaces
since defining the same name in any of the 4 namespaces won't occlude
them, but they're not referenced by special form, not possible to create
closures over, etc.  So I don't count them.  Effectively, they're just
objects stored in a table that a well-known set of functions know about...

Anyway, my big point was that CL isn't truly a Lisp2.  We use the name only
loosely to mean "at least Lisp2" meaning "definitely not Lisp1".
From: Paul Dietz
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <3EF33495.FAD07785@motorola.com>
Kent M Pitman wrote:

> Anyway, my big point was that CL isn't truly a Lisp2.

It's a Lisp2 for large values of 2.

	Paul
From: Kent M Pitman
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sfw65n0frq4.fsf@shell01.TheWorld.com>
Paul Dietz <············@motorola.com> writes:

> Kent M Pitman wrote:
> 
> > Anyway, my big point was that CL isn't truly a Lisp2.
> 
> It's a Lisp2 for large values of 2.

Heh.  Reminds me of a joke I saw scrawled on a bathroom wall when I first
got to MIT:
 Limit(sqrt(10))->3 for very small values of 10.
From: Jens Axel Søgaard
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <3ef4077f$0$97189$edfadb0f@dread12.news.tele.dk>
Kent M Pitman wrote:
> Heh.  Reminds me of a joke I saw scrawled on a bathroom wall when I first
> got to MIT:
>  Limit(sqrt(10))->3 for very small values of 10.

The non-MIT version is

   1+1=3 for large values of 1.

-- 
Jens Axel Søgaard
From: Gareth McCaughan
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <87znk8e5m2.fsf@g.mccaughan.ntlworld.com>
Kent Pitman wrote:

> Heh.  Reminds me of a joke I saw scrawled on a bathroom wall when I first
> got to MIT:
>  Limit(sqrt(10))->3 for very small values of 10.

 lim    3 = 8.
w -> oo

(Ideally that would be an omega rather than a "w".)

-- 
Gareth McCaughan
.sig under construc
From: Joe Marshall
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <u1akn43h.fsf@ccs.neu.edu>
Kent M Pitman <······@world.std.com> writes:

> Some have talked about class definitions as being other namespaces
> since defining the same name in any of the 4 namespaces won't occlude
> them, but they're not referenced by special form, not possible to create
> closures over, etc.  So I don't count them.  Effectively, they're just
> objects stored in a table that a well-known set of functions know about...

I'm not sure that tagbody/go tags and block/return-from tags count,
either.  The values denotable in the function namespace are the same as
those denotable in the variable namespace, and there are namespace
qualifiers that allow you to switch between them.  But the tagbody/go
tag namespace doesn't have values per se, only control points.  Same
goes for block/return-from.
From: Pekka P. Pirinen
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <ixptl8zly6.fsf@ocoee.cam.harlequin.co.uk>
Joe Marshall <···@ccs.neu.edu> writes:
> I'm not sure that tagbody/go tags and block/return-from tags count,
> either.  The values denotable in the function namespace are the same as
> those denotable in the variable namespace, and there are namespace
> qualifiers that allow you to switch between them.  But the tagbody/go
> tag namespace doesn't have values per se, only control points.

I don't understand what you're getting at here.  Obviously functions
can be identified with objects of type FUNCTION, but there are
definitely values that a variable can be bound to that can't be
bound to a name in the function namespace.

By and large, functions are as limited as control points: you can call
one and invoke the other, and that's essentially it.  Scheme even has
a standard mapping between them: CALL/CC.
-- 
Pekka P. Pirinen
Invent a clever saying, and your name shall live forever.
      - Anonymous
From: Joe Marshall
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <of0smxmg.fsf@ccs.neu.edu>
···············@globalgraphics.com (Pekka P. Pirinen) writes:

> Joe Marshall <···@ccs.neu.edu> writes:
> > I'm not sure that tagbody/go tags and block/return-from tags count,
> > either.  The values denotable in the function namespace are the same as
> > those denotable in the variable namespace, and there are namespace
> > qualifiers that allow you to switch between them.  But the tagbody/go
> > tag namespace doesn't have values per se, only control points.
> 
> I don't understand what you're getting at here.  Obviously functions
> can be identified with objects of type FUNCTION, but there are
> definitely values that a variables can be bound to that can't be be
> bound to a name in the function namespace.

What I meant was this:  in a standard form, the first position is an
identifier in the function namespace, but the others are identifiers
in the variable namespace.

   (let ((foo 49))
     (flet ((foo (x) (+ x 22)))
       (foo foo)))

In (FOO FOO), the first foo refers to the function namespace, but the
second refers to the variable namespace.

But there exist namespace qualifiers that let you change that:

(let ((foo (lambda (x) (+ (funcall x) 22)))) 
  (flet ((foo () 49))
    (funcall foo (function foo))))

We can't quite bind a function name to a number with FLET, but we can
get close.  We could put a number in the function cell of a symbol,
though. 

There are no namespace qualifiers for tagbody/go or block/return-from.
They only work in the special forms that refer to them.
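For comparison, Ruby happens to draw an analogous distinction between
its method and local-variable namespaces, disambiguated by call syntax
rather than by FUNCALL and FUNCTION. A rough sketch mirroring the
FLET/LET example above:

```ruby
# Analogue of (let ((foo 49)) (flet ((foo (x) (+ x 22))) (foo foo))):
# methods and local variables live in separate namespaces.
def foo(x)
  x + 22          # the method `foo`
end

foo = 49          # a local variable sharing the same name

# A parenthesized call resolves to the method; a bare `foo`
# resolves to the local variable once one is in scope.
foo(foo)          # => 71, i.e. (+ 49 22)
```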
From: Paul Dietz
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <3EF35D49.E46B2CB9@motorola.com>
Joe Marshall wrote:

>  We could put a number in the function cell of a symbol,
> though.

No conforming CL program can do this.  The SYMBOL-FUNCTION
slot can only be setf-ed to a function, and a program that violates
this constraint has undefined behavior (see section 1.4.4.3
of CLtS.)  A conforming implementation may check that the value
being assigned is a function, or assume that it is a function.

	Paul
From: ·············@attbi.com
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <8yrwnzs7.fsf@attbi.com>
Paul Dietz <············@motorola.com> writes:

> Joe Marshall wrote:
>
>>  We could put a number in the function cell of a symbol,
>> though.
>
> No conforming CL program can do this.  The SYMBOL-FUNCTION
> slot can only be setf-ed to a function, and a program that violates
> this constraint has undefined behavior (see section 1.4.4.3
> of CLtS.)  A conforming implementation may check that the value
> being assigned is a function, or assume that it is a function.

Well, how about that.  I knew this was an implementation trick, but I
didn't think it was codified to that extent.
From: Paul F. Dietz
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <8LKdnY9MM-zVemmjXTWJjQ@dls.net>
·············@attbi.com wrote:

> Well, how about that.  I knew this was an implementation trick, but I
> didn't think it was codified to that extent.

ANSI CL has many interesting things that are not explicitly stated,
but are a consequence of combinations of other things in the
standard.

For example: in a conforming implementation it must be the case that

   (subtypep 'string '(vector character)) ==> nil, true

Anyone who hasn't been following #lisp or some of the development
mailing lists want to guess why that is?

	Paul
From: Erann Gat
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <gat-2106031806560001@192.168.1.51>
In article <······················@dls.net>, "Paul F. Dietz"
<·····@dls.net> wrote:

> ·············@attbi.com wrote:
> 
> > Well, how about that.  I knew this was an implementation trick, but I
> > didn't think it was codified to that extent.
> 
> ANSI CL has many interesting things that are not explicitly stated,
> but are a consequence of combinations of other things in the
> standard.
> 
> For example: in a conforming implementation it must be the case that
> 
>    (subtypep 'string '(vector character)) ==> nil, true
> 
> Anyone who hasn't been following #lisp or some the development
> mailing lists want to guess why that is?
> 
>         Paul

OK, I'll bite.  In MCL:

? (subtypep 'string '(vector character))
NIL
T

But:

? (subtypep 'string '(or (vector character) (vector base-character)))
T
T

So in a conforming implementation (subtypep 'string '(vector character))
*might* return nil if, for example (as in MCL), arrays of base characters
are represented differently from arrays of (general) characters.  However,
an implementation is allowed to conflate all the character types into a
single type.  (Well, strictly speaking, an implementation is allowed to
conflate character and base-char, and have extended-char be the empty
set.)  In such an implementation it would seem to me that (subtypep
'string '(vector character)) would return T,T, and that that would still
be a conforming implementation.

E.
From: Paul F. Dietz
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <3p2cnbntTaOdjmijXTWJiw@dls.net>
Erann Gat wrote:

> So in a conforming implementation (subtypep 'string '(vector character))
> *might* return nil if, for example (as in MCL), arrays of base characters
> are represented differently from arrays of (general) characters.  However,
> an implementation is allowed to conflate all the character types into a
> single type.  (Well, strictly speaking, an implementation is allowed to
> conflate character and base-char, and have extended-char be the empty
> set.)  In such an implementation it would seem to me that (subtypep
> 'string '(vector character)) would return T,T, and that that would still
> be a conforming implementation.

Nope!  Try again -- there's a chain of deductions that leads to the
conclusion that a conforming implementation must return nil, true.

Hint: this is true even if base-char == character.

	Paul
From: Erann Gat
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <gat-2106032247430001@192.168.1.51>
In article <······················@dls.net>, "Paul F. Dietz"
<·····@dls.net> wrote:

> Erann Gat wrote:
> 
> > So in a conforming implementation (subtypep 'string '(vector character))
> > *might* return nil if, for example (as in MCL), arrays of base characters
> > are represented differently from arrays of (general) characters.  However,
> > an implementation is allowed to conflate all the character types into a
> > single type.  (Well, strictly speaking, an implementation is allowed to
> > conflate character and base-char, and have extended-char be the empty
> > set.)  In such an implementation it would seem to me that (subtypep
> > 'string '(vector character)) would return T,T, and that that would still
> > be a conforming implementation.
> 
> Nope!  Try again -- there's a chain of deductions that leads to the
> conclusion that a conforming implementation must return nil, true.
> 
> Hint: this is true even if base-char == character.

Well, I give up.

I see how (subtypep 'string '(vector character)) could return nil, but I
don't see how it could possibly be required to in light of the fact that:

(subtypep 'string '(or (vector character) (vector base-character)))

isn't required to return T (assuming MCL is conforming), and character and
base-character can be the same type.  If MCL didn't distinguish character
and base character then it would still return T to (or (vector char)
(vector base-char)), which would then be equivalent to (or (vector char)
(vector char)), which would be equivalent to (vector char).

So what am I missing?

BTW, it seems to me you (and many CLers) take a certain glee in posing
puzzles such as this.  I submit that you do the language a disservice by
doing so.  These sorts of puzzles IMO should not be considered features. 
If you think they are then you really should be programming in C++, which
is much more featureful than Lisp on such a metric.

E.
From: Paul F. Dietz
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <kS-dnaguv-giNGijXTWJhg@dls.net>
Erann Gat wrote:

> Well, I give up.
> 
> I see how (subtypep 'string '(vector character)) could return nil, but I
> don't see how it could possibly be required to in light of the fact that:
> 
> (subtypep 'string '(or (vector character) (vector base-character)))

<glee>You're not going to like this.</glee>

The definition of string is:

     a specialized vector whose elements are of type character or a subtype
     of type character.

Now, character and base-char are subtypes of character, but there's
another such standardized type:  nil.  And, as you may recall,
the rules for upgraded-array-element-type require that nil also be
a specialized array element type.  So, (vector nil) is a subtype
of string.


> BTW, it seems to me you (and many CLers) take a certain glee in posing
> puzzles such as this.  I submit that you do the language a disservice by
> doing so.  These sorts of puzzles IMO should not be considered features. 
> If you think they are then you really should be programming in C++, which
> is much more featureful than Lisp on such a metric.

Oh, piffle.  If one is testing anything one should approach it with the goal
of breaking it, or one will not do a good job.  Being a cruel bastard
(in a playful way) is a good thing here.  I think explicating precisely what
is required by the spec, determining if those requirements make sense and are
testable, and where implementations meet or do not meet these requirements,
can only make CL more useful.

	Paul
From: Erann Gat
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <gat-2206031012020001@192.168.1.51>
In article <······················@dls.net>, "Paul F. Dietz"
<·····@dls.net> wrote:

> Erann Gat wrote:
> 
> > Well, I give up.
> > 
> > I see how (subtypep 'string '(vector character)) could return nil, but I
> > don't see how it could possibly be required to in light of the fact that:
> > 
> > (subtypep 'string '(or (vector character) (vector base-character)))
> 
> <glee>You're not going to like this.</glee>

Indeed.

> The definition of string is:
> 
>      a specialized vector whose elements are of type character or a subtype
>      of type character.
> 
> Now, character and base-char are subtypes of character, but there's
> another such standardized type:  nil.  And, as you may recall,
> the rules for upgraded-array-element-type require that nil also be
> a specialized array element type.  So, (vector nil) is a subtype
> of string.

1.  No, I do not recall this, nor can I find any reference to it in the
hyperspec.  In fact, the string "NIL" does not appear in section 15.1.2.2
Required Kinds of Specialized Arrays.

2.  If this were true, then MCL would not be compliant in responding T to
(subtypep 'string '(or (vector character) (vector base-character)))

3.  Requiring that nil be a specialized array element type would be
extremely stupid, since such arrays would be useless, since they could not
contain any elements.

4.  MCL returns T for (upgraded-array-element-type nil), and Lispworks
generates an error saying (ARRAY NIL) is an illegal type specifier.

The evidence seems to be arrayed strongly against you.

> > BTW, it seems to me you (and many CLers) take a certain glee in posing
> > puzzles such as this.  I submit that you do the language a disservice by
> > doing so.  These sorts of puzzles IMO should not be considered features. 
> > If you think they are then you really should be programming in C++, which
> > is much more featureful than Lisp on such a metric.
> 
> Oh, piffle.  If one is testing anything one should approach it with the goal
> of breaking it, or one will not do a good job.  Being a cruel bastard
> (in a playful way) is a good thing here.  I think explicating precisely what
> is required by the spec, determining if those requirements make sense and are
> testable, and where implementations meet or do not meet these requirements,
> can only make CL more useful.

I think you should consider your audience.  If you're talking exclusively
to Lisp implementers then I might agree with you.  But this is a general
audience, which includes people who don't care about subtle implementation
issues, and are just interested in Lisp as a productivity tool.  To them
(subtypep 'string '(vector character)) --> NIL,T could appear to be sheer
brain damage.  (It seems that way to me at this point.)  I do not think it
is wise to risk giving such people the impression that the community
thinks this apparent brain damage is a feature without at least explaining
why up front.

E.
From: Paul F. Dietz
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <N_ydnU3M5NpQZGijXTWJkg@dls.net>
Erann Gat wrote:

>>Now, character and base-char are subtypes of character, but there's
>>another such standardized type:  nil.  And, as you may recall,
>>the rules for upgraded-array-element-type require that nil also be
>>a specialized array element type.  So, (vector nil) is a subtype
>>of string.
> 
> 
> 1.  No, I do not recall this, nor can I find any reference to it in the
> hyperspec.  In fact, the string "NIL" does not appear in section 15.1.2.2
> Required Kinds of Specialized Arrays.

(It was discussed in this newsgroup in the last year.)

It is a consequence of the rules for array element upgrading.
Section 15.1.2.1 states:

    Type upgrading implies a movement upwards in the type hierarchy
    lattice. A type is always a subtype of its upgraded array element
    type. Also, if a type Tx is a subtype of another type Ty, then the
    upgraded array element type of Tx must be a subtype of the upgraded
    array element type of Ty.

The page for UPGRADED-ARRAY-ELEMENT-TYPE states that

    If typespec is bit, the result is type equivalent to bit. If typespec
    is base-char, the result is type equivalent to base-char. If typespec
    is character, the result is type equivalent to character.

The requirement that (upgraded-array-element-type nil) be equivalent
to nil follows from these requirements (since: (uaet nil) must be a subtype
of (uaet 'bit) and of (uaet 'character), but those types are disjoint,
so then (uaet nil) must be empty.)
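The chain of constraints Paul describes can be checked directly at a REPL.  A
sketch (the final form assumes an implementation that follows this reading of
the spec; implementations are also permitted to answer NIL, NIL on AND-types):

```lisp
;; NIL is a subtype of every type, in particular of BIT and CHARACTER:
(subtypep nil 'bit)        ; => T, T
(subtypep nil 'character)  ; => T, T

;; BIT and CHARACTER are disjoint, so their intersection is empty:
(subtypep '(and bit character) nil)  ; most implementations answer T, T

;; (upgraded-array-element-type nil) must be a subtype of both upgraded
;; types, hence of their (empty) intersection, hence NIL itself:
(upgraded-array-element-type nil)    ; => NIL, where this reading is followed
```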

> 2.  If this were true, then MCL would not be compliant in responding T to
> (subtypep 'string '(or (vector character) (vector base-character)))

This is true.  MCL would not be compliant if it did that.

> 
> 3.  Requiring that nil be a specialized array element type would be
> extremely stupid, since such arrays would be useless, since they could not
> contain any elements.

Arguably true, but again irrelevant to whether the spec actually
requires this.

> 
> 4.  MCL returns T for (upgraded-array-element-type nil), and Lispworks
> generates an error saying (ARRAY NIL) is an illegal type specifier.
> 
> The evidence seems to be arrayed strongly against you.

Implementations do not dictate what the standard says.  The implementors
either didn't realize this was required, or did not see fit to follow the
spec here.


> I think you should consider your audience.  If you're talking exclusively
> to Lisp implementers then I might agree with you.  But this is a general
> audience, which includes people who don't care about subtle implementation
> issues, and are just interested in Lisp as a productivity tool.  To them
> (subtypep 'string '(vector character)) --> NIL,T could appear to be sheer
> brain damage. 

The issue is sufficiently esoteric that I doubt it would have any such effect.
If anything, such arguments could be taken as evidence that people actually
care enough about CL to be arguing these points, which is a positive thing.

	Paul
From: Christophe Rhodes
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sq1xxmur39.fsf@lambda.jcn.srcf.net>
···@jpl.nasa.gov (Erann Gat) writes:

> BTW, it seems to me you (and many CLers) take a certain glee is posing
> puzzles such as this.  I submit that you do the language a disservice by
> doing so.  These sorts of puzzles IMO should not be considered features. 
> If you think they are then you really should be programming in C++, which
> is much more featureful than Lisp on such a metric.

So instead we shouldn't bother reading the spec, relying instead on
the oral heritage of our ancestors?  Oh, please.  What's the point in
specifying a language, then?  

By all means, let's get real work done, but let's also examine the
language itself, if only that maybe future workers in language
standardization can avoid the pitfalls that are uncovered.

Christophe
-- 
http://www-jcsu.jesus.cam.ac.uk/~csr21/       +44 1223 510 299/+44 7729 383 757
(set-pprint-dispatch 'number (lambda (s o) (declare (special b)) (format s b)))
(defvar b "~&Just another Lisp hacker~%")    (pprint #36rJesusCollegeCambridge)
From: ·············@attbi.com
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <ptl6f025.fsf@attbi.com>
"Paul F. Dietz" <·····@dls.net> writes:

> ·············@attbi.com wrote:
>
>> Well, how about that.  I knew this was an implementation trick, but I
>> didn't think it was codified to that extent.
>
> ANSI CL has many interesting things that are not explicitly stated,
> but are a consequence of combinations of other things in the
> standard.
>
> For example: in a conforming implementation it must be the case that
>
>    (subtypep 'string '(vector character)) ==> nil, true
>
> Anyone who hasn't been following #lisp or some of the development
> mailing lists want to guess why that is?

I suspect it has something to do with type NIL.

But seeing as type string is *defined* as (vector character) in
section 16.2, `System Class STRING', I don't see why 'string and
'(vector character) aren't identical definitions.

If string is not a subtype of vector, then there must exist an object
that is a string, but not a vector.  What would that be?
From: Erann Gat
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <gat-2206031236410001@192.168.1.51>
In article <············@attbi.com>, ·············@attbi.com wrote:

> "Paul F. Dietz" <·····@dls.net> writes:
> 
> > ·············@attbi.com wrote:
> >
> >> Well, how about that.  I knew this was an implementation trick, but I
> >> didn't think it was codified to that extent.
> >
> > ANSI CL has many interesting things that are not explicitly stated,
> > but are a consequence of combinations of other things in the
> > standard.
> >
> > For example: in a conforming implementation it must be the case that
> >
> >    (subtypep 'string '(vector character)) ==> nil, true
> >
> > Anyone who hasn't been following #lisp or some of the development
> > mailing lists want to guess why that is?
> 
> I suspect it has something to do with type NIL.
> 
> But seeing as type string is *defined* as (vector character) in
> section 16.2, `System Class STRING', I don't see why 'string and
> '(vector character) aren't identical definitions.

It's because (vector character) does not mean (as one might intuitively
suppose) a vector constrained to hold only characters, but instead means a
vector that is required to be capable of holding characters (and possibly
other things as well).

> If string is not a subtype of vector, then there must exist an object
> that is a string, but not a vector.  What would that be?

STRING is a subtype of VECTOR, but (it is argued) not a subtype of (VECTOR
CHARACTER) because (VECTOR NIL) must be a subtype of STRING but cannot be
a subtype of (VECTOR CHARACTER).  This reasoning is obscure, but sound.

However, the argument hinges crucially on how one interprets the following
sentence:

" A string is a specialized vector whose elements are of type character or
a subtype of type character."

This could mean that string elements *may* be (some) sub-types of
CHARACTER, or it can be interpreted to mean that string elements *must* be
*all* sub-types of CHARACTER, including NIL.

Under the latter interpretation it is indeed true that (subtypep 'string
'(vector character)) must be NIL,T.  Under the former interpretation, the
result may be T or NIL depending on whether or not the implementation has
specialized versions of sub-types of STRINGs (e.g. base-strings or
simple-base-strings).
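To make the difference between the two readings concrete: in an implementation
that has a genuinely specialized BASE-STRING type, a base string is a STRING
but is not a (VECTOR CHARACTER), so the readings already diverge without any
appeal to (VECTOR NIL).  A hypothetical session (results vary by
implementation; some upgrade BASE-CHAR to CHARACTER, which collapses the
distinction):

```lisp
;; In an implementation with distinct base strings:
(type-of (make-array 3 :element-type 'base-char :initial-element #\a))
;; => (SIMPLE-BASE-STRING 3)

;; Array types with different upgraded element types are not subtypes
;; of one another:
(subtypep '(vector base-char) '(vector character))  ; => NIL, T there

;; So STRING, as a union over element types, is already not a subtype
;; of (VECTOR CHARACTER) in such an implementation:
(subtypep 'string '(vector character))              ; => NIL, T there
```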

IMO therefore the former interpretation is to be preferred, since on this
interpretation the call in question actually yields useful information,
whereas on the latter interpretation all you get is a means for language
lawyers to show off how much smarter they are than everybody else.

E.
From: Paul F. Dietz
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <RrGcnYFZPN9Lg2ujXTWJkg@dls.net>
Erann Gat wrote:

> However, the argument hinges crucially on how one interprets the following
> sentence:
> 
> " A string is a specialized vector whose elements are of type character or
> a subtype of type character."
> 
> This could mean that string elements *may* be (some) sub-types of
> CHARACTER, or it can be interpreted to mean that string elements *must* be
> *all* sub-types of CHARACTER, including NIL.


The paragraph under 'Compound Type Specifier' on the STRING page clearly
implies the latter:

    This denotes the union of all types (array c (size)) for all subtypes
    c of character; that is, the set of strings of size size.

	Paul
From: Erann Gat
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <gat-2206031518480001@192.168.1.51>
In article <······················@dls.net>, "Paul F. Dietz"
<·····@dls.net> wrote:

> Erann Gat wrote:
> 
> > However, the argument hinges crucially on how one interprets the following
> > sentence:
> > 
> > " A string is a specialized vector whose elements are of type character or
> > a subtype of type character."
> > 
> > This could mean that string elements *may* be (some) sub-types of
> > CHARACTER, or it can be interpreted to mean that string elements *must* be
> > *all* sub-types of CHARACTER, including NIL.
> 
> 
> The paragraph under 'Compound Type Specifier' on the STRING page clearly
> implies the latter:
> 
>     This denotes the union of all types (array c (size)) for all subtypes
>     c of character; that is, the set of strings of size size.
> 
>         Paul

No.  (array nil (size)) is the empty set, so the union of (array c (size))
is the same whether or not you include the case c==nil.

E.
From: Paul F. Dietz
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <8cWcnWVvk7lepGujXTWJjA@dls.net>
Erann Gat wrote:

> No.  (array nil (size)) is the empty set, so the union of (array c (size))
> is the same whether or not you include the case c==nil.

Why do you say (array nil (size)) is the empty set?  It's the set
of vectors of size size specialized to hold nothing.  You can't read from
or write into these vectors, but there's no reason you can't create them
(particularly if they have size zero.)
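For what it's worth, implementations that later adopted this reading do
exactly what is described here.  A sketch of the observable behavior, assuming
an implementation (such as later SBCL) that supports element type NIL:

```lisp
;; Creation is legal; only reading and writing are impossible:
(defvar *empty* (make-array 0 :element-type nil))

(array-element-type *empty*)            ; => NIL
(stringp *empty*)                       ; => T -- (vector nil) counts as a string
(subtypep 'string '(vector character))  ; => NIL, T

;; (aref *empty* 0) would be the error, not the MAKE-ARRAY call.
```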

	Paul
From: Erann Gat
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <gat-2206031749320001@192.168.1.51>
In article <······················@dls.net>, "Paul F. Dietz"
<·····@dls.net> wrote:

> Erann Gat wrote:
> 
> > No.  (array nil (size)) is the empty set, so the union of (array c (size))
> > is the same whether or not you include the case c==nil.
> 
> Why do you say (array nil (size)) is the empty set?  It's the set
> of vectors of size size specialized to hold nothing.

Not exactly.  It's the set of vectors of size size specialized to hold
members of the empty set.  But the empty set has no members, therefore
there can be no such arrays.

> You can't read from or write into these vectors

Only because they can't exist.

>  but there's no reason you can't create them

Yes, there is.  Section 15.1.1:

"An array contains a set of objects called elements that can be referenced
individually according to a rectilinear coordinate system."

It does not make any exceptions.  Therefore, if you can't reference its
elements, it can't be an array.

> (particularly if they have size zero.)

A student went to the master of the lambda nature and asked, "Is it true
that (subtypep 'string '(vector character)) must return NIL,T in a
conforming Common Lisp implementation?"  The master replied, "Here are two
grains of sand.  One of them is an elephant cage with a capacity of zero
elephants.  The other is a zebra cage with a capacity of zero zebras. 
Tell me which is which."  At that moment the student was enlightened.

E.
From: Paul F. Dietz
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <ThednTHu9_knxGujXTWJjA@dls.net>
Erann Gat wrote:

> Not exactly.  It's the set of vectors of size size specialized to hold
> members of the empty set.  But the empty set has no members, therefore
> there can be no such arrays.

Sure there can.  You can't read or write into them, but you can create them.

> "An array contains a set of objects called elements that can be referenced
> individually according to a rectilinear coordinate system."
> 
> It does not make any exceptions.  Therefore, if you can't reference its
> elements, it can't be an array.

So you're saying that the object returned by this call:

   (make-array '(10))

is not an array, since I can't legally reference its elements
(they have not been initialized).

An array with NIL element type is just like that array, except
that I can never set the elements so that they can later be read.

>>(particularly if they have size zero.)

   [ non sequitur deleted ]

Paul
From: Erann Gat
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <gat-2206032225180001@192.168.1.51>
In article <······················@dls.net>, "Paul F. Dietz"
<·····@dls.net> wrote:

> So you're saying that the object returned by this call:
> 
>    (make-array '(10))
> 
> is not an array, since I can't legally reference its elements
> (they have not been initialized).

Of course you can legally reference the contents of an uninitialized
array, it's just that the results of doing so are undefined.  Note, by the
way, that this implies that a conforming implementation could in fact dump
core if you call (make-array '(10)) at the top level without taking
certain precautions.  (I predict that such an implementation would not be
long-lived.)

> An array with NIL element type is just like that array, except
> that I can never set the elements so that they can later be read.

The master handed the student a box of chocolates and said, "This is an
elephant cage."  The student protested saying, "But it is full of
chocolates, not elephants.  And besides, it is nowhere near large enough
to hold an elephant."  The master replied, "It is just like an elephant
cage except that I can never put elephants in it so that they can later be
taken out."

Alas, the student was apparently not enlightened.  :-(

E.
From: sv0f
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <none-8106C9.13465824062003@news.vanderbilt.edu>
In article <····················@192.168.1.51>,
 ···@jpl.nasa.gov (Erann Gat) wrote:

>In article <······················@dls.net>, "Paul F. Dietz"
><·····@dls.net> wrote:
>
>> So you're saying that the object returned by this call:
>> 
>>    (make-array '(10))
>> 
>> is not an array, since I can't legally reference its elements
>> (they have not been initialized).

I vaguely recall a discussion here as to the legality
and utility of things like:

? (setq *a* (make-array '(0)))
#()

My understanding was that such arrays could legally be
created, and might even be useful for the non-element
information they carry:

? (array-element-type *a*)
T

[The preceding output from MCL 4.1.]

>The master handed the student a box of chocolates and said, "This is an
>elephant cage."  The student protested saying, "But it is full of
>chocolates, not elephants.  And besides, it is nowhere near large enough
>to hold an elephant."  The master replied, "It is just like an elephant
>cage except that I can never put elephants in it so that they can later be
>taken out."
>
>Alas, the student was apparently not enlightened.  :-(

Alas, I too am not enlightened.  Does this mean I can
eat the chocolates in ignorant bliss, without fear of
getting bits of elephant dung in my teeth?  ;-)
From: Erann Gat
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <gat-2406031524450001@k-137-79-50-101.jpl.nasa.gov>
In article <··························@news.vanderbilt.edu>, sv0f
<····@vanderbilt.edu> wrote:

> In article <····················@192.168.1.51>,
>  ···@jpl.nasa.gov (Erann Gat) wrote:
> 
> >In article <······················@dls.net>, "Paul F. Dietz"
> ><·····@dls.net> wrote:
> >
> >> So you're saying that the object returned by this call:
> >> 
> >>    (make-array '(10))
> >> 
> >> is not an array, since I can't legally reference its elements
> >> (they have not been initialized).
> 
> I vaguely recall a discussion here as to the legality
> and utility of things like:
> 
> ? (setq *a* (make-array '(0)))
> #()
> 
> My understanding was that such arrays could legally be
> created, and might even be useful for the non-element
> information they carry:
> 
> ? (array-element-type *a*)
> T
> 
> [The preceding output from MCL 4.1.]
> 

The utility and legality of arrays of size zero is not in dispute
(especially if they have fill pointers).  What is in dispute is the
utility and/or legality of arrays with element type NIL, that is, arrays
whose elements are constrained to be members of the empty set.

My position is 1) arrays of size zero with element type nil are arguably
legal but useless, and 2) arrays of size>0 with element type nil are
expressly prohibited on the grounds that the standard requires that "An
array contains a set of objects called elements that can be referenced
individually according to a rectilinear coordinate system."  An array with
element type NIL is the same as an array that is expressly prohibited from
containing any elements.  Such a thing is not an array for the same reason
that an integer that is both odd and even is not an integer
(notwithstanding the fact that its description begins with the phrase "an
integer").

> >The master handed the student a box of chocolates and said, "This is an
> >elephant cage."  The student protested saying, "But it is full of
> >chocolates, not elephants.  And besides, it is nowhere near large enough
> >to hold an elephant."  The master replied, "It is just like an elephant
> >cage except that I can never put elephants in it so that they can later be
> >taken out."
> >
> >Alas, the student was apparently not enlightened.  :-(
> 
> Alas, I too am not enlightened.  Does this mean I can
> eat the chocolates in ignorant bliss, without fear of
> getting bits of elephant dung in my teeth?  ;-)

The point is that an elephant cage that cannot contain elephants is not an
elephant cage, just as an array that cannot contain elements is not an
array.

Note the use of the qualifier "cannot" as opposed to "does not".  An
elephant cage that does not (but could) contain elephants is still an
elephant cage.  Paul Dietz advanced the argument that an array with
element type NIL is the same as an uninitialized array.  I dispute this. 
An array with element type NIL is much worse that uninitialized, it is
UNINITIALIZABLE!  The idea that an array with element type NIL is an array
is just as defensible (and by exactly the same reasoning) as the idea that
a box of chocolates is an elephant cage.

E.
From: Erann Gat
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <gat-2406031630240001@k-137-79-50-101.jpl.nasa.gov>
In article <····················@k-137-79-50-101.jpl.nasa.gov>,
···@jpl.nasa.gov (Erann Gat) wrote:

> An array with element type NIL is much worse that uninitialized, it is

That should, of course, read "much worse THAN uninitialized".

> UNINITIALIZABLE!  The idea that an array with element type NIL is an array
> is just as defensible (and by exactly the same reasoning) as the idea that
> a box of chocolates is an elephant cage.

L'esprit d'escalier strikes again:  An array with element type NIL is not
an array just as a Zen koan with a logical explanation is not a Zen koan.

E.
From: Kent M Pitman
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sfwbrwm3i9z.fsf@shell01.TheWorld.com>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <····················@k-137-79-50-101.jpl.nasa.gov>,
> ···@jpl.nasa.gov (Erann Gat) wrote:
> 
> > An array with element type NIL is much worse that uninitialized, it is
> 
> That should, of course, read "much worse THAN uninitialized".
> 
> > UNINITIALIZABLE!  The idea that an array with element type NIL is an array
> > is just as defensible (and by exactly the same reasoning) as the idea that
> > a box of chocolates is an elephant cage.
> 
> L'esprit d'escalier strikes again:  An array with element type NIL is not
> an array just as a Zen koan with a logical explanation is not a Zen koan.

There is a subtle difference between "uninitializable" and "uncreatable".
I don't see a problem creating such an array.  One should just make it so
that they can't be read or written.  I might implement this by making a
displaced array and displacing it to non-existent memory. ;)

I don't see why
 (length (list (make-array nil :element-type nil)))
shouldn't quietly return 1.  That is, these arrays may not be useful for
everything other arrays are, but they still make nice doorstops, 
paperweights, etc.
From: Erann Gat
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <gat-2406032220310001@192.168.1.51>
In article <···············@shell01.TheWorld.com>, Kent M Pitman
<······@world.std.com> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > In article <····················@k-137-79-50-101.jpl.nasa.gov>,
> > ···@jpl.nasa.gov (Erann Gat) wrote:
> > 
> > > An array with element type NIL is much worse that uninitialized, it is
> > 
> > That should, of course, read "much worse THAN uninitialized".
> > 
> > > UNINITIALIZABLE!  The idea that an array with element type NIL is an array
> > > is just as defensible (and by exactly the same reasoning) as the idea that
> > > a box of chocolates is an elephant cage.
> > 
> > L'esprit d'escalier strikes again:  An array with element type NIL is not
> > an array just as a Zen koan with a logical explanation is not a Zen koan.
> 
> There is a subtle difference between "uninitializable" and "uncreatable".

That's true.  But in light of the definition of array and array-total-size
I contend that an uninitializable array of size>0 would violate the
standard.  The case of size=0 is arguable but moot.

> I don't see a problem creating such an array.  One should just make it so
> that they can't be read or written.  I might implement this by making a
> displaced array and displacing it to non-existent memory. ;)

Well, it would violate the standard, and it would be totally useless. 
Other than that I don't see any problems either.

> I don't see why
>  (length (list (make-array nil :element-type nil)))
> shouldn't quietly return 1.

Because it would violate the standard to do so.  (Note that an array with
no dimensions contains one element.  Surely you knew that.)

Should (length (list (find-even-prime-greater-than-two))) be allowed to
quietly return 1?  How about (length (list (make-instance nil)))?

>  That is, these arrays may not be useful for
> everything other arrays are, but they still make nice doorstops, 
> paperweights, etc.

No, they don't.  They infest the language with all kinds of useless
special cases and other random stupidity.  If you need a doorstop do:

(defstruct doorstop)
(make-doorstop)

E.
From: Kent M Pitman
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sfwptl2vcsd.fsf@shell01.TheWorld.com>
···@jpl.nasa.gov (Erann Gat) writes:

> > I don't see why
> >  (length (list (make-array nil :element-type nil)))
> > shouldn't quietly return 1.
> 
> Because it would violate the standard to do so.  (Note that an array with
> no dimensions contains one element.  Surely you knew that.)

I don't see that it violates the standard to have an object of type ARRAY
that has element-type NIL and array-rank whatever and has dimensions and
maybe even a fill pointer that simply signals an error if you try to access 
it ...

> Should (length (list (find-even-prime-greater-than-two))) be allowed to
> quietly return 1?  How about (length (list (make-instance nil)))?

What a function should or shouldn't return is related to its contract.
Where does it say MAKE-ARRAY must be the one to signal an error?
Perhaps if left alone, no error would be required. 

> >  That is, these arrays may not be useful for
> > everything other arrays are, but they still make nice doorstops, 
> > paperweights, etc.
> 
> No, they don't.  They infest the language with all kinds of useless
> special cases and other random stupidity.  If you need a doorstop do:

This isn't a useless special case.  Perhaps the reason NIL is being used
is that some program put NIL there because it knew that a reference would
never occur.  The useless special case is to have to do:

 (defmacro define-frob (foo &key ...(result-type nil)...)
   `(progn ... 
      ,@(when result-type
          `((defvar ,foo (make-array 10 :element-type ,result-type))))
       ....))

rather than

(defmacro define-frob (foo &key ...(result-type nil)...)
   `(progn ... 
       (defvar ,foo
          ;; might never get used if result-type is NIL
          (make-array 10 :element-type ,result-type)) 
       ....))

Sometimes the naturally right result just falls out of failing to use 
something.

I'd be surprised if you can find a place in CLHS where it says "if an array
would be created with an element-type that would be illegal to access, the
implementation must aggressively signal an error at array creation time rather 
than signaling an error at the time of the illegal access."  But I've been
wrong before and so you might just surprise me.  Go ahead and cite your
best reference.
From: Paul F. Dietz
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <DiGdncVzn7gl6mSjXTWJhw@dls.net>
Kent M Pitman wrote:

> I'd be surprised if you can find a place in CLHS where it says "if an array
> would be created with an element-type that would be illegal to access, the
> implementation must aggressively signal an error at array creation time rather 
> than signaling an error at the time of the illegal access."  But I've been
> wrong before and so you might just surprise me.  Go ahead and cite your
> best reference.

I looked in the glossary under 'elephant' and 'chocolate', but it wasn't there.

	Paul
From: Erann Gat
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <gat-2506030842090001@192.168.1.51>
In article <······················@dls.net>, "Paul F. Dietz"
<·····@dls.net> wrote:

> Kent M Pitman wrote:
> 
> > I'd be surprised if you can find a place in CLHS where it says "if an array
> > would be created with an element-type that would be illegal to access, the
> > implementation must aggressively signal an error at array creation time rather 
> > than signaling an error at the time of the illegal access."  But I've been
> > wrong before and so you might just surprise me.  Go ahead and cite your
> > best reference.
> 
> I looked in the glossary under 'elephant' and 'chocolate', but it wasn't there.

The prohibition on upgrading type NIL isn't there either.

E.
From: Joe Marshall
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <ptl29u6k.fsf@ccs.neu.edu>
"Paul F. Dietz" <·····@dls.net> writes:

> Kent M Pitman wrote:
> 
> > I'd be surprised if you can find a place in CLHS where it says "if an array
> > would be created with an element-type that would be illegal to access, the
> > implementation must aggressively signal an error at array creation
> > time rather than signaling an error at the time of the illegal
> > access."  But I've been
> 
> > wrong before and so you might just surprise me.  Go ahead and cite your
> > best reference.
> 
> I looked in the glossary under 'elephant' and 'chocolate', but it wasn't there.

Of course not!  Arrays of type NIL can't hold those objects.

You have to *not* look under 'elephant' and 'chocolate'.
From: Erann Gat
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <gat-2506030841150001@192.168.1.51>
In article <···············@shell01.TheWorld.com>, Kent M Pitman
<······@world.std.com> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > > I don't see why
> > >  (length (list (make-array nil :element-type nil)))
> > > shouldn't quietly return 1.
> > 
> > Because it would violate the standard to do so.  (Note that an array with
> > no dimensions contains one element.  Surely you knew that.)
> 
> I don't see that it violates the standard to have an object of type ARRAY
> that has element-type NIL and array-rank whatever and has dimensions and
> maybe even a fill pointer that simply signals an error if you try to access 
> it ...

Why should it signal an error?  Since such an array would necessarily be
uninitialized the consequences of accessing it are undefined (specified in
the description of make-array).  That means it might signal an error, or
it might return a value, or it might dump core, or it might enter an
infinite loop.

> > Should (length (list (find-even-prime-greater-than-two))) be allowed to
> > quietly return 1?  How about (length (list (make-instance nil)))?
> 
> What a function should or shouldn't return is related to its contract.
> Where does it say MAKE-ARRAY must be the one to signal an error.

Who said anything about an error?  (find-even-prime-greater-than-two)
would not return an error, it would enter an infinite loop.

> Perhaps if left alone, no error would be required.

Yes.  This applies equally well to both of the examples I cited.  They
were very carefully chosen to make this point.  (make-instance nil)
results in an error because nil is a type, not a standard class, but
stretch your imagination a bit: why should (make-instance nil) not be
allowed to return (values)?

> > >  That is, these arrays may not be useful for
> > > everything other arrays are, but they still make nice doorstops, 
> > > paperweights, etc.
> > 
> > No, they don't.  They infest the language with all kinds of useless
> > special cases and other random stupidity.  If you need a doorstop do:
> 
> This isn't a useless special case.  Perhaps the reason NIL is being used
> is that some program put NIL there because it knew that a reference would
> never occur.

If the program knew the reference would never occur why did it bother to
make the array?

> The useless special case is to have to do:
> 
>  (defmacro define-frob (foo &key ...(result-type nil)...)
>    `(progn ... 
>       ,@(when result-type
>           `((defvar ,foo (make-array 10 :element-type ,result-type))))
>        ....))
> 
> rather than
> 
> (defmacro define-frob (foo &key ...(result-type nil)...)
>    `(progn ... 
>        (defvar ,foo
>           ;; might never get used if result-type is NIL
>           (make-array 10 :element-type ,result-type)) 
>        ....))

All you've done is move the special case (or the error) out of the code
that creates the array to the code that references it (which you
conveniently didn't show).  (And if there is no code that references the
array then you really should have used (make-doorstop) instead.)

BTW, I'd be really astonished if there were real code out there that had a
legitimate reason for having a result-type default to (or even be) nil.

> Sometimes the naturally right result just falls out of failing to use 
> something.

Yes, and if you pursue this idea to its logical conclusion you end up with
Haskell.

> I'd be surprised if you can find a place in CLHS where it says "if an array
> would be created with an element-type that would be illegal to access, the
> implementation must aggressively signal an error at array creation time rather 
> than signaling an error at the time of the illegal access."  But I've been
> wrong before and so you might just surprise me.  Go ahead and cite your
> best reference.

I'd be surprised if you can find a place in the spec where it says that
the type NIL cannot be upgraded, and yet it is true.  It is, as Paul
pointed out to start this whole discussion, a logical consequence of other
things that the spec says.

Likewise, the spec does not say that an array cannot be created with
element type nil.  This is a logical consequence of other things the spec
says.

BTW, your paraphrase of my position is a straw man.  I never said that the
spec requires that "if an array would be created with an element-type that
would be illegal to access, the implementation must aggressively signal an
error at array creation time rather than signaling an error at the time of
the illegal access."

In fact, the phrase "an element type that would be illegal to access" is
nonsensical.  You don't access the element type, you access the PLACES
where those elements are stored.  The element type is a constraint on the
type of element that you may put there (and therefore a constraint on the
type of element that you may find there).  But places don't have types,
only the elements in those places have types.

The illegality of an element type of nil follows from the definition of
array-total-size.  This definition stipulates unconditionally that the
number of elements in an array is the product of the array's dimensions. 
It follows logically from this that the elements of any array with size>0
must be members of a non-empty set, and therefore the element type cannot
be nil.

(Strictly speaking, the upgraded-array-element-type cannot be nil, but
since the nil type cannot be upgraded these amount to the same thing.)

BTW, I can also argue for my position based on accepted practice.  None of
MCL, LispWorks, or CLISP handles this "correctly" on your and Paul's view. 
MCL upgrades NIL to T.  LispWorks signals an error, saying (array nil) is
an illegal type.  CLISP upgrades NIL to BIT.  I don't hear very many
people clamoring for this to change.

E.
From: Christophe Rhodes
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sq4r2ennmk.fsf@lambda.jcn.srcf.net>
···@jpl.nasa.gov (Erann Gat) writes:

> Yes.  This applies equally well to both of the examples I cited.  They
> were very carefully chosen to make this point.  (make-instance nil)
> results in an error because nil is a type, not a standard class, but
> stretch your imagination a bit: why should (make-instance nil) not be
> allowed to return (values)?

Because (values) is not of type NIL.

> [ Kent: ]
>> This isn't a useless special case.  Perhaps the reason NIL is being used
>> is that some program put NIL there because it knew that a reference would
>> never occur.
>
> If the program knew the reference would never occur why did it bother to
> make the array?

Do you ever put assertions in your code?  Do you ever put assertions
in your generated code?  I know I do.

> BTW, I'd be really astonished if there were real code out there that had a
> legitimate reason for having a result-type default to (or even be) nil.

I'm inclined to agree here, given that as far as I know no vendor has
ever received a bug report on the absence of arrays specialized on NIL
(but do speak up if you have!).  I don't really buy the idea of (ARRAY
NIL) objects as poison pills, enabling one to trace the location of an
error, either.

On the other hand, I question your assertion that (ARRAY NIL) is a
wart.  If we're going to argue aesthetics (and, let's face it, that's
all we can argue if we're going to agree that no-one is likely to use
these things except in test suites) then I think the case for the
absence of this type is quite tricky to make.  Or in other words, I
invite you to try to specify a replacement for 15.1.2.1.

Cheers,

Christophe
-- 
http://www-jcsu.jesus.cam.ac.uk/~csr21/       +44 1223 510 299/+44 7729 383 757
(set-pprint-dispatch 'number (lambda (s o) (declare (special b)) (format s b)))
(defvar b "~&Just another Lisp hacker~%")    (pprint #36rJesusCollegeCambridge)
From: Erann Gat
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <gat-2506030956030001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@lambda.jcn.srcf.net>, Christophe Rhodes
<·····@cam.ac.uk> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > Yes.  This applies equally well to both of the examples I cited.  They
> > were very carefully chosen to make this point.  (make-instance nil)
> > results in an error because nil is a type, not a standard class, but
> > stretch your imagination a bit: why should (make-instance nil) not be
> > allowed to return (values)?
> 
> Because (values) is not of type NIL.

Of course it is.

? (subtypep (values) nil)
T
T


> On the other hand, I question your assertion that (ARRAY NIL) is a
> wart.

I do not claim it is a wart.  I claim it would be a wart if it were
allowed, but it isn't, so it isn't.

> If we're going to argue aesthetics (and, let's face it, that's
> all we can argue if we're going to agree that no-one is likely to use
> these things except in test suites) then I think the case for the
> absence of this type is quite tricky to make.  Or in other words, I
> invite you to try to specify a replacement for 15.1.2.1.

I don't think we need a replacement.  I think it's just hunky dory the way
it is.  (Well, I don't understand the rationale for the restriction that
prevents nil from being upgraded, but I'm willing to take it on faith that
someone who understands these things better than I do had a good reason
for it.)

E.
From: Christophe Rhodes
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sqwufam3ys.fsf@lambda.jcn.srcf.net>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <··············@lambda.jcn.srcf.net>, Christophe Rhodes
> <·····@cam.ac.uk> wrote:
>
>> ···@jpl.nasa.gov (Erann Gat) writes:
>> 
>> > Yes.  This applies equally well to both of the examples I cited.  They
>> > were very carefully chosen to make this point.  (make-instance nil)
>> > results in an error because nil is a type, not a standard class, but
>> > stretch your imagination a bit: why should (make-instance nil) not be
>> > allowed to return (values)?
>> 
>> Because (values) is not of type NIL.
>
> Of course it is.
>
> ? (subtypep (values) nil)
> T
> T

*boggle* Of course it isn't:

* (typep (values) nil)
NIL

Of course the value of (VALUES) is subtypep NIL, because the value of
(VALUES) [in a single-value context] _is_ NIL.

(make-instance 'foo) returns an object of type FOO.  By your
hypothesized extension, (make-instance nil) returns an object of type
NIL.  There are none; ergo, (make-instance nil) must signal an error,
even in your world with an extended MAKE-INSTANCE.

Christophe
-- 
http://www-jcsu.jesus.cam.ac.uk/~csr21/       +44 1223 510 299/+44 7729 383 757
(set-pprint-dispatch 'number (lambda (s o) (declare (special b)) (format s b)))
(defvar b "~&Just another Lisp hacker~%")    (pprint #36rJesusCollegeCambridge)
From: Erann Gat
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <gat-2506031133170001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@lambda.jcn.srcf.net>, Christophe Rhodes
<·····@cam.ac.uk> wrote:

> >> Because (values) is not of type NIL.
> >
> > Of course it is.
> >
> > ? (subtypep (values) nil)
> > T
> > T
> 
> *boggle* Of course it isn't:
> 
> * (typep (values) nil)
> NIL

But you're not testing the value of (values), you're testing the
"upgraded" value of (values) when used in a context that expects a value. 
That value is NIL, which is indeed not of type NIL, it is of type NULL,
which is not the same thing.

> Of course the value of (VALUES) is subtypep NIL, because the value of
> (VALUES) [in a single-value context] _is_ NIL.

No.  The value of (values) is <no values>.  (Look it up.)  When <no
values> is used in a context where a value is expected, the value NIL is
substituted instead of generating an error.  But this does not mean that
<no values> and nil are the same thing:

(multiple-value-list (values)) --> ()
(multiple-value-list nil) --> (nil)

Note by the way that one way to justify the fact that (car nil) is nil is
to argue that (car nil) is *actually* <no values>, and that this gets
"upgraded" to nil whenever this (non-)value is actually used.
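
The difference is visible at any conforming REPL (a sketch; only the
single-value coercion makes <no values> look like NIL):

```lisp
;; <no values> is coerced to NIL only when exactly one value is demanded:
(let ((x (values))) x)                  ; => NIL
;; contexts that accept any number of values see the true count:
(multiple-value-call #'list (values))   ; => ()
(length (multiple-value-list (values))) ; => 0
```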

BTW, one way to resolve this dispute is to change the standard (don't
shoot me!) to stipulate that (upgraded-array-element-type nil) can be
NULL, notwithstanding the rules that currently prohibit this from
happening.

I am actually surprised now that no one on the other side of this argument
has yet suggested something like the following as the correct behavior for
arrays with element type nil:

(setf x (make-array nil :element-type nil))
(setf (aref x) (values))
(aref x) --> <no values>

(If you find yourself tempted to make this argument, explain to me why you
are not actually attempting to set the value to NIL in the second step.)

> (make-instance 'foo) returns an object of type FOO.  By your
> hypothesized extension, (make-instance nil) returns an object of type
> NIL.  There are none; ergo, (make-instance nil) must signal an error,
> even in your world with an extended MAKE-INSTANCE.

In a language where every function had to return a value I would agree
with you.  But in Common Lisp functions may return zero values.  I contend
that returning zero values would be a perfectly legitimate (and in fact
the only reasonable) behavior for (make-instance nil) if it were not (as
it is) a priori prohibited.

E.
From: Erann Gat
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <gat-2506031139480001@k-137-79-50-101.jpl.nasa.gov>
In article <····················@k-137-79-50-101.jpl.nasa.gov>,
···@jpl.nasa.gov (Erann Gat) wrote:

> Note by the way that one way to justify the fact that (car nil) is nil is
> to argue that (car nil) is *actually* <no values>, and that this gets
> "upgraded" to nil whenever this (non-)value is actually used.

Just to clarify: I did not mean that this argument applies in Common
Lisp.  (It doesn't, because if it did (multiple-value-list (car nil))
would be (), not (nil).)  I meant that this narrative could be used in a
hypothetical dialect of Lisp where one wished for (car nil) to not
generate an error while preserving the invariant (equal x (cons (car x)
(cdr x))) for all x, including nil.

E.
From: Paul F. Dietz
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <_P-dnTtt-cnioWejXTWJhw@dls.net>
Erann Gat wrote:

>>>? (subtypep (values) nil)
>>>T
>>>T
>>
>>*boggle* Of course it isn't:
>>
>>* (typep (values) nil)
>>NIL
> 
> 
> But you're not testing the value of (values), you're testing the
> "upgraded" value of (values) when used in a context that expects a value. 
> That value is NIL, which is indeed not of type NIL, it is of type NULL,
> which is not the same thing.

(values) is not a legal type specifier here.  The (values) type
specifier cannot be used with subtypep or typep, but 'only as the value-type
in a function type specifier or a THE special form.'

(values) as you used it produces the value NIL.  subtypep is being
passed two arguments, NIL and NIL.  It is not being passed a 'no values
value' and NIL.

	Paul
From: Christophe Rhodes
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sq4r2d7vv0.fsf@lambda.jcn.srcf.net>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <··············@lambda.jcn.srcf.net>, Christophe Rhodes
> <·····@cam.ac.uk> wrote:
>
>> >> Because (values) is not of type NIL.
>> >
>> > Of course it is.
>> >
>> > ? (subtypep (values) nil)
>> > T
>> > T
>> 
>> *boggle* Of course it isn't:
>> 
>> * (typep (values) nil)
>> NIL
>
> But you're not testing the value of (values), you're testing the
> "upgraded" value of (values) when used in a context that expects a value. 
> That value is NIL, which is indeed not of type NIL, it is of type NULL,
> which is not the same thing.

And just what did you think your SUBTYPEP thing was doing?  Hint:
what's the difference between what you typed and (subtypep '(values)
nil)?  [ Note that the latter isn't portable CL, though your
implementation will probably do operations like it internally. ] How
was your SUBTYPEP form relevant _in any way_ to your argument?

>> Of course the value of (VALUES) is subtypep NIL, because the value of
>> (VALUES) [in a single-value context] _is_ NIL.
>
> No.  The value of (values) is <no values>.  (Look it up.)  When <no
> values> is used in a context where a value is expected, the value NIL is
> substituted instead of generating an error.  

Yes, this is what I said, with a slight shift in emphasis.

> But this does not mean that <no values> and nil are the same thing:
>
> (multiple-value-list (values)) --> ()
> (multiple-value-list nil) --> (nil)

No argument on this point.

> BTW, one way to resolve this dispute is to change the standard (don't
> shoot me!) to stipulate that (upgraded-array-element-type nil) can be
> NULL, notwithstanding the rules that currently prohibit this from
> happening.

Well, that might be one way of doing it, though I'd like to know why
you want to mandate that every implementation have arrays specialized
to hold objects of the NULL type.  A better solution might be to make
an exception for the empty type: that is, to specify that it upgrades
to BIT.  Since this discussion started as a discussion of
probably-unintended consequences of the current standard, no-one is
going to shoot you for exploring what might have been done better.  At
least, I hope not.

>> (make-instance 'foo) returns an object of type FOO.  By your
>> hypothesized extension, (make-instance nil) returns an object of type
>> NIL.  There are none; ergo, (make-instance nil) must signal an error,
>> even in your world with an extended MAKE-INSTANCE.
>
> In a language where every function had to return a value I would agree
> with you.  But in Common Lisp functions may return zero values.  I contend
> that returning zero values would be a perfectly legitimate (and in fact
> the only reasonable) behavior for (make-instance nil) if it were not (as
> it is) a priori prohibited.

No, it isn't the only reasonable behaviour.  Functions that signal
non-continuable errors have a return type of NIL; since they never
return, they never return values, be it in a single-value context or
not.  Thus signalling errors is correct always for contexts expecting
NIL types; returning <no values>, because of the single-value
"misinterpretation" (if you like), is less good.

Christophe
-- 
http://www-jcsu.jesus.cam.ac.uk/~csr21/       +44 1223 510 299/+44 7729 383 757
(set-pprint-dispatch 'number (lambda (s o) (declare (special b)) (format s b)))
(defvar b "~&Just another Lisp hacker~%")    (pprint #36rJesusCollegeCambridge)
From: Erann Gat
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <gat-2506031403110001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@lambda.jcn.srcf.net>, Christophe Rhodes
<·····@cam.ac.uk> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > In article <··············@lambda.jcn.srcf.net>, Christophe Rhodes
> > <·····@cam.ac.uk> wrote:
> >
> >> >> Because (values) is not of type NIL.
> >> >
> >> > Of course it is.
> >> >
> >> > ? (subtypep (values) nil)
> >> > T
> >> > T
> >> 
> >> *boggle* Of course it isn't:
> >> 
> >> * (typep (values) nil)
> >> NIL
> >
> > But you're not testing the value of (values), you're testing the
> > "upgraded" value of (values) when used in a context that expects a value. 
> > That value is NIL, which is indeed not of type NIL, it is of type NULL,
> > which is not the same thing.
> 
> And just what did you think your SUBTYPEP thing was doing?  Hint:
> what's the difference between what you typed and (subtypep '(values)
> nil)?  [ Note that the latter isn't portable CL, though your
> implementation will probably do operations like it internally. ] How
> was your SUBTYPEP form relevant _in any way_ to your argument?

Doh, my mistake.

? (subtypep '(values) nil)
> Error: VALUES type illegal in this context:
>          (VALUES)
> While executing: CCL::SPECIFIER-TYPE
> Type Command-. to abort.
See the Restarts menu item for further choices.
1 > 

One could still argue that, if one wanted to make this legal, the
answer should be T.  CLISP is agnostic on this issue:

[1]> (subtypep '(values) nil)
NIL ;
NIL

I believe MCL's behavior is in fact correct, because the (values ...)
typespec may only be used in certain contexts, and subtypep isn't one
of them.

My Lispworks seems to have suddenly stopped working (it was a beta, so the
licence probably expired) so I can't test it any more.

> Well, that might be one way of doing it, though I'd like to know why
> you want to mandate that every implementation have arrays specialized
> to hold objects of the NULL type.  A better solution might be to make
> an exception for the empty type: that is, to specify that it upgrades
> to BIT.

That would be fine with me.  What I think should be avoided is the idea
that implementations are required to support (array nil) as a
non-upgradeable sub-type of string.  Another possible solution is to
simply get rid of the NIL type.  It seems to serve no purpose other than
to create confusion.
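
The divergence described in this thread is easy to check (behavior as
reported above for the implementations named; results vary):

```lisp
;; What does the implementation upgrade the empty type to?
(upgraded-array-element-type nil)
;; MCL        => T    (upgrades to the universal type)
;; CLISP      => BIT
;; LispWorks  signals an error: (array nil) is an illegal type.
```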

> Since this discussion started as a discussion of
> probably-unintended consequences of the current standard, no-one is
> going to shoot you for exploring what might have been done better.  At
> least, I hope not.

I have been pilloried in the past for conducting such explorations.  But
in this case I really think no change is needed.  The current standard I
think can be reasonably interpreted as prohibiting a non-upgradeable
(array nil), and this is consistent with current practice.  Nothing is
broken (except perhaps some people's reasoning skills, but that can't be
fixed by changing the standard anyway.)

> >> (make-instance 'foo) returns an object of type FOO.  By your
> >> hypothesized extension, (make-instance nil) returns an object of type
> >> NIL.  There are none; ergo, (make-instance nil) must signal an error,
> >> even in your world with an extended MAKE-INSTANCE.
> >
> > In a language where every function had to return a value I would agree
> > with you.  But in Common Lisp functions may return zero values.  I contend
> > that returning zero values would be a perfectly legitimate (and in fact
> > the only reasonable) behavior for (make-instance nil) if it were not (as
> > it is) a priori prohibited.
> 
> No, it isn't the only reasonable behaviour.  Functions that signal
> non-continuable errors have a return type of NIL; since they never
> return, they never return values, be it in a single-value context or
> not.  Thus signalling errors is correct always for contexts expecting
> NIL types; returning <no values>, because of the single-value
> "misinterpretation" (if you like), is less good.

Well now, this brings up an interesting issue.  Is there a difference
between having a return type of NIL and not having a return type at all? 
(If a tree falls in the forest and there's no one there to hear...)  I
would say that yes, there is a difference, and that this difference is
significant in cases like this:

(defun foo ()
  (progn
    (function-having-return-type-of-nil)
    (baz)))

What is the return type of foo?  If having a return type of nil is the
same as a non-continuable error then the return type of foo is nil.  If
having a return type of nil means returning (values) then the return type
of foo is the return type of baz.  IMO the latter interpretation has more
utility.

E.
From: Christophe Rhodes
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sq3chxz1ay.fsf@lambda.jcn.srcf.net>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <··············@lambda.jcn.srcf.net>, Christophe Rhodes
> <·····@cam.ac.uk> wrote:
>
>> And just what did you think your SUBTYPEP thing was doing?  Hint:
>> what's the difference between what you typed and (subtypep '(values)
>> nil)?  [ Note that the latter isn't portable CL, though your
>> implementation will probably do operations like it internally. ] How
>> was your SUBTYPEP form relevant _in any way_ to your argument?
>
> Doh, my mistake.
>
> ? (subtypep '(values) nil)
>> Error: VALUES type illegal in this context:
>>          (VALUES)
>> While executing: CCL::SPECIFIER-TYPE
>> Type Command-. to abort.
> See the Restarts menu item for further choices.
> 1 > 
>
> One could still argue that if one were to want to make this legal that the
> answer should be T.  CLisp is agnostic on this issue:

One could, but then at least some implementors who go to some effort
to use the CL type system would be very unhappy.  NIL is a type for
functions which do not return, and is distinct from the type
(VALUES).  Why?  Well, let's explain by considering
  (defun two-arg-+ (x y)
    (if (and (typep x 'number) (typep y 'number))
        (+ x y) ; open-coded
        (error 'type-error ...)))
and
  (defun bogoid (x y)
    (if (and (typep x 'number) (typep y 'number))
        (+ x y) ; open-coded
        (values)))

Let's look at the compiler's view of these functions.  TWO-ARG-+ has a
type return of the union of the returns of its branches: NUMBER and
whatever type we assign to ERROR; similarly, BOGOID has a type return
of the union of NUMBER and whatever type we assign to (VALUES).  Why
should these types be distinct? Well, consider
  (defun foo (x y)
    (values (typep (two-arg-+ x y) 'number)
            (typep (bogoid x y) 'number)))

The first return value from FOO, as I hope is clear to readers, will
always[1] be true.  The second return value might be true or false.
So whatever types we assign to TWO-ARG-+ and BOGOID, we would like
them to be distinct, since the functions have distinct type behaviour.

Further, the union of NUMBER and what we assign to ERROR, by this
reasoning, had better be type-equivalent to NUMBER, meaning that the
type we assign to ERROR must be a subtype of NUMBER.  By similar
reasoning with other types, we can show that the type assigned to
ERROR must be the universal subtype.  So the function ERROR has return
type NIL.  Rather than invent a new type for (VALUES), it seems
reasonable to represent said type as simply (VALUES), but it is a
distinct type and is not NIL.
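
A minimal sketch of that derivation (hypothetical function names; the
type comments reflect what a type-deriving compiler might infer):

```lisp
(defun fail ()
  (error "never returns normally"))  ; derived return type: NIL

(defun careful-+ (x y)
  (if (and (numberp x) (numberp y))
      (+ x y)     ; branch type: NUMBER
      (fail)))    ; branch type: NIL
;; union of branches: (OR NUMBER NIL) = NUMBER, so callers may assume
;; the result is a NUMBER whenever CAREFUL-+ returns at all.

(defun sloppy-+ (x y)
  (if (and (numberp x) (numberp y))
      (+ x y)     ; branch type: NUMBER
      (values)))  ; branch type: (VALUES) -- distinct from NIL!
;; union: callers may NOT assume a NUMBER; in a single-value context
;; the second branch shows up as the value NIL.
```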

To go back to your example, which I've snipped,
  (defun bar ()
    (progn
      (nil-function)
      (baz)))
since PROGN is a sequencing operation, the return type of BAR is the
return type of BAZ _provided NIL-FUNCTION returns_.  The use of NIL as
a function return type for functions that do not return allows us to
reason about this case.

* (sb-kernel:csubtypep (sb-kernel:values-specifier-type '(values)) 
                       (sb-kernel:specifier-type 'nil))
NIL
T

(I'm guessing that your examples were from OpenMCL; if so, then
replace sb-kernel: with ccl:: to see what that thinks of it).

Christophe

[1] if the functions are defined in the same file, or the compiler has
otherwise been given latitude to treat TWO-ARG-+ as inline.
-- 
http://www-jcsu.jesus.cam.ac.uk/~csr21/       +44 1223 510 299/+44 7729 383 757
(set-pprint-dispatch 'number (lambda (s o) (declare (special b)) (format s b)))
(defvar b "~&Just another Lisp hacker~%")    (pprint #36rJesusCollegeCambridge)
From: Erann Gat
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <gat-2606030821300001@192.168.1.51>
In article <··············@lambda.jcn.srcf.net>, Christophe Rhodes
<·····@cam.ac.uk> wrote:

> > ? (subtypep '(values) nil)
> >> Error: VALUES type illegal in this context:
> >>          (VALUES)
> >> While executing: CCL::SPECIFIER-TYPE
> >> Type Command-. to abort.
> > See the Restarts menu item for further choices.
> > 1 > 
> >
> > One could still argue that if one were to want to make this legal that the
> > answer should be T.  CLisp is agnostic on this issue:
> 
> One could, but then at least some implementors who go to some effort
> to use the CL type system would be very unhappy.  NIL is a type for
> functions which do not return, and is distinct from the type
> (VALUES).  Why?

[argument snipped]

OK, I'm convinced, but I suspect Kent will be unhappy, because this
argument can be extended to argue that (make-array nil :element-type nil)
must signal an error.

From the definition of array-total-size we can infer that an array of size X
and element type Y must contain X elements of type Y.  (Note that this is
true regardless of whether the array was explicitly initialized.  One can
infer from the definition of array-total-size an implicit requirement that
all arrays be initialized by the system to something, even if the system
has to choose an initialization value or values itself.  AFAIK, all
implementations of Common Lisp do do this.)

Therefore, when constructing an array of size X and type Y the system must
have constructed (at least implicitly) X elements of type Y.  If it tried
to construct >0 elements of type NIL then it must have signalled an error
(or entered an infinite loop I suppose).

E.
From: Kent M Pitman
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sfwfzlwokw4.fsf@shell01.TheWorld.com>
···@jpl.nasa.gov (Erann Gat) writes:

> when constructing an array of size X and type Y the system must
> have constructed (at least implicitly) X elements of type Y.  If it tried
> to construct >0 elements of type NIL then it must have signalled an error
> (or entered an infinite loop I suppose).

This neglects the possibility of uninitialized arrays.

There is no necessary requirement that constructing places for these items
forces you to create such items.

 "If initial-contents is not supplied, the consequences of later
  reading an uninitialized element of new-array are undefined unless
  either initial-element is supplied or displaced-to is non-nil."
From: Pascal Costanza
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <bdf6d0$ila$1@f1node01.rhrz.uni-bonn.de>
Erann Gat wrote:

> From the definition of array-total-size we can infer that an array of size X
> and element type Y must contain X elements of type Y.

I have been following this discussion only very loosely, so I am not 
completely sure if I am on topic.

Anyway, I think that the definition of array-total-size doesn't imply 
that the array must contain any well-defined elements.

The glossary defines it like this:

array total size n. the total number of elements in an array, computed 
by taking the product of the dimensions of the array.

and:

dimension n. 1. a non-negative integer indicating the number of objects 
an array can hold along one axis.

So the definition of dimension seems to imply that it's sufficient that 
an array just has the potential to hold a specific number of elements - 
it doesn't need to actually hold them.


Furthermore, the spec of make-array suggests that undefined elements are 
explicitly allowed. Both in the description of :initial-contents and 
:initial-element it is stated that

"If initial-element/initial-contents is not supplied, the consequences 
of later reading an uninitialized element of new-array are undefined [...]"

This seems to be reasonable because it allows for efficient allocation 
of arrays.


Together, these definitions would support the notion that an array with 
element type NIL can exist as long as its elements are never initialized.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Rob Warnock
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <mLmdnSy0i44HT2ejXTWc-g@speakeasy.net>
Christophe Rhodes  <·····@cam.ac.uk> wrote:
+---------------
| ···@jpl.nasa.gov (Erann Gat) writes:
| > BTW, one way to resolve this dispute is to change the standard (don't
| > shoot me!) to stipulate that (upgraded-array-element-type nil) can be
| > NULL, notwithstanding the rules that currently prohibit this from
| > happening.
| 
| Well, that might be one way of doing it, though I'd like to know why
| you want to mandate that every implementation have arrays specialized
| to hold objects of the NULL type.  A better solution might be to make
| an exception for the empty type: that is, to specify that it upgrades
| to BIT.
+---------------

Oh, you mean the way CMUCL[1] does?  ;-}  ;-}

	cmucl> (upgraded-array-element-type nil)
	BIT
	cmucl> 


-Rob

[1] cmucl-18e, if it matters.

-----
Rob Warnock, PP-ASEL-IA		<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Christophe Rhodes
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sqy8zpxf4j.fsf@lambda.jcn.srcf.net>
····@rpw3.org (Rob Warnock) writes:

> Christophe Rhodes  <·····@cam.ac.uk> wrote:
> +---------------
> | Well, that might be one way of doing it, though I'd like to know why
> | you want to mandate that every implementation have arrays specialized
> | to hold objects of the NULL type.  A better solution might be to make
> | an exception for the empty type: that is, to specify that it upgrades
> | to BIT.
> +---------------
>
> Oh, you mean the way CMUCL[1] does?  ;-}  ;-}
>
> 	cmucl> (upgraded-array-element-type nil)
> 	BIT
> 	cmucl> 

Look a bit deeper.

cmucl> (upgraded-array-element-type nil)
BIT
cmucl> (type-of (make-array 0 :element-type nil))
(SIMPLE-BASE-STRING 0)

I don't think that cmucl's behaviour on this issue is the result of
deep consideration :-)

For what it's worth, I've been doing some exploratory programming on
this issue, and the preliminary results indicate that it doesn't seem
to be an intolerable burden for sbcl to support the letter of the
standard.  At this stage, of course, I haven't benchmarked my changes
against all possible workloads, nor, probably, have all of the corner
cases been properly explored.  But in my development branch, which
will probably be merged onto CVS HEAD for wider testing shortly:

* (make-array 0 :element-type nil)
""
* (type-of *)
(SIMPLE-ARRAY NIL (0))
* (intern **)
||
NIL
* (eq * (intern (make-array 0 :element-type 'base-char)))
T
* (char (make-array 4 :element-type nil) 2)

debugger invoked on condition of type SB-KERNEL:NIL-ARRAY-ACCESSED-ERROR:
  An attempt to access an array of element-type NIL was made.  Congratulations!
* (setf (schar (make-array 4 :element-type nil) 1) #\Space)

debugger invoked on condition of type TYPE-ERROR:
  The value #\Space is not of type NIL.

Christophe
-- 
http://www-jcsu.jesus.cam.ac.uk/~csr21/       +44 1223 510 299/+44 7729 383 757
(set-pprint-dispatch 'number (lambda (s o) (declare (special b)) (format s b)))
(defvar b "~&Just another Lisp hacker~%")    (pprint #36rJesusCollegeCambridge)
From: Rob Warnock
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <UxudnWtyFtCKQWejXTWc-w@speakeasy.net>
Christophe Rhodes  <·····@cam.ac.uk> wrote:
+---------------
| ····@rpw3.org (Rob Warnock) writes:
| > Oh, you mean the way CMUCL[1] does?  ;-}  ;-}
| > 	cmucl> (upgraded-array-element-type nil)
| > 	BIT
| > 	cmucl> 
| 
| Look a bit deeper.
| 
| cmucl> (upgraded-array-element-type nil)
| BIT
| cmucl> (type-of (make-array 0 :element-type nil))
| (SIMPLE-BASE-STRING 0)
| 
| I don't think that cmucl's behaviour on this issue is the result of
| deep consideration :-)
+---------------

Yeah, I noticed that just after hitting <SEND> on the previous reply.
(Oops.)

+---------------
| For what it's worth, I've been doing some exploratory programming on
| this issue, and the preliminary results indicate that it doesn't seem
| to be an intolerable burden for sbcl to support the letter of the
| standard.
+---------------

Interesting.  Hmmm...  How does it now respond to Erann's original
question? CMUCL-18e says:

	cmucl> (subtypep 'string '(vector character))
	T
	T
	cmucl> 

Your new SBCL says...?


-Rob

-----
Rob Warnock, PP-ASEL-IA		<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Christophe Rhodes
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sq3chxf2ld.fsf@lambda.jcn.srcf.net>
····@rpw3.org (Rob Warnock) writes:

> Christophe Rhodes  <·····@cam.ac.uk> wrote:
> +---------------
> | For what it's worth, I've been doing some exploratory programming on
> | this issue, and the preliminary results indicate that it doesn't seem
> | to be an intolerable burden for sbcl to support the letter of the
> | standard.
> +---------------
>
> Interesting.  Hmmm...  How does it now respond to Erran's original
> question? CMUCL-18e says:
>
> 	cmucl> (subtypep 'string '(vector character))
> 	T
> 	T
> 	cmucl> 
>
> Your new SBCL says...?

* (lisp-implementation-version)
"0.8.0.78.vector-nil-string.11"
* (subtypep 'string '(vector character))
NIL
T

[ since SBCL uses the type system quite extensively in compiling code,
it is actually essential that SUBTYPEP reflect the implementation
types.  Thus (VECTOR CHARACTER) is internalised as
  * (sb-kernel:specifier-type '(vector character))
  #<SB-KERNEL:ARRAY-TYPE BASE-STRING>
and STRING is 
  * (sb-kernel:specifier-type 'string)
  #<SB-KERNEL:UNION-TYPE STRING>
these results reflecting the facts that, in sbcl currently, CHARACTER
is the same as BASE-CHAR, so (VECTOR CHARACTER) is indeed the same as
BASE-STRING; STRING, however, is no longer the same as BASE-STRING, so
it must be a union of all possible string types: 
  * (sb-kernel:union-type-types (sb-kernel:specifier-type 'string))
  (#<SB-KERNEL:ARRAY-TYPE (VECTOR NIL)> 
   #<SB-KERNEL:ARRAY-TYPE BASE-STRING>)
]

Christophe
-- 
http://www-jcsu.jesus.cam.ac.uk/~csr21/       +44 1223 510 299/+44 7729 383 757
(set-pprint-dispatch 'number (lambda (s o) (declare (special b)) (format s b)))
(defvar b "~&Just another Lisp hacker~%")    (pprint #36rJesusCollegeCambridge)
From: Kent M Pitman
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sfwy8zp21e4.fsf@shell01.TheWorld.com>
···@jpl.nasa.gov (Erann Gat) writes:

> Why should it signal an error?  Since such an array would necessarily be
> uninitialized the consequences of accessing it are undefined (specified in
> the description of make-array).  That means it might signal an error, or
> it might return a value, or it might dump core, or it might enter an
> infinite loop.

So you argue that in any program where the compiler can prove that there is
possible dataflow to something that might have undefined consequences, the
compiler at the outset should generate an "I refuse to execute this program
because it is potentially undefined under some set of inputs" error message?
Interesting...

> If the program knew the reference would never occur why did it bother to
> make the array?

The dataflow might be such that two different parts of the program generate
these pieces.  Generating a type NIL array allows the data to flow as far
as possible through typed pipes, as it were, that is, through lexical data
flow that might be type ARRAY, before reaching a critical moment where the
code either does or doesn't do an access.  Passing something that says
"everything is ok up to the access" seems to satisfy the "no prior restraint"
theory of programming.  Perhaps you were wanting to program in the myriad
other languages designed around prior restraint?  What IS dynamic programming
and/or late-binding if not allowing programs to continue when the user has
not necessarily done anything wrong?

> BTW, I'd be really astonished if there were real code out there that had a
> legitimate reason for having a result-type default to (or even be) nil.

I'm nearly positive that I've had situations where macros of mine would
sometimes generate this.  Whether they've ever been put in situations where
they do, I doubt, since apparently LispWorks (which I often use for debugging)
does blow up on this (quite to my surprise--I had assumed it would not).

> I'd be surprised if you can find a place in the spec where it says that
> the type NIL cannot be upgraded, and yet it is true.  It is, as Paul
> pointed out to start this whole discussion, a logical consequence of other
> things that the spec says.

I haven't thought enough about this issue to have a strong opinion, however
I believe this to be irrelevant / a red herring.

> Likewise, the spec does not say that an array cannot be created with
> element type nil.  This is a logical consequence of other things the spec
> says.

Which other things?

> In fact, the phrase "an element type that would be illegal to access" is
> nonsensical.  You don't access the element type, you access the PLACES
> where those elements are stored.  The element type is a constraint on the
> type of element that you may put there (and therefore a constraint on the
> type of element that you may find there).  But places don't have types,
> only the elements in those places have types.

Not necessarily so.  I don't see why a type T array can't be used at the
representation level for a type NIL array, with an element type that says
"when you extract this, coerce it to type NIL" and then have the coercer
for type NIL say "This was an illegal access."  I don't see why an array of
type BASE-CHAR can't just be, at the representational level, an array of
type CHARACTER that is simply flagged to only allow BASE-CHAR on store and
access.  The language doesn't require a specific storage technique; it 
requires only that the operators behave in a specific way.  In this case,
it requires that if you make an array you get an object of type array,
that its element-type report back in a manner compatible with (modulo
upgrading) what element-type you requested, that it allow you to store
objects of the indicated element-type, that it allow you to access those
elements, that it allow you to query its rank, etc.  For all you know, AREF
is defined by
 (defun (setf aref) (new array i)
   (format *storage-io* 
           "Hey, dude.  Make a note that in ~S at location ~D, I've stored ~S."
           array i new)
   new)
and
 (defun aref (array i)
   (format *storage-io*
           "Hey, dude.  What did I store in ~S at location ~D?" array i)
   (read *storage-io*))
or even:
 (defvar *aref-fun* #'(lambda (array i) (error "You've never set that.")))
 (defun aref (array i)
   (funcall *aref-fun* array i))
 (defun (setf aref) (new stored-array stored-i)
   (setq *aref-fun* 
         (let ((old-aref-fun *aref-fun*))
           (lambda (array i)
             (if (and (eql array stored-array) (eql i stored-i)) new
               (funcall old-aref-fun array i)))))
   new)
You may even think you have a right to not only O(1) access time but a
small constant factor.  I don't think you'll find either "right" in the spec.
Commercial factors tend to screen out this kind of implementation, but my
point is that there is SUBSTANTIALLY more implementational latitude than
you seem to continually presuppose in your messages.

(Yes, I know aref is n-dimensional.  Use your imagination for the 
generalization of the above... I'm too lazy to do the same in its more
elaborate form, and it won't help either of us understand what I'm saying
if I do.)

> The illegality of an element type of nil follows from the definition of
> array-total-size.

No, it doesn't.  ARRAY-TOTAL-SIZE tells you the number of allocated cells.
Cells can take up more space than the item itself.

> This definition stipulates unconditionally that the
> number of elements in an array is the product of the array's dimensions. 
> It follows logically from this that the elements of any array with size>0
> must be members of a non-empty set, and therefore the element type cannot
> be nil.

No, it doesn't follow from that at all.  The following meta-circular 
definition seems to me to comply with the spec:

 (defstruct my-array
   element-type
   storage)

 (defun my-make-array (size &rest array-keywords &key element-type
                            &allow-other-keys)
   (make-my-array :element-type element-type
                  :storage (apply #'make-array size
                                  :element-type 't
                                  array-keywords)))

 (defun my-aref (the-my-array &rest indices)
   (check-type the-my-array my-array)
   (let ((value (apply #'aref (my-array-storage the-my-array) indices)))
     (unless (typep value (my-array-element-type the-my-array))
       (error "~S contained illegal value."
              `(my-aref ,the-my-array ,@indices)))
     value))


 (defun (setf my-aref) (new the-my-array &rest indices)
   (check-type the-my-array my-array)
   (unless (typep new (my-array-element-type the-my-array))
     (error "Can't store ~S into ~S." new `(my-aref ,the-my-array ,@indices)))
   (apply #'(setf aref) new (my-array-storage the-my-array) indices))

 ..etc.
 
> (Strictly speaking, the upgraded-array-element-type cannot be nil, 

Why?  I don't see why if I return NIL I am not saying "Arrays with this
element type are capable of holding every object of type NIL that you give
them."

> but since the nil type cannot be upgraded these amount to the same thing.)

No, they don't.

You're playing the game I call "deltas and epsilons" and confusing the 
order of who gets to go first.  (I used to fuss a lot when being taught
Calculus about why they would say "For every delta you pick, I can find
an epsilon smaller." I kept saying "No, you go first."  But the fact is
that it matters who goes first.  And I don't see any reason that I
shouldn't ask you to go first in presenting me an object of type NIL before
you claim that the storage container I've built to hold such objects is
insufficient....)

> BTW, I can also argue for my position based on accepted practice.

People often just defer hard decisions.  This argument, carried to its
logical conclusions, would  cause many important bugs not to get fixed.

> Neither
> MCL, Lispworks nor CLisp handle this "correctly" on your and Paul's view. 
> MCL upgrades NIL to T.

I have no problem with this personally.

> Lispworks signals an error, saying (array nil) is
> an illegal type.

I think this is a bug.

> Clisp upgrades NIL to BIT.

This seems random, but acceptable.  I don't think any type NIL objects are
going to fail to be storable in this array.

> I don't hear very many people clamoring for this to change.

Whether implementors do or don't make the code work is a matter of
resources.  I will trust each implementor  to allocate its resources
according to market need.  However, I see no lack of clarity here about
what should work.
From: Erann Gat
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <gat-2506032234400001@192.168.1.51>
In article <···············@shell01.TheWorld.com>, Kent M Pitman
<······@world.std.com> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > Why should it signal an error?  Since such an array would necessarily be
> > uninitialized the consequences of accessing it are undefined (specified in
> > the description of make-array).  That means it might signal an error, or
> > it might return a value, or it might dump core, or it might enter an
> > infinite loop.
> 
> So you argue that in any program where the compiler can prove that there is
> possible dataflow to something that might have undefined consequences, the
> compiler at the outset should generate an "I refuse to execute this program
> because it is potentially undefined under some set of inputs" error message?
> Interesting...

No.  That is not what I said, and it's not what I meant.  Stop putting
words in my mouth.

> > If the program knew the reference would never occur why did it bother to
> > make the array?
> 
> The dataflow might be such that two different parts of the program generate
> these pieces.  Generating a type NIL array allows the data to flow as far
> as possible through typed pipes, as it were, that is, through lexical data
> flow that might be type ARRAY, before reaching a critical moment where the
> code either does or doesn't do an access.  Passing something that says
> "everything is ok up to the access" seems to satisfy the "no prior restraint"
> theory of programming.  Perhaps you were wanting to program in the myriad
> other languages designed around prior restraint?  What IS dynamic programming
> and/or late-binding if not allowing programs to continue when the user has
> not necessarily done anything wrong?

All well and good in the abstract, but I'm still waiting for an example
where an (array nil) is actually needed and/or helpful.

> > BTW, I'd be really astonished if there were real code out there that had a
> > legitimate reason for having a result-type default to (or even be) nil.
> 
> I'm nearly positive that I've had situations where macros of mine would
> sometimes generate this.  Whether they've ever been put in situations where
> they do, I doubt, since apparently LispWorks (which I often use for debugging)
> does blow up on this (quite to my surprise--I had assumed it would not).

Still waiting...

> > I'd be surprised if you can find a place in the spec where it says that
> > the type NIL cannot be upgraded, and yet it is true.  It is, as Paul
> > pointed out to start this whole discussion, a logical consequence of other
> > things that the spec says.
> 
> I haven't thought enough about this issue to have a strong opinion, however
> I believe this to be irrelevant / a red herring.

It may well be a red herring, but it is what this discussion was
originally about, so it's not irrelevant.

> > Likewise, the spec does not say that an array cannot be created with
> > element type nil.  This is a logical consequence of other things the spec
> > says.
> 
> Which other things?

This is Paul Dietz's argument, not mine.  See his earlier post(s) in this
thread.

> > In fact, the phrase "an element type that would be illegal to access" is
> > nonsensical.  You don't access the element type, you access the PLACES
> > where those elements are stored.  The element type is a constraint on the
> > type of element that you may put there (and therefore a constraint on the
> > type of element that you may find there).  But places don't have types,
> > only the elements in those places have types.
> 
> Not necessarily so.  I don't see why a type T array can't be used at the
> representation level for a type NIL array, with an element type that says
> "when you extract this, coerce it to type NIL" and then have the coercer
> for type NIL say "This was an illegal access."

Because attempting to coerce anything to type NIL is required by the
standard (section 4.4) to signal an error of type type-error.

> I don't see why an array of
> type BASE-CHAR can't just be, at the representational level, an array of
> type CHARACTER that is simply flagged to only allow BASE-CHAR on store and
> access.

It can.  CHARACTER/BASE-CHAR is a red herring.  This whole argument
applies only to type NIL.  It's an issue only because there are no objects
of type NIL.
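
For concreteness, here is a sketch of what makes NIL special, at a
hypothetical REPL (any conforming implementation should agree on these
results, since they don't touch the disputed upgrading question):

  * (typep t 'nil)
  NIL
  * (typep #\a 'nil)
  NIL
  * (subtypep 'nil 'character)
  T
  T

No object ever satisfies TYPEP for NIL, and yet NIL is (vacuously) a
subtype of CHARACTER, which is exactly how it sneaks into the definition
of STRING.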

> The language doesn't require a specific storage technique;

I never said that it did.

>  For all you know, AREF is defined by

[snip]

I never said otherwise.

> You may even think you have a right to not only O(1) access time but a
> small constant factor.

No, I do not think this.  Please stop presuming to know what is in my
head.  Your track record in that area is abysmal.

> Commercial factors tend to screen out this kind of implementation, but my
> point is that there is SUBSTANTIALLY more implementational latitude than
> you seem to continually presuppose in your messages.

You keep imagining that I am saying things that I am not.  You keep
setting up straw men and knocking them down.  You also seem to have lost
track of what this conversation is actually about, which is whether or not a
compliant implementation is required to return (nil,T) to (subtypep
'string '(vector character)).

> (Yes, I know aref is n-dimensional.  Use your imagination for the 
> generalization of the above... I'm too lazy to do the same in its more
> elaborate form, and it won't help either of us understand what I'm saying
> if I do.)

The problem is not that your example was one-dimensional.  The problem is
that your example had nothing whatsoever to do with anything that I
actually said.

> > The illegality of an element type of nil follows from the definition of
> > array-total-size.
> 
> No, it doesn't.

Yes it does.  See below.

>  ARRAY-TOTAL-SIZE tells you the number of allocated cells.

No it doesn't.  The standard defines array total size as "the total number
of elements in an array, computed by taking the product of the dimensions
of the array."  Therefore, the array total size of any array that does not
have a zero dimension must be greater than zero.  Therefore the total
number of elements in that array must also be greater than zero. 
Therefore those elements must be members of a non-empty set.  Therefore
those elements cannot be of type nil.

BTW, I'm only being a language lawyer here because, as I said before, this
conversation began as an argument about (subtypep 'string ...).  Paul
Dietz used a language-lawyer argument to defend his position, so I'm using
a language-lawyer argument to defend mine.  The truth is I don't really
care about whether or not (array nil)s can exist or not.  IMO the Right
Answer here is to just stipulate that strings are vectors of all subtypes
of character except NIL.  (But, oh horrors, that would require changing
the standard, so we can't do that.)

> Cells can take up more space than the item itself.

True, but this is an implementation detail.  It has, as you yourself went
to great length to point out, nothing to do with the topic at hand.

> > This definition stipulates unconditionally that the
> > number of elements in an array is the product of the array's dimensions. 
> > It follows logically from this that the elements of any array with size>0
> > must be members of a non-empty set, and therefore the element type cannot
> > be nil.
> 
> No, it doesn't follow from that at all.  The following meta-circular 
> definition seems to me to comply with the spec:

[snip]

I agree with you, this is a perfectly fine implementation of arrays, one I
would have no quarrel with as a user, just as I have no quarrel with MCL
when it tells me what it does about (subtypep 'string ...), and I have no
quarrel with CLisp when it upgrades NIL to BIT, and I have no quarrel with
LispWorks when it refuses to create an (array nil).

What I have a quarrel with is Paul Dietz telling people that conforming
implementations are required to return (nil,t) to (subtypep 'string
'(vector character)), and supporting that position with a logical argument
that takes into account some obscure parts of the spec but not other
obscure parts of the spec.

Before you and I take this conversation any further perhaps we should back
up and make sure that we actually disagree about something.  Do you
believe that a compliant implementation is required to return (nil,t) to
(subtypep 'string '(vector character))?

> > (Strictly speaking, the upgraded-array-element-type cannot be nil, 
> 
> Why?

Because it's the upgraded array element type that determines what can
actually be stored in the array.

> You're playing the game I call "deltas and epsilons" and confusing the 
> order of who gets to go first.  (I used to fuss a lot when being taught
> Calculus about why they would say "For every delta you pick, I can find
> an epsilon smaller." I kept saying "No, you go first."  But the fact is
> that it matters who goes first.  And I don't see any reason that I
> shouldn't ask you to go first in presenting me an object of type NIL before
> you claim that the storage container I've built to hold such objects is
> insufficient....)

The spec requires that you go first.  If I do this:

(array-total-size (make-array '(10) :element-type nil))

the spec requires that you return a number that is both (a) the number of
elements in the array and (b) the product of the array dimensions, that
is, 10.  So you are claiming to have ten members of type nil, which can't
possibly be true.  I don't have to know what you think those members are,
I just have to know that you think you have ten of them to know you must
be mistaken.
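
To sketch that concretely (a hypothetical REPL session; as noted later
in this post, implementations differ on whether the MAKE-ARRAY call is
even accepted):

  * (setq a (make-array '(10) :element-type nil))
  #<ARRAY ...>
  * (array-dimensions a)
  (10)
  * (array-total-size a)
  10

Both of us agree the answer must be 10; the question is what that
commits the ten places to contain.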

> > BTW, I can also argue for my position based on accepted practice.
> 
> People often just defer hard decisions.  This argument, carried to its
> logical conclusions, would  cause many important bugs not to get fixed.

True.  Do you consider any of the examples cited in this thread to be
important bugs?  (I guess you answer this below.)

> > Neither
> > MCL, Lispworks nor CLisp handle this "correctly" on your and Paul's view. 
> > MCL upgrades NIL to T.
> 
> I have no problem with this personally.

Me either.

> > Lispworks signals an error, saying (array nil) is
> > an illegal type.
> 
> I think this is a bug.

<shrug> I don't use LispWorks, nor do I foresee wanting to create an (array
nil) so I'm agnostic on this one.

> > Clisp upgrades NIL to BIT.
> 
> This seems random, but acceptable.

It violates the rules for array upgrading specified in section 15.1.2.1 of
the standard.

> I don't think any type NIL objects are
> going to fail to be storable in this array.

Yes, but that's beside the point.

E.
From: Christophe Rhodes
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sq65mtz23o.fsf@lambda.jcn.srcf.net>
···@jpl.nasa.gov (Erann Gat) writes:

> The spec requires that you go first.  If I do this:
>
> (array-total-size (make-array '(10) :element-type nil))
>
> the spec requires that you return a number that is both (a) the number of
> elements in the array and (b) the product of the array dimensions, that
> is, 10.  So you are claiming to have ten members of type nil, which can't
> possibly be true.  I don't have to know what you think those members are,
> I just have to know that you think you have ten of them to know you must
> be mistaken.

So 
  (array-total-size (make-array '(10) :element-type t :initial-element '#:mu))
must, by your (a), return 1, and therefore arrays of element type T
are forbidden by the specification?  Perhaps you want to reconsider
your interpretation of (a) [ or maybe (b), but let's face it, that's a
more solid definition :-) ].

Let's in fact reinterpret
   the total number of elements in an array, computed by taking the
   product of the dimensions of the array.
with my case in mind.  What could "the total number of elements in an
array" possibly be shorthand for?  It has to be invariant under any
possible elements stored in the array, since the array-total-size is a
property of the array and not of its contents... maybe "the total
number of distinct places in an array"?  Or maybe something else?

Christophe
-- 
http://www-jcsu.jesus.cam.ac.uk/~csr21/       +44 1223 510 299/+44 7729 383 757
(set-pprint-dispatch 'number (lambda (s o) (declare (special b)) (format s b)))
(defvar b "~&Just another Lisp hacker~%")    (pprint #36rJesusCollegeCambridge)
From: Erann Gat
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <gat-2606030808400001@192.168.1.51>
In article <··············@lambda.jcn.srcf.net>, Christophe Rhodes
<·····@cam.ac.uk> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > The spec requires that you go first.  If I do this:
> >
> > (array-total-size (make-array '(10) :element-type nil))
> >
> > the spec requires that you return a number that is both (a) the number of
> > elements in the array and (b) the product of the array dimensions, that
> > is, 10.  So you are claiming to have ten members of type nil, which can't
> > possibly be true.  I don't have to know what you think those members are,
> > I just have to know that you think you have ten of them to know you must
> > be mistaken.
> 
> So 
>   (array-total-size (make-array '(10) :element-type t :initial-element '#:mu))
> must, by your (a), return 1, and therefore arrays of element type T
> are forbidden by the specification?

Hm, good point, but there is precedent for counting the "same" element
more than once.  For example, LENGTH is defined similarly to
array-total-size, but no one argues that the length of '(a a a) is 1 and
not 3.  (I certainly don't.)

Besides, if you want to argue against counting the "same" element more
than once then you have to define what you mean by "same".  Do you mean
eq?  eql?  equal?  Does (1.1 1.1 1.1) have one element or three?  How
about "aaa"?

In any case, all that matters for my argument is that the array-total-size
be non-zero.

> Let's in fact reinterpret
>    the total number of elements in an array, computed by taking the
>    product of the dimensions of the array.
> with my case in mind.  What could "the total number of elements in an
> array" possibly be shorthand for?  It has to be invariant under any
> possible elements stored in the array, since the array-total-size is a
> property of the array and not of its contents... maybe "the total
> number of distinct places in an array"?  Or maybe something else?

Maybe, but "place" is a concept that exists in the spec, so one presumes
that if that is what was meant then that is what would be said.  I had
expected people to suggest that the method of computing array-total-size
by multiplying the array dimensions was a suggestion, not a requirement
(particularly since it appears in the glossary), but then again one would
expect to see something like, "possibly computed by..." or "normally
computed by..." or some other qualifier like that.

E.
From: Kent M Pitman
Subject: array element type NIL [Re: Lisp-2 or Lisp-1]
Date: 
Message-ID: <sfwk7b8okzy.fsf_-_@shell01.TheWorld.com>
···@jpl.nasa.gov (Erann Gat) writes:

> >  ARRAY-TOTAL-SIZE tells you the number of allocated cells.
> 
> No it doesn't.  The standard defines array total size as "the total number
> of elements in an array, computed by taking the product of the dimensions
> of the array."

But they don't have to be initialized.  They just have to be capable of 
containing values of the indicated type.  It's easy to create an 
uninitialized array of the indicated type.

  "If initial-contents is not supplied, the consequences of later
   reading an uninitialized element of new-array are undefined unless
   either initial-element is supplied or displaced-to is non-nil."

> Therefore, the array total size of any array that does not
> have a zero dimension must be greater than zero. 

Wrong.

> Therefore the total
> number of elements in that array must also be greater than zero. 
  
Wrong.

> Therefore those elements must be members of a non-empty set.

Wrong.

> Therefore those elements cannot be of type nil.
  
Wrong.

> > > (Strictly speaking, the upgraded-array-element-type cannot be nil, 
> > 
> > Why?
> 
> Because it's the upgraded array element type that determines what can
> actually be stored in the array.
  
So?  Personally, I would have the upgraded-array-element-type be NIL.
That is, have an array that cannot be upgraded.

But if it were upgraded to some other type, then elements of some other
kind could also be stored.  I don't see anything any worse about this than
about any other kind of upgrading, which also inevitably allows the storing
of elements that you are somewhat honor-bound not to store there unless
you have checked the type of the array and tailored your storage attempts
to the actual element type, in which case you're still going
to do something reasonable.
 
> > You're playing the game I call "deltas and epsilons" and confusing the 
> > order of who gets to go first.  (I used to fuss a lot when being taught
> > Calculus about why they would say "For every delta you pick, I can find
> > an epsilon smaller." I kept saying "No, you go first."  But the fact is
> > that it matters who goes first.  And I don't see any reason that I
> > shouldn't ask you to go first in presenting me an object of type NIL before
> > you claim that the storage container I've built to hold such objects is
> > insufficient....)
> 
> The spec requires that you go first.  If I do this:
> 
> (array-total-size (make-array '(10) :element-type nil))
  
Then you have asked for an uninitialized array and gotten one.

The array had storage capable of storing elements of type NIL if you 
later find any, and there are ten such slots.

I have gone first.  Your move.

> the spec requires that you return a number that is both (a) the number of
> elements in the array 

It doesn't require you to initialize the array.

 (array-total-size (make-array '(10) :element-type 'fixnum))

doesn't have to put fixnums in the array. It can put unbound markers and it
can signal an error if you try to read out the fixnums.  If it puts in 
fixnums, that's legal.  But claiming it MUST put in fixnums has to make
you wonder why the spec says "has undefined consequences" since clearly 
if you put something in there that was of the expected type, there would only
be "implementation-dependent effect".  As discussed the other day in another
thread, the "undefined consequences" clause specifically connotes the
option of transfer of control, loss of processor data integrity, etc.
  
> and (b) the product of the array dimensions, that
> is, 10.

Surely.

> So you are claiming to have ten members of type nil,

I am not.  No more than I would be in the fixnum case.

> which can't
> possibly be true.

In your overnarrow reading.  Not in mine.

> I don't have to know what you think those members are,
  
"uninitialized" - that is, there are not values there yet! There is no
law against not yet having put non-existing values in something.  I can
give you a box that is not yet full of unicorns and santa clauses and
you can say "when will it be?" and I can wave my hands and say "when you
give me some" and you can say "but how many will it hold?" and I will say
"4" and you will say "but they can't exist" and I will say "then 4 should
be plenty enough".

And it doesn't even matter if I've lied and it will only hold 2 or 0.  The
4 just makes the math work out right.  An implementation is permitted
to (and some implementations do) allocate symbols' value cells and
function cells on demand.  What matters is not whether the cell exists,
but that all the operators on the symbol behave as if it did.

> I just have to know that you think you have ten of them to know you must
> be mistaken.

No, I have uninitialized places for them, as I am well entitled to do.
 
> > > BTW, I can also argue for my position based on accepted practice.
> > 
> > People often just defer hard decisions.  This argument, carried to its
> > logical conclusions, would  cause many important bugs not to get fixed.
> 
> True.  Do you consider any of the examples cited in this thread to be
> important bugs?  (I guess you answer this below.)

I have not discussed importance here.  At some level, I consider all
non-conformance "important".  But I am also a practicalist.  I know 
non-conformances in some implementations that I bet it will be a long time
before people detect.  But at the same time, I've been interested to see how,
as time progresses, more and more of these dark corners are run into by
people doing legitimate programming, and one by one implementors do have
to implement them.
 
> > > Clisp upgrades NIL to BIT.
> > 
> > This seems random, but acceptable.
> 
> It violates the rules for array upgrading specified in section 15.1.2.1 of
> the standard.

I didn't say conforming.  I just said that in the situations I can think
of where I want :ELEMENT-TYPE NIL to work, it wouldn't lead to a problem.

> > I don't think any type NIL objects are
> > going to fail to be storable in this array.
> 
> Yes, but that's beside the point.

No, it's not.  You're doing this entire discussion as an abstract because
you don't even use features like this.  I am doing it as a memory exercise.
I do use features like this.  I am trying to operate by explaining to you
the situations that matter, and you are apparently assuming I'm just making 
it up.  It is very much NOT beside the point  what the actual pattern of use
is, and notwithstanding your claim above that I have not provided code
that shows the usage, I have sketched such code earlier in this discussion.
To repeat:

I have said specifically that the case is general macrology in which 
non-communicating arms do the initialization and the use, and in which there
is a typed dataflow path between them that needs an "array" type flowing
through it.  The array will ultimately not be used as a natural part of the
"use macrology", but SOMETHING must be allocated as part of the
"setup macrology".

If this is not enough for you to understand, then that is not my
problem.  I do not have an obligation to show use at all in order to make
a claim that the spec supports what I've suggested it does.
From: Erann Gat
Subject: Re: array element type NIL [Re: Lisp-2 or Lisp-1]
Date: 
Message-ID: <gat-2606031035300001@k-137-79-50-101.jpl.nasa.gov>
In article <··················@shell01.TheWorld.com>, Kent M Pitman
<······@world.std.com> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > >  ARRAY-TOTAL-SIZE tells you the number of allocated cells.
> > 
> > No it doesn't.  The standard defines array total size as "the total number
> > of elements in an array, computed by taking the product of the dimensions
> > of the array."
> 
> But they don't have to be initialized.

Yes, this is the crux of the issue.  But before I deal with it I want to
address a meta-issue:

> > > I don't think any type NIL objects are
> > > going to fail to be storable in this array.
> > 
> > Yes, but that's beside the point.
> 
> No, it's not.  You're doing this entire discussion as an abstract because
> you don't even use features like this.

That is not true.  First, I am not treating this entire discussion as an
abstract, only part of the discussion.  I am switching back and forth
between abstract and practical, except that when I'm in practical mode the
discussion tends to be short because I simply agree with you.

You are also wrong about the *reason* that I am treating (part of) this
discussion as an abstract.  It's not because I don't use (array nil)s. 
(It is true that I don't use them, but this is not the cause of my
treating the discussion in the abstract.)  The reason I am treating the
discussion in the abstract is because the person who started the
discussion (Paul Dietz) treated it that way.  Actually, what he did was
implicitly lay down a ground rule that to conform to the spec an
implementation had to conform to all the possible logical consequences of
everything the spec said, and I'm just pushing harder on that premise to
show where I think it leads.  The bottom line on my own opinion is that
it's perfectly fine for an implementation to support (array nil)s in
exactly the way you describe, and that it's also perfectly fine (though
certainly not required) for an implementation to return (t,t) to (subtypep
'string '(vector character)).

All I'm really saying is that there is no logically tenable argument to
support the claim that (subtypep 'string '(vector character)) must return
(nil,t), because the mode of reasoning you have to adopt to reach that
conclusion also leads you to the conclusion that (array nil)s are illegal,
which undermines the argument that (subtypep 'string '(vector character))
must return (nil,t).

You have consistently in this exchange ascribed to me positions that I do
not in fact hold, and I can't help but wonder why you are doing it.  Why
are you seeking to create conflict where none exists?

So let's start from the top.  The root issue is the following proposition:

P1: A conforming ANSI Common Lisp implementation is required to return
(nil,t) to (subtypep 'string '(vector character)).

The reasoning to support P1 is:

1.  String is defined as (vector X) for all X which are subtypes of character.
2.  NIL is a subtype of character.
3.  NIL cannot be upgraded because doing so would violate a constraint on
upgrading.  Therefore STRING must be at least (or (vector character)
(vector nil)).

Before we proceed any further you have to decide whether or not you buy
this argument and accept P1.  If you do not, then we have no disagreement,
and there is no problem with (array nil).

If (and only if) you accept this argument, then I claim that you have not
thought through all the consequences of the strict mode of reasoning you
have chosen to adopt.   (Note that it is you who have at this point chosen
to adopt this mode of reasoning, not me.  I am perfectly happy to stop at
the previous paragraph by saying that P1 is false.)

In particular, I claim that as a logical consequence of the definition of
array total size it is not possible to create an (array nil) of size>0
because such an array would result in a logical contradiction, to wit,
that a number of elements>0 are all members of the empty set, which is not
possible.
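
The construction in question can be written down directly; whether it is
accepted, and what it yields, is exactly what is in dispute.  A sketch
(implementation-dependent):

 ;; Sketch only: behavior is implementation-dependent.  An
 ;; implementation that fully supports element-type NIL would allocate
 ;; an array with ten slots, none of which can ever hold an object,
 ;; since no object is of type NIL.
 (let ((a (make-array 10 :element-type nil)))
   (values (array-total-size a)      ; 10, the product of the dimensions
           (array-element-type a)))  ; NIL, if NIL is not upgraded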

Your counter-argument to this is:

> But they don't have to be initialized.

This is indeed a compelling counter argument, particularly in light of the
fact (as you point out) that the consequences of reading an uninitialized
element of an array are "undefined" rather than "implementation
dependent."

Here is my counter-counter argument to this.  Keep in mind as you read
this that we are in a dynamic context where
*language-lawyer-reasoning-mode* is bound to T, and that you are the one
who at this point chose to enter this context, not me.  If you don't like
this you are free to escape to the previous context where
*language-lawyer-reasoning-mode* is nil and we just agree that P1 is false
and there is no problem with (array nil)s.

Counter-counter argument: The definition of array-total-size implicitly
forbids uninitialized elements in an array, notwithstanding the statement
that accessing such elements has undefined consequences.  Undefined
consequences is a permission, not a requirement, and does not preclude the
possibility that other parts of the spec rule out certain undefined
consequences that might otherwise be possible.  Note that this
interpretation is in fact consistent with current practice (all
implementations that I know of in fact do initialize "uninitialized"
elements of arrays to a sane value).  Note also that if one were to accept
your interpretation, then a conforming implementation could dump core if
make-array is called at the top level when *print-array* is T, which would
seem to be less than desirable.
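
A sketch of that scenario (hypothetical; in practice implementations
initialize the elements and print normally):

 ;; With *PRINT-ARRAY* true, the printer must visit every element of
 ;; the array value returned to the top level.  If "uninitialized"
 ;; elements truly had undefined contents, this read could misbehave.
 (setf *print-array* t)
 (make-array 5 :element-type 'fixnum)  ; printer reads all five elements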

Now, a few more miscellaneous comments.

> > (array-total-size (make-array '(10) :element-type nil))
>   
> Then you have asked for an uninitialized array and gotten one.
> 
> The array had storage capable of storing elements of type NIL if you 
> later find any, and there are ten such slots.
> 
> I have gone first.  Your move.

See my counter-counter argument above.


> > the spec requires that you return a number that is both (a) the number of
> > elements in the array 
> 
> It doesn't require you to initialize the array.

Ditto.


> In your overnarrow reading.  Not in mine.

In the over-narrow reading that is required to support P1.  Not "my" reading.


> > > > BTW, I can also argue for my position based on accepted practice.
> > > 
> > > People often just defer hard decisions.  This argument, carried to its
> > > logical conclusions, would  cause many important bugs not to get fixed.
                                              ^^^^^^^^
> > 
> > True.  Do you consider any of the examples cited in this thread to be
> > important bugs?  (I guess you answer this below.)
> 
> I have not discussed importance here.

You injected the term into the conversation, not me.  See the highlighted
portion above.


> > > > Clisp upgrades NIL to BIT.
> > > 
> > > This seems random, but acceptable.
> > 
> > It violates the rules for array upgrading specified in section 15.1.2.1 of
> > the standard.
> 
> I didn't say conforming.  I just said that in the situations I can think
> of where I want :ELEMENT-TYPE NIL to work, it wouldn't lead to a problem.

OK, that's fine.  This means that you consider some kinds of
non-conformance acceptable.  So do I.


> > > I don't think any type NIL objects are
> > > going to fail to be storable in this array.
> > 
> > Yes, but that's beside the point.
> 
> No, it's not.

Yes it is, because *language-lawyer-reasoning-mode* was bound to T.

> If this is not enough for you to understand, then that is not my
> problem.

I understand perfectly well.  Not only that, but I actually *agree* with
nearly everything you've said, but that's because my default value for
*language-lawyer-reasoning-mode* is nil.

>  I do not have an obligation to show use at all in order to make
> a claim that the spec supports what I've suggested it does.

True.  I'm only asking for code examples because I am genuinely curious
how an (array nil) would really be used in practice.

E.
From: Erann Gat
Subject: Re: array element type NIL [Re: Lisp-2 or Lisp-1]
Date: 
Message-ID: <gat-2606031233040001@k-137-79-50-101.jpl.nasa.gov>
In article <····················@k-137-79-50-101.jpl.nasa.gov>,
···@jpl.nasa.gov (Erann Gat) wrote:

> In article <··················@shell01.TheWorld.com>, Kent M Pitman
> <······@world.std.com> wrote:
> 
> > ···@jpl.nasa.gov (Erann Gat) writes:
> > 
> > > >  ARRAY-TOTAL-SIZE tells you the number of allocated cells.
> > > 
> > > No it doesn't.  The standard defines array total size as "the total number
> > > of elements in an array, computed by taking the product of the dimensions
> > > of the array."
> > 
> > But they don't have to be initialized.
> 
> Yes, this is the crux of the issue.

Postscript: anyone seriously interested in pursuing this topic should read:

http://www-2.cs.cmu.edu/Groups/AI/html/hyperspec/HyperSpec/Issues/iss356-writeup.html

E.
From: Kent M Pitman
Subject: Re: array element type NIL [Re: Lisp-2 or Lisp-1]
Date: 
Message-ID: <sfwadc48vqo.fsf@shell01.TheWorld.com>
···@jpl.nasa.gov (Erann Gat) writes:

> You have consistently in this exchange ascribed to me positions that I do
> not in fact hold, and I can't help but wonder why you are doing it.  Why
> are you seeking to create conflict where none exists?

You're probably seeing this as me trying to paint you a certain way.
Rather, I'm trying to restate your claims in ways I understand by
rephrasing what you say in the way I hear it.  To the extent that
there is subtlety lost in the restatement, I expect you to note it.
It's arguably passive aggressive, but it's not meant in a bad way.
I don't see any other way of moving forward in a discussion; to do
otherwise is to leave a statement I don't understand misunderstood
because I'm not allowed to inspect it.  And sometimes it's easier 
just to restate something than to analyze why it was a problem in the
first place.

> So let's start from the top.  The root issue is the following proposition:
> 
> P1: A conforming ANSI Common Lisp implementation is required to return
> (nil,t) to (subtypep 'string '(vector character)).

> The reasoning to support P1 is:
> 
> 1.  String is defined as (vector X) for all X which are subtypes
>     of character.

Yes, I agree with this.

> 2.  NIL is a subtype of character.

Yes, I agree with this.

> 3.  NIL cannot be upgraded because doing so would violate a constraint on
>     upgrading.  Therefore STRING must be at least 
>     (or (vector character) (vector nil)).

Yes, I agree with this.

However, this is not the [only] reason that 
 (subtypep 'string '(vector character))
returns false.
 (subtypep '(vector base-char) '(vector character)) => NIL, T
 (subtypep '(vector character) '(vector base-char)) => NIL, T

So even if there were no type NIL, it would still necessarily be the
case that 
 (subtypep 'string '(vector character))
would return false.

The real reason that 
 (subtypep 'string '(vector character))
yields false is that STRING is a union type and it is a union of objects
with disparate representations.
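
The subtype relationships cited here can be checked directly; the NIL,T
results hold in an implementation where BASE-CHAR and CHARACTER are
distinct types:

 ;; Neither specialized vector type contains the other:
 (subtypep '(vector base-char) '(vector character))  ; => NIL, T
 (subtypep '(vector character) '(vector base-char))  ; => NIL, T
 ;; STRING is the union over all character subtypes, hence:
 (subtypep 'string '(vector character))              ; => NIL, T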

Let me step back a second to talk about history.  The decision to make
a two-dimensional type system for arrays seemed ill-advised to me.
Perhaps I didn't see some of the reasoning that probably preceded it.
All I knew was that it was all done one relatively hurried and hot
summer afternoon among the CLTL designers.  We were toward the end of
the day and people just wanted resolution.  Proposals were flying fast
and furious and it seemed to me (but I was _very_ new to the process
at the time) that it was just conjured out of thin air.  I expected it
to be a disaster.  As a meta-theory of design, a hot room full of
tired people is not always a good brew for long-term
decision-making...  sometimes it breaks deadlocks that cannot be
broken at lower temperatures, but sometimes in doing so it applies
strange reasoning techniques that would not be available at more
conventional temperatures... ;)

I have come to respect the decision better and also to believe there was
more advanced thought in the proposal than was apparent to me at the 
time.  Nevertheless, it has its rough edges.

I now view what was done as a kind of odd trade-off.  If you look at the 
ISLISP type system, you see that their type tree is different than ours
(though not as elaborate) and yet more complicated than ours in some ways
even though simpler in others.  In particular
  <basic-array>
    <basic-array*>
      <general-array*>
    <basic-vector>
      <general-vector>
      <string>

That is, they had to make named atomic types for things that we see as
unified.  To us, STRING is a kind of VECTOR.  But ISLISP had to make
<basic-vector> and say that <string> is a kind of <basic-vector> but
_not_ a kind of <general-vector>.  This addresses an important confusion
that sometimes people have about vectors.  The term vector embraces a
bunch of kinds of vectors, some of which are incompatible with others.
A general vector (VECTOR T) can hold characters, but is not structured
like a STRING.  A compiled access to a VECTOR T would extract a whole q
(i.e., a pointer-length) of bits, while a compiled access to a VECTOR CHARACTER
would extract just a character-width of bits.  These are different beasts
representationally, and so
 (subtypep '(vector character) '(vector t)) => NIL, T
even though
 (subtypep '(vector character) 'vector) => T, T
and even though all objects are contained in type T.

So when you say (VECTOR CHARACTER) you're saying, among other things,
"an array whose elements are allocated CHARACTER-wide" but when you
say (VECTOR BASE-CHAR) you're saying 
"an array whose elements are allocated BASE-CHAR-wide".  The implementation
may choose to do upgrading here that will confuse a bit of this, but the
point is that potentially these are different.  It's even the case that
you may have two 8bit encodings that are the same width but that do not
overlap.  (VECTOR ISO-8859-1-CHAR) might be different than
(VECTOR ISO-8859-15-CHAR), for example.  So it's not just storage width.
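
How much of this distinction survives in a given implementation is
visible through UPGRADED-ARRAY-ELEMENT-TYPE; the results shown are only
possibilities, since the spec leaves the upgrading to the implementation:

 ;; Implementation-dependent: an implementation may give BASE-CHAR
 ;; and CHARACTER the same array representation, or keep them apart.
 (upgraded-array-element-type 'base-char)  ; BASE-CHAR, or perhaps CHARACTER
 (upgraded-array-element-type 'character)  ; CHARACTER
 (upgraded-array-element-type 'bit)        ; BIT in typical implementations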

The real point is that when you do subtyping, you're saying "does my knowledge
of the supertype suffice to understand how to manipulate the subtype".
And SUBTYPEP's job is to say "no, you are going to get into trouble if you
apply your knowledge of type 2 when dealing with type 1."  This would happen
whether or not there were a type NIL.

Had we done the ISLISP thing, we would have had some godawful tree that
would have made this all plain.  That is, we'd have had 

                      STRING                             
               /      /          \                   \   \
    WINDOWS-STRING  UNIX-STRING  ISO-8859-1-STRING   ...  GENERAL-STRING

and there might be GENERAL-STRING hidden way over in the side that would
confuse everyone because it would probably be secretly [representationally]
an element-type T object, since it would probably secretly just have 
vanilla pointers to character objects, and that might be because there
were so many encoding systems there was no point in making a specialized
representation today only to find tomorrow it was insufficient to some new
spec the world came out with for 190-byte characters.  And yet, you wouldn't
want every other string to use characters that wide.  Each string type would 
basically contain a set of codes and a width, and no type would be a subtype
of any others, so the type STRING would be more general than all others,
even though you never instantiated it.  If you instantiated it, you'd still
get a specific type (general-string) that was neither a subtype nor a
supertype of the others, and this would happen whether or not there was a
type NIL manifest in the language.

> 
> Before we proceed any further you have to decide whether or not you buy
> this argument and accept P1.  If you do not, then we have no disagreement,
> and there is no problem with (array nil).

I don't buy that the type NIL has anything to do with this.
I do agree with points 1,2,3 above.
I don't know how to answer this remark.
 
> If (and only if) you accept this argument, then I claim that you have not
> thought through all the consequences of the strict mode of reasoning you
> have chosen to adopt.   (Note that it is you who have at this point chosen
> to adopt this mode of reasoning, not me.  I am perfectly happy to stop at
> the previous paragraph by saying that P1 is false.)
> 
> In particular, I claim that as a logical consequence of the definition of
> array total size it is not possible to create an (array nil) of size>0
> because such an array would result in a logical contradiction, to wit,
> that a number of elements>0 are all members of the empty set, which is not
> possible.

"initialization"


> Your counter-argument to this is:
> 
> > But they don't have to be initialized.
> 
> This is indeed a compelling counter argument, particularly in light of the
> fact (as you point out) that the consequences of reading an uninitialized
> element of an array are "undefined" rather than "implementation
> dependent."
> 
> Here is my counter-counter argument to this.  Keep in mind as you read
> this that we are in a dynamic context where
> *language-lawyer-reasoning-mode* is bound to T, and that you are the one
> who at this point chose to enter this context, not me.  If you don't like
> this you are free to escape to the previous context where
> *language-lawyer-reasoning-mode* is nil and we just agree that P1 is false
> and there is no problem with (array nil)s.
> 
> Counter-counter argument: The definition of array-total-size implicitly
> forbids uninitialized elements in an array,

This is, in my mind, proof that you are misreading the definition.

[Not that this has any standing in this argument, but as a historical aside:
 I wrote the definition.   I know the rigor that I intended.
 I never meant it to imply what you say it implies.]

You still haven't said whether you think
 (array-total-size (make-array 5 :element-type 'fixnum))
is ill-defined because this doesn't have any necessarily assigned
elements either.  I absolutely don't think it's ill-defined, and so
I believe you can compute the array-total-size of an uninitialized array
and so I think your reading of array-total-size is not just ambiguous
but utterly bogus.
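
The example at issue, spelled out:

 ;; No :INITIAL-ELEMENT is supplied, so the five elements are
 ;; "uninitialized"; the total size is nevertheless well defined
 ;; as the product of the dimensions.
 (array-total-size (make-array 5 :element-type 'fixnum))  ; => 5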
From: Erann Gat
Subject: Re: array element type NIL [Re: Lisp-2 or Lisp-1]
Date: 
Message-ID: <gat-2606031517090001@k-137-79-50-101.jpl.nasa.gov>
In article <···············@shell01.TheWorld.com>, Kent M Pitman
<······@world.std.com> wrote:

> > So let's start from the top.  The root issue is the following proposition:
> > 
> > P1: A conforming ANSI Common Lisp implementation is required to return
> > (nil,t) to (subtypep 'string '(vector character)).
> 
> > The reasoning to support P1 is:
> > 
> > 1.  String is defined as (vector X) for all X which are subtypes
> >     of character.
> 
> Yes, I agree with this.
> 
> > 2.  NIL is a subtype of character.
> 
> Yes, I agree with this.
> 
> > 3.  NIL cannot be upgraded because doing so would violate a constraint on
> >     upgrading.  Therefore STRING must be at least 
> >     (or (vector character) (vector nil)).
> 
> Yes, I agree with this.
> 
> However, this is not the [only] reason that 
>  (subtypep 'string '(vector character))
> returns false.
>  (subtypep '(vector base-char) '(vector character)) => NIL, T
>  (subtypep '(vector character) '(vector base-char)) => NIL, T
> 
> So even if there were no type NIL, it would still necessarily be the
> case that 
>  (subtypep 'string '(vector character))
> would return false.

No, because "an implementation can define base-char to be the same type as
character."  (Section 13.2, base-char)  Similar provisos can be found for
all other character sub-types (except NIL of course).

P1 is only *necessarily* true if one requires (vector nil) to be a
non-upgradeable sub-type of string.

> The real reason that 
>  (subtypep 'string '(vector character))
> yields false is that STRING is a union type and it is a union of objects
> with disparate representations.

P1 potentially yields false because string is potentially a union of
objects with disparate representations, but it is not necessarily a union
unless, as I said, one requires etc. etc. etc.

> As a meta-theory of design, a hot room full of
> tired people is not always a good brew for long-term
> decision-making...  sometimes it breaks deadlocks that cannot be
> broken at lower temperatures,

:-)


> > Before we proceed any further you have to decide whether or not you buy
> > this argument and accept P1.  If you do not, then we have no disagreement,
> > and there is no problem with (array nil).
> 
> I don't buy that the type NIL has anything to do with this.
> I do agree with points 1,2,3 above.
> I don't know how to answer this remark.

I think your answer has to be one of the following:

1.  I don't accept P1.  If you choose this (which you apparently don't)
then we agree, and we're done.

2.  I do accept P1, and I accept the supporting argument for it (in which
case you must buy that NIL has something to do with it because NIL is
integral to the argument).

3.  I do accept P1, but I accept it for some other reason (in which case
you should say what that other reason is, keeping in mind that
implementations are allowed to conflate all character types (except NIL of
course) into the single type CHARACTER.)

I am at this point genuinely baffled how you can accept P1 and still "not
buy that the type NIL has anything to do with this."

> > Counter-counter argument: The definition of array-total-size implicitly
> > forbids uninitialized elements in an array,
> 
> This is, in my mind, proof that you are misreading the definition.
> 
> [Not that this has any standing in this argument, but as a historical aside:
>  I wrote the definition.   I know the rigor that I intended.
>  I never meant it to imply what you say it implies.]

I appreciate your intellectual honesty in disclaiming any special standing
with respect to interpreting the meaning of this passage despite the fact
that you are its author.

> You still haven't said whether you think
>  (array-total-size (make-array 5 :element-type 'fixnum))
> is ill-defined because this doesn't have any necessarily assigned
> elements either.

(let ( (*language-lawyer-reasoning-mode* t) )

  I think it's perfectly well defined because I read into the standard an
  implicit requirement to initialize (or at least giving the appearance of
  having initialized, at least with respect to calls to array-total-size)
  all arrays whether or not the user has specified an initialization
  argument, notwithstanding the fact that referencing an uninitialized
  element of an array is specified by the standard to have undefined
  consequences.

)

> I absolutely don't think it's ill-defined

And I absolutely agree with you.

> , and so
> I believe you can compute the array-total-size of an uninitialized array
> and so I think your reading of array-total-size is not just ambiguous
> but utterly bogus.

Interesting.  We reach the exact same conclusion (albeit by different
routes) and you still choose to label my reasoning "utterly bogus".  Shall
I give you a lecture about my views on the subtle nuances of "bogus"
similar to the one you gave me about "unambiguous"?  I find myself
continuing to wonder why you choose to create conflict where none exists. 
What are you hoping to accomplish?

E.
From: Christophe Rhodes
Subject: Re: array element type NIL [Re: Lisp-2 or Lisp-1]
Date: 
Message-ID: <sqr85gzju7.fsf@lambda.jcn.srcf.net>
Kent M Pitman <······@world.std.com> writes:

> However, this is not the [only] reason that 
>  (subtypep 'string '(vector character))
> returns false.
>  (subtypep '(vector base-char) '(vector character)) => NIL, T
>  (subtypep '(vector character) '(vector base-char)) => NIL, T
>
> So even if there were no type NIL, it would still necessarily be the
> case that 
>  (subtypep 'string '(vector character))
> would return false.
>
> The real reason that 
>  (subtypep 'string '(vector character))
> yields false is that STRING is a union type and it is a union of objects
> with disparate representations.

A side note: this is, in fact, not the case, and it is this point that
as an implementor reveals a non-trivial (though empirically fairly
small) cost of fully supporting (array nil).

The reasoning above depends on the assumption that the types BASE-CHAR
and CHARACTER are distinct.  In many if not all of the released
versions of current implementations, this assumption is in fact false;
in those implementations, (subtypep 'character 'base-char) returns T.
Consequently, in those implementations, STRING need not be a union
type, but may representationally be only one type, except for this
newly-discovered (VECTOR NIL) issue.  This causes a certain amount of
code rearrangement, and a small degradation in performance in
accessing general STRINGs (since historically they have not been
general).

The reason that this is probably an acceptable cost, at least for the
implementation in which I have been exploring this issue, is that it
will eventually wish to support Unicode, while still supporting
ASCII-like strings too.  Given this, the unionization of STRING is
necessary in any case, so adding one more type to the mix isn't a big
deal (though maybe if (VECTOR NIL) weren't the first such new string
type this argument would be easier for users to swallow :-).

Christophe
-- 
http://www-jcsu.jesus.cam.ac.uk/~csr21/       +44 1223 510 299/+44 7729 383 757
(set-pprint-dispatch 'number (lambda (s o) (declare (special b)) (format s b)))
(defvar b "~&Just another Lisp hacker~%")    (pprint #36rJesusCollegeCambridge)
From: Kent M Pitman
Subject: Re: array element type NIL [Re: Lisp-2 or Lisp-1]
Date: 
Message-ID: <sfw65ms8tav.fsf@shell01.TheWorld.com>
Christophe Rhodes <·····@cam.ac.uk> writes:

> Kent M Pitman <······@world.std.com> writes:
> 
> > However, this is not the [only] reason that 
> >  (subtypep 'string '(vector character))
> > returns false.
> >  (subtypep '(vector base-char) '(vector character)) => NIL, T
> >  (subtypep '(vector character) '(vector base-char)) => NIL, T
> >
> > So even if there were no type NIL, it would still necessarily be the
> > case that 
> >  (subtypep 'string '(vector character))
> > would return false.
> >
> > The real reason that 
> >  (subtypep 'string '(vector character))
> > yields false is that STRING is a union type and it is a union of objects
> > with disparate representations.
> 
> A side note: this is, in fact, not the case, and it is this point that
> as an implementor reveals a non-trivial (though empirically fairly
> small) cost of fully supporting (array nil).
> 
> The reasoning above depends on the assumption that the types BASE-CHAR
> and CHARACTER are distinct.  In many if not all of the released
> versions of current implementations, this assumption is in fact false;
> in those implementations, (subtypep 'character 'base-char) returns T.
> Consequently, in those implementations, STRING need not be a union
> type, but may representationally be only one type, except for this
> newly-discovered (VECTOR NIL) issue.  This causes a certain amount of
> code rearrangement, and a small degradation in performance in
> accessing general STRINGs (since historically they have not been
> general).
> 
> The reason that this is probably an acceptable cost, at least for the
> implementation in which I have been exploring this issue, is that it
> will eventually wish to support Unicode, while still supporting
> ASCII-like strings too.  Given this, the unionization of STRING is
> necessary in any case, so adding one more type to the mix isn't a big
> deal (though maybe if (VECTOR NIL) weren't the first such new string
> type this argument would be easier for users to swallow :-).

Yes, I guess I buy this.  Although, as you say, it's a distinction that
increasingly with time will be without a difference, since the need to
support unicode will increase, and it's likely that at least some strings
won't want to be unicode, just for space efficiency (though i can imagine
some implementors not bothering, and preferring just 
all-unicode-all-the-time).
From: Duane Rettig
Subject: Re: array element type NIL [Re: Lisp-2 or Lisp-1]
Date: 
Message-ID: <44r2c4j5a.fsf@beta.franz.com>
[ Replying both to Kent and Christophe ]:

Kent M Pitman <······@world.std.com> writes:

> Christophe Rhodes <·····@cam.ac.uk> writes:
> 
> > Kent M Pitman <······@world.std.com> writes:
> > 
> > > However, this is not the [only] reason that 
> > >  (subtypep 'string '(vector character))
> > > returns false.
> > >  (subtypep '(vector base-char) '(vector character)) => NIL, T
> > >  (subtypep '(vector character) '(vector base-char)) => NIL, T

Christophe correctly explains why the above is not true.

> > > So even if there were no type NIL, it would still necessarily be the
> > > case that 
> > >  (subtypep 'string '(vector character))
> > > would return false.

> > > The real reason that 
> > >  (subtypep 'string '(vector character))
> > > yields false is that STRING is a union type and it is a union of objects
> > > with disparate representations.

Again, I agree with Christophe's refutation of these assertions.

> > A side note: this is, in fact, not the case, and it is this point that
> > as an implementor reveals a non-trivial (though empirically fairly
> > small) cost of fully supporting (array nil).

I have trouble with this.  The cost is real, and if you follow it all
the way through Paul Dietz' tests, there are at least 50 tests which
fail due to this surprising new interpretation.  The costs are cumulative,
and for a lisp which represents strings as only one type code for efficiency,
that efficiency is destroyed.

> > The reasoning above depends on the assumption that the types BASE-CHAR
> > and CHARACTER are distinct.  In many if not all of the released
> > versions of current implementations, this assumption is in fact false;
> > in those implementations, (subtypep 'character 'base-char) returns T.
> > Consequently, in those implementations, STRING need not be a union
> > type, but may representationally be only one type, except for this
> > newly-discovered (VECTOR NIL) issue.  This causes a certain amount of
> > code rearrangement, and a small degradation in performance in
> > accessing general STRINGs (since historically they have not been
> > general).

All agreed.

> > The reason that this is probably an acceptable cost, at least for the
> > implementation in which I have been exploring this issue, is that it
> > will eventually wish to support Unicode, while still supporting
> > ASCII-like strings too.  Given this, the unionization of STRING is
> > necessary in any case, so adding one more type to the mix isn't a big
> > deal (though maybe if (VECTOR NIL) weren't the first such new string
> > type this argument would be easier for users to swallow :-).

Yes, in those implementations which choose to make that tradeoff between
speed and versatility, the incremental cost of conforming to (vector nil)
will be already amortized in the cost of having two different string
types.  However, those of us who choose the efficiency option are
hosed.  The real rub is not that there is an efficiency hit (to which all
lisps will have to conform, if they want to be called conformant) but that
_nobody_ had ever implemented an (array nil) type before Paul had started
his test suite development, and not many people are likely to use it.
Although I believe Erann Gat is wrong in his interpretation of the spec,
I also agree with the spirit of what he says, and repeat his question,
which has still gone unanswered: "What practical use can anyone make
of an (array nil)?" and its follow-on "Do you really think anyone will
ever do so?" (I suppose the answer to the second is "yes, if the feature
is there, it will be used".)

> Yes, I guess I buy this.  Although, as you say, it's a distinction that
> increasingly with time will be without a difference, since the need to
> support unicode will increase, and it's likely that at least some strings
> won't want to be unicode, just for space efficiency (though i can imagine
> some implementors not bothering, and preferring just 
> all-unicode-all-the-time).

We do this, by providing two lisps; one with 8-bit character strings
and limited Unicode support (it is useful for those who have punned
strings and octet vectors for years) and one with 16-bit character
strings which provide the full 16-bit unicode capability.  We're still
looking at the fact that unicode has started using more than the 16 bits
it has been using for many years (Unicode is actually defined for up to
32 bits), but the decision on how to approach that problem years down
the road does not necessarily mean requiring more than one string in
a lisp.  However, the presence of (array nil) apparently does :-(

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Kent M Pitman
Subject: Re: array element type NIL [Re: Lisp-2 or Lisp-1]
Date: 
Message-ID: <sfwr85gseat.fsf@shell01.TheWorld.com>
Duane Rettig <·····@franz.com> writes:

> > > A side note: this is, in fact, not the case, and it is this point that
> > > as an implementor reveals a non-trivial (though empirically fairly
> > > small) cost of fully supporting (array nil).
> 
> I have trouble with this.  The cost is real, and if you follow it all
> the way through Paul Dietz' tests, there are at least 50 tests which
> fail due to this surprising new interpretation.  The costs are cumulative,
> and for a lisp which represents strings as only one type code for efficiency,
> that efficiency is destroyed.

Limiting my remarks only to this, I personally don't have a problem with
describing this as a simple bug in the definition of STRING, that is,
I don't think we intended to include (VECTOR NIL) in STRING, nor do I think
we intended that element-type NIL arrays should be upgradable to anything
other than a special kind of type NIL arrays.  I don't think fixing these
two things would require any change in the type algebra, and moreover, I
don't think this "rises to the level" (to use the impeachment term) of
requiring a new standard if all vendors simply agreed to interpret the
definition of STRING as excluding (VECTOR NIL).

I certainly think that this is the correct place to make the fix.  The
reason is that both STRING's definition and the upgrading definition
are (IMO) utterly arbitrary in nature, and I doubt that anything but
language lawyering follows as direct implications of them.  However,
the concept of having a type NIL vector or of having a type NIL or not
are more "well-founded", I'd think, and so should not be what's played
with.

I don't think the bug is having a type NIL per se nor having upgrading.
We didn't have a lot of experience with either upgrading or union types
when we defined these concepts, and it's hardly surprising there are
rough edges.

> > > The reason that this is probably an acceptable cost, at least for the
> > > implementation in which I have been exploring this issue, is that it
> > > will eventually wish to support Unicode, while still supporting
> > > ASCII-like strings too.  Given this, the unionization of STRING is
> > > necessary in any case, so adding one more type to the mix isn't a big
> > > deal (though maybe if (VECTOR NIL) weren't the first such new string
> > > type this argument would be easier for users to swallow :-).
> 
> Yes, in those implementations which choose to make that tradeoff between
> speed and versatility, the incremental cost of conforming to (vector nil)
> will be already amortized in the cost of having two different string
> types.  However, those of us who choose the efficiency option are
> hosed.  The real rub is not that there is an efficiency hit (to which all
> lisps will have to conform, if they want to be called conformant) but that
> _nobody_ had ever implemented an (array nil) type before Paul had started
> his test suite development, and not many people are likely to use it.
> Although I believe Erann Gat is wrong in his interpretation of the spec,
> I also agree with the spirit of what he says, and repeat his question,
> which has still gone unanswered: "What practical use can anyone make
> of an (array nil)?" and its follow-on "Do you really think anyone will
> ever do so?" (I suppose the answer to the second is "yes, if the feature
> is there, it will be used".)

If this can't be resolved efficiently, it's a reasonable question to ask.

But let me momentarily divert the question back to my "fix" above and ask
whether that would be enough.
 
> > Yes, I guess I buy this.  Although, as you say, it's a distinction that
> > increasingly with time will be without a difference, since the need to
> > support unicode will increase, and it's likely that at least some strings
> > won't want to be unicode, just for space efficiency (though i can imagine
> > some implementors not bothering, and preferring just 
> > all-unicode-all-the-time).
> 
> We do this, by providing two lisps; one with 8-bit character strings
> and limited Unicode support (it is useful for those who have punned
> strings and octet vectors for years) and one with 16-bit character
> strings which provide the full 16-bit unicode capability.  We're still
> looking at the fact that unicode has started using more than the 16 bits
> it has been using for many years (Unicode is actually defined for up to
> 32 bits), but the decision on how to approach that problem years down
> the road does not necessarily mean requiring more than one string in
> a lisp.  However, the presence of (array nil) apparently does :-(

Noted.
From: Erann Gat
Subject: Re: array element type NIL [Re: Lisp-2 or Lisp-1]
Date: 
Message-ID: <gat-2606031521100001@k-137-79-50-101.jpl.nasa.gov>
In article <···············@shell01.TheWorld.com>, Kent M Pitman
<······@world.std.com> wrote:

> Limiting my remarks only to this, I personally don't have a problem with
> describing this as a simple bug in the definition of STRING, that is,
> I don't think we intended to include (VECTOR NIL) in STRING, nor do I think
> we intended that element-type NIL arrays should be upgradable to anything
> other than a special kind of type NIL arrays.  I don't think fixing these
> two things would require any change in the type algebra, and moreover, I
> don't think this "rises to the level" (to use the impeachment term) of
> requiring a new standard if all vendors simply agreed to interpret the
> definition of STRING as excluding (VECTOR NIL).

...

> But let me momentarily divert the question back to my "fix" above and ask
> whether that would be enough.

Not that my view ought to carry any weight in this decision, but just for
the record: I would be perfectly happy with this.

E.
From: Mario S. Mommer
Subject: Re: array element type NIL [Re: Lisp-2 or Lisp-1]
Date: 
Message-ID: <fz65ms4h3o.fsf@cupid.igpm.rwth-aachen.de>
Duane Rettig <·····@franz.com> writes:
> Yes, in those implementations which choose to make that tradeoff between
> speed and versatility, the incremental cost of conforming to (vector nil)
> will be already amortized in the cost of having two different string
> types.  However, those of us who choose the efficiency option are
> hosed.  The real rub is not that there is an efficiency hit (to which all
> lisps will have to conform, if they want to be called conformant) but that
> _nobody_ had ever implemented an (array nil) type before Paul had started
> his test suite development, and not many people are likely to use it.

This is what worries me most about (array nil). It only came into
existence through pedantic interpretation of the letter of the
standard. I do not think that this sort of process is a good source of
language features.

Nobody ever cared before P. F. Dietz sat down and interpreted it out
of the standard. Why should anyone care now?

> Although I believe Erann Gat is wrong in his interpretation of the spec,
> I also agree with the spirit of what he says, and repeat his question,
> which has still gone unanswered: "What practical use can anyone make
> of an (array nil)?" and its follow-on "Do you really think anyone will
> ever do so?" (I suppose the answer to the second is "yes, if the feature
> is there, it will be used".)

I really wonder if that is true. Nobody ever needed it. Nobody ever
thought of them. The folks saying that they might be useful cite some
hypothetical code that might "reason" (whatever that means) on arrays
without using them.

If they are only reasoning about array dimensions, why do they need an
array?

> We do this, by providing two lisps; one with 8-bit character strings
> and limited Unicode support (it is useful for those who have punned
> strings and octet vectors for years) and one with 16-bit character
> strings which provide the full 16-bit unicode capability.  We're still
> looking at the fact that unicode has started using more than the 16 bits
> it has been using for many years (Unicode is actually defined for up to
> 32 bits), but the decision on how to approach that problem years down
> the road does not necessarily mean requiring more than one string in
> a lisp.  However, the presence of (array nil) apparently does :-(

This is a bad thing in my opinion. An irrelevant pathological case
having effects in the real world.
From: Paul F. Dietz
Subject: Re: array element type NIL [Re: Lisp-2 or Lisp-1]
Date: 
Message-ID: <wgidnWgjSMfiA2ajXTWJhw@dls.net>
Mario S. Mommer wrote:

> This is what worries me most about (array nil). It only came into
> existence through pedantic interpretation of the letter of the
> standard. I do not think that this sort of process is a good source of
> language features.
> 
> Nobody ever cared before P. F. Dietz sat down and interpreted it out
> of the standard. Why should anyone care now?

I care, because I want to be able to test implementations against
the standard.  This requires knowing what the standard requires.

A secondary goal is to test the standard itself, by following its
requirements to their logical conclusions, ignoring issues of practicality
or utility.

As it turns out, this particular issue, while not useful in and of itself,
did serve as a 'dry run' for implementation of full-blown alternate string
types in SBCL.  So at least one implementor found it to be useful, if only
internally.

I'll add that (vector nil) as a string really needn't have that much of
an effect on a lisp that has only one 'real' string type, since any
operation that accesses elements of the string can assume that the string
is not a (vector nil).

	Paul
From: Erann Gat
Subject: Re: array element type NIL [Re: Lisp-2 or Lisp-1]
Date: 
Message-ID: <gat-2706030853590001@192.168.1.51>
In article <·············@beta.franz.com>, Duane Rettig <·····@franz.com> wrote:

> Although I believe Erann Gat is wrong in his interpretation of the spec,

Why?  (Not that I want to extend the argument, I'm just curious to know
what part of my argument you found unconvincing.  You can respond via
email if you wish.)

> I also agree with the spirit of what he says, and repeat his question,
> which has still gone unanswered: "What practical use can anyone make
> of an (array nil)?" and its follow-on "Do you really think anyone will
> ever do so?" (I suppose the answer to the second is "yes, if the feature
> is there, it will be used".)

Well, according to Kent, it *has* been used, and I have no reason to doubt
his word (except his surprise at learning that Lispworks doesn't allow
them).

But I would like to clarify one thing about my position:

> but the decision on how to approach that problem years down
> the road does not necessarily mean requiring more than one string in
> a lisp.  However, the presence of (array nil) apparently does :-(

It is not merely the presence of (array nil) that results in this
problem.  It is a confluence of three things:

1.  The presence of (array nil)
2.  The (implicit) requirement that (array nil) be a subtype of string
3.  The (implicit) prohibition against upgrading nil

Fixing any one of these would fix the problem as far as I'm concerned
(once again with the disclaimer that my opinion should not carry much
weight since I'm neither an implementor nor a user of the feature in
question).

E.
From: Christophe Rhodes
Subject: Re: array element type NIL [Re: Lisp-2 or Lisp-1]
Date: 
Message-ID: <sqznk4q6ej.fsf@lambda.jcn.srcf.net>
Duane Rettig <·····@franz.com> writes:

> Yes, in those implementations which choose to make that tradeoff between
> speed and versatility, the incremental cost of conforming to (vector nil)
> will be already amortized in the cost of having two different string
> types.  However, those of us who choose the efficiency option are
> hosed.  The real rub is not that there is an efficiency hit (to which all
> lisps will have to conform, if they want to be called conformant) but that
> _nobody_ had ever implemented an (array nil) type before Paul had started
> his test suite development, and not many people are likely to use it.

I agree, and for what it's worth, even as a developer of an
implementation with the philosophy of aggressive (for want of a better
word) conformance[1] to the ANSI standard, the main thing I would like
to see come out of this discussion is some means of recording the
consensus that this is a flaw in the standard.  Though the ANSI
standard may well never be revised (though "never" is a long time),
some kind of permanent record might be a way of noting the fact that
the implementors and users have thought about this issue.

> Although I believe Erann Gat is wrong in his interpretation of the spec,
> I also agree with the spirit of what he says, and repeat his question,
> which has still gone unanswered: "What practical use can anyone make
> of an (array nil)?" and its follow-on "Do you really think anyone will
> ever do so?" (I suppose the answer to the second is "yes, if the feature
> is there, it will be used".)

For what it's worth, I agree that the lack of bug reports ever
received by any implementor until Paul's suite came along indicates
that there has been no community need for arrays specialized on NIL.
(and likewise that no-one has bemoaned the fact that, despite
specializations on (SIGNED-BYTE 32) and (UNSIGNED-BYTE 32), no
specialization on (UNSIGNED-BYTE 31) was present).

Christophe

[1] While the primary rationale behind this is the idea that the
standard defines the language, there is a secondary rationale in
sbcl's case: since it is intended to be written in portable Common
Lisp, and since it would be embarrassing not to be able to compile
itself, the system it describes needs to be at least as conforming as
its source code.
-- 
http://www-jcsu.jesus.cam.ac.uk/~csr21/       +44 1223 510 299/+44 7729 383 757
(set-pprint-dispatch 'number (lambda (s o) (declare (special b)) (format s b)))
(defvar b "~&Just another Lisp hacker~%")    (pprint #36rJesusCollegeCambridge)
From: Joe Marshall
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <wuf97ynl.fsf@ccs.neu.edu>
Kent M Pitman <······@world.std.com> writes:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > MCL, Lispworks nor CLisp handle this "correctly" on your and Paul's view. 
> > MCL upgrades NIL to T.
> 
> I have no problem with this personally.

I'd expect MCL to upgrade all arrays to T as well, then.

  `Also, if a type Tx is a subtype of another type Ty, then the
   upgraded array element type of Tx must be a subtype of the upgraded
   array element type of Ty.'

Letting Tx be NIL, then for any type Ty, Tx is a subtype.  So the
upgraded array element type of Tx (T, on MCL) must be a subtype of the
upgraded array element type of Ty.  The only type of which T is a
subtype is T.
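
That rule can be checked mechanically.  A sketch (the helper name is mine,
and the results are of course implementation-dependent, which is the whole
point at issue):

```lisp
;; Sketch of the spec's monotonicity rule for upgrading: if Tx is a
;; subtype of Ty, the upgraded array element type of Tx must be a
;; subtype of the upgraded array element type of Ty.
(defun upgrade-monotonic-p (tx ty)
  "True if the upgraded element type of TX is a subtype of the
upgraded element type of TY, as the rule requires whenever TX is a
subtype of TY."
  (subtypep (upgraded-array-element-type tx)
            (upgraded-array-element-type ty)))

;; NIL is a subtype of every type, so this must hold for every TY.
;; On an implementation that upgrades NIL to T (as MCL reportedly
;; does), it fails whenever TY upgrades to something narrower than T:
(upgrade-monotonic-p 'nil 'bit)
```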

> > Lispworks signals an error, saying (array nil) is
> > an illegal type.
> 
> I think this is a bug.

mumble

> > Clisp upgrades NIL to BIT.
> 
> This seems random, but acceptable.  I don't think any type NIL objects are
> going to fail to be storable in this array.

I think BASE-CHAR is a better choice.  Since NIL is a subtype of
CHARACTER, then (vector nil) ought to be a string.  bit-vector doesn't
encompass (vector nil).

> > I don't hear very many people clamoring for this to change.
> 
> Whether implementors do or don't make the code work is a matter of
> resources.  I will trust each implementor  to allocate its resources
> according to market need.  However, I see no lack of clarity here about
> what should work.

Maybe JonL had the right idea with just flushing upgrading altogether.
From: Ingvar Mattsson
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <87ptl2jvbk.fsf@gruk.tech.ensign.ftech.net>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <···············@shell01.TheWorld.com>, Kent M Pitman
> <······@world.std.com> wrote:
[SNIP, arrays with ELEMENT-TYPE nil]
> >  That is, these arrays may not be useful for
> > everything other arrays are, but they still make nice doorstops, 
> > paperweights, etc.
> 
> No, they don't.  They infest the language with all kinds of useless
> special cases and other random stupidity.  If you need a doorstop do:

I think the canonical use for an array of this kind is to make sure
that code that reasons abstractly about arrays (if there are such
beasts and I expect there to be, since I can see a use for it) only
looks at meta-information of the array and not at the innards.

//Ingvar
-- 
When in douFNORD! This signature has been hi-jacked by Fnord Information
systems, to fnordprovide you with unfnordlimited information.
From: Erann Gat
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <gat-2506030844160001@192.168.1.51>
In article <··············@gruk.tech.ensign.ftech.net>, Ingvar Mattsson
<······@cathouse.bofh.se> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > In article <···············@shell01.TheWorld.com>, Kent M Pitman
> > <······@world.std.com> wrote:
> [SNIP, arrays with ELEMENT-TYPE nil]
> > >  That is, these arrays may not be useful for
> > > everything other arrays are, but they still make nice doorstops, 
> > > paperweights, etc.
> > 
> > No, they don't.  They infest the language with all kinds of useless
> > special cases and other random stupidity.  If you need a doorstop do:
> 
> I think the canonical use for an array of this kind is to make sure
> that code that reasons abstractly about arrays (if there are such
> beasts and I expect there to be, since I can see a use for it) only
> looks at meta-information of the array and not at the innards.

Well, I'm certainly no expert on such code so I'm willing to be convinced
about this.  But it seems to me that code that reasons abstractly about
arrays without ever accessing them would have no reason ever to make an
actual array, but could simply make objects that describe arrays.  Making
the actual arrays would seem to be a horrific waste of memory.

E.
From: Ingvar Mattsson
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <878yrqjex4.fsf@gruk.tech.ensign.ftech.net>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <··············@gruk.tech.ensign.ftech.net>, Ingvar Mattsson
> <······@cathouse.bofh.se> wrote:
> 
> > ···@jpl.nasa.gov (Erann Gat) writes:
> > 
> > > In article <···············@shell01.TheWorld.com>, Kent M Pitman
> > > <······@world.std.com> wrote:
> > [SNIP, arrays with ELEMENT-TYPE nil]
> > > >  That is, these arrays may not be useful for
> > > > everything other arrays are, but they still make nice doorstops, 
> > > > paperweights, etc.
> > > 
> > > No, they don't.  They infest the language with all kinds of useless
> > > special cases and other random stupidity.  If you need a doorstop do:
> > 
> > I think the canonical use for an array of this kind is to make sure
> > that code taht reasons abstractly about arrays (if there are such
> > beasts and I expect there to be, since I can see a use for it) only
> > looks at meta-information of the array and not at the innards.
> 
> Well, I'm certainly no expert on such code so I'm willing to be convinced
> about this.  But it seems to me that code that reasons abstractly about
> arrays without ever accessing them would have no reason ever to make an
> actual array, but could simply make objects that describe arrays.  Making
> the actual arrays would seem to be a horrific waste of memory.

Having them be fed an array makes it simpler to couple them with
programs using arrays (that is, using the object as its own
meta-description object). I'd be happier for code reasoning in the
abstract about lisp arrays actually handling lisp arrays, since that
means I can pass the code an array and be handed back the expected
answer, rather than having to create an intermediary object.

On top of that, with suitable special-casing in the compiler, I can
see the actual size of (make-array '(100 100) :element-type nil) and
(make-array '(1000 1000 1000 1000) :element-type nil) to be the same.

But, also admittedly, this is coming uncomfortably close to splitting
hairs.

//Ingvar
-- 
When it doesn't work, it's because you did something wrong.
Try to do it the right way, instead.
From: Erann Gat
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <gat-2506031015170001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@gruk.tech.ensign.ftech.net>, Ingvar Mattsson
<······@cathouse.bofh.se> wrote:

> Having them be fed an array makes it simpler to couple them with
> programs using arrays (that is, using the object as its own
> meta-description object).

Really?  Could you show me an example?

> But, also admittedly, this is coming uncomfortably close to splitting
> hairs.

I think the phrase you're looking for is "grasping at straws."

E.
From: Kalle Olavi Niemitalo
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <87smpy2ak8.fsf@Astalo.kon.iki.fi>
Ingvar Mattsson <······@cathouse.bofh.se> writes:

> On top of that, with suitable special-casing in the compiler, I can
> see the actual size of (make-array '(100 100) :element-type nil) and
> (make-array '(1000 1000 1000 1000) :element-type nil) to be the same.

Note that the latter is likely to exceed ARRAY-TOTAL-SIZE-LIMIT,
whose value must be a fixnum.
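
A rough check (a sketch only: ARRAY-TOTAL-SIZE-LIMIT is implementation-
dependent, required by the standard merely to be a fixnum):

```lisp
;; The proposed array would need 1000*1000*1000*1000 = 10^12 elements.
(expt 1000 4)                             ; => 1000000000000
;; Whether that exceeds the limit depends on the implementation's
;; fixnum range; on a 32-bit lisp with fixnums near 2^29 it certainly
;; does, while a 64-bit lisp may well accommodate it.
(> (expt 1000 4) array-total-size-limit)
```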
From: Mario S. Mommer
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <fzbrwm86wq.fsf@cupid.igpm.rwth-aachen.de>
···@jpl.nasa.gov (Erann Gat) writes:
> Well, I'm certainly no expert on such code so I'm willing to be convinced
> about this.  But it seems to me that code that reasons abstractly about
> arrays without ever accessing them would have no reason ever to make an
> actual array, but could simply make objects that describe arrays.  Making
> the actual arrays would seem to be a horrific waste of memory.

Why? The arrays have nothing in them, and can have nothing in them. So
it is a failure of the compiler if it allocates memory for entries
which cannot be there :-)

How many angels fit in (vector nil 5)?
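
For concreteness, the degenerate object under discussion can be sketched
as follows; whether the form is accepted at all is implementation-dependent
(elsewhere in this thread: LispWorks reportedly rejects it, MCL upgrades
NIL to T, CLISP to BIT):

```lisp
;; An array with element type NIL: it reports five elements, yet no
;; object can ever be stored in it or read from it, since no object
;; is of type NIL.
(let ((a (make-array 5 :element-type nil)))
  (array-total-size a))   ; => 5, on an implementation that accepts it
```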
From: Erann Gat
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <gat-2506031007530001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@cupid.igpm.rwth-aachen.de>, Mario S. Mommer
<········@yahoo.com> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> > Well, I'm certainly no expert on such code so I'm willing to be convinced
> > about this.  But it seems to me that code that reasons abstractly about
> > arrays without ever accessing them would have no reason ever to make an
> > actual array, but could simply make objects that describe arrays.  Making
> > the actual arrays would seem to be a horrific waste of memory.
> 
> Why? The arrays have nothing in them, and can have nothing in them. So
> it is a failure of the compiler if it allocates memory for entries
> which cannot be there :-)

I'm going to take a guess here that what you are really suggesting is that
objects of type (array nil) have utility as stand-ins for real arrays in
programs that reason about (but do not actually use) arrays.

If that's not correct, then please clarify.

If that is correct, then I say that (array nil)s would not be well suited
for this use because a program that reasons about arrays might want to
reason about the element type of the array, but this information has been
lost.

E.
From: Mario S. Mommer
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <fzy8zpx46l.fsf@cupid.igpm.rwth-aachen.de>
···@jpl.nasa.gov (Erann Gat) writes:
> In article <··············@cupid.igpm.rwth-aachen.de>, Mario S. Mommer
> <········@yahoo.com> wrote:
> > Why? The arrays have nothing in them, and can have nothing in them. So
> > it is a failure of the compiler if it allocates memory for entries
> > which cannot be there :-)
> 
> I'm going to take a guess here that what you are really suggesting is that
> objects of type (array nil) have utility as stand-ins for real arrays in
> programs that reason about (but do not actually use) arrays.
> 
> If that's not correct, then please clarify.

My point was that a compiler that allocates lots of memory for holding
elements of type nil would be wasting space.

> If that is correct, then I say that (array nil)s would not be well suited
> for this use because a program that reasons about arrays might want to
> reason about the element type of the array, but this information has been
> lost.

Well, the information is there: the element type of the array is nil
:-)

I find this whole discussion amusing but not really important. Is
there any actual code out there that actually depends on this
"feature"?
From: Erann Gat
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <gat-2506031435010001@k-137-79-50-101.jpl.nasa.gov>
In article <··············@cupid.igpm.rwth-aachen.de>, Mario S. Mommer
<········@yahoo.com> wrote:

> I find this whole discussion amusing but not really important.

Yes, I would agree with you.  The fact that this is pretty unimportant in
the scheme of things (no pun intended) is one of the reasons I decided to
have a little fun and indulge in some (un-)Zen koans.

However, I think it's not completely trivial either.  There's a lot of
muddled thinking in the world disguised as clear thinking, or worse,
smugness, and sometimes it can lead to serious consequences (like the
current mess in Iraq).  I think it is best to try to nip such things in
the bud.  Being smug is bad enough, but being smug and *wrong* at the same
time should not go unchallenged IMHO.

E.
From: Kent M Pitman
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sfw65mt3hce.fsf@shell01.TheWorld.com>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <··············@cupid.igpm.rwth-aachen.de>, Mario S. Mommer
> <········@yahoo.com> wrote:
> 
> > I find this whole discussion amusing but not really important.
> 
> Yes, I would agree with you.  The fact that this is pretty unimportant in
> the scheme of things (no pun intended) is one of the reasons I decided to
> have a little fun and indulge in some (un-)Zen koans.

You're the one who's always pushing for presentational simplicity in the
language.  That argues for allowing degenerate cases.  Otherwise, you have
lots of special cases to explain.
From: Erann Gat
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <gat-2506031628570001@k-137-79-50-101.jpl.nasa.gov>
In article <···············@shell01.TheWorld.com>, Kent M Pitman
<······@world.std.com> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > In article <··············@cupid.igpm.rwth-aachen.de>, Mario S. Mommer
> > <········@yahoo.com> wrote:
> > 
> > > I find this whole discussion amusing but not really important.
> > 
> > Yes, I would agree with you.  The fact that this is pretty unimportant in
> > the scheme of things (no pun intended) is one of the reasons I decided to
> > have a little fun and indulge in some (un-)Zen koans.
> 
> You're the one who's always pushing for presentational simplicity in the
> language.  That argues for allowing degenerate cases.  Otherwise, you have
> lots of special cases to explain.

Let's decide whether we're arguing about what *is* or what *should be*.  I
thought we were arguing about what *is*, in which case the standard says
what it says, and it seems to me to be pretty unambiguous in disallowing
(array nil)s in light of the definition of array and array-total-size.  I
have yet to hear a convincing refutation of that argument.

If you're asking me what I think *should be*, I would have no problem with
an (array nil) type as long as it had some utility.  Right now I don't see
any, but I haven't read your most recent post on this, since it's long and
I want to allocate enough time to give it serious consideration.  My
knee-jerk (having thought about it for less than an hour) response is that
a reasonable way to treat type nil, at least in principle, is to treat it
as if it were equivalent to type '(values).  I also think if one were to
go down this road it would be a good idea to allow implementations the
freedom to not treat (array nil) as a sub-type of string, but I have not
thought through the implications of that.  Another alternative that would
fix this problem is to change the array element type upgrading rules to
allow nil to be upgraded, but I don't understand the rationale behind the
current rules so I can't comment intelligently on that.

In any case, if I were to seriously undertake to bring about change in
Common Lisp (which at this point I am not at all chomping at the bit to
do) this is probably not where I would choose to begin.

E.
From: Kent M Pitman
Subject: array element type NIL [was Re: Lisp-2 or Lisp-1]
Date: 
Message-ID: <sfwisqthe4i.fsf_-_@shell01.TheWorld.com>
···@jpl.nasa.gov (Erann Gat) writes:

> ... it seems to me to be pretty unambiguous in disallowing
> (array nil)s ...

In all my discussions of language semantics over many years I have used
the following working definition of "unambiguous":

 No two people, competent in the matter (that is, made aware of the same
 set of passages, and generally competent in reasoning in general), and
 speaking seriously, arrive at divergent points of view.

Whenever there has been a divergence of claims, I have tried to accept
as a fact that there is an ambiguity merely because of the existence of
divergent points of view.

The only exception I've ever made is when someone is willing to call the
other incompetent (at least on the matter in point, not necessarily as a
general matter).

If you want to use another definition of ambiguity, one that can be measured
objectively, I'd like to hear it.  If you don't, then my believing otherwise
AT LEAST makes the passage "ambiguous".

In fact, I don't yet regard your remarks on this matter as competent
in that you have not in reply to my request advanced any clear theory
that establishes why an error must be signaled aggressively (at
make-array time) rather than lazily (at aref time) given that your
case seems to hinge on what you seem to regard as "derived facts"
(although I doubt your derivation chain) and not "stated facts", and
since you have not "derived" a time at which an error must be signaled.

And, although you have cited numerous implementations with the behavior,
the implementors of none of those implementations have come forward to
defend their positions, and as such your evidence in that regard is "not
best evidence" / "hearsay".  I'd prefer that those implementors speak for
themselves and answer the same questions I've asked you.  Perhaps one of
them can be more clear.
From: Erann Gat
Subject: Re: array element type NIL [was Re: Lisp-2 or Lisp-1]
Date: 
Message-ID: <gat-2506032305180001@192.168.1.51>
In article <··················@shell01.TheWorld.com>, Kent M Pitman
<······@world.std.com> wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > ... it seems to me to be pretty unambiguous in disallowing
> > (array nil)s ...
> 
> In all my discussions of language semantics over many years I have used
> the following working definition of "unambiguous":
> 
>  No two people, competent in the matter (that is, made aware of the same
>  set of passages, and generally competent in reasoning in general), and
>  speaking seriously, arrive at divergent points of view.
> 
> Whenever there has been a divergence of claims, I have tried to accept
> as a fact that there is an ambiguity merely because of the existence of
> divergent points of view.
> 
> The only exception I've ever made is when someone is willing to call the
> other incompetent (at least on the matter in point, not necessarily as a
> general matter).
> 
> If you want to use another definition of ambiguity, one that can be measured
> objectively, I'd like to hear it.  If you don't, then my believing otherwise
> AT LEAST makes the passage "ambiguous".
> 
> In fact, I don't yet regard your remarks on this matter as competent
> in that you have not in reply to my request advanced any clear theory
> that establishes why an error must be signaled aggressively (at
> make-array time) rather than lazily (at aref time) given that your
> case seems to hinge on what you seem to regard as "derived facts"
> (although I doubt your derivation chain) and not "stated facts", and
> since you have not "derived" a time at which an error must be signaled.

Please note that my characterization of the spec's position as
"unambiguous" was preceded by the qualifier "it seems to me."

The spec states that the number of elements in an array is the product of
the array dimensions.  Unless the size of the array is zero, then this can
be true if and only if the elements of the array are members of a
non-empty set, that is, if the elements are of a type other than nil. 
This seems to me like a very simple and straightforward logical
consequence of a straightforward (albeit obscure) statement in the spec.

As to whether or when this should result in an error, I do not believe I
have actually taken a position on this, except to point out that if you
are going to do lazy evaluation there's nothing particularly magical about
array creation that should cause that to be lazy and other things (like
dividing by zero) to require "aggressive error signalling."  Should (progn
(/ 1 0) t) be required to "aggressively signal an error"?  I think it
could reasonably be argued either way.

E.
From: Kent M Pitman
Subject: Re: array element type NIL [was Re: Lisp-2 or Lisp-1]
Date: 
Message-ID: <sfwbrwkokn4.fsf@shell01.TheWorld.com>
···@jpl.nasa.gov (Erann Gat) writes:

> Please note that my characterization of the spec's position as
> "unambiguous" was preceded by the qualifier "it seems to me."
> 
> The spec states that the number of elements in an array is the product of
> the array dimensions.  Unless the size of the array is zero, then this can
> be true if and only if the elements of the array are members of a
> non-empty set,

The use of logic jargon "if and only if" makes it sound like there is no
room for argument here, but you can only use that jargon when you've shown
that there is no other option.  The fact is that you _can_ create an array
that has not been initialized, and there is no exception in the spec that
requires the number of elements to not be computable for such arrays.  So
there are other possibilities.

 "If initial-contents is not supplied, the consequences of later reading
  an uninitialized element of new-array are undefined unless either
  initial-element is supplied or displaced-to is non-nil."

Note also the use of "later reading", suggesting that the burden of correct
use is at access time, not at creation time.
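
The lazy-signalling reading can be sketched at the REPL (hedged: whether
MAKE-ARRAY accepts :ELEMENT-TYPE NIL at all is the very point in dispute,
and implementations differ; the variable name is invented for illustration):

```lisp
;; Creation merely describes a container whose element type has no
;; members -- the "home suitable for the housing of unicorns":
(defparameter *unicorn-house* (make-array 3 :element-type nil))

;; Only an actual access is unsupportable, since no object of type
;; NIL can exist to be read or written:
;; (aref *unicorn-house* 0)   ; error (or undefined) at ACCESS time,
;;                            ; not at creation time
```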
From: Kent M Pitman
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sfwadc53hee.fsf@shell01.TheWorld.com>
Mario S. Mommer <········@yahoo.com> writes:

> I find this whole discussion amusing but not really important. Is
> there any actual code out there that actually depends on this
> "feature"?

People probably code around it.  The question is whether they should have
to.  I think they shouldn't have to code around it.
From: Kent M Pitman
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sfwel1h3hj4.fsf@shell01.TheWorld.com>
···@jpl.nasa.gov (Erann Gat) writes:

...
> Well, I'm certainly no expert on such code so I'm willing to be convinced
> about this.  But it seems to me that code that reasons abstractly about
> arrays without ever accessing them would have no reason ever to make an
> actual array, but could simply make objects that describe arrays.  Making
> the actual arrays would seem to be a horrific waste of memory.

Well, it's rare that I actively claim any kind of "expertise"
credential but I suppose I'll claim that I'm as expert as the next
expert on the issue of program-writing programs, and I can tell you
that the place where arrays or other data objects are created is often
very different than the places where they're used, and it's often
necessary in program-writing programs to go ahead and allow something
to be allocated exactly so that a piece of dataflow is satisfied.
While it is certainly possible to supply an arbitrary object such as
NIL or '*NO-ARRAY-ALLOCATED* or some such thing rather than (MAKE-ARRAY ...),
I prefer to use a degenerate case where one is possible in order to minimally
perturb the program structure.

This is really no different in my mind than how CASE in the far distant past
used to complain if there were no cases.  That is,
 (CASE FOO
   (() ...)
   ...)
used to whine because it expanded into 
 (COND ((OR) ...))
and either (OR) or a COND with a NIL car was something that made the Maclisp
compiler unhappy.  My code used to be littered with

 `(CASE FOO
    (($MAKE-COMPILER-HAPPY$ ,@FROBS-THAT-MIGHT-BE-EMPTY) ...))

which maybe other people would have written as

 `(CASE FOO
    ,@(WHEN FROBS-THAT-MIGHT-BE-EMPTY
        `((,FROBS-THAT-MIGHT-BE-EMPTY ;but are now known not to be
          ...)))
    ...)

but the problem with this latter construction is the following:

 `(DEFUN FOO (X)
    (CASE FOO
       ,@(WHEN FROBS-THAT-MIGHT-BE-EMPTY
           `((,FROBS-THAT-MIGHT-BE-EMPTY ;but are now known not to be
             X)))
       (OTHERWISE 'LOSE)))

where the X is only referenced in potentially unreachable code.  You want
the compiler to notice that X is "used" even if unreachable (yet another
seemingly odd case, and yet not something the compiler should complain about).
X would have been used, and its not being used is a proper fallout of 
compiler optimization.  But if _I_ remove it, then X's absence will be
complained about--the compiler needs to remove it.

The kind of foolery I've just mentioned was the beginning of my personal
quest to get degenerate cases accepted because

 `(CASE FOO
    (,FROBS-THAT-MIGHT-BE-EMPTY ...))

is still better _even if_ the code in the ... might not be reached in some
circumstances.

And the same about array references.  Asking for an odd kind of 
unreferenceable array is not materially different than asking for 
unreachable code.  The only slight difference is that in the former case
it's up to you to make sure the referencing code is never executed and
in the latter case it's up to the compiler.  But each is sort of 
philosophically odd in some ways.  Still, each has its purpose in life.
From: Erann Gat
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <gat-2506031643280001@k-137-79-50-101.jpl.nasa.gov>
In article <···············@shell01.TheWorld.com>, Kent M Pitman
<······@world.std.com> wrote:

[snip]

> This is really no different in my mind than how CASE in the far distant past
> used to complain if there were no cases.

[snip]

Your point is well taken, but one has to be very careful about arguments
from analogy.  In this case I think your analogy is flawed.  The correct
analogy for the CASE example is the degenerate case of an array with no
dimensions, which Common Lisp handles properly (and kudos to its designers
for this I might add).
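
For reference, the no-dimensions degenerate case is indeed well-formed
Common Lisp:

```lisp
;; A zero-dimensional array: the product of an empty list of
;; dimensions is 1, so it holds exactly one element, accessed
;; with no subscripts at all.
(let ((a (make-array '())))
  (setf (aref a) 'hello)
  (list (array-rank a) (array-total-size a) (aref a)))
;; => (0 1 HELLO)
```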

The salient difference is that CASE clauses and array dimensions both
represent (at least potentially) *structural* differences in the code,
whereas an array element type of nil is not structurally different from
other valid cases where the array element type is a symbol.

(I suspect your counter-argument here is going to be that special cases
are ugly and should be discharged whenever possible.  I agree.  But the
special case here is already woven into the fabric of the standard by
choosing to make NIL denote the empty set.  *That* is the special case
that needs to be discharged - or to have its utility defended - if we're
going to proceed on the general principle of eliminating special cases.)

In any case, you could short-circuit this discussion by providing an
actual example of how an (array nil) could be useful rather than trying to
argue by analogy.

E.
From: Kent M Pitman
Subject: array element type NIL [was Re: Lisp-2 or Lisp-1]
Date: 
Message-ID: <sfwd6h1hd6u.fsf_-_@shell01.TheWorld.com>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <···············@shell01.TheWorld.com>, Kent M Pitman
> <······@world.std.com> wrote:
> 
> [snip]
> 
> > This is really no different in my mind than how CASE in the far distant past
> > used to complain if there were no cases.
> 
> [snip]
> 
> Your point is well taken, but one has to be very careful about arguments
> from analogy.  In this case I think your analogy is flawed.  The correct
> analogy for the CASE example is the degenerate case of an array with no
> dimensions, which Common Lisp handles properly (and kudos to its designers
> for this I might add).

Analogies are, by their nature, not canonical.  Just because the situation
you cite is an analogy, that does not preclude other analogies.  My argument
was intended to inform the notion of 'degenerate cases' in general, since
many people have argued that it's idiotic to support them as 'always trivial'.
This would argue for not including the type NIL at all.  Once you include
the type NIL, you have to assume that someone might use it.

For example, I claim that:

 (defun foo (x)
   (declare (type nil x))
   x)

is a valid program.  It is just a function you must never call.  
The compiler should be permitted to rewrite it as:

 (defun foo (x)
   (error "The function foo received ~S as an argument." x))

> The salient difference is that CASE clauses and array dimensions both
> represent (at least potentially) *structural* differences in the code,
> whereas an array element type of nil is not structurally different from
> other valid cases where the array element type is a symbol.

The point of any analogy is that some issues are carried over and some are
not.  I did not intend the 'structural' bit to be carried over.  It is not
your right to assert that my only option was to do that.  I suggest, by
meta-analogy, that this reasoning error is analogous to the reasoning error
you are making in the other case, that is, your space of 'available 
possibilities in a given space' appears to sometimes be more limited 
than mine is.

> (I suspect your counter-argument here is going to be that special cases
> are ugly and should be discharged whenever possible.  I agree.  But the
> special case here is already woven into the fabric of the standard by
> choosing to make NIL denote the empty set.

The empty set is not a contagious disease.  It does not infect everything
it touches.  It merely says that there are no enumerable members of the set 
in question.  

While you may not be able to purchase a unicorn, there is no reason that 
someone can't sell you a doghouse with a sign on it saying "home suitable 
for the housing of unicorns".

> *That* is the special case that needs to be discharged - or to have
> its utility defended - if we're going to proceed on the general
> principle of eliminating special cases.)

The fact of empty sets is not something that either is or is not to be
discharged.  It's just a fact.

> In any case, you could short-circuit this discussion by providing an
> actual example of how an (array nil) could be useful rather than trying to
> argue by analogy.

It doesn't come up often, but I'm nearly certain I have some code that
expects it to work.  I just think there's no reason for it not to work,
_and_ I think the spec says it will work.  It says I can pass an element-type,
it says NIL is a type, it doesn't say NIL is an invalid element-type, etc.

I've even offered an "implementation" in another message.
From: Erann Gat
Subject: Re: array element type NIL [was Re: Lisp-2 or Lisp-1]
Date: 
Message-ID: <gat-2506032245120001@192.168.1.51>
In article <··················@shell01.TheWorld.com>, Kent M Pitman
<······@world.std.com> wrote:

> While you may not be able to purchase a unicorn, there is no reason that 
> someone can't sell you a doghouse with a sign on it saying "home suitable 
> for the housing of unicorns".

Or a box of chocolates labeled, "Elephant cage.  Capacity: zero elephants."

> The fact of empty sets is not something that either is or is not to be
> discharged.  It's just a fact.

It is a fact.  It is also a special case.

> > In any case, you could short-circuit this discussion by providing an
> > actual example of how an (array nil) could be useful rather than trying to
> > argue by analogy.
> 
> It doesn't come up often, but I'm nearly certain I have some code that
> expects it to work.

It's not really important that we resolve this so don't knock yourself
out, but if you do happen to come across this code I'd love to see it.

E.
From: Paul F. Dietz
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <HsCdnaf8h55NamWjXTWJjQ@dls.net>
Erann Gat wrote:

> My position is 1) arrays of size zero with element type nil are arguably
> legal but useless, and 2) arrays of size>0 with element type nil are
> expressly prohibited on the grounds that the standard requires that "An
> array contains a set of objects called elements that can be referenced
> individually according to a rectilinear coordinate system."  An array with
> element type NIL is the same as an array that is expressly prohibited from
> containing any elements.

But these arrays do contain a set of elements -- an *empty* set of elements.
They in no way violate the letter of the specification.


> The point is that an elephant cage that cannot contain elephants is not an
> elephant cage, just as an array that cannot contain elements is not an
> array.

I think you would do better to read the spec with precision rather
than resort to inane analogies.

	Paul
From: Erann Gat
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <gat-2406031958360001@192.168.1.51>
In article <······················@dls.net>, "Paul F. Dietz"
<·····@dls.net> wrote:

> Erann Gat wrote:
> 
> > My position is 1) arrays of size zero with element type nil are arguably
> > legal but useless, and 2) arrays of size>0 with element type nil are
> > expressly prohibited on the grounds that the standard requires that "An
> > array contains a set of objects called elements that can be referenced
> > individually according to a rectilinear coordinate system."  An array with
> > element type NIL is the same as an array that is expressly prohibited from
> > containing any elements.
> 
> But these arrays do contain a set of elements -- an *empty* set of elements.
> They in no way violate the letter of the specification.

And a box of chocolates contains a set of elephants -- the empty set of
elephants.

> > The point is that an elephant cage that cannot contain elephants is not an
> > elephant cage, just as an array that cannot contain elements is not an
> > array.
> 
> I think you would do better to read the spec with precision rather
> than resort to inane analogies.

And I think you would do better to apply a little common sense.  The spec
is written in natural language, and can be read with no more precision
than that allows.  Our argument is essentially this: I say "It does not
contain a set of elements."  You say, "Sure it does.  It contains the
empty set."  This dispute cannot be resolved with mathematical precision
because mathematically we are both saying exactly the same thing.  The
inanity you perceive in my analogy is a reflection of the inanity of your
position.  It doesn't have to be chocolate boxes and elephants.  It can be
anything.  Any X that does not contain Y is a container that contains
an empty set of Y.  A thimble is an office building with an empty set of
offices.  A donut is a filing cabinet with an empty set of files.  Inane? 
These are no more and no less inane than saying that an array constrained
to contain an empty set of elements is an array.

But OK, have it your way:

array total size n. the total number of elements in an array, computed by
taking the product of the dimensions of the array.

An (array nil) of size>0 would violate this, since the number of elements
in such an array, if they existed, would necessarily be zero regardless of
the array dimensions.

Is that precise enough for you?

Now why don't we argue over something practical, like whether or not all
the unicorns on Mars are pink.

E.
From: Paul F. Dietz
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <ThednTDu9_n_xmujXTWJjA@dls.net>
Erann Gat wrote:

> Yes, there is.  Section 15.1.1:
> 
> "An array contains a set of objects called elements that can be referenced
> individually according to a rectilinear coordinate system."

Let's look more closely:

element n. 2. (of an array) an object that is stored in the array.

When an array is created without :initial-contents or :initial-element,
no objects have been stored into the array.  The set of elements is therefore empty,
even if the rectilinear coordinate system has a nonzero number of places
in which to store elements.

NIL arrays are just arrays in which this initial unstored state cannot
be changed.

	Paul
From: Paul F. Dietz
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <N_ydnUzM5NqnZ2ijXTWJkg@dls.net>
·············@attbi.com wrote:

> But seeing as type string is *defined* as (vector character) in
> section 16.2, `System Class STRING', I don't see why 'string and
> '(vector character) aren't identical definitions.

It's not defined to be (vector character).  It's defined to be
the union of (vector character) and all (vector foo) where foo
is a specialized array element type that is a subtype of character.

	Paul
From: ·············@attbi.com
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <ptl56hqw.fsf@attbi.com>
"Paul F. Dietz" <·····@dls.net> writes:

> ·············@attbi.com wrote:
>
>> But seeing as type string is *defined* as (vector character) in
>> section 16.2, `System Class STRING', I don't see why 'string and
>> '(vector character) aren't identical definitions.
>
> It's not defined to be (vector character).  It's defined to be
> the union of (vector character) and all (vector foo) where foo
> is a specialized array element type that is a subtype of character.

Ok, I see now.

But therefore strings are a supertype of (vector character).  Why
would one expect (subtypep 'string '(vector character)) to be T?
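
The two directions can be checked directly (sketch; the first result is
guaranteed by the spec, the second depends on the implementation having
any specialized string type besides (vector character)):

```lisp
;; (vector character) is one branch of the union that STRING names:
(subtypep '(vector character) 'string)   ; => T, T

;; STRING also covers vectors specialized to subtypes of CHARACTER,
;; e.g. (vector base-char), so the reverse direction typically fails:
(subtypep 'string '(vector character))   ; typically NIL, T
```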
From: Erann Gat
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <gat-2206031337510001@192.168.1.51>
In article <············@attbi.com>, ·············@attbi.com wrote:

> "Paul F. Dietz" <·····@dls.net> writes:
> 
> > ·············@attbi.com wrote:
> >
> >> But seeing as type string is *defined* as (vector character) in
> >> section 16.2, `System Class STRING', I don't see why 'string and
> >> '(vector character) aren't identical definitions.
> >
> > It's not defined to be (vector character).  It's defined to be
> > the union of (vector character) and all (vector foo) where foo
> > is a specialized array element type that is a subtype of character.
> 
> Ok, I see now.
> 
> But therefore strings are a supertype of (vector character).  Why
> would one expect (subtypep 'string '(vector character)) to be T?

Because in a given implementation, '(vector character) and 'string *could*
be identical types in the sense that (typep x 'string) if and only if
(typep x '(vector character)) for any X -- with a single exception that
has no possible practical use.  In the spirit of this discussion I'll
leave it as an exercise for the reader to figure out what that lone
exception is.

E.
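
(Presumably the lone exception alluded to is (vector nil): NIL is a
subtype of every type, CHARACTER included.  A sketch, valid only in an
implementation that does not upgrade element type NIL away:)

```lisp
;; In such an implementation, a specialized (vector nil) satisfies
;; STRING -- NIL is a subtype of CHARACTER -- without satisfying
;; (vector character).  It can hold no elements, so the difference
;; has no practical use, exactly as claimed above.
(subtypep '(vector nil) 'string)             ; => T, T
(subtypep '(vector nil) '(vector character)) ; => NIL, T
```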
From: Kent M Pitman
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sfwadccfrs4.fsf@shell01.TheWorld.com>
Joe Marshall <···@ccs.neu.edu> writes:

> There are no namespace qualifiers for tagbody/go or block/return-from.
> They only work in the special forms that refer to them.

So?  The function namespace is restricted in access, just not as much.
There's no Turing-dictated (so to speak) minimum list of capabilities
for being a namespace.

No, but they can only be referred to by special forms, and it is possible
to close over them.

 (block foo
   (let ((foo #'(lambda (foo) (return-from foo foo))))
     (flet ((foo (bar) (funcall foo bar)))
       (tagbody
       (flet ((foo () (go foo)))
          (let ((foo (block foo
                       (let ((foo #'(lambda (foo) (return-from foo foo))))
                         (list 'foo
                           (block foo
                             (let ((foo #'(lambda (bar) (funcall foo bar))))
                               (flet ((foo () (foo)))
                                 (tagbody
                                   (funcall foo #'foo)
                                  foo
                                   (print 'foo))))))))))
            (funcall foo)))
       (print 'foo)
      foo
       (foo 'phew)))))
 => PHEW

Note that it's tricky to test this because in practice CL inserts a named
block into FLET expressions by the name of the function, and you might think
that this meant that functions and blocks move in lockstep since
 (block foo
   (flet ((foo (x) (return-from foo x)))
     ...))
returns from the local function FOO, not the function block.
But really, each of these four namespaces is separately closeable and is
properly a distinct namespace.

The reason that go tags may not feel like a namespace is that there is
no operator which extracts their value without calling them.  However,
if you posited special forms BLOCK-FUNCTION (that returned a function
that must be applied or multiple-value-called) and TAG-FUNCTION (that
returned a function that must be called with no args), you'd see these
were really namespaces.  However, in the process, you'd probably also
complicate the data flow analysis of the compiler by a fair amount.
I'd bet that the inability to assign these into regular dataflow
considerably simplifies the tests needed in order to tell what
closures are "simple" and what are not...
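
The posited operators are in fact expressible as macros today, precisely
because these namespaces are lexically closable (BLOCK-FUNCTION and
TAG-FUNCTION are the hypothetical names from the paragraph above, not
standard forms):

```lisp
;; Each operator just wraps the name in a closure -- which is exactly
;; what "lexically closable namespace" buys you.
(defmacro block-function (name)
  `(lambda (&rest values) (return-from ,name (values-list values))))

(defmacro tag-function (tag)
  `(lambda () (go ,tag)))

;; Using them (within the dynamic extent of the block and tagbody):
(block done
  (funcall (block-function done) 'out)
  'never)                                  ; => OUT

(let ((n 0))
  (tagbody
   again
     (incf n)
     (when (< n 3) (funcall (tag-function again))))
  n)                                       ; => 3
```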
From: ·············@attbi.com
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <4r2knzpq.fsf@attbi.com>
Kent M Pitman <······@world.std.com> writes:

> Joe Marshall <···@ccs.neu.edu> writes:
>
>> There are no namespace qualifiers for tagbody/go or block/return-from.
>> They only work in the special forms that refer to them.
>
> So?  The function namespace is restricted in access, just not as much.
> There's no Turing-dictated (so to speak) minimum list of capabilities
> for being a namespace.

By that argument, any function that takes a symbol as an argument
defines a namespace.
From: Kent M Pitman
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sfwznkcz5un.fsf@shell01.TheWorld.com>
·············@attbi.com writes:

> Kent M Pitman <······@world.std.com> writes:
> 
> > Joe Marshall <···@ccs.neu.edu> writes:
> >
> >> There are no namespace qualifiers for tagbody/go or block/return-from.
> >> They only work in the special forms that refer to them.
> >
> > So?  The function namespace is restricted in access, just not as much.
> > There's no Turing-dictated (so to speak) minimum list of capabilities
> > for being a namespace.
> 
> By that argument, any function that takes a symbol as an argument
> defines a namespace.

I think this is one of those things that forces you to ask the question:
why do you want to know?

I don't think there is a definite answer to the question.  Philosophically,
my usual approach to such questions is to reduce them to a canonical question
of similar kind that I've already reasoned thoroughly about.  The one that
comes to mind here is the riddle of the "Ship of Theseus".  I'm not going
to re-explain this, but you can read about it in:
 http://www.google.com/groups?selm=sfwlmcwfe6e.fsf%40shell01.TheWorld.com

The relevance in this context of that story is that there are issues of
naming which people fight over endlessly because they want claim to a name
absent semantics, not because they want to understand a semantics.

Choosing how many namespaces depends on what you want to count as a namespace.
You do not allow the set of namespaces you find (or want to find or want not
to find) to dictate what a namespace is because then you just come up with
a purposeless number.  As in my analysis of the referenced story above, 
you might mean namespaces for the purposes of "what do I close over if I'm
writing a compiler" and  you might mean namespaces for the purpose of "what
names can I use in x or y or z context without them clobbering other names"
and those might yield different results.

The 4 I spoke of is the number of namespaces that a compiler has to maintain
for closure purposes, but certainly it's also true that defining a class by
a given name will not clobber any of those 4.   Indeed, making a special
variable X will also not clobber any of the lexical bindings of variables,
functions, go tags, or block tags.

What we learn from the Ship of Theseus story is that deciding to fight over
the use of the unqualified term is an endlessly contentious task, and that
trying to derive grand truths about anything specific merely because it has
been labeled with the generic term is divisive.

An example of the same tactic in another context entirely that came up
in recent news was the re-submission of Roe v. Wade before the US
courts, claiming that there have been "recent developments in
understanding about 'when life begins' which the court should take
into account".  It is clear that "when life begins" is a generic term
that has multiple, incompatible meanings and that fighting over whose
meaning is the right one is doomed to never terminate.  It is further
clear that having labeled something as "alive" or "not alive" is not
proof of anything that will be useful to any court, since the labeling
will have been for the purpose of some specific need, not for the
purpose of establishing Grand Truth, and re-generalizing such a label
is (IMO) philosophically inappropriate.  Not that the court will see
it that way.  Bleah.  I wonder if there is still time to write an
amicus brief...
From: ·············@attbi.com
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <he6jcx9h.fsf@attbi.com>
Kent M Pitman <······@world.std.com> writes:

> ·············@attbi.com writes:
>> 
>> By that argument, any function that takes a symbol as an argument
>> defines a namespace.
>
> I don't think there is a definite answer to the question.  Philosophically,
> my usual approach to such questions is to reduce them to a canonical question
> of similar kind that I've already reasoned thoroughly about.  The one that
> comes to mind here is the riddle of the "Ship of Theseus".  I'm not going
> to re-explain this, but you can read about it in:
>  http://www.google.com/groups?selm=sfwlmcwfe6e.fsf%40shell01.TheWorld.com
>
> The relevance in this context of that story is that there are issues of
> naming which people fight over endlessly because they want claim to a name
> absent semantics, not because they want to understand a semantics.

As much as I like to argue for the sake of argument (this is, after
all, usenet), I mostly wanted to support the name `Lisp-2' because I
think that the function and variable namespaces are the most prominent
ones and the most confusing to beginning lispers (especially those
from the Scheme community).  I think that explaining FUNCALL and
FUNCTION in terms of namespace qualifiers might ease the transition.

There doesn't seem to be much confusion about tagbody/go tags.
From: Kent M Pitman
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sfwn0gbqxij.fsf@shell01.TheWorld.com>
·············@attbi.com writes:

> As much as I like to argue for the sake of argument (this is, after
> all, usenet), I mostly wanted to support the name `Lisp-2' because I
> think that the function and variable namespaces are the most prominent
> ones and the most confusing to beginning lispers (especially those
> from the Scheme community).  I think that explaining FUNCALL and
> FUNCTION in terms of namespace qualifiers might ease the transition.
> 
> There doesn't seem to be much confusion about tagbody/go tags.

I think of FUNCTION as a "qualifier" but FUNCALL is not a qualifier.
Its function is not syntactic.  The problem with explaining 
 (FUNCALL #'CAR ...)
as a qualifier is that you have to then explain
 (FUNCALL X ...)
I don't really like a convenient lie that creates as many problems
as it covers over.

The real truth is that there is no primitive special form for calling a
function other than to name the function.

Both Scheme and CL lack the CALL special operator that I claim they
should both have, where
 (call f ...) == (funcall #'f ...)  ;CL
or
 (call f ...) == (apply f (list ...))
The essential part just being a way to ensure that you don't accidentally
CALL a macro or special form.  Hence,
 (call f ...) != (f ...)
to avoid things like (call lambda (x) x) => (lambda (x) x).
If there were a CALL operator, then you could say that CALL had a syntactic
component as a qualifier.
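
A sketch of such a CALL as a macro over the existing machinery (CALL is
the hypothetical operator proposed above, not a standard form; this is
one reading of the proposal):

```lisp
;; CALL looks the name up in the function namespace at the call site.
;; Because the expansion uses (FUNCTION name), it cannot accidentally
;; invoke a macro or special operator: (FUNCTION LAMBDA) is an error.
(defmacro call (name &rest args)
  `(funcall (function ,name) ,@args))

(call car '(1 2 3))          ; => 1
;; (call lambda (x) x)       ; error, as desired

;; It also reaches lexically local functions:
(flet ((add2 (x) (+ x 2)))
  (call add2 3))             ; => 5
```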

Underlying the issue is that in CL, there is a way to get data out of the
function namespace, but no way to put data back.  Neither is there a way
to (SETF #'f ...) nor is there a way to (let ((#'f ...)) ...).  Once 
something is taken out of the function namespace, you can compute all you 
want on it in the regular arena, and you can finally call it.  

In Scheme, there is no special namespace to get a function out of, but
neither is there a named operator whose purpose is to do calling.  For
places where this is syntactically apparent, just doing (f ...) is
fine, but for places where the object to be received is being received
as data, I find this terminologically inconvenient.  It means that in
various dataflows you have to write (lambda (f . x) (apply f x))
because there is no primitive name for this idiom.
From: Bruce Hoult
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <bruce-E8F7EF.14004321062003@copper.ipg.tsnz.net>
In article <···············@shell01.TheWorld.com>,
 Kent M Pitman <······@world.std.com> wrote:

> Bruce Hoult <·····@hoult.org> writes:
> 
> > Common Lisp is a Lisp 2.  Scheme is a Lisp 1.  Carrying the 
> > classification over to totally different languages doesn't necessarily 
> > make a lot of sense, but Perl is something like a Lisp 5, if you count 
> > Perl's globs as being the same sort of thing as CL's symbols -- you 
> > don't just have a function and data value for each identifier, but a 
> > scalar, a vector, a hash, a format, and a filehandle.
> 
> It's worth noting as long as you're counting namespaces that we had
> the whole debate about namespaces in the context of a question about
> whether variables and functions should collapse their namespaces, and
> so I grabbed Lisp2 as the name to use in the debate.  In fact, though,
> CL is at least a Lisp4 in the sense that it has 4 legitimate lexically
> closable namespaces: variables, functions, tagbody/go tags, and
> block/return-from tags.

You can close over block/return-from tags?  Really?

That means CL has full first-class continuations.

-- Bruce
From: Kent M Pitman
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sfwu1akx6zx.fsf@shell01.TheWorld.com>
Bruce Hoult <·····@hoult.org> writes:

> You can close over block/return-from tags?  Really?
> 
> That means CL has full first-class continuations.

No, it means when you hop over them downwards, it doesn't do the
wrong thing.  When you pass through their scope upward, they get
invalidated.  That they know where you wanted to go means they are
lexically closed over; that the place to which they wanted to go is or
is not still "valid" is not an issue of scope, it's an issue of
dynamic control that is orthogonal.
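
The distinction can be sketched: the closure always knows lexically which
BLOCK it exits; whether that exit is still live is a separate, dynamic
question (DEAD-EXIT's final call is deliberately undefined behavior and
is shown but not exercised):

```lisp
(defun live-exit ()
  ;; The lambda lexically closes over the exit of BLOCK FOO; used
  ;; "downward", within the block's dynamic extent, it works.
  (block foo
    (funcall (lambda () (return-from foo 'escaped)))
    'never-reached))

(defun dead-exit ()
  ;; Here the block is exited before the closure is called.  The
  ;; closure still lexically denotes the same exit, but its dynamic
  ;; extent has ended: calling it has undefined consequences
  ;; (typically a control error) -- scope and validity are orthogonal.
  (let ((escape (block foo (lambda () (return-from foo t)))))
    (funcall escape)))

(live-exit) ; => ESCAPED
```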
From: Bruce Hoult
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <bruce-22BF22.00140122062003@copper.ipg.tsnz.net>
In article <···············@shell01.TheWorld.com>,
 Kent M Pitman <······@world.std.com> wrote:

> Bruce Hoult <·····@hoult.org> writes:
> 
> > You can close over block/return-from tags?  Really?
> > 
> > That means CL has full first-class continuations.
> 
> No, it means when you hop over them downwards, it doesn't do the
> wrong thing.  When you pass through their scope upward, they get
> invalidated.  That they know where you wanted to go means they are
> lexically closed over; that the place to which they wanted to go is or
> is not still "valid" is not an issue of scope, it's an issue of
> dynamic control that is orthogonal.

I'm sure there's a difference here.

You can certainly *implement* the scoping of block/return-from tags 
using closures, but it's not at all necessary to use such a powerful 
(and consing) method.  The less powerful and general techniques of using 
a display vector or a static link chain between stack frames will do 
perfectly well.

Note that your example in the other message will work perfectly well 
with labels and gotos and nested functions in Pascal.  (After allowances 
for other language features being missing, of course)


re "the Ship of Theseus": CL and Dylan and Perl clearly have something 
that Pascal doesn't -- upward closures, which is what I think most 
people mean when they talk without qualification about something having 
"closures" rather than mere lexical scoping (which is what Pascal has).

In the same way, Scheme clearly has something that CL and Dylan and Perl 
don't -- continuations that still work (indeed, more than once) after 
their creating context is gone.  CL and Dylan both have more limited 
continuations, created via the "block" special form in each language.  
This seems like a *real* difference to me, not a mere difference of 
terminology.

Certainly it means that for something I'm doing at work at the moment -- 
lightweight switching between some tens of thousands of threads in a 
single process (representing in-progress prepaid cell phone calls) -- I 
can use Scheme easily but I can't use Dylan or CL unless I'm prepared to 
obfuscate the code considerably.


Now, there is something else that just came to mind.

In Dylan, these "block" continuations are limited (you can't pass them 
upwards), but they are *first* *class* -- you can assign them to 
variables, store them in data structures etc.  In fact they act just 
like any other function: you say (done x) (to use S-expr syntax), not 
(return-from done x).

I don't know, but from your description that they form a 4th namespace, 
can I assume that in CL block/return-from tags are not first class?

Of course this would have little practical effect, since you can (as you 
have demonstrated) wrap a lambda around them, and that *is* first class.

-- Bruce
From: Kent M Pitman
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sfwr85nqyu3.fsf@shell01.TheWorld.com>
Bruce Hoult <·····@hoult.org> writes:

> In Dylan, these "block" continuations are limited (you can't pass them 
> upwards), but they are *first* *class* -- you can assign them to 
> variables, store them in data structures etc.  In fact they act just 
> like any other function: you say (done x) (to use S-expr syntax), not 
> (return-from done x).
> 
> I don't know, but from your description that they form a 4th namespace, 
> can I assume that in CL block/return-from tags are not first class?
> 
> Of course this would have little practical effect, since you can (as you 
> have demonstrated) wrap a lambda around them, and that *is* first class.

If by first-class you mean "can they be held in your hand", the answer is
that there is no operator that would make them manifest such that they could
enter the normal data-flow stream.

(Another use of first-class I've seen is the question of whether they are
downward-only or not.  In this regard, CL restarts are not entirely first 
class, yet they can enter the normal data flow stream.)
From: Bruce Hoult
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <bruce-7DDEE2.11061222062003@copper.ipg.tsnz.net>
In article <···············@shell01.TheWorld.com>,
 Kent M Pitman <······@world.std.com> wrote:

> Bruce Hoult <·····@hoult.org> writes:
> 
> > In Dylan, these "block" continuations are limited (you can't pass them 
> > upwards), but they are *first* *class* -- you can assign them to 
> > variables, store them in data structures etc.  In fact they act just 
> > like any other function: you say (done x) (to use S-expr syntax), not 
> > (return-from done x).
> > 
> > I don't know, but from your description that they form a 4th namespace, 
> > can I assume that in CL block/return-from tags are not first class?
> > 
> > Of course this would have little practical effect, since you can (as you 
> > have demonstrated) wrap a lambda around them, and that *is* first class.
> 
> If by first-class you mean "can they be held in your hand",

I mean (as above): "you can assign them to variables, store them in data 
structures etc".


> the answer is that there is no operator that would make them manifest 
> such that they could enter the normal data-flow stream.

OK, thanks.


I really don't think they're a "namespace", then.  It's more like each 
block has an undocumented lexical binding called something like 
__return_from_tag__, that happens to only be used by the block special 
form and return-from.

-- Bruce
From: Kent M Pitman
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sfwsmq3ar75.fsf@shell01.TheWorld.com>
Bruce Hoult <·····@hoult.org> writes:

> I really don't think they're a "namespace", then.  It's more like each 
> block has an undocumented lexical binding called something like 
> __return_from_tag__, that happens to only be used by the block special 
> form and return-from.

The same can be said of the function namespace.  Can we now say we have only
one namespace?
From: Bruce Hoult
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <bruce-D6A1BD.13131522062003@copper.ipg.tsnz.net>
In article <···············@shell01.TheWorld.com>,
 Kent M Pitman <······@world.std.com> wrote:

> Bruce Hoult <·····@hoult.org> writes:
> 
> > I really don't think they're a "namespace", then.  It's more like each 
> > block has an undocumented lexical binding called something like 
> > __return_from_tag__, that happens to only be used by the block special 
> > form and return-from.
> 
> The same can be said of the function namespace.  Can we now say we have only
> one namespace?

But they're quite different things, aren't they?  The function namespace 
is a global thing -- there is no such thing as a local binding in the 
function namespace.  Whereas there are no global bindings for return-from
tags, only local ones.

-- Bruce
From: Erann Gat
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <gat-2106031903290001@192.168.1.51>
In article <···························@copper.ipg.tsnz.net>, Bruce Hoult
<·····@hoult.org> wrote:

> ... there is no such thing as a local binding in the function namespace.

Surely you've heard of FLET?

E.
From: Bruce Hoult
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <bruce-20B38C.18431722062003@copper.ipg.tsnz.net>
In article <····················@192.168.1.51>,
 ···@jpl.nasa.gov (Erann Gat) wrote:

> In article <···························@copper.ipg.tsnz.net>, Bruce Hoult
> <·····@hoult.org> wrote:
> 
> > ... there is no such thing as a local binding in the function namespace.
> 
> Surely you've heard of FLET?

Oops.  Brain fart there for sure.

I think I've never actually used FLET, but then I've used LABELS a lot, 
which is of course the same idea (well, could have been called FLETREC 
in an alternate reality).
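(A minimal sketch of both forms: FLET bindings cannot see themselves, while
LABELS bindings can, hence the FLETREC quip:)

```lisp
;; FLET: local, non-recursive function bindings.
(defun twice-doubled (x)
  (flet ((double (n) (* 2 n)))
    (double (double x))))

;; LABELS: like FLET, but each binding is visible in its own body,
;; so recursion works -- the "FLETREC" of this reality.
(defun count-down (n)
  (labels ((rec (k acc)
             (if (zerop k) acc (rec (1- k) (cons k acc)))))
    (rec n '())))

(twice-doubled 3)   ; => 12
(count-down 3)      ; => (1 2 3)
```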

-- Bruce
From: Andreas Eder
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <m31xxnqz8m.fsf@elgin.eder.de>
Bruce Hoult <·····@hoult.org> writes:

> Certainly it means that for something I'm doing at work at the moment -- 
> lightweight switching between some tens of thousands of threads in a 
> single process (representing in-progress prepaid cell phone calls) -- I 
> can use Scheme easily but I can't use Dylan or CL unless I'm prepared to 
> obfuscate the code considerably.

Not that I know what you are doing, but have you had a look at Erlang?
(http://www.erlang.org)

Andreas

-- 
Wherever I lay my .emacs, there's my $HOME.
From: Bruce Hoult
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <bruce-09DC49.10470422062003@copper.ipg.tsnz.net>
In article <··············@elgin.eder.de>,
 Andreas Eder <············@t-online.de> wrote:

> Bruce Hoult <·····@hoult.org> writes:
> 
> > Certainly it means that for something I'm doing at work at the moment -- 
> > lightweight switching between some tens of thousands of threads in a 
> > single process (representing in-progress prepaid cell phone calls) -- I 
> > can use Scheme easily but I can't use Dylan or CL unless I'm prepared to 
> > obfuscate the code considerably.
> 
> Not that I know what you are doing, but have you had a look at Erlang?
> (http://www.erlang.org)

Yes.  In fact our product development manager is an Erlang fan.

The big problem is that it really wants to own the world and it doesn't 
seem as if it will fit into the existing C/C++ framework very well.

I'd be happy to be shown to be wrong on this...

-- Bruce
From: Peter Seibel
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <m38yrwjbc7.fsf@javamonkey.com>
Bruce Hoult <·····@hoult.org> writes:

> In article <···············@shell01.TheWorld.com>,
>  Kent M Pitman <······@world.std.com> wrote:
> 
> > Bruce Hoult <·····@hoult.org> writes:
> > 
> > > Common Lisp is a Lisp 2.  Scheme is a Lisp 1.  Carrying the 
> > > classification over to totally different languages doesn't necessarily 
> > > make a lot of sense, but Perl is something like a Lisp 5, if you count 
> > > Perl's globs as being the same sort of thing as CL's symbols -- you 
> > > don't just have a function and data value for each identifier, but a 
> > > scalar, a vector, a hash, a format, and a filehandle.
> > 
> > It's worth noting as long as you're counting namespaces that we had
> > the whole debate about namespaces in the context of a question about
> > whether variables and functions should collapse their namespaces, and
> > so I grabbed Lisp2 as the name to use in the debate.  In fact, though,
> > CL is at least a Lisp4 in the sense that it has 4 legitimate lexically
> > closable namespaces: variables, functions, tagbody/go tags, and
> > block/return-from tags.
> 
> You can close over block/return-from tags?  Really?
> 
> That means CL has full first-class continuations.

I don't think so. Per the CLHS:
 
  "The block named name has lexical scope and dynamic extent."

So you can close over it and pass it down the stack, but you can't
save it after the block has returned. Including if you use the closure
to return from the block--it just unwinds the stack.
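(A minimal sketch of the upward case: the closure genuinely closes over DONE's
exit point, but the exit point has dynamic extent, so calling it after
MAKE-ESCAPER returns fails -- typically with a CONTROL-ERROR, as in e.g. SBCL;
the exact behavior is implementation-dependent:)

```lisp
;; The lambda closes over DONE's exit point, but once MAKE-ESCAPER
;; returns, the block's dynamic extent has ended and the exit point
;; is invalidated.  Implementations such as SBCL then signal a
;; CONTROL-ERROR when the escaped closure is called.
(defun make-escaper ()
  (block done
    #'(lambda (x) (return-from done x))))

(handler-case (funcall (make-escaper) 1)
  (control-error () :extent-ended))   ; => :EXTENT-ENDED
```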

-Peter
-- 
Peter Seibel                                      ·····@javamonkey.com

         Lisp is the red pill. -- John Fraser, comp.lang.lisp
From: Bruce Hoult
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <bruce-14D1F5.18400721062003@copper.ipg.tsnz.net>
In article <··············@javamonkey.com>,
 Peter Seibel <·····@javamonkey.com> wrote:

> Bruce Hoult <·····@hoult.org> writes:
> 
> > In article <···············@shell01.TheWorld.com>,
> >  Kent M Pitman <······@world.std.com> wrote:
> > 
> > > Bruce Hoult <·····@hoult.org> writes:
> > > 
> > > > Common Lisp is a Lisp 2.  Scheme is a Lisp 1.  Carrying the 
> > > > classification over to totally different languages doesn't necessarily 
> > > > make a lot of sense, but Perl is something like a Lisp 5, if you count 
> > > > Perl's globs as being the same sort of thing as CL's symbols -- you 
> > > > don't just have a function and data value for each identifier, but a 
> > > > scalar, a vector, a hash, a format, and a filehandle.
> > > 
> > > It's worth noting as long as you're counting namespaces that we had
> > > the whole debate about namespaces in the context of a question about
> > > whether variables and functions should collapse their namespaces, and
> > > so I grabbed Lisp2 as the name to use in the debate.  In fact, though,
> > > CL is at least a Lisp4 in the sense that it has 4 legitimate lexically
> > > closable namespaces: variables, functions, tagbody/go tags, and
> > > block/return-from tags.
> > 
> > You can close over block/return-from tags?  Really?
> > 
> > That means CL has full first-class continuations.
> 
> I don't think so. Per the CLHS:
>  
>   "The block named name has lexical scope and dynamic extent."
> 
> So you can close over it and pass it down the stack, but you can't
> save it after the block has returned. Including if you use the closure
> to return from the block--it just unwinds the stack.

Which makes them identical to Dylan's block exit functions (no surprise 
there).

Which means you *can't* close over block/return-from tags -- or, at 
least, not unless you want to debate the commonly understood meaning of 
the term "closure" and claim that Pascal has closures.

-- Bruce
From: Kent M Pitman
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sfwn0gbyizg.fsf@shell01.TheWorld.com>
Bruce Hoult <·····@hoult.org> writes:

> Which means you *can't* close over block/return-from tags -- or, at 
> least, not unless you want to debate the commonly understood meaning of 
> the term "closure" and claim that Pascal has closures.

See my remarks earlier today about the Ship of Theseus.  I believe those
remarks to apply to your post, too.

What would you prefer in order to describe the property that:

 (defun foo (x)
   (list 'foo-result
     (block done
       (bar #'(lambda (x) (return-from done x))
            x))))

 (defun bar (cont x)
   (list 'bar-result
     (block done
       (funcall cont (reverse x)))))

 (foo '(a b c)) => (FOO-RESULT (C B A))

Note, too, that just because:

 (funcall (block done #'(lambda (x) (return-from done x))) 1)

is defined in CL to yield an error, it doesn't mean that a closure doesn't
result from the BLOCK.  It just means that the closed-over return point
has been invalidated.  The closure part is about naming, and the naming 
works as correctly in CL as in Scheme.  The invalidation is about control.

Even in Scheme, if you use the DYNAMIC-WIND stuff that I keep promising to
look over for Clinger and friends, you can cause a closure to effectively
invalidate itself in certain situations.  So be careful with your 
terminological arguments lest you claim that this power to invalidate an
upward closure renders even Scheme "without closures".

Unless you're meaning to claim that there is a material difference between
a closure that is valid but cannot be called without error and a closure
that is not valid.  While I'll grant you some implementational difference,
I'm not sure how far even _I_ am willing to push on that one...  But please
do say whether you find this a material difference.  I'd like to know.
From: Pascal Costanza
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <costanza-BC3683.20402220062003@news.netcologne.de>
In article <············@nobel.pacific.net.sg>,
 "Gavin" <······@pacific.net.sg> wrote:

> However on the other hand, some tasks are more easily done in some languages
> than others.
> Languages are just tools; you don't have to use one language to solve all
> your programming problems.
> You can use different tools for varying programming problems or use a
> combination-hybrid solution,
> e.g. Perl or Python (frontend) and Java or C++ (backend), or Lisp (frontend)
> and C/C++ or Java (backend), etc.

Sounds nice in theory, but misses the point that for some hard problems 
you might not know upfront what language is the best suited. Or you 
might have an idea but it turns out to be wrong during development.

What really matters is not so much the traditional notion of programming 
language - a selection and purification of one or two programming 
paradigms - but rather the ability of a language to accommodate a broad 
range of programming styles so that you can change your mind late in the 
game without having to abandon all your invested work.


Pascal
From: Adrian Kubala
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <Pine.LNX.4.44.0306021819450.15764-100000@gwen.sixfingeredman.net>
On 1 Jun 2003, Raffael Cavallaro wrote:
> The point is, it will be significantly different from programming now
> because it will not require the user to have the slightest idea what
> is going on under the hood. All the user will need to know is the
> domain specific specifications. How many spreadsheet users have the
> faintest clue what's really going on behind the scenes.

How many javascripters have the faintest idea about compiler theory?

> That's simply not true. Perl is decidedly more vague than scheme. The
> same token means different things in different contexts in Perl.

Having different (explicit) contexts is not vague. Having different
implicit ones is. In a natural language, much of the meaning is carried by
implicit context -- shared culture. This is why natural languages are
convenient, flexible, and very bad for stating anything precisely.

> 1. Natural language makes extensive use of context to determine meaning.
> Computer languages do this only rarely, and in very limited ways.

Languages which make more use of context (like Perl) tend to be ill-suited
for large programming tasks.

> 2. Natural language speakers resolve ambiguity by asking for
> clarification of specific terms, referents, or utterances.

Interactivity in programming is the place of the environment, not the
language. I agree that error messages can be more helpful.
From: Ray Blaak
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <ullwk46gd.fsf@STRIPCAPStelus.net>
·······@mediaone.net (Raffael Cavallaro) writes:
> The point is, it will be significantly different from programming now
> because it will not require the user to have the slightest idea what
> is going on under the hood. All the user will need to know is the
> domain specific specifications. How many spreadsheet users have the
> faintest clue what's really going on behind the scenes.

Users will need to understand how the mental abstractions they are working
with fit together. 

Sure, they don't need to understand how they are implemented, but they do need
to understand what they do, what inputs they take, and results they give.

Even spreadsheet users, once working with the slightest bit of complicated
formulas must "program" the pieces together.

That is, users must work with the rules as they understand them *and* be
precise about it.

> That's simply not true. Perl is decidedly more vague than scheme. The
> same token means different things in different contexts in Perl. This
> is not true in Scheme - remember the thread subject - Lisp-2 or
> Lisp-1. The point is, natural language varies from computer languages
> in the following ways (at least):

Perl is in no way a "natural language". Perl has more context specific
meanings, that is true, but for every expression anywhere, the meaning is
unambiguous -- confusing to maintainers perhaps, but unambiguous.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Raffael Cavallaro
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <aeb7ff58.0306021914.3f63e9fe@posting.google.com>
Ray Blaak <········@STRIPCAPStelus.net> wrote in message news:<·············@STRIPCAPStelus.net>...

> Users will need to understand how the mental abstractions they are working
> with fit together. 

But this is domain specific knowledge, which is what they already have
expertise in. They also have expertise in natural language use (we all
do, if we can speak and understand a human language). The point is to
enable them to express the former, using the latter.

> 
> Sure, they don't need to understand how they are implemented, but they do need
> to understand what they do, what inputs they take, and results they give.

But it is the need to understand how things are implemented that is
stopping most people from programming, not the need to understand how
the abstractions of their domain work - they already know that, or
they wouldn't be doing it for a living.


> Even spreadsheet users, once working with the slightest bit of complicated
> formulas must "program" the pieces together.

But spreadsheet users don't have a problem with the formulas - they
already know them - mortgage interest calculations if they're in real
estate, etc. They have problems with how the damn spreadsheet package
works, not the specifics that come from a domain they are already
expert in.

> 
> That is, users must work with the rules as they understand them *and* be
> precise about it.

Not a problem, despite what almost everyone here seems to think.
People who have domain expertise are perfectly capable of both
abstract thought, and precision. They use both all the time in their
daily work. What they are not capable of is understanding why it
should be necessary to specify what machine representation a
particular piece of data should take (integer, string, list, etc.)
before they can use it. Or why precise statements need to be crafted
in a syntax that is easy for compilers to understand, but hard for
most speakers of natural languages. In fact, they're right on both
counts - it *shouldn't* be necessary to specify what machine
representation should be used for a particular piece of data - the
parser and compiler should be able to infer that from context and
usage. It *shouldn't* be necessary for precise statements to be
written in a syntax that bears little relationship to the natural
languages we all grew up speaking - natural languages are perfectly
capable of making precise statements.


> Perl is in no way a "natural language".

I never said it was - I simply said that it is more vague than Scheme,
because the exact same token can have different meanings in different
contexts. This is what makes a statement vague - its ambiguity,
whether in natural language or computer language. Pronouns in English,
like "it", are vague, because their meaning is not self-contained. You
have to look outside of the utterance in which they appear to
determine their referent. Tokens whose meaning changes depending on
context are similarly ambiguous or vague.

> Perl has more context specific
> meanings, that is true, but for every expression anywhere, the meaning is
> unambiguous -- confusing to maintainers perhaps, but unambiguous.

Then there is no ambiguity in natural language either, since, with
enough context, all the ambiguities go away. You can't have it both
ways. That which makes a natural language statement ambiguous - i.e.,
the need for context to determine meaning - is precisely the same
thing that makes a computer language token ambiguous. If context is
*necessary* to determine meaning, then that token, alone, is
ambiguous. This is true of Perl.
From: Ray Blaak
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <ur86bskld.fsf@STRIPCAPStelus.net>
·······@mediaone.net (Raffael Cavallaro) writes:
> Ray Blaak <········@STRIPCAPStelus.net> wrote in message news:<·············@STRIPCAPStelus.net>...
> > That is, users must work with the rules as they understand them *and* be
> > precise about it.
> 
> Not a problem, despite what almost everyone here seems to think.  People who
> have domain expertise are perfectly capable of both abstract thought, and
> precision. [...] It *shouldn't* be necessary for precise statements to be
> written in a syntax that bears little relationship to the natural languages
> we all grew up speaking - natural languages are perfectly capable of making
> precise statements.

If your point is really about the usefulness and utility of domain-specific
languages, then I agree with you. *Of course* people can think abstractly and
with precision. The point is that to work with computers, they *must* think.
There is no other choice.

Your mistake is to confuse domain specific languages with "natural languages".
A domain specific language that runs on a computer must be as precise as any
other computer language. It cannot have the inherent imprecision present in
languages used for human communication.

> > Perl is in no way a "natural language".
> 
> I never said it was - I simply said that it is more vague than scheme,
> because the exact same token can have different meanings in different
> contexts. This is what makes a statement vague - its ambiguity,
> whether in natural language or computer language. 

Except that Perl is not one whit more vague than Scheme. There is no
ambiguity. In every situation in Perl (and Scheme), there is a single precise
meaning, determined from context to be sure, but not vague or ambiguous in any
way.

> Pronouns in english, like "it" are vague, because their meaning is not self
> contained. You have to look outside of the utterance in which they appear to
> determine their referent. Tokens whose meaning changes depending on context
> are similarly ambiguous or vague.

Depending on context does not make things ambiguous. Ambiguity results from
having multiple possible meanings in a given context.

Computer languages would be unworkable with ambiguity.

> That which makes a natural language statement ambiguous - i.e.,
> the need for context to determine meaning - is precisely the same
> thing that makes a computer language token ambiguous. If context is
> *necessary* to determine meaning, then that token, alone, is
> ambiguous. This is true of Perl.

The token alone can be ambiguous, sure. Working programs are not comprised
of single tokens, however. In any valid program there is no ambiguity, since
otherwise the computer would not "know" what to execute.

It is not context that gives rise to ambiguity, but lack of precision,
multiple possible interpretations, "wiggle room", if you will.

The very thing that gives human language its spark, its art and beauty, is
precisely that which would condemn any computer language.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Pascal Bourguignon
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <874r37efal.fsf@thalassa.informatimago.com>
Ray Blaak <········@STRIPCAPStelus.net> writes:
> 
> > > Perl is in no way a "natural language".
> > 
> > I never said it was - I simply said that it is more vague than scheme,
> > because the exact same token can have different meanings in different
> > contexts. This is what makes a statement vague - its ambiguity,
> > whether in natural language or computer language. 
> 
> Except that Perl is not one whit more vague than Scheme. There is no
> ambiguity. In every situation in Perl (and Scheme), there is a
> single precise meaning, determined from context to be sure, but not
> vague or ambiguous in any way.

It's difficult  to say that,  because perl has no  specification, only
one  implementation.   We  can   agree  that  this  implementation  is
deterministic, but if there was  at least a second implementation, you
may  discover that  there is  not a  single precise  meaning  for some
constructs.  And when  I say perl in this paragraph,  I only mean perl
at  a given  version, because  it's known  that the  behavior  of perl
changed from version  to version for some constructs.   Just check its
ChangeLog.


> Computer languages would be unworkable with ambiguity.

Ambiguity is  not necessarily unworkable.   For example in  a language
where  you can  say: "Go  to  the kitchen  and bring  me water.",  the
compute could come back with tap water or bottle water, in the bottle,
in a buck, in a glass, or in  its hand in the form of solid water.  It
could  even  come  back  with  all  of  these  instances  at  once  or
successively.  Prolog does this, bringing  _all_ the water it can find
in the kitchen.
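(The Prolog-style behavior can be sketched in Lisp as a toy "bring every
solution" query; the kitchen facts below are invented for illustration:)

```lisp
;; Toy knowledge base and a Prolog-ish "bring all" query: instead of
;; committing to one reading of "bring me water", enumerate every match.
(defparameter *kitchen*
  '((water tap) (water bottled) (water ice) (juice orange)))

(defun bring-all (kind)
  (remove-if-not (lambda (item) (eq (first item) kind)) *kitchen*))

(bring-all 'water)   ; => ((WATER TAP) (WATER BOTTLED) (WATER ICE))
```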

In  addition, there is  an obvious  refinement that  could help  to avoid
embarrassing situations:  add a notion  of cost to all  actions.  When
ambiguity leads to a small or  null cost, behave like Prolog, when the
cost is high, behave like non-stupid human: ask.


> > That which makes a natural language statement ambiguous - i.e.,
> > the need for context to determine meaning - is precisely the same
> > thing that makes a computer language token ambiguous. If context is
> > *necessary* to determine meaning, then that token, alone, is
> > ambiguous. This is true of Perl.
> 
> The token alone can be ambiguous, sure. Working programs are not comprised
> of single tokens, however. In any valid program there is no ambiguity, since
> otherwise the computer would not "know" what to execute.
> 
> It is not context that gives rise to ambiguity, but lack of precision,
> multiple possible interpretations, "wiggle room", if you will.
> 
> The very thing that gives human language its spark, its art and beauty, is
> precisely that which would condemn any computer language.

-- 
__Pascal_Bourguignon__                   http://www.informatimago.com/
----------------------------------------------------------------------
Do not adjust your mind, there is a fault in reality.
From: Ray Blaak
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <usmqrt8hu.fsf@STRIPCAPStelus.net>
Pascal Bourguignon <····@thalassa.informatimago.com> writes:
> > Computer languages would be unworkable with ambiguity.
> 
> Ambiguity is  not necessarily unworkable.   For example in  a language
> where  you can  say: "Go  to  the kitchen  and bring  me water.",  the
> computer could come back with tap water or bottled water, in the bottle,
> in a bucket, in a glass, or in  its hand in the form of solid water.  It
> could  even  come  back  with  all  of  these  instances  at  once  or
> successively.  Prolog does this, bringing  _all_ the water it can find
> in the kitchen.

In the presence of ambiguity, it is specified precisely what to do,
however. Choose all, choose the first, the last, the best according to some
cost measure, etc. That is, the situation is not really ambiguous.

In such a language, "go bring me water" is essentially a method call to some
logic that does "find the first/best water that meets this default criteria".

One could think about random algorithms too, with core operators behaving
according to statistical profiles. Ambiguous or not?

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Pascal Bourguignon
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <87d6huf4bm.fsf@thalassa.informatimago.com>
Ray Blaak <········@STRIPCAPStelus.net> writes:

> Pascal Bourguignon <····@thalassa.informatimago.com> writes:
> > > Computer languages would be unworkable with ambiguity.
> > 
> > Ambiguity is  not necessarily unworkable.   For example in  a language
> > where  you can  say: "Go  to  the kitchen  and bring  me water.",  the
> > computer could come back with tap water or bottled water, in the bottle,
> > in a bucket, in a glass, or in  its hand in the form of solid water.  It
> > could  even  come  back  with  all  of  these  instances  at  once  or
> > successively.  Prolog does this, bringing  _all_ the water it can find
> > in the kitchen.
> 
> In the presence of ambiguity, it is specified precisely what to do,
> however. Choose all, choose the first, the last, the best according to some
> cost measure, etc. That is, the situation is not really ambiguous.
> 
> In such a language, "go bring me water" is essentially a method call to some
> logic that does "find the first/best water that meets this default criteria".
> 
> One could think about random algorithms too, with core operators behaving
> according to statistical profiles. Ambiguous or not?

Which  is to  say that  there does  not exist  any ambiguity  ever. At
least, for  action-centered  logics (pure logics  could require  a
unique response, and thus ambiguity would be an internal problem).


-- 
__Pascal_Bourguignon__                   http://www.informatimago.com/
----------------------------------------------------------------------
Do not adjust your mind, there is a fault in reality.
From: Ray Blaak
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <uof1dlsig.fsf@STRIPCAPStelus.net>
Pascal Bourguignon <····@thalassa.informatimago.com> writes:
> Ray Blaak <········@STRIPCAPStelus.net> writes:
> > In the presence of ambiguity, it is specified precisely what to do,
> > however. Choose all, choose the first, the last, the best according to some
> > cost measure, etc. That is, the situation is not really ambiguous.
> > 
> > In such a language, "go bring me water" is essentially a method call to some
> > logic that does "find the first/best water that meets this default criteria"
> 
> Which is to say that no ambiguity ever exists. At least, not for
> action-centered logics (pure logics could require a unique response, and
> thus ambiguity would be an internal problem).

For a consistent system on a computer, no, there is no ambiguity. Ambiguity
arises when different actors interpret the sentence differently, giving
varying results.

People do this all the time, and are good at it.

Computers need predictability, however. Imagine if your Prolog code ran
differently on different systems.

Now granted, you can in fact have differences (i.e. different library
implementations, different random seeds), but if your code does not account
for them there are problems.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Ray Blaak
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <uof1g4706.fsf@STRIPCAPStelus.net>
·······@mediaone.net (Raffael Cavallaro) writes:
> "Karl A. Krueger" <········@example.edu> wrote in message news:<············@baldur.whoi.edu>...
> 
> > Clippy for programmers.
> > 
> >     "It looks like you're writing a conditional!  Do you want your
> >      program to:
> > 
> >         (*) Take the first branch which matches at all;
> >         ( ) Take the branch which matches closest, using Joe's
> >             Discount Fuzzy Logic and Bait Shop Algorithm, for an
> >             extra 50 cent surcharge per invocation;
> ... [more lame mockery snipped]
[excellent mockery, actually]
> And it is precisely this sort of superior attitude which will have
> "real programmers" scratching their heads and wondering what they did
> wrong when they have been made redundant by software that can ...
> write software.
> 
> What programmers do is, for the most part, really not all that
> special. Most of the hard work is in the libraries - that's why
> clearly inferior languages like Java do so well - because much of the
> work has already been done for you.

Violent disagreement here. Programmers are required to specify the precise
steps, simply because there is no one else able to. What programmers do may
not be special, and is even banal in the usual case, but it is crucially
necessary.

We, as a species, simply do not have the ability to make "smart" program
generators, except in very controlled circumstances that break down as soon
as any attempt is made to generalize.

> But what's even more telling is the truly frightened and visceral
> reaction to these ideas, even though any one with any sense of
> historical perspective can see that they are inevitable. Libraries
> will get more powerful; parsers and compilers will get smarter;
> surface syntax will become more natural-language-like; error and
> warning messages will become more helpful, and allow users to choose
> rewrites. The language to put these features together first, will
> become the language that most people write software in.

It is all very well to chide programmers' fears of becoming obsolete.

Unfortunately, outside of carefully prepared canned (read: programmed)
solutions that a user can essentially only just select, as soon as a user is
"rewriting" they need to be precise about things. That is, they need to
program. If they do not, unintended consequences from underspecified actions
result.

Now, that is not to say that situations cannot be found that are amenable to
automation. That happens all the time. Usually that translates into
programming at a higher level, instead of tediously at a low level.

However, at no time are things imprecise.

Computers are dumb. We do not know yet how to make them smart in general.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Raffael Cavallaro
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <aeb7ff58.0306041109.6a27d921@posting.google.com>
Ray Blaak <········@STRIPCAPStelus.net> wrote in message news:<·············@STRIPCAPStelus.net>...

> Unfortunately, outside of carefully prepared canned (read: programmed)
> solutions that a user can essentially only just select, as soon as a user is
> "rewriting" they need to be precise about things. That is, they need to
> program. If they do not, unintended consequences from underspecified actions
> result.

But this is the case only because the user is not allowed a chance to
disambiguate his vague instructions. With current computer languages,
you are missing the key facility that allows natural language to
function with ambiguity - conversation. If the listener isn't sure of
the precise meaning, she can ask for clarification.

I've repeated this about a half dozen times in this thread, and yet
people keep taking individual pieces and saying that, alone, they
won't work. No one has ever maintained that they could. But together,
they can, because these pieces already work together in natural
language *even when precision is required.*

Specifically:

1. Leverage ordinary users' facility with natural language grammar, by
using a syntax that conforms to the structure of natural language
grammars.

2. Disambiguate instructions that are insufficiently precise by
querying the user and providing a choice of possible alternatives.
Rewrite the program text in a more precise fashion to reflect the
user's choice.

3. Incorporate libraries for the functionality that ordinary users
will not have the expertise to write themselves. This will include
much of what software does today, in other words, all of the "solved
problems" of computer science.  Provide an interface to these
libraries that uses the natural-language-like syntax referred to in
no. 1 above.

Individually, each would fail to allow ordinary users to program. For
example, armed only with the libraries of no. 3, most people would
still make a hash of things. And, as you have pointed out, with just a
syntax that conformed to natural language patterns, users could still
write insufficiently precise statements.

However, the three together would allow ordinary users, with no real
skill in programming, to create much of the software that they need.
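Point 2 above is the mechanically interesting one. A toy sketch of such a
disambiguation step, in Python; the ambiguity table and candidate rewrites
are invented, and a real system would derive them from its parser:

```python
# Hypothetical ambiguity table: phrase -> possible precise rewrites.
AMBIGUOUS = {
    "sort the names": [
        "sort the names ascending, case-insensitive",
        "sort the names descending, case-insensitive",
        "sort the names ascending, case-sensitive",
    ],
}

def disambiguate(statement, choose):
    """Return a precise rewrite of `statement`.

    `choose` stands in for the user dialog: given a list of candidate
    rewrites, it returns the index the user picked."""
    candidates = AMBIGUOUS.get(statement)
    if candidates is None:
        return statement         # already precise (or unknown phrase)
    picked = choose(candidates)  # in a real system, prompt the user
    return candidates[picked]
```

The rewritten, precise form would then replace the ambiguous statement in
the program text, exactly as the proposal describes.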
From: Ray Blaak
Subject: helping non-programmers (was Re: Lisp-2 or Lisp-1)
Date: 
Message-ID: <u65nlauzs.fsf_-_@STRIPCAPStelus.net>
·······@mediaone.net (Raffael Cavallaro) writes:
> Ray Blaak <········@STRIPCAPStelus.net> wrote in message news:<·············@STRIPCAPStelus.net>...
> > [users need to program]
> 
> But this is the case only because the user is not allowed a chance to
> disambiguate his vague instructions. With current computer languages,
> you are missing the key facility that allows natural language to
> function with ambiguity - conversation. If the listener isn't sure of
> the precise meaning, she can ask for clarification.
[...]
> 2. Disambiguate instructions that are insufficiently precise by
> querying the user and providing a choice of possible alternatives.
> Rewrite the program text in a more precise fashion to reflect the
> user's choice.

And this is the crux. People can readily ask for clarification in a
conversation, since they know what they don't know, or at least can quickly
find out. Computers don't even know what to ask in general.

What can be done is to recognize particular situations and ask the user what
to do. However these situations must be very controlled in order to present
domain-specific questions that a non-programmer can readily understand.

Outside of canned scenarios prepared in advance, very quickly all a computer
can do is say "I don't know what to do with this".

It is not much of a conversation.

Formal systems, on the other hand, can quite readily indicate the
unproved/unknown portions of the current program text, but the kind and amount
of such information reported is beyond the ability of most mortals to deal
with, including many programmers. Furthermore, the detail needed to resolve
such situations is even more difficult than usual programming.

> 3. Incorporate libraries for the functionality that ordinary users
> will not have the expertise to write themselves. This will include
> much of what software does today, in other words, all of the "solved
> problems" of computer science.  Provide an interface to these
> libraries that uses the natural-language-like syntax referred to in
> no. 1 above.

Imagine such a library. Choosing the appropriate algorithm from its repertoire
for the problem at hand is not easy to do in general. By what criteria does one
decide? How does it match the intended results? What are the trade-offs to
consider?

In fact, I maintain that such a choosing process is exactly programming.

> However, the three together would allow ordinary users, with no real
> skill in programming, to create much of the software that they need.

Without question the situation can be improved for such users so that it is
easier for them to make computers do what they want.

My point, though, is that in general, it is not possible to eliminate the need
for programming ability, even in the presence of more natural syntax, language
environments that can ask for clarification, and large libraries of
preprogrammed functionality.

Very very quickly one steps out of the preconceived prepared boundaries of the
system, and then one is back to needing to think about consequences and how
things fit together.

The only way around this is to solve the AI problem for real, so that the
computer can have a proper conversation with the user, after which the
computer then properly executes (i.e. programs and implements) what was
requested.

But this is equivalent to a user sitting down to discuss requirements with a
human programmer, which is exactly what non-programmers do today when they
need a computer to do something for them.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Raffael Cavallaro
Subject: Re: helping non-programmers (was Re: Lisp-2 or Lisp-1)
Date: 
Message-ID: <aeb7ff58.0306051305.2a3d040@posting.google.com>
Ray Blaak <········@STRIPCAPStelus.net> wrote in message news:<················@STRIPCAPStelus.net>...



> And this is the crux. People can readily ask for clarification in a
> conversation, since they know what they don't know, or at least can quickly
> find out. Computers don't even know what to ask in general.

We're not asking for "general" ability to ask questions, just the
ability to present precise, canonical rewrites of user input where
that input is ambiguous. I.e., when user input can be interpreted in a
number of different ways, list those different ways and ask the user
which one she meant. Then rewrite the program text to reflect that
choice.

> 
> What can be done is to recognize particular situations and ask the user what
> to do. However these situations must be very controlled in order to present
> domain-specific questions that a non-programmer can readily understand.


No, they just have to be the possible precise interpretations of the
user's own input. Why the insistence that AI completeness is necessary
to present the possible parser interpretations of ambiguous input?

> 
> Outside of canned scenarios prepared in advance, very quickly all a computer
> can do is say "I don't know what to do with this".

No, it can say, "I won't know what to do with this until you tell me
which of these three possible interpretations you meant."

> 
> It is not much of a conversation.

Not if you unnecessarily insist that the parser be able to deal with
arbitrary natural language input, and not be allowed to present
choices, no. But then, I never suggested that, did I.


> Imagine such a library. Choosing the appropriate algorithm from its reportoire
> for the problem at hand is not easy to do in general. By what criteria does one
> decide? How does it match the intended results? What are the trade offs to
> consider?
> 
> In fact, I maintain that such a choosing process is exactly programming.

Call it whatever you like  if it makes you happier, but it would bear
very little resemblance to what most programmers do now:

1. The program statements would be in a syntax that ordinary users
would understand because of its conformance to natural language
grammar structure. (not true of any existing language)

2. The parser/compiler would present choices for disambiguation.
Ambiguous statements would *not* be errors, but would initiate a
dialog with the parser/compiler. (not true of any existing language)

3. The user would have access to a large library of functionality via
the same natural-language-like interface described in 1. (not true of
any existing language)



> Without question the situation can be improved for such users so that it is
> easier for them to make computers do what they want.

That's all I'm talking about.

> 
> My point, though, is that in general, it is not possible to eliminate the need
> for programming ability, even in the presence of more natural syntax, language
> environments that can ask for clarification, and large libraries of
> preprogrammed functionality.

But what you call "programming ability," when it is stripped of the
current need to know fairly intimately what is going on under the
covers, is just the ability to think and write precisely. This is an
ability that many people have. I'd guess that the number of such
people is an order of magnitude or more greater than the number of
reasonably proficient programmers.

This is not because the precise specification of instructions is so
difficult, but because current languages force the user to know
details about machine implementation of software that are semantically
orthogonal to the problem at hand. Much of this is because the choice of
under-the-cover details greatly affects performance. Most languages
now in use were conceived at a time when resources were much scarcer
than at present. Programmers had to know about these implementation
choices, or their programs would run out of resources.

This is much less so now, and, with pre-written libraries to handle
the many known cases which can be optimized, lay users would be able
to program without knowing such details. Their software wouldn't be
optimal, but it would get the job done. A program that produces the
needed output slowly, while using a lot of resources, is better than
no program at all, which is the choice most users face.

Computer science is now a fairly mature field. It's time that the
accumulated knowledge of the field were put at the disposal of a wider
user audience. 20 years ago, programmers decried the waste of
resources inherent in the Mac GUI. They simply missed the big picture
- that by wasting resources, computing could be made available to a
much larger group of users. The same thing is now true of programming.
I say, let's waste the resources, and give users the ability to write
their own software.
From: Ray Blaak
Subject: Re: helping non-programmers (was Re: Lisp-2 or Lisp-1)
Date: 
Message-ID: <u3cioyzyh.fsf@STRIPCAPStelus.net>
·······@mediaone.net (Raffael Cavallaro) writes:
> Ray Blaak <········@STRIPCAPStelus.net> wrote in message news:<················@STRIPCAPStelus.net>...
> > What can be done is to recognize particular situations and ask the user what
> > to do. However these situations must be very controlled in order to present
> > domain-specific questions that a non-programmer can readily understand.
>
> No, they just have to be the possible precise interpretations of the
> user's own input. Why the insistence that AI completeness is necessary
> to present the possible parser interpretations of ambiguous input?

Because giving possible precise interpretations of the user's own input is
*hard*.

> > Outside of canned scenarios prepared in advance, very quickly all a computer
> > can do is say "I don't know what to do with this".
> 
> No, it can say, "I won't know what to do with this until you tell me
> which of these three possible interpretations you meant."

Again, an explicit list of possible interpretations can be done in
preconceived situations. It can't be done in general.

So I assert. Can you point me to a working counterexample?

> > It is not much of a conversation.
> 
> Not if you unnecessarily insist that the parser be able to deal with
> arbitrary natural language input, and not be allowed to present
> choices, no. But then, I never suggested that, did I.

And neither did I. The problem is not natural language parsing per se
(although that too is hard). Rather, the problem is figuring out possible
precise alternatives for a user's imprecise specifications.

> But what you call "programming ability," when it is stripped of the
> current need to know fairly intimately what is going on under the
> covers, is just the ability to think and write precisely. This is an
> ability that many people have. I'd guess that the number of such
> people is an order of magnitude or more greater than the number of
> reasonably proficient programmers.

Agreed. The whole issue of domain-specific vs "pure" computer programming is
orthogonal to my point.

However, note that non-programmers are used to interacting with people, who are
quite good at dealing with imprecision. Programmers are used to dealing with
precision precisely because computers demand it.

Non-programmers can certainly learn to "program", and indeed have to when they
start using their computerized domain-specific tools.

> I say, let's waste the resources, and give users the ability to write
> their own software.

I support this goal. We have to have realistic expectations, however.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Raffael Cavallaro
Subject: Re: helping non-programmers (was Re: Lisp-2 or Lisp-1)
Date: 
Message-ID: <aeb7ff58.0306052024.158dae8a@posting.google.com>
Ray Blaak <········@STRIPCAPStelus.net> wrote in message news:<·············@STRIPCAPStelus.net>...
> 
> Because giving possible precise interpretations of the user's own input is
> *hard*.
> 

Ah, but "hard" is not the same as "impossible," - even when it is
enclosed in asterisks ;^)

> Again, an explicit list of possible interpretations can be done in
> preconceived situations. It can't be done in general.

I think you're misinterpreting what I intend. I don't mean to do
parsing of arbitrary natural language. Rather, to parse a rather
limited language - i.e., limited in the same way that existing
computer languages are limited - that is easier to learn and use
because its syntax conforms to the grammatical structure of natural
languages.

> The problem is not natural language parsing per se
> (although that too is hard). Rather, the problem is figuring out possible
> precise alternatives for a user's imprecise specifications.

This problem is less hard when the possible imprecise statements the
user can make are limited by the range of the language. There are many,
many more things one can say in English than in C. If you gave C an
English-like syntax, the range of things one could say in it would
still be far smaller than the range of possible English expressions.

> 
> However, note that non-programmers are used interacting with people, who are
> quite good at dealing with imprecision. Programmers are used to dealing with
> precision precisely because computers demand it.
> 

Yes, but I don't see this as a necessary fact of programming, but
rather an historical artifact of the former scarcity of resources
relative to today's abundance.

> Non-programmers can certainly learn to "program", and indeed have to when they
> start using their computerized domain-specific tools.
> 
> > I say, let's waste the resources, and give users the ability to write
> > their own software.
> 
> I support this goal. We have to have realistic expectations, however.

Agreed. I think that what I'm proposing is realistic because I'm *not*
advocating parsing arbitrary natural language expressions, but rather,
parsing the expressions of a programming language which has a
natural-language-like syntax to ease learning and use by
non-programmers.
From: Ray Blaak
Subject: Re: helping non-programmers (was Re: Lisp-2 or Lisp-1)
Date: 
Message-ID: <uadcvhci2.fsf@STRIPCAPStelus.net>
·······@mediaone.net (Raffael Cavallaro) writes:
> Ray Blaak <········@STRIPCAPStelus.net> wrote in message news:<·············@STRIPCAPStelus.net>...
> > 
> > Because giving possible precise interpretations of the user's own input is
> > *hard*.
> > 
> 
> Ah, but "hard" is not the same as "impossible," - even when it is
> enclosed in asterisks ;^)

True. We don't know that AI is impossible on von Neumann architectures, for
example. But we don't know how to do it at all, either.

> I think you're misinterpreting what I intend. I don't mean to do
> parsing of arbitrary natural language. Rather, to parse a rather
> limited language - i.e., limited in the same way that existing
> computer languages are limited - that is easier to learn and use
> because its syntax conforms to the grammatical structure of natural
> languages.

I am not talking about the parsing of an arbitrary natural language.  The
limited, "natural-language-like" programming language bit can be assumed to
be non-controversial from here on in.

Let's call it Lisp :-), it really doesn't matter.

> > The problem is not natural language parsing per se
> > (although that too is hard). Rather, the problem is figuring out possible
> > precise alternatives for a user's imprecise specifications.
> 
> This problem is less hard when the possible imprecise statements the
> user can make is limited by the range of the language. 

Assuming a reasonable language with conditionals, method calls, and simple
variables, the "hardness" is not related to the limited range (or not) of the
language.

The hardness comes about because it is unclear how to automate dealing with
the user's imprecision. 

What are the user's goals at a given point? Does the user specify them? Does
the computer infer them? If the goals are not known, how does the computer
know what alternatives to present to the user? Recall that the language is
domain-specific and allows imprecision.

Again, formal systems allow pretty much everything to be specified, but such
systems are the exact antithesis of "non-programmer friendly" systems. One has
to be much more precise than usual with such systems.

Consider a silly architect-specific language snippet:

  DESIGN A STANDARD BUILDING.

Imprecise as hell. How do we begin the dialog? Should it be:

  "DESIGN" is unclear. Do you mean:
  a) create and emit a blueprint?
  b) synthesize a synopsis for a winning entry in the next edition of
     "Buildings Now!"?

or should it be:

  "STANDARD" needs refinement. Do you mean:
  a) Standard in Canada for a set of median-income tenants?
  b) Standard in terms of appearance with the surrounding neighbourhood?
  c) Designed according to ARCH SPEC 123A-34-JK1-1?

or should it be:

  "BUILDING" is imprecise. Do you mean:
  a) an outhouse?
  b) a factory?
  c) a castle?

or should it be:

  How big should it be?
  What color?
  What materials?
  ...and so on, tediously and exhaustively...

etc.

The point of my example is to illustrate how such systems realistically can
only present preconceived alternatives. It is like an expert system, actually.
Such systems tend to be quite brittle and break down as soon as one steps
outside of narrowly defined constraints.
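The brittleness being described can be made concrete. A small sketch, with
an invented refinement table, of the canned-alternatives pattern: the
system answers helpfully inside its vocabulary and falls off the script
everywhere else:

```python
# Hypothetical refinement table for a tiny architect vocabulary.
REFINEMENTS = {
    "DESIGN":   ["create and emit a blueprint",
                 "synthesize a synopsis for 'Buildings Now!'"],
    "STANDARD": ["median-income standard",
                 "neighbourhood-appearance standard"],
    "BUILDING": ["outhouse", "factory", "castle"],
}

def clarify(term):
    """Return the canned alternatives for `term`, or admit defeat."""
    options = REFINEMENTS.get(term.upper())
    if options is None:
        return "I don't know what to do with this."  # off the script
    return options
```

Every question the system can ask was written in advance; the moment the
user mentions a term outside the table, the conversation is over.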

My overall reason for participating in this discussion has nothing to do with
natural language parsing. It is to fight against the idea that "programming
is obsolete". This is false today, and will be as long as computers are dumb.

"Domain-specific" has nothing to do with it. "Domain-specific" is isomorphic
to having a library available that consists of methods/algorithms related to a
specific domain. The programming problem is still present (or not) regardless.

Note also, that it is not the case that "real" programming requires the
programmer to be aware of low level implementation details. Certainly a good
programmer aspires to know everything about how computers and software
work. However, it is also the case that good programmers can readily switch
abstraction levels, viewing lower levels as a "black box".

Indeed, there are many competent Java/Lisp/Smalltalk programmers that have no
idea how their language constructs are implemented, and yet they still use
them productively. They are in a "domain" above the low-level implementation
details.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Nikodemus Siivola
Subject: Re: helping non-programmers (was Re: Lisp-2 or Lisp-1)
Date: 
Message-ID: <bbpflf$5rbso$2@midnight.cs.hut.fi>
Ray Blaak <········@stripcapstelus.net> wrote:

> My overall reason for participating in this discussion has nothing to do with
> natural language parsing. It is to fight against the idea that "programming
> is obsolete". This is false today, and will be as long as computers are dumb.

I definitely agree. The following does a nice job of spelling it out:

  http://www.htdp.org/2001-11-21/Book/node2.htm

To quote: "No one can predict what kind of application packages will exist five or ten years from 
now. But application packages will continue to require some form of programming. To prepare
students for these kinds of programming activities, schools can either force them to study
algebra, which is the mathematical foundation of programming, or expose them to some form of
programming."

Cheers,

  -- Nikodemus
From: Raffael Cavallaro
Subject: Re: helping non-programmers (was Re: Lisp-2 or Lisp-1)
Date: 
Message-ID: <raffaelcavallaro-D1F3BE.09411006062003@netnews.attbi.com>
In article <··············@midnight.cs.hut.fi>,
 Nikodemus Siivola <········@kekkonen.cs.hut.fi> wrote:

> To quote: "No one can predict what kind of application packages will exist 
> five or ten years from 
> now. But application packages will continue to require some form of 
> programming. To prepare
> students for these kinds of programming activities, schools can either force 
> them to study
> algebra, which is the mathematical foundation of programming, or expose them 
> to some form of
> programming."

And why shouldn't that "form of programming" be a user-friendly one,
which uses a syntax like that of natural languages, prompts for
clarification of user inputs, abstracts away low-level details, and
gives access to powerful libraries through the same user-friendly
natural-language-like interface?
From: Boris Smilga
Subject: Re: helping non-programmers (was Re: Lisp-2 or Lisp-1)
Date: 
Message-ID: <87adcnrlez.fsf@ganglion.bhasha.com>
Raffael Cavallaro <················@junk.mail.me.not.mac.com> writes:

> And why shouldn't that "form of programming" be a user-friendly one,
> which uses a syntax like that of natural languages, <...>

1. Because natural language is verbose and has loads of redundancy to
make communication over less than perfect channels more reliable. A
programmer in such a language would have to type many more keystrokes
than necessary.

2. Because any natural language is like a living organism. Having
grown over years and centuries, it evolved many fine irregularities.
If your hypothetical programming language is anything less than full
natural language, its user shall want to know its complete spec, lest
he be constantly bumping into misunderstanding on the part of the
compiler (it won't help much if this misunderstanding manifests itself
in the form of fancy back-questioning). And the more regular and
"designed" the spec, the easier it is to learn. Natural language is
not "designed", it rather "evolves" on itself.

Btw., the syntax of Basic --or, for that matter, AppleScript-- is much
more English-like than that of Lisp: there are more words in common
syntactic constructs, and less cryptic punctuation. Which one do you
prefer?

                                                          Yours
                                                             -Smilga
From: Pascal Bourguignon
Subject: Re: helping non-programmers (was Re: Lisp-2 or Lisp-1)
Date: 
Message-ID: <87smqed6vr.fsf@thalassa.informatimago.com>
Boris Smilga <smilga (AT) dca (DOT) net> writes:

> Raffael Cavallaro <················@junk.mail.me.not.mac.com> writes:
> 
> > And why shouldn't that "form of programming" be a user-friendly one,
> > which uses a syntax like that of natural languages, <...>
> 
> 1. Because natural language is verbose and has loads of redundancy to
> make communication over less than perfect channels more reliable. A
> programmer in such a language would have to type many more
> keystrokes than necessary.
> 
> 2. Because any natural language is like a living organism. Having
> grown over years and centuries, it evolved many fine irregularities.
> If your hypothetical programming language is anything less than full
> natural language, its user shall want to know its complete spec, lest
> he be constantly bumping into misunderstanding on the part of the
> compiler (it won't help much if this misunderstanding manifests itself
> in the form of fancy back-questioning). And the more regular and
> "designed" the spec, the easier it is to learn. Natural language is
> not "designed", it rather "evolves" on itself.
> 
> Btw., the syntax of Basic --or, for that matter, AppleScript-- is much
> more English-like than that of Lisp: there are more words in common
> syntactic constructs, and less cryptic punctuation. Which one do you
> prefer?

Less cryptic punctuation than in  what language?  Not lisp, there's no
punctuation in lisp.

    10 print "Hello","word";"!"
    20 print "on same line"; : print " or not?"

-- 
__Pascal_Bourguignon__                   http://www.informatimago.com/
----------------------------------------------------------------------
Do not adjust your mind, there is a fault in reality.
From: Boris Smilga
Subject: Re: helping non-programmers (was Re: Lisp-2 or Lisp-1)
Date: 
Message-ID: <87y906qwwc.fsf@ganglion.bhasha.com>
Pascal Bourguignon <····@thalassa.informatimago.com> writes:

> Less cryptic punctuation than in  what language?  Not lisp, there's no
> punctuation in lisp.

Well, Lisp does have punctuation, it's just that none of it is out of
place, unlike in most other languages.

>     10 print "Hello","word";"!"
>     20 print "on same line"; : print " or not?"

Touché. I've been rather thinking about something like

   Dim nCol As Integer, nChar As Integer

                                                           Yours
                                                              -Smilga
From: Greg Menke
Subject: Re: helping non-programmers (was Re: Lisp-2 or Lisp-1)
Date: 
Message-ID: <m3ptliyz5f.fsf@europa.pienet>
Boris Smilga <smilga AT dca DOT net> writes:

> Pascal Bourguignon <····@thalassa.informatimago.com> writes:
> 
> >     10 print "Hello","word";"!"
> >     20 print "on same line"; : print " or not?"
> 
> Touché. I was rather thinking of something like
> 
>    Dim nCol As Integer, nChar As Integer
> 

That's what he presented too.  The first example uses more
"classical" Basic.  I imagine it should run in VB, and certainly in QBasic.


Gregm
From: Raffael Cavallaro
Subject: Re: helping non-programmers (was Re: Lisp-2 or Lisp-1)
Date: 
Message-ID: <raffaelcavallaro-E7683D.09373306062003@netnews.attbi.com>
In article <·············@STRIPCAPStelus.net>,
 Ray Blaak <········@STRIPCAPStelus.net> wrote:

> or should it be:
> 
>   How big should it be?
>   What color?
>   What materials?
>   ...and so on, tediously and exhaustively...
> 
> etc.
> 
> The point of my example is to illustrate how such systems realistically can
> only present preconceived alternatives. It is like an expert system, actually.
> Such systems tend to be quite brittle and break down as soon as one steps
> outside of narrowly defined constraints.


Only if you assume that the parser/compiler's response cannot also be 
open ended.

This works, where all of your canned examples do not:

"I don't know exactly what you mean by a building. Please define this 
object, or class of objects precisely."

Then the user would have to enter the natural language syntax equivalent 
of a CLOS make-instance or defclass.
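
To make that concrete, here is roughly what such a dialog would have to
elicit from the user, expressed directly in CLOS (the class and slot
names below are purely illustrative):

```lisp
;; What "define this object, or class of objects, precisely"
;; bottoms out as: a class definition plus an instance.
(defclass building ()
  ((stories  :initarg :stories  :accessor building-stories)
   (color    :initarg :color    :accessor building-color)
   (material :initarg :material :accessor building-material)))

;; "A three-story red brick building."
(defparameter *my-building*
  (make-instance 'building :stories 3 :color :red :material :brick))
```

The natural-language front end would only be sugar over forms like
these; each question it asks maps onto a missing slot or initarg.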

> 
> My overall reason for participating in this discussion has nothing to do with
> natural language parsing. It is to fight against the idea that "programming
> is obsolete". This is false today, and will be as long as computers are dumb.
> 

And I'm not claiming that programming is obsolete. Just that it could be 
accessible to a much larger group of people. With users doing their own 
software for many tasks, this would leave much harder (and, IMHO, much 
more interesting) problems for real programmers, such as developing new 
capabilities which would eventually be incorporated into the kinds of 
canned libraries that ordinary users would work with.


> Note also, that it is not the case that "real" programming requires the
> programmer to be aware of low level implementation details. Certainly a good
> programmer aspires to know everything about how computers and software
> work. However, it is also the case that good programmers can readily switch
> abstraction levels, viewing lower levels as a "black box".

But even with a language as high level as Common Lisp, if you are 
concerned with performance, you have to deal with low level details at 
some point - look at the current thread on destructive operators.
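
(For anyone who missed that thread: the destructive operators in
question trade safety for speed, e.g. REVERSE versus NREVERSE, and using
them correctly does force you to think about representation. A minimal
sketch:)

```lisp
(let ((xs (list 1 2 3)))
  ;; REVERSE allocates a fresh list; XS is untouched.
  (print (reverse xs))     ; => (3 2 1), XS still (1 2 3)
  ;; NREVERSE is permitted to recycle XS's cons cells, avoiding
  ;; the allocation -- but XS must then be treated as destroyed.
  (print (nreverse xs)))   ; => (3 2 1)
```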

> Indeed, there are many competent Java/Lisp/Smalltalk programmers that have no
> idea how their language constructs are implemented, and yet they still use
> them productively. They are in a "domain" above the low-level implementation
> details.

You're arguing *my* point here - some programmers *already* ignore low 
level details, and live at a higher level of abstraction, which is why I 
think it is possible to let ordinary users live there all the time, and 
still craft functional software - it just won't be very efficient, just 
as the Java/Lisp/Smalltalk programs of a naive programmer who knows 
nothing about low level details are not very efficient.

In addition, I think users can live at an even higher level. In addition 
to ignoring low level stuff, they could enter program statements in a 
syntax that follows the structure of natural language grammars. They 
could have unclear statements rewritten by choosing alternatives, or 
being prompted as to what needs precise clarification. They could have 
access to the same scope of libraries now available to Java programmers, 
or larger, but with the above mentioned natural-language-like syntax 
interface.

This is *not* an AI complete problem by any means. It is simply putting 
a friendlier, interactive, face on a high level language, such as Common 
Lisp, with lots of powerful libraries.
From: Ray Blaak
Subject: Re: helping non-programmers (was Re: Lisp-2 or Lisp-1)
Date: 
Message-ID: <ullwfp45r.fsf@STRIPCAPStelus.net>
Raffael Cavallaro <················@junk.mail.me.not.mac.com> writes:
> This works, where all of your canned examples do not:
> 
> "I don't know exactly what you mean by a building. Please define this 
> object, or class of objects precisely."

Exactly. This is the same as saying "I don't know what to do with this". I
predict that in far too many cases this would be the standard response.

> Then the user would have to enter the natural language syntax equivalent 
> of a CLOS make-instance or defclass.

And how is this different from "regular" programming? Isn't it easier to just
write such code in the first place?

> And I'm not claiming that programming is obsolete. Just that it could be 
> accessible to a much larger group of people. With users doing their own 
> software for many tasks, this would leave much harder (and, IMHO, much 
> more interesting) problems for real programmers, such as developing new 
> capabilities which would eventually be incorporated into the kinds of 
> canned libraries that ordinary users would work with.

And I have no problem with that.

> In addition, I think users can live at an even higher level. In addition 
> to ignoring low level stuff, they could enter program statements in a 
> syntax that follows the structure of natural language grammars. They 
> could have unclear statements rewritten by choosing alternatives, or 
> being prompted as to what needs precise clarification. They could have 
> access to the same scope of libraries now available to Java programmers, 
> or larger, but with the above mentioned natural-language-like syntax 
> interface.

Working at a higher abstraction level is what will give the most results. The
prompting, the interactive dialog is what I think will not be workable in
practice, since I believe that it will quickly become too tedious and annoying
without being useful enough.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Raffael Cavallaro
Subject: Re: helping non-programmers (was Re: Lisp-2 or Lisp-1)
Date: 
Message-ID: <raffaelcavallaro-4862C6.18290606062003@netnews.attbi.com>
In article <·············@STRIPCAPStelus.net>,
 Ray Blaak <········@STRIPCAPStelus.net> wrote:



> > Then the user would have to enter the natural language syntax equivalent 
> > of a CLOS make-instance or defclass.
> 
> And how is this different from "regular" programming? Isn't it easier to just
> write such code in the first place?

Not for naive users. I think you, and many programmers like you, just 
don't get how profoundly offputting the syntax of computer languages is 
to non-programmers. I think this blind spot is precisely parallel to CLI 
jockeys' failure to understand the appeal of GUIs - after all, a CLI is 
more powerful, right? To most people, however, ease of use is much, much 
more important than power.

Even if the *only* change were syntactic, it would still be a huge win 
for most users, since they simply won't go near anything that *looks* 
unlike natural language. People who wouldn't go near a CLI, even for 
simple tasks, will cheerfully use a GUI which does *exactly the same 
tasks*. Why? Because ease of use means putting the user at ease. For 
syntax, that means "like natural language."

Moreover, I'm proposing more user-friendly feedback. Not simply "parse 
error at line 24 of source file youRfsked.c," but specific guides to 
what exactly wasn't understood, possible rewrites, and suggestions as to 
what the user needs to specify, and how.

Add to this a collection of libraries on the scale of those available to 
Java programmers, and programming becomes something which, for many 
tasks, is available to many more users.


> Working at a higher abstraction level is what will give the most results.

For *you*, because you already know how to program, and are comfortable 
with the sort of mental gymnastics necessary to read and write the 
unnatural grammars of computer languages. This is simply untrue of the 
overwhelming majority of users.

> The
> prompting, the interactive dialog is what I think will not be workable in
> practice, since I believe that it will quickly become too tedious and annoying
> without being useful enough.

It would be tedious for you, because you are fluent in the more concise, 
but unnatural grammars of computer languages. However, this interaction 
would be all that naive users would ever know of programming, and it 
would allow them to program where they could not do so before. I don't 
think they would think of it as any more tedious than many other tasks 
they do, and if it were even a bit less tedious than performing the task 
at hand manually, without the aid of custom software, then it would be a 
win. Even more so if the task needs to be performed repeatedly.
From: Ray Blaak
Subject: Re: helping non-programmers (was Re: Lisp-2 or Lisp-1)
Date: 
Message-ID: <uel26pwa2.fsf@STRIPCAPStelus.net>
Raffael Cavallaro <················@junk.mail.me.not.mac.com> writes:
> > The prompting, the interactive dialog is what I think will not be workable
> > in practice, since I believe that it will quickly become too tedious and 
> > annoying without being useful enough.
> 
> It would be tedious for you, because you are fluent in the more concise, 
> but unnatural grammars of computer languages. 

I disagree. I am already assuming a better, more natural grammar.

The reason I think it would be tedious for everyone is because, as I have been
previously trying to show, the computer will not be able to give useful
interactive feedback to the user.

I have tried to convey that all too often the computer will essentially give
the feedback of "eh?", which is not useful. Since it is not useful, and the
user (non-programmer or not) is then so often forced to specify things without
useful help, why not skip the slow step?

It would only be useful if good feedback could occur in a practically large
number of cases (i.e. the canned knowledge was good enough).

Maybe this could be user configurable (beginner, expert, etc.).

Real knowledge in this area can only come about by experimentation, however.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Raffael Cavallaro
Subject: Re: helping non-programmers (was Re: Lisp-2 or Lisp-1)
Date: 
Message-ID: <raffaelcavallaro-434305.22302406062003@netnews.attbi.com>
In article <·············@STRIPCAPStelus.net>,
 Ray Blaak <········@STRIPCAPStelus.net> wrote:

> I have tried to convey that all too often the computer will essentially give
> the feedback of "eh?", which is not useful. Since it is not useful, and the
> user (non-programmer or not) is then so often forced to specify things without
> useful help, why not skip the slow step?


Even hand-holding during the process of class definition or instance 
creation would be useful feedback for a non-programmer. One needs to be 
aware of the target audience here - it is not programmers, but ordinary 
users. I'm sure that a decent compiler could provide much more useful 
feedback than just walking users through common, boilerplate tasks.

> 
> It would only be useful if good feedback could occur in a practically large
> number of cases (i.e. the canned knowledge was good enough).


The feedback would be useful for pretty much all cases, since all it 
would need to do is look for undefined nouns (classes or instances), 
verbs (methods), adjectives (either slots, or subclass specifiers), and 
adverbs (method keywords), and walk the user through defining them. The 
user could elect to defer these tasks, but the program wouldn't run 
until all the deferred tasks were completed.
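
That grammatical mapping can be sketched directly in CLOS terms (all the
names below are hypothetical, chosen only to show the correspondence):

```lisp
;; noun -> class, verb -> generic function, adverb -> keyword argument
(defclass report () ())          ; "the report"
(defclass sales-data () ())      ; "the sales data"

(defgeneric summarize (document data &key style))

;; "The report summarizes the sales data briefly."
(defmethod summarize ((doc report) (data sales-data) &key (style :brief))
  (format nil "summary (~(~a~))" style))

;; (summarize (make-instance 'report) (make-instance 'sales-data)
;;            :style :brief)
```

Each undefined word the walker finds corresponds to one of these forms
still waiting to be filled in.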

> 
> Maybe this could be user configurable (beginner, expert, etc.).

That certainly seems reasonable - different users would need more or 
less hand-holding.

> 
> Real knowledge in this area can only come about by experimentation, however.

True. I've already had email queries as to what I've done so far along 
these lines, so the proof of the pudding, and all that...

Unfortunately, I'll have to end this rather engaging discussion for a 
while - I'm leaving on a trip and won't have access to Usenet for the 
next couple of weeks. Thanks for your many replies - they've helped me 
clarify my thoughts.

regards,

Raf
From: Daniel Barlow
Subject: Re: helping non-programmers
Date: 
Message-ID: <87r866903a.fsf@noetbook.telent.net>
Raffael Cavallaro <················@junk.mail.me.not.mac.com> writes:

> Even if the *only* change were syntactic, it would still be a huge win 
> for most users, since they simply won't go near anything that *looks* 
> unlike natural language. People who wouldn't go near a CLI, even for 
> simple tasks, will cheerfully use a GUI which does *exactly the same 
> tasks*. Why? Because ease of use means putting the user at ease. For 
> syntax, that means "like natural language."

I'm unconvinced, based on my experience of search engines.  Google and
Ask Jeeves go to incredible lengths to produce meaningful results for
natural-language queries, and all I ever see in my referrer logs are
keywords.  

For that matter, remember Infocom text adventures?  Sure, you could type

> get the small yellow biscuit then put it on the sideboard

but anyone I ever watched play these things for more than ten minutes
would instead do

> get biscuit
Which biscuit?  The small yellow biscuit, or the large chocolate
biscuit?
> yellow
You get the small yellow biscuit
> put it on sideboard
You put the small yellow biscuit on the sideboard

Even if you go to some fairly serious lengths to understand the human,
the chances are they'll end up talking in pidgin anyway.


-dan

-- 

   http://www.cliki.net/ - Link farm for free CL-on-Unix resources 
From: Kenny Tilton
Subject: Re: helping non-programmers
Date: 
Message-ID: <3EE1F23C.3060005@nyc.rr.com>
Daniel Barlow wrote:
> Raffael Cavallaro <················@junk.mail.me.not.mac.com> writes:
> 
> 
>>Even if the *only* change were syntactic, it would still be a huge win 
>>for most users, since they simply won't go near anything that *looks* 
>>unlike natural language. People who wouldn't go near a CLI, even for 
>>simple tasks, will cheerfully use a GUI which does *exactly the same 
>>tasks*. Why? Because ease of use means putting the user at ease. For 
>>syntax, that means "like natural language."
> 
> 
> I'm unconvinced, based on my experience of search engines.  Google and
> Ask Jeeves go to incredible lengths to produce meaningful results for
> natural-language queries, and all I ever see in my referrer logs are
> keywords.  
> 
> For that matter, remember Infocom text adventures?  Sure, you could type
> 
> 
>>get the small yellow biscuit then put it on the sideboard
> 
> 
> but anyone I ever watched play these things for more than ten minutes
> would instead do
> 
> 
>>get biscuit
> 
> Which biscuit?  The small yellow biscuit, or the large chocolate
> biscuit?
> 
>>yellow
> 
> You get the small yellow biscuit
> 
>>put it on sideboard
> 
> You put the small yellow biscuit on the sideboard
> 
> Even if you go to some fairly serious lengths to understand the human,
> the chances are they'll end up talking in pidgin anyway.

you remind me of the fact that GUIs also come with keyboard shortcuts, 
which most heavy users find their way to in short order. pulldown menus 
serve then as (useful) documentation. and it's no good saying someone 
programming in a natural HLL won't be a heavy user; development is not 
like walking up to a toaster once a day to take care of that pop tart.

i have not followed this whole thing very closely, but has LOOP been 
discussed? was that not an exercise in NL syntax -- granted, not with 
compiler prompting. Or does that fall short because you still have to 
hit the right keywords at the right time with the right arguments in the 
right combination?
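
LOOP is probably the standard example; its keyword syntax reads close to
an English statement of the iteration, though, as you say, the keywords
still have to come in the right order:

```lisp
;; "For each i from 1 below 10, when i is even sum it into evens,
;;  else collect it into odds; finally return both."
(loop for i from 1 below 10
      when (evenp i) sum i into evens
      else collect i into odds
      finally (return (values evens odds)))
;; => 20, (1 3 5 7 9)
```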


-- 

  kenny tilton
  clinisys, inc
  http://www.tilton-technology.com/
  ---------------------------------------------------------------
"Everything is a cell." -- Alan Kay
From: Michael Sullivan
Subject: Re: helping non-programmers (was Re: Lisp-2 or Lisp-1)
Date: 
Message-ID: <1fwb1gh.e4p1x918h62psN%mes@panix.com>
Raffael Cavallaro <················@junk.mail.me.not.mac.com> wrote:
> In article <·············@STRIPCAPStelus.net>,
>  Ray Blaak <········@STRIPCAPStelus.net> wrote:

> > > Then the user would have to enter the natural language syntax equivalent
> > > of a CLOS make-instance or defclass.

> > And how is this different from "regular" programming? Isn't it easier to just
> > write such code in the first place?

> Not for naive users. I think you, and many programmers like you, just
> don't get how profoundly offputting the syntax of computer languages is
> to non-programmers. I think this blind spot is precisely parallel to CLI
> jockeys' failure to understand the appeal of GUIs - after all, a CLI is
> more powerful, right? To most people, however, ease of use is much, much
> more important than power.

I agree with this, I'm just not sure "natural language" is the right
direction.

The beautiful, huge advantage that GUIs have, and it's an advantage for
*anyone* who is not intimately familiar with a program, is that GUIs
tend to be largely self-documenting.  If you're not sure what some
command is, you look through the menus until you find something useful
to try, and all the while, you can still very easily see what you are
working on, or if it gets covered, it's trivial and obvious (and
generally -- self documenting) how to go back.  You can get through lots
of levels of lookup without losing track of your original place.  

Now, I *hate* GUIs that don't offer good keyboard or script/command
based control options, but I would never prefer a command line tool over
an equivalently powerful GUI tool, for this reason, unless the GUI is so
badly designed as to get in my way, or slow things interminably even on
modern hardware.  The problems with GUIs, for me, are when they are
*not* equivalently powerful/easy for advanced users as their command
line counterparts.  Which is all too often in practice, but not true in
principle.

What I think novice programmers could use is the IDE equivalent of a
good GUI tool vs. a CLI tool.  Programming in most general purpose
languages still feels a lot like using a CLI.  Imagine an IDE where you
can do stuff like double-click on a function/macro to bring up a window
with its definition/expansion, or drag an object or property list to
some kind of inspector window to get a graphical display of its contents
and structure.   This is the kind of thing you can do in some database
programming environments, sometimes even being able to program purely in
GUI mode (Helix, for instance).  If there was enough work done on a
really *good* graphical environment, you might be able to make something
like this for general purpose languages that really would make it easier
to learn to program.


Michael
From: Michael Sullivan
Subject: Re: helping non-programmers
Date: 
Message-ID: <1fw4z5w.ex4w2l1swe41uN%michael@bcect.com>
Raffael Cavallaro <················@junk.mail.me.not.mac.com> wrote:

> But even with a language as high level as Common Lisp, if you are 
> concerned with performance, you have to deal with low level details at
> some point - look at the current thread on destructive operators.

This means, AFAICT, that your system's handling of low-level bookkeeping
details isn't significantly different from that of various high-level
languages now.

If I choose to program in CL, or just about any popular HLL, I can
freely pay no attention whatsoever to low-level details (as long as the
structures I need are supported by language/libraries) and focus purely
on operational semantics.  My programs will often be bloated and
bog-slow when I am done, whenever my random walk down data-structure
alley has produced a few Schlemiel the Painter algorithms in key spots.
But it seems to me that (modulo a programmer AI), this is exactly what
is going to happen under your system, except now, the user doesn't have
any choice, unless this system translates into some standard language
that can then be edited at the raw language level.

I can tell you as an AppleScript writer and follower of AS mailing
lists, that a fair bit of time gets spent on hacking around speed
problems, and discussing the fastest way to do certain kinds of
operations.  And begging of the local "real programmers" to write
osaxen/apps, or Apple to update the implementation for things where no
good solution is found. 

So, I'm not sure you can completely eliminate low-level concerns, any
more than many current languages already do.  In the ideal, you would
make it possible to ignore them generally (a la CL), but not impossible
to wallow in them where necessary.

> You're arguing *my* point here - some programmers *already* ignore low
> level details, and live at a higher level of abstraction, which is why I
> think it is possible to let ordinary users live there all the time, and
> still craft functional software - it just won't be very efficient, just
> as the Java/Lisp/Smalltalk programs of a naive programmer who knows 
> nothing about low level details are not very efficient.

But if you can ignore these things already in current HLLs, why do you
need a new language to do that?  And why would you want to take away the
*option* of fiddling with those details for when optimization is
critical?

> In addition, I think users can live at an even higher level. In addition
> to ignoring low level stuff, they could enter program statements in a
> syntax that follows the structure of natural language grammars. They 
> could have unclear statements rewritten by choosing alternatives, or 
> being prompted as to what needs precise clarification. 

I understand what you're saying here, but I really think it won't make
that much difference.  IME, various Basic-style languages are no more
difficult to read for a novice/non-programmer than AppleScript, which is
certainly in the direction you're discussing.  

IME, the thing that makes lisp programs harder to understand than these
is *not* the syntax, but the more powerful things that are being done!

If anything, I've seen lots of well written lisp programs (especially
scheme, which has simpler typography) that read almost like pseudocode.

It's hard to imagine a less intimidating syntax than pseudocode.

> This is *not* an AI complete problem by any means. It is simply putting
> a friendlier, interactive, face on a high level language, such as Common
> Lisp, with lots of powerful libraries.

I really believe that the natural language syntax is the least important
of your three keys, and that it is the hardest to implement.  

I really believe that some version of scheme or lisp, with an online
well-designed Roget's thesaurus of function space, a lot of key standard
libraries, and an FFI standard that's as easy to grok as the apple event
object model would do a lot.


Michael
From: Daniel Barlow
Subject: Re: helping non-programmers
Date: 
Message-ID: <87y90f9wwa.fsf@noetbook.telent.net>
Ray Blaak <········@STRIPCAPStelus.net> writes:

> Consider a silly architect-specific language snippet:
>
>   DESIGN A STANDARD BUILDING.
>
> Imprecise as hell. How do we begin the dialog? Should it be:

You wouldn't say that to a person either, unless you had sufficient
shared experience to create a context which disambiguates it at least
partly.  Even in an architect's office, you wouldn't say that to the
new hire on his first day - and if you did, it would be quite
reasonable for him to come back with all the questions you list below. 

Now, if you were in a room with an interviewee, you gave him a
whiteboard marker and you said "DESIGN A STANDARD BUILDING", you would
expect him to start with assumptions derived from the local context
and question each of those in turn.  So,

(i am holding a whiteboard marker) =>
    "Do you want me to draw it on the whiteboard?"
(i only have one colour of marker) =>
    "Do you want me to specify colours etc?  I only have the one pen"
(we are in the downtown SF area) =>
    "Standard for downtown SF?"

and then onto the questions about which nothing can be inferred
locally.  But this sounds like a hard question for a human, never mind
for a computer, and outside of a situation where they're forced to
answer you could expect "Eh?" as a typical response.

I think that a computer that could respond to "design a building like
the one you saw yesterday, but with two bathrooms" would be a laudable
goal. 

> My overall reason for participating in this discussion has nothing to do with
> natural language parsing. It is to fight against the idea that "programming
> is obsolete". This is false today, and will be as long as computers are dumb.

I agree with that, but I don't think that was even the point being
argued (I may be wrong, I skimmed a lot of the thread).  To solve the
problem in general is somewhere between very hard and impossible, but
there's some easier cases to be picked off first that would even be
useful.  For example, if the SQL-like-language query

SELECT * FROM EMPLOYEE WHERE GRADE=7 OR GRADE=8 AND NEXT_REVIEW_DATE<NOW

were given, the response would be "all of the grade 7 employees or
just the ones due for review?".  

(Or perhaps it already knows you're trying to contact all the people
likely to be grade 7 next month, and doesn't need to ask)


-dan

-- 

   http://www.cliki.net/ - Link farm for free CL-on-Unix resources 
From: Ray Blaak
Subject: Re: helping non-programmers
Date: 
Message-ID: <uisrjp3th.fsf@STRIPCAPStelus.net>
Daniel Barlow <···@telent.net> writes:
> Now, if you were in a room with an interviewee, you gave him a
> whiteboard marker and you said "DESIGN A STANDARD BUILDING", you would
> expect him to start with assumptions derived from the local context
> and question each of those in turn.  

A person, though, can guess the assumptions much better than a computer can,
and can at least make headway. At the very least, a person can bitch and
complain and justifiably call the interviewer a moron.

A computer would just stop.

> I think that a computer that could respond to "design a building like
> the one you saw yesterday, but with two bathrooms" would be a laudable
> goal. 

Even there, imprecision quickly becomes difficult. Where do the bathrooms go?
What color should they be? Should they have bathtubs or toilets only?

Are these the only canned questions possible? Does the computer generate them
from a built in knowledgebase of possible bathroom attributes?

These kinds of problems quickly become difficult outside of constrained
situations.

> SELECT * FROM EMPLOYEE WHERE GRADE=7 OR GRADE=8 AND NEXT_REVIEW_DATE<NOW

Now here, all the pieces are precisely defined.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Boris Schaefer
Subject: Re: helping non-programmers (was Re: Lisp-2 or Lisp-1)
Date: 
Message-ID: <87el28be51.fsf@qiwi.uncommon-sense.net>
* Ray Blaak <········@STRIPCAPStelus.net> wrote:
| 
| Imagine such a library. Choosing the appropriate algorithm from its
| reportoire for the problem at hand is not easy to do in general. By
| what criteria does one decide? How does it match the intended
| results? What are the trade offs to consider?
| 
| In fact, I maintain that such a choosing process is exactly
| programming.

I don't agree with Raffael, and I'm fairly sure he can defend himself,
but I think you're viewing his post out of context.  In other posts he
said that the users would be `domain experts'.  I think he believes
that domain experts will be able to choose appropriate  algorithms
without being programmers.

Is that what you're saying, Raffael?

Anyway, I basically agree with what you say in the following paragraph.

* Ray Blaak wrote:
| 
| My point, though, is that in general, it is not possible to
| eliminate the need for programming ability, even in the presence of
| more natural syntax, language environments that can ask for
| clarification, and large libraries of preprogrammed functionality.
| 
| Very very quickly one steps out of the preconceived prepared
| boundaries of the system, and then one is back to needing to think
| about consequences and how things fit together.

I believe this might even increase the need for programming ability,
because users will want to do things that they aren't doing now,
because they think it's impossible.  Remember:

  Given a large file of data about anything, people like to ask
  arbitrarily complicated questions.
                  -- D.E. Knuth

Boris

-- 
·····@uncommon-sense.net - <http://www.uncommon-sense.net/>

The farther you go, the less you know.
		-- Lao Tsu, "Tao Te Ching"
From: Ray Blaak
Subject: Re: helping non-programmers (was Re: Lisp-2 or Lisp-1)
Date: 
Message-ID: <u65nkzafs.fsf@STRIPCAPStelus.net>
Boris Schaefer <·····@uncommon-sense.net> writes:
> * Ray Blaak <········@STRIPCAPStelus.net> wrote:
> | 
> | Imagine such a library. Choosing the appropriate algorithm from its
> | repertoire for the problem at hand is not easy to do in general. By
> | what criteria does one decide? How does it match the intended
> | results? What are the trade offs to consider?
> | 
> | In fact, I maintain that such a choosing process is exactly
> | programming.
> 
> I don't agree with Raffael, and I'm fairly sure he can defend himself,
> but I think you're viewing his post out of context.  In other posts he
> said that the users would be `domain experts'.  I think he believes
> that domain experts will be able to choose appropriate  algorithms
> without being programmers.

Being a domain expert choosing domain-specific solutions doesn't fundamentally
change anything.

The choosing process can get arbitrarily complicated, and that is in essence
programming (i.e. thinking about consequences, what should happen next).

My disagreement is with the idea that the computer can *in general* have a
dialog with the user guiding them to the correct (domain-specific) solution.

The computer can do this in controlled, preconceived situations. This can even
be helpful in practice. The computer cannot do this in general, for that is
the AI problem.

I would love to be proven wrong, of course.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Michael Sullivan
Subject: Re: helping non-programmers
Date: 
Message-ID: <1fw35m1.1mbmiv512szeezN%michael@bcect.com>
Ray Blaak <········@STRIPCAPStelus.net> wrote:

> Being a domain expert choosing domain-specific solutions doesn't fundamentally
> change anything.
 
> The choosing process can get arbitrarily complicated, and that is in essence
> programming (i.e. thinking about consequences, what should happen next).

Note here that I agree with you, that natural language is not
necessarily the answer to this problem.

But I think domain experts *do* often program in this essential sense
now, using domain-specific software, or in writing clear specifications
for untrained users.  People who would never try to write an
application from scratch nonetheless do programming when the tools make
it fairly easy (or they had to learn them anyway), and there are not
lots of arbitrary, non-domain-specific housekeeping things to worry about.

A lot can be done to speed the learning process of writing software for
existing domain experts who have a bent, but are not software experts.
I think where you and I fall short of agreeing with Raffael is where he
seems to think that this process can resemble conversation with a human,
short of writing a real AI, or at least that working in that direction
could lead us to something greatly more helpful than what's available
now, long before real AI or human-level natural language processing
is possible.  I just don't think that's true.  

I think it could get to something that gets people over the hump and
gets them started writing certain canned functions.  But I fear that,
short of true NLP, it will be dumbed down enough not only to turn off
programmers, but also those domain experts who begin to work it
constructively, just as most competent secretaries found themselves
cursing Clippit until they figured out how to turn it off.

Seriously to Raffael, Karl's Clippit crack was on target -- we have a
great deal of trouble writing good "conversational" software to do
things like aid in the set up of a word-processing document.  What makes
you think that something so much more general and powerful (and
non-domain specific) is doable without close to fluent NLP?

Have you ever tried to give complicated precise instructions or hash out
details with someone where there is no language you both speak very
well?  I've certainly encountered all sorts of problems with people I'd
call "basically fluent" but who don't share a firm near-native command
of language semantics and idiom (for instance, notably less command than
the typical European Usenet poster for whom English is a second/third
language).  Computers aren't even going to be as good as those people at
having this kind of conversation.


Michael
From: Raffael Cavallaro
Subject: Re: helping non-programmers
Date: 
Message-ID: <raffaelcavallaro-D640D3.00431206062003@netnews.attbi.com>
In article <·······························@bcect.com>,
 ·······@bcect.com (Michael Sullivan) wrote:

> Have you ever tried to give complicated precise instructions or hash out
> details with someone where there is no language you both speak very
> well? 
 [snip]
> Computers aren't even going to be as good as those people at
> having this kind of conversation.

I submit that they are going to be as good, even better, when the 
language is constrained in the same way that existing computer languages 
are. In other words, the parser/compiler would have a much smaller space 
to search for possible meanings than it would with a real natural 
language because the range of possible expressions would be so much 
smaller.

What keeps non-programmers from programming?

I see it as threefold, listed in order of aversion to naive users:

1. Unfamiliar syntax.
2. Unwillingness to learn and deal with low level book keeping issues 
(data representations, etc.)
3. Lack of expertise in existing solutions (algorithms, protocols, etc.)

Note that from a programming perspective, that which most naive users 
find most off-putting (the syntax) is actually the least of their 
problems.  They're actually much more profoundly incompetent at other 
aspects of programming; they just know so little that they don't know 
how much they don't know.

So, provide powerful libraries (3) in a language that hides the low 
level details (2) in a syntax that follows the pattern of natural 
language. Note that this is *not* the same thing at all as parsing real 
natural language.
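Raffael's constrained-syntax point can be made concrete with a toy sketch (mine, not anything proposed in the thread; the verbs, objects, and action names are all invented).  Because the vocabulary is small and fixed, the "parser" searches a tiny space -- which is exactly how this differs from parsing real natural language:

```python
import re

# Hypothetical verb/object vocabulary mapped to hypothetical library
# actions. A real system would bind these to actual library calls.
GRAMMAR = {
    ("rename", "files"): "batch_rename",
    ("send", "invoice"): "send_invoice",
    ("copy", "letter"): "copy_document",
}

def parse_command(text):
    """Map a constrained English-like command to a library action.

    The verb and object vocabularies are small and closed, so the
    parser only has to look up a pair in a table; no full NLP is
    needed, and anything outside the grammar is simply rejected.
    """
    words = re.findall(r"[a-z]+", text.lower())
    if len(words) < 2:
        return None
    action = GRAMMAR.get((words[0], words[1]))
    if action is None:
        return None
    return {"action": action, "args": words[2:]}

print(parse_command("Rename files to backup"))
# -> {'action': 'batch_rename', 'args': ['to', 'backup']}
```

Anything the table does not cover returns `None` rather than being guessed at, which is the "smaller space to search" in miniature.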
From: Thien-Thi Nguyen
Subject: Re: helping non-programmers
Date: 
Message-ID: <7gznkvlchg.fsf@gnufans.net>
Raffael Cavallaro <················@junk.mail.me.not.mac.com> writes:

> the parser/compiler would have a much smaller space 
> to search for possible meanings than it would with a real natural 
> language because the range of possible expressions would be so much 
> smaller.

meaning arises not only from that which is sought but from how it is
sought.  the range of "how" is the art of programming, difficult to
automate.

> in a syntax that follows the pattern of natural language. Note
> that this is *not* the same thing at all as parsing real natural
> language.

human: [rambling] "kind of like" [more rambling] "know what i mean?"
computer: sorry, no, please find a human translator and pay that
          person to write code that i understand, thanks, ps: may
          i suggest the wonderful programmer who wrote me?        

if you stipulate the human must avoid the "know what i mean?"
question, that human will probably not be as expert in that domain
as you would like.  (an expert understands expertise to be exactly
the requirement of asking that question, even when "programming".)

thi
From: Raffael Cavallaro
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <aeb7ff58.0305302136.642133ee@posting.google.com>
"Karl A. Krueger" <········@example.edu> wrote in message news:<············@baldur.whoi.edu>...

> Clippy for programmers.
> 
>       "It looks like you're writing a conditional!  Do you want your
>        program to:
> 
>               (*) Take the first branch which matches at all;
>               ( ) Take the branch which matches closest, using Joe's
>                   Discount Fuzzy Logic and Bait Shop Algorithm, for an
>                   extra 50 cent surcharge per invocation;
... [more lame mockery snipped]

And it is precisely this sort of superior attitude which will have
"real programmers" scratching their heads and wondering what they did
wrong when they have been made redundant by software that can ...
write software.

What programmers do is, for the most part, really not all that
special. Most of the hard work is in the libraries - that's why
clearly inferior languages like Java do so well - because much of the
work has already been done for you.

Make these sorts of libraries accessible to ordinary users, and you
don't even need Java programmers for most tasks.

So let's keep programming languages as arcane as possible - that way,
we ensure that "real programmers" will always have a job.

For the record, I never suggested fuzzy logic as a means of
disambiguating user input - simply allow users to choose among valid
rewrites of the code they actually wrote. Neither did I suggest that
the mere typing of the word "if" would cause a prompt to disambiguate
input.

But what's even more telling is the truly frightened and visceral
reaction to these ideas, even though anyone with any sense of
historical perspective can see that they are inevitable. Libraries
will get more powerful; parsers and compilers will get smarter;
surface syntax will become more natural-language-like; error and
warning messages will become more helpful, and allow users to choose
rewrites. The language that puts these features together first will
become the language that most people write software in.
From: Rob Warnock
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <P8adnYUZ_cVoCkWjXTWcoQ@speakeasy.net>
Raffael Cavallaro <·······@mediaone.net> wrote:
+---------------
| How many fewer programmers would there be if programming still meant
| entering raw op-codes in order to specify a program? I know that I for
| one wouldn't be programming.
+---------------

Well, having spent the first few years of the programming side of my life
coding *assembler* (for the LGP-30, IBM 1410, IBM 1620, DEC PDP-10, PDP-8,
PDP-11, Zilog Z-80, to name a few), I assure you that serious assembly-
language programmers very quickly build up a library of macros and
subroutines that are at roughly the same level of abstraction as "libc"
(or even Lisp!), and then code at the level of macro and/or subroutine
calls.

Of course, the macro systems available for assemblers in those days
were of nearly the same power as Lisp macros (that is, dynamic compile-
time re-writing of code), making the whole task of abstraction building
a *lot* easier!

Was it Tony Hoare who said this?

    "I always program in the same language, no matter what the compiler is."


-Rob

-----
Rob Warnock, PP-ASEL-IA		<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Pascal Bourguignon
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <87wug6qln4.fsf@thalassa.informatimago.com>
·······@mediaone.net (Raffael Cavallaro) writes:

> Adrian Kubala <······@sixfingeredman.net> wrote in message news:<······································@gwen.sixfingeredman.net>...
> > What you want is an AI agent which takes vague requests by users and uses
> > finely-honed logical thinking skills to turn these into precise
> > specifications which are then executed. Currently these agents are called
> > "programmers" and command decent salaries -- I can put you in touch with a
> > few, if you're interested.
> 
> And I submit that much of what programmers get paid to do over and
> over again, can be packaged in a user friendly way, much as a
> spreadsheet works, but with a broader range of functionality.
> 
> I further submit that one reason this has not happened is that many
> programmers *like* reinventing the wheel over and over in an arcane
> syntax- it makes them feel useful and special.

And we want to keep our good salaries! :-)

-- 
__Pascal_Bourguignon__                   http://www.informatimago.com/
----------------------------------------------------------------------
Do not adjust your mind, there is a fault in reality.
From: Adrian Kubala
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <Pine.LNX.4.44.0306011816130.8593-100000@gwen.sixfingeredman.net>
On 30 May 2003, Raffael Cavallaro wrote:
> These frameworks would then be accessed by commands in a syntax that
> much more closely approaches natural language than is true of any
> existing computer language.

You are saying that programming is chiefly syntax. I don't think any real
programmers would say this, not because they're technocrats struggling to
maintain class domination, but because it's not true. Are you for real?
From: Raffael Cavallaro
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <aeb7ff58.0306022012.30059f36@posting.google.com>
Adrian Kubala <······@sixfingeredman.net> wrote in message news:<·······································@gwen.sixfingeredman.net>...


> You are saying that programming is chiefly syntax. I don't think any real
> programmers would say this, not because they're technocrats struggling to
> maintain class domination, but because it's not true. Are you for real?

I'm saying that, *given powerful libraries*, most programming is
chiefly syntax. Most programmers are not solving the great problems of
computer science. They're re-executing an already solved problem for a
different client, with a few local nuances.

Note that a great deal of "real programming" had to go into making
these powerful libraries. But that's why they're powerful. Once a
problem is solved algorithmically, there is simply no good reason for
it to be re-implemented over and over. Put it in a library that is
universally accessible and move on to other problems.

"Real programming," which is not "chiefly syntax," constitutes a small
portion of the total amount of work being done by programmers today.
This "real programming" consists of hard problems whose solutions
will form the powerful libraries of the future.
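The claim that solved problems belong in libraries can be illustrated with a sketch (my example, not the poster's): binary search was once "real programming"; in Python it is now a one-line call into the standard `bisect` module.

```python
import bisect

# The "already solved problem" that keeps getting re-implemented,
# often with subtle off-by-one bugs.
def binary_search(xs, target):
    lo, hi = 0, len(xs)
    while lo < hi:
        mid = (lo + hi) // 2
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return lo if lo < len(xs) and xs[lo] == target else -1

# The library version: one call, no re-derivation of the algorithm.
def library_search(xs, target):
    i = bisect.bisect_left(xs, target)
    return i if i < len(xs) and xs[i] == target else -1

xs = [2, 3, 5, 7, 11, 13]
assert binary_search(xs, 7) == library_search(xs, 7) == 3
```

The user of `library_search` needs to know what a sorted search does, not how to derive one, which is the division of labor being argued for.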
From: Adrian Kubala
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <Pine.LNX.4.44.0306030755430.17627-100000@gwen.sixfingeredman.net>
On 2 Jun 2003, Raffael Cavallaro wrote:
> I'm saying that, *given powerful libraries*, most programming is chiefly
> syntax. Most programmers are not solving the great problems of computer
> science. They're re-executing an already solved problem for a different
> client, with a few local nuances.

Deciding which library functions to use and how to use them is
non-trivial. I also don't see that a natural language syntax offers
abstraction-building facilities that would make creating such universal
libraries possible. One reason programmers re-solve the same problems is
that non-functional languages like C make flexible code reuse hard and
encourage use of fixed, inflexible binary libraries.

> Note that a great deal of "real programming" had to go into making
> these powerful libraries. But that's why they're powerful. Once a
> problem is solved algorithmically, there is simply no good reason for
> it to be re-implemented over and over. Put it in a library that is
> universally accessible and move on to other problems.

If you're saying that if every conceivable task is available as an opaque
library function, then anyone can program, you're trivially right, even if
making such a library is impossible. If you're saying that we need more
powerful libraries and abstractions, you're also trivially right, although
your vague suggestion that this be done with ideas from natural language
remains unsupported.
From: Pascal Bourguignon
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <871xyes0b9.fsf@thalassa.informatimago.com>
·······@mediaone.net (Raffael Cavallaro) writes:

> Adrian Kubala <······@sixfingeredman.net> wrote in message news:<········································@gwen.sixfingeredman.net>...
> 
> > That's not to say
> > there's not some need for a middle-ground between abstract programming and
> > dialogue boxes, but I don't think just having this will make
> > non-programmers as much more productive as you seem to think.
> 
> I don't think you've seen how unproductive most users are given the
> incredible computational power at their disposal.
> 
> For example, they'll rename scores of files manually, because they
> either don't know that scripting tools exist that can do this in a
> line of code, or, what is more usually the case, because perl, python,
> ruby, etc. are completely beyond them. Even scripting languages are
> too unlike natural language for most users to learn them easily.

Let me see, what would a good user-oriented scripting language be?

    - tidy your room!
    - send this invoice to Mr Schmutz.
    - please, xerox me three copies of this letter.
    - I need a cup of coffee.

That's still not scripting, only simple orders...

> Now, there exist GUI programs with dialog boxes that tackle this
> specific problem, but there do not exist GUi programs that tackle
> every problem like this that most computer users face. Being able to
> use a decent scripting language makes power users about an order of
> magnitude more productive with their machines than ordinary users -
> i.e., they spend about a tenth the time, or less, on certain tasks as
> ordinary users.
> 
> Maybe it's that programmers don't realize how completely beyond most
> users even scripting languages are. But giving ordinary users the
> power of, say python, in an easily learnable form, would greatly
> enhance their productivity with computers.
> 
> Raf

The problem is that in any case, "programming" is an abstract
activity.  To direct some computer or somebody to do something, you
have to be able to explain to it or to him how to do it, and this
needs some reflection, and some formal presentation, including the
design of decision points, loops and sub-tasks.

I don't think that the problem lies in the scripting languages.  Even
lisp presents no difficulties in that respect.  The problem is that
people are not educated to approach problems with a programmer
mind-set.  When they have a task to do, they just do it mindlessly
instead of thinking about the task, and how it can be done more
efficiently and more automatically, or even entirely avoided...

Just  see  how  many  centuries  of  human  work  were  needed  before
ergonomists  such as  Taylor could  have  an impact  on a  significant
scale!
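As an aside, the file-renaming chore quoted above really does fit in a few lines of script.  A minimal Python sketch (the directory layout and extensions are hypothetical examples):

```python
import os

def batch_rename(directory, old_ext, new_ext):
    """Rename every file ending in old_ext to end in new_ext.

    The kind of task users do by hand, file by file, that a short
    script finishes in seconds.
    """
    renamed = []
    for name in os.listdir(directory):
        base, ext = os.path.splitext(name)
        if ext == old_ext:
            new_name = base + new_ext
            os.rename(os.path.join(directory, name),
                      os.path.join(directory, new_name))
            renamed.append((name, new_name))
    return renamed

# e.g. batch_rename("reports", ".txt", ".bak")
```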

-- 
__Pascal_Bourguignon__                   http://www.informatimago.com/
----------------------------------------------------------------------
Do not adjust your mind, there is a fault in reality.
From: Eric Smith
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <ceb68bd9.0305282342.48a32342@posting.google.com>
·······@mediaone.net (Raffael Cavallaro) wrote in message news:<····························@posting.google.com>...

> I was actually talking about a natural-language-like general purpose
> programming language that *is* "dumbed down" for non programmers. My
> point was that more programming needs to be done by people who are now
> non-programmers. These people, the overwhelming majority of computer
> users, simply will not get the software they want or need any other

The problem you're trying to solve, enabling
non-programmers to do the work of programmers,
is not really a language issue.  If it were,
it would imply that programmers are translators.
But if programming were just translation, why
would design be considered such an important part of it?

The real reason it's hard for non-programmers to
do programming is not their lack of knowledge of
good programming languages, but that they don't
know how to design any but the simplest programs.

OTOH we can build frameworks for very flexible
software which non-programmers can configure.  A
few good frameworks can give a lot of users a lot
of the software they want, such that they only
have to configure each to make it work the way
they want it to work.  Provided of course they
have the necessary configuration expertise.  But
at least it's not programming.  We might be able
to make the configuration process use something
closer to natural language.
From: Raffael Cavallaro
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <aeb7ff58.0305291336.473643f1@posting.google.com>
········@yahoo.com (Eric Smith) wrote in message news:<····························@posting.google.com>...
> OTOH we can build frameworks for very flexible
> software which non-programmers can configure.
...
> We might be able
> to make the configuration process use something
> closer to natural language.

This is what I meant when I was talking about natural-language-like
programming language with powerful libraries. I think you're agreeing
with me, but using a different terminology - i.e., you say
"frameworks" where I say "libraries."

In other words, do all the hard work for the user in the form of
powerful frameworks/libraries, and let them access this power through a
nearly natural language interface.

Raf
From: Michael Sullivan
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <1fvxdxc.iz13ej1u90db3N%michael@bcect.com>
Raffael Cavallaro <·······@mediaone.net> wrote:

> ········@yahoo.com (Eric Smith) wrote in message 
> news:<····························@posting.google.com>...
> > OTOH we can build frameworks for very flexible
> > software which non-programmers can configure.
> ...
> > We might be able
> > to make the configuration process use something
> > closer to natural language.
> 
> This is what I meant when I was talking about natural-language-like
> programming language with powerful libraries. I think you're agreeing
> with me, but using a different terminology - i.e., you say
> "frameworks" where I say "libraries."

> In other words, do all the hard work for the user in the form of
> powerful frameworks/libraries, and let them access this power through a
> nearly natural language interface.

I agree that this makes a lot of sense.  There are two places I differ
with you.  

1.  I don't think that the interface has to be all that natural language
like.  People got used to spreadsheets (this includes tons of people who
never used one before computers), and I think similarly that people can
get used to a language that is as natural/simple as Applescript is now.
The key is that they have to realize that what they want to do is
possible.  

Basically, you're saying that libraries/frameworks are important and
that natural-language-like syntax is important.  I agree with both of
these things.  We agree but our emphases would be opposite (unless I'm
misreading you).  You appear to consider "natural-language-like" to be
80% of the solution and libraries/frameworks to be 20% (or maybe more
like 50-50); I think the opposite.  I think libraries/frameworks are
80-90% of the problem and that more natural-language-like syntax is only
10-20% (assuming you're starting from one of the more beginner-friendly
syntaxes that exist today, like AppleScript, REALbasic, or certain
Schemes).

2.  I don't think you can cover all domains with general purpose
libraries in a way that eliminates real programming work.  No matter how
many wonderful general purpose libraries you write, there will be parts
of any interesting domain that are domain specific and still hard in
non-domain specific ways.  These will require real programming expertise
to design well, no matter what language is used.  I think that each
domain is going to have some serious things that will need to be written
by clueful, good programmers, even if they are not programming in the
"traditional" sense, and such people will not want to deal with a
language that is "dumbed-down" beyond a certain point.

I *do* agree that if we were more intentional about designing
user-extensible solutions and full scripting environments that most
domain's problems could be reduced to writing a really good set of
domain frameworks and applications and then letting domain experts who
script have at the works to customize to their heart's content.  It is
my contention that this is technically possible with languages that
exist today -- that the problem is in designing those open structures
within libraries/apps and making those structures available to something
easier and more flexible yet at least as powerful as C++/Java.  Many
languages which fit that bill are available today or could be made to
fit with only a SMOP.

In any case, I think it will always be only 5-10% of the population that
is capable of writing software that is at all helpful for more than the
simplest instruction-like tasks (in other words, be able to extend
software to better cover a domain in any meaningful way).  Yes, a *lot*
of that 5-10% is doing no serious programming or scripting now, but I
think the lack of libraries and extensible software is what's keeping
them down.  The person who won't script to solve simple repetitive tasks
like file copying is either unaware that a scripting solution exists,
feels they have no time whatsoever to learn it, or will never be
particularly good at writing software with *any* language that isn't a
programmer-AI, no matter how dumbed down or natural-language-like that
language gets.  The people that you need to exploit for a brave new
world of software design are the ones who script, keyboard macro and GUI
automate everything they can get their hands on (and do as good a job as
is feasible with the tools and time available), but currently don't
participate in scratch writing domain specific software because it
requires too much up front expense.  The up front expense is less
learning the language than building all the components from scratch.  



Michael
From: Michael Sullivan
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <1fvw4a1.j4ov8d6h0ndkN%mes@panix.com>
Raffael Cavallaro <·······@mediaone.net> wrote:

> ···@panix.com (Michael Sullivan) wrote in message news:<···························@panix.com>...
> > Raffael Cavallaro <·······@mediaone.net> wrote:
> > 
> > > What we need are general purpose computer languages that are also user
> > > friendly. When it comes to languages, "user friendly" means
> > > natural-language-like.
> > 
> > I agree with the first sentence.  I'm not sure I agree with the second.
> > In the very, very long term, it's probably true.  But to make a
> > natural-language-like general purpose programming language that isn't
> > just dumbed down for non-programmers, but actually helps good
> > programmers more than it hurts them, is probably a strong-AI problem.
> 
> I was actually talking about a natural-language-like general purpose
> programming language that *is* "dumbed down" for non programmers. My
> point was that more programming needs to be done by people who are now
> non-programmers. These people, the overwhelming majority of computer
> users, simply will not get the software they want or need any other
> way. They can't afford to hire a development team, and even if they
> did, it would be - and often is - a crap shoot as to whether said
> development team would ever finish the project to spec, under budget,
> and on time.
> 
> It's obvious to me, and should be to most programmers, that there exist
> a number of perfectly satisfactory programming languages for
> *programmers*, Common Lisp being foremost among them IMHO. However,
> this still doesn't solve the much larger problem - most users would be
> much more productive with more customized software, and they have no
> way of getting it.
> 
> A user friendly language, would solve this problem. For most users,
> this would have to mean natural-language-like, since that's the only
> language paradigm we can assume that all users have mastery of. If one
> argues that they should learn the artificial constructs of any of a
> number of existing families of computer languages, we've already
> defeated the purpose - i.e., this is tantamount to saying "they should
> just learn how to program."
> 
> 
>   
> > I don't think this is a really bad thing to work towards, but I think you
> > need to be careful of putting the cart before the horse.  You need a
> > really strong base of natural language processing before something
> > that's a real improvement over existing computer languages can be
> > possible.
> 
> I think this is where we part company. I think if a language were
> developed that started, from scratch, with the natural language model
> (subject, verb, object, modifiers, prepositions), and mapped those
> familiar constructs onto the necessary elements of computer
> programming, we would have a language that was the next step beyond
> AppleScript, to use your example.
> 
> > I contend that if apple had chosen scheme or caml, or csh, or pretty
> > much any other language as its extension language, and the same support
> > had been built, and the same runtime characteristics (or better) were
> > exhibited, that whatever this language was would have had the same kind
> > of adoption that applescript has now.
> 
> I would have to disagree very strongly with this contention. Many of
> the people who use Applescript would run screaming from caml -
> actually, they wouldn't even get that far - you have to begin to
> understand how something works before you can recoil in horror from
> it, and many Applescript users wouldn't even get to the stage of
> understanding how caml works in the first place.
> 
>    
> > Look at Visual Basic.  It's atrocious.
> 
> Only from a computer language snob perspective - no insult intended
> here, I count myself as one too. From a user perspective though, it
> gets things done in an understandable, if often inelegant fashion. I
> think you greatly underestimate the extent to which Basic, in any
> form, is easier to learn and use than, for example, caml. There's a
> reason why they chose the acronym BASIC, after all.
> 
> 
> 
> 
> > Power users want to automate, and they will wrap their heads around
> > anything which will let them do this with a minimum of architecture
> > building.
> 
> Not if that anything is caml, or common lisp. In other words, by
> choosing a sufficiently difficult scripting language, you greatly
> reduce the proportion of users who will use it.
> 
>   They will not learn lisp or C++ in order to write a whole
> > structure, when they can plug away at existing applications and get
> > 60-70% of what they need.  give them a decent scripting environment to
> > go with those applications and suddenly they can get 95%+ of what they
> > need.  Why reinvent all those wheels for that last 5%?
> 
> 
> Because your numbers are only true for 10% of users. The other 90% do
> not even get the 60-70% from existing applications. Needless to say,
> they won't realize the advantages of scripting languages either,
> unless they are much more like Applescript than caml. Even more like
> Applescript than Applescript would be even better.
> 
> 
> > But the point is -- power users get into programming because of good
> > libraries, and hooks into existing software.  They do *NOT* need an
> > easier language.  They are already using languages that are *far less
> > easy* than most of the lisp and ML family.
> 
> You've been using high powered languages too long. Whatever one may
> say in favor of lisp and ML, they are definitely not easier for non
> computer scientists to learn. Basic may suck from a CS standpoint, but
> from the naive user standpoint, Basic is great, because it actually
> makes sense without a CS degree.
> 
> 
> > The fact that they are also
> > less powerful is just another sad truth about the computing landscape.
> 
> 
> It's a natural consequence of the fact that they put ease of learning
> above power in their designs.
> 
> In other words, to follow up on your point, for ordinary users, power
> comes from libraries, not from language features. So a language that
> was easy to learn - meaning, having a natural-language-like syntax -
> and had powerful libraries, would be the ideal solution for the
> overwhelming majority of computer users. This leaves out lisp and ML,
> of course, because they are far too unlike natural language to be easy
> for *most users* to learn.
> 
> 
> > I am in a large community of similar folks who use applescript every
> > day.  I certainly can't speak for them all, but I speak for a lot of
> > them when I say that trying to make a computer language be more like a
> > natural language was one of the *mistakes* of applescript, and we will
> > be better off without attempting this until we are *much* further along
> > into natural language processing than we currently are.
> 
> 
> The mistake of Applescript was to give a superficial appearance of
> natural language, without the underlying structure of natural
> language. Starting with basic grammatical structures first, then
> coming up with the language syntax, then how this maps to
> computational functionality, in that order, would yield a much better
> language.
> 
> 
> 
> Raf
From: Jeff Caldwell
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <XCBza.1449$H84.677008@news1.news.adelphia.net>
Raffael Cavallaro wrote:
...
 > We need a natural-language-like
 > computer language that is the next step beyond spreadsheets, as it
 > were.
...
 > Having an adaptive compiler would be nice, but lets get one that is
 > merely more natural-language-like first.

Please try to convince a musician that he or she would be better off 
writing and reading music in English or Chinese rather than in musical 
notation.

The accountants I know would be angry if you tried to force them to 
construct their spreadsheets in English. Take a complex spreadsheet, 
write out its specifications in unambiguous English, show the result to 
an accountant and try to convince her that she should begin entering all 
her spreadsheets your new way.

 > But no ordinary person lays out paragraphs as nested blocks.

A paragraph is a nested block of language, as was made clear in my 
original post.

 > Or a more general purpose language will be developed that allows
 > people from different domains to write their own software. This will
 > be much more useful, affordable, and flexible than calling in a team
 > of software engineers, QA personnel, documentation specialists, and
 > programmers,

Programmers need software engineers, QA personnel, and documentation 
specialists but bankers won't! When machines can parse bankers' natural 
language, the banker's software will be well designed, bug free, fully 
documented, and comprehensive enough to run their entire enterprise, no 
matter how large! With no QA! Or documentation! Or SE's! It will all 
work! Because the compiler will be so smart! And the program will be the 
documentation! Much better than those lousy programmers who need all 
that extra support!
From: Matthew Danish
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <20030524180123.GG17564@lain.cheme.cmu.edu>
On Sat, May 24, 2003 at 03:25:43AM +0000, Jeff Caldwell wrote:
> When machines can parse bankers' natural language, the banker's
> software will be well designed, bug free, fully documented, and
> comprehensive enough to run their entire enterprise, no matter how
> large! With no QA! Or documentation! Or SE's!

Or bankers.

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Kenny Tilton
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <3ECF7DB3.1050107@nyc.rr.com>
Jeff Caldwell wrote:
> Raffael Cavallaro wrote:
> ...
>  > We need a natural-language-like
>  > computer language that is the next step beyond spreadsheets, as it
>  > were.
> ...
>  > Having an adaptive compiler would be nice, but lets get one that is
>  > merely more natural-language-like first.
> 
> Please try to convince a musician that he or she would be better off 
> writing and reading music in English or Chinese rather than in musical 
> notation.

Why so negative, everyone? We already have one profession that has a 
natural-language-like way to express requirements unambiguously -- the 
legal profession. This is why contracts are so easy to write and read, 
and never end up in court... ok, never mind.

Now there's an idea. Go the other way, write contracts as programs. Run 
a kazillion eventual futures thru them, make sure the outcomes are 
acceptable.

-- 

  kenny tilton
  clinisys, inc
  http://www.tilton-technology.com/
  ---------------------------------------------------------------
"Everything is a cell." -- Alan Kay
From: Marc Spitzer
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <8665o0z5j2.fsf@bogomips.optonline.net>
Kenny Tilton <·······@nyc.rr.com> writes:

> 
> Now there's an idea. Go the other way, write contracts as
> programs. Run a kazillion eventual futures thru them, make sure the
> outcomes are acceptable.

The lawyers would never ever go for it.  Way too expensive: if it
worked, think of all the billed hours they would lose.

marc
From: Eduardo Muñoz
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <uof1s9d8z.fsf@terra.es>
* Kenny Tilton <·······@nyc.rr.com>
| Now there's an idea. Go the other way, write contracts as
| programs. Run a kazillion eventual futures thru them, make sure the
| outcomes are acceptable.

Already done (sort of), see:
http://article.gmane.org/gmane.comp.lang.lightweight/1148


-- 
Eduardo Muñoz          | (prog () 10 (print "Hello world!")
http://213.97.131.125/ |          20 (go 10))
From: Pekka P. Pirinen
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <uisrv9mk8.fsf@globalgraphics.com>
Kenny Tilton <·······@nyc.rr.com> writes:
> Now there's an idea. Go the other way, write contracts as programs. Run 
> a kazillion eventual futures thru them, make sure the outcomes are 
> acceptable.

Charles Stross had this neat idea of companies as programs in his
impressive series of near-future short stories starting with
"Lobsters".  Companies are essentially legal entities, and holding
companies are little else.  So the hero sets up complex networks of
autonomous companies to keep people from getting hold of his assets.

The series is really required reading for any IT person for the
brilliantly knowledgeable extrapolations of computing trends, but it
is at the moment only available on the pages of various SF magazines,
anthologies and websites (for details, see
<http://www.antipope.org/charlie/fiction/index.html>).
-- 
Pekka P. Pirinen
Don't force it, get a larger hammer.
From: Donald Fisk
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <3ED254DE.AD64F0FA@enterprise.net>
Raffael Cavallaro wrote:

> Why? Because the "Software Crisis" will only be solved by enabling
> power users to write their own applications. I'm convinced that the
> real scarcity is not competent programmers, but domain expertise. Many
> people can learn to code. Very few people have the domain expertise to
> code the right thing. Acquiring domain expertise in many fields that
> need a great deal of software is far more difficult than learning to
> program competently. How many professional software developers have
> the equivalent domain knowledge of a Ph.D. in molecular biology, or a
> professional options trader, etc. Wouldn't it make more sense to
> develop compilers that were easier to work with, than to have coders
> acquire a half baked, partly broken level of domain expertise for each
> new project they undertake?

No.   The easier a programming language is to use, the lower
the quality of its average programmers (cf Dijkstra's comments
on Cobol and Basic, both optimized for inexperienced programmers).
Programming languages should be optimized for good programmers, and
if that means making things difficult for the rest, so be it.

You sometimes need to call in specialist experts.   If I ever need
to do genetic engineering, I'll hire an expert in that area rather
than expect it to have been dumbed down to a level that I can
understand without years of study.   Similarly, I wouldn't
represent myself in a court of law, or conduct a survey of a
house I intend to buy.

Finally, there is the issue of whether computer programs are,
in general, expressible in anything even remotely resembling
English.   I doubt it, unless you mean something like Cobol.
In many other fields, the inadequacy of English to express
instructions precisely has been recognized, and that's why we have
formal notations for mathematics, music, choreography and knitting.

> But most software is not needed by human logicians. It is needed by
> human bankers, and human market traders, and human accountants, and
> human molecular biologists, and they all communicate quite well in
> natural language, modulo a sprinkling of domain specific notation.

Yes, but they have brains, culture and an education.   Computers
are high-speed idiots with no common sense, and there's no prospect
of that changing any time soon.   Of course, it would be nice if
computers /were/ made more intelligent.   Perhaps some psychologists
could just write down how to do this in plain English, and feed it
into a computer.

:ugah179
-- 
"I'm outta here.  Python people are much nicer."
                -- Erik Naggum (out of context)
From: Coby Beck
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <bau7vt$2q7k$1@otis.netspace.net.au>
"Donald Fisk" <················@enterprise.net> wrote in message
······················@enterprise.net...
> Raffael Cavallaro wrote:
>
> > Why? Because the "Software Crisis" will only be solved by enabling
> > power users to write their own applications. I'm convinced that the
> > real scarcity is not competent programmers, but domain expertise. Many
> > people can learn to code. Very few people have the domain expertise to
> > code the right thing. Acquiring domain expertise in many fields that
> > need a great deal of software is far more difficult than learning to
> > program competently. How many professional software developers have
> > the equivalent domain knowledge of a Ph.D. in molecular biology, or a
> > professional options trader, etc. Wouldn't it make more sense to
> > develop compilers that were easier to work with, than to have coders
> > acquire a half baked, partly broken level of domain expertise for each
> > new project they undertake?
>
> No.   The easier a programming language is to use, the lower
> the quality of its average programmers (cf Dijkstra's comments
> on Cobol and Basic, both optimized for inexperienced programmers).
> Programming languages should be optimized for good programmers, and
> if that means making things difficult for the rest, so be it.

I don't see the issue as easy versus hard at all.  I believe natural
language principles are a good thing to keep in our sights, not for ease
of use but for the increased density of information in the text we feed
to the compiler.


-- 
Coby Beck
(remove #\Space "coby 101 @ bigpond . com")
From: Raffael Cavallaro
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <aeb7ff58.0306022031.4ea62f7c@posting.google.com>
Donald Fisk <················@enterprise.net> wrote in message news:<·················@enterprise.net>...

> No.   The easier a programming language is to use, the lower
> the quality of its average programmers (cf Dijkstra's comments
> on Cobol and Basic, both optimized for inexperienced programmers).
> Programming languages should be optimized for good programmers, and
> if that means making things difficult for the rest, so be it.

Wow, both rigidly normative, *and* elitist! How about this - there can
exist both languages optimized for good programmers (we already have
these - lisp and c come to mind), and languages optimized for
inexperienced programmers.

Since we already have the former in abundance, and the proportion of
computer users who are inexperienced at programming is increasing
every year, wouldn't it make sense to provide a few of the latter?

> Finally, there is the issue of whether computer programs are,
> in general, expressible in anything even remotely resembling
> English.   I doubt it, unless you mean something like Cobol.
> In many other fields, the inadequacy of English to express
> instructions precisely had been recognized and that's why we have
> formal notations for mathematics, music, choreography and knitting.

No, we have formal notations because they are more concise. If English
were incapable of expressing these ideas, then practitioners of these
arts wouldn't be able to converse about them, and they obviously can.
I've often overheard conversations about knitting and not had the
foggiest clue what was being discussed. But the knitters understood
each other perfectly, and without recourse to any specialized notation
(though both could read this notation when using knitting patterns).
Natural language is perfectly capable of expressing precise ideas
precisely.
 
> Computers
> are high-speed idiots with no common sense, and there's no prospect
> of that changing any time soon.

Only because their programmers have historically prized, and continue
to prize, raw speed over "common sense."

> Perhaps some psychologists
> could just write down how to do this in plain English, and feed it
> into a computer.

Which would first need to understand English... but that's the whole
point, to get the computer to understand English... Oh! I get it, it's
a JOKE!!!  Ha, ha - Oh, you slay me...

If your humor were any drier, you'd be a mummy.
From: Marc Spitzer
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <861xybpr6u.fsf@bogomips.optonline.net>
·······@mediaone.net (Raffael Cavallaro) writes:

> Donald Fisk <················@enterprise.net> wrote in message news:<·················@enterprise.net>...
> 
> > No.   The easier a programming language is to use, the lower
> > the quality of its average programmers (cf Dijkstra's comments
> > on Cobol and Basic, both optimized for inexperienced programmers).
> > Programming languages should be optimized for good programmers, and
> > if that means making things difficult for the rest, so be it.
> 
> Wow, both rigidly normative, *and* elitist! How about this - there can
> exist both languages optimized for good programmers (we already have
> these - lisp and c come to mind), and languages optimized for
> inexperienced programmers.
> 
> Since we already have the former in abundance, and the proportion of
> computer users who are inexperienced at programming is increasing
> every year, wouldn't it make sense to provide a few of the latter?

Well, it depends on how you look at it.  If 'any idiot' can program,
why should it be a well-paid profession?  And why should it attract
smart people if it does not pay well?

> 
> > Finally, there is the issue of whether computer programs are,
> > in general, expressible in anything even remotely resembling
> > English.   I doubt it, unless you mean something like Cobol.
> > In many other fields, the inadequacy of English to express
> > instructions precisely had been recognized and that's why we have
> > formal notations for mathematics, music, choreography and knitting.
> 
> No, we have formal notations because they are more concise. If English
> were incapable of expressing these ideas, then practitioners of these
> arts wouldn't be able to converse about them, and they obviously can.
> I've often overheard conversations about knitting and not had the
> foggiest clue what was being discussed. But the knitters understood
> each other perfectly, and without recourse to any specialized notation
> (though both could read this notation when using knitting patterns).
> Natural language is perfectly capable of expressing precise ideas
> precisely.
>  
> > Computers
> > are high-speed idiots with no common sense, and there's no prospect
> > of that changing any time soon.
> 
> Only because their programmers have historically prized, and continue
> to prize, raw speed over "common sense."

Have you considered the fact that common sense is remarkably hard to
come across in the real world?  In fact, having lots of common sense
used to be called being wise.  This is very hard to accomplish; most
people do not even try.  And it was highly prized by the people around
the wise person: having wise people around, and listening to them,
improved the group's situation.

marc
From: Pascal Bourguignon
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <87znkzd0ls.fsf@thalassa.informatimago.com>
Marc Spitzer <········@optonline.net> writes:
> Have you considered the fact that common sense is remarkably hard to
> come across in the real world.  In fact having lots of common sense
> was called being wise.  This is very hard to accomplish, in fact most
> people do not even try.  And it was highly prized by those people
> around the wise person, having wise people around and listening to
> them improved the groups situation.

Ah! The good ol' times...

Nowadays,  it seems  that  what's improving  the  group's situation  is
having popular people around...


-- 
__Pascal_Bourguignon__                   http://www.informatimago.com/
----------------------------------------------------------------------
Do not adjust your mind, there is a fault in reality.
From: Thien-Thi Nguyen
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <7gk7clyz3z.fsf@gnufans.net>
·······@mediaone.net (Raffael Cavallaro) writes:

> This would suggest that computer languages should hew to the common
> patterns and elements of natural languages in order to assure easier,
> and clearer, comprehension by human programmers.

yes, if you are only interested in teaching human programmers to program.

thi
From: Grzegorz Chrupala
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <8b9e2260.0305210700.174f6cb8@posting.google.com>
······@mediaone.net (Raffael Cavallaro) wrote in message news:<····························@posting.google.com>...

> people can learn easily. Deacon's point is that the selective
> bottleneck is childhood language acquisition. Any syntactic feature
> too difficult for children to master will not survive as part of that
> language into the next generation.

Natural languages are only (relatively) easy to acquire in natural
settings (interacting with parents and peers), because humans seem to
have specialized wiring to deal with this. But anyway this only works
until pubescence more or less. Otherwise natural languages are pretty
hard to learn, as anyone who tried learning a foreign language as an
adult can testify. They are rather more difficult than programming
languages, as far as I can tell from the experience of learning both
human and computer languages as an adult.

> 
> This would suggest that computer languages should hew to the common
> patterns and elements of natural languages in order to assure easier,
> and clearer, comprehension by human programmers.

Given that the purpose and the way you use human languages is vastly
different from programming languages, I doubt designing a
pseudo-natural-syntax for a programming language would help to make it
clearer. I personally would much prefer a consistent, simple syntax
that I don't have to remember the quirks of to some sort of
pseudo-English (or pseudo-Polish ;)). I have also noted that when I
attempted to paraphrase some code I wrote (in Scheme, say) in order to
describe what it does, the resulting English prose is horribly
tortuous, wordy and far less clear than the original code. So I just
don't do it anymore if I don't have to.
All this makes me think that modelling a programming language syntax
after a natural language is, in general, a bad idea.

--
Grzegorz
From: Jochen Schmidt
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <oprpi4hrew20fff3@news.btx.dtag.de>
On 21 May 2003 08:00:48 -0700, Grzegorz Chrupala <········@pithekos.net> 
wrote:

> Given that the purpose and the way you use human languages is vastly
> different from programming languages, I doubt designing a
> pseudo-natural-syntax for a programming language would help to make it
> clearer. I personally would much prefer a consistent, simple syntax
> that I don't have to remember the quirks of to some sort of
> pseudo-English (or pseudo-Polish ;)). I have also noted that when I
> attempted to paraphrase some code I wrote (in Scheme, say) in order to
> describe what it does, the resulting English prose is horribly
> tortuous, wordy and far less clear than the original code. So I just
> don't do it anymore if I don't have to.
> All this makes me think that modelling a programming language syntax
> after a natural language is, in general, a bad idea.

The discussion is *not* about modelling a programming language syntax
after a natural language, but about making use of the available wetware
in people's brains to write more expressive code.

I disagree that the purpose and use of human languages are vastly
different from those of programming languages. The purpose is to
communicate an idea - to other humans _and_ to the computer.
The way you use it is through dialog or through whole "documents".

Programming languages are better suited to describing typical
programming ideas than plain human language, because they are designed
and/or grown to do it better. This is not much different from the
language a mechanic uses to talk to his colleagues. Just because you
learnt English does not mean you can understand what they talk about.
Other environments - like being underwater - lead to other constraints,
in which sub-languages evolve that are obviously more efficient than
plain spoken human language.

Since the brain is indeed able to cope with context pretty well, the
idea of making use of this facility is not a bad one.

ciao,
Jochen
From: Grzegorz Chrupala
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <8b9e2260.0305220131.1914d344@posting.google.com>
Jochen Schmidt <···@dataheaven.de> wrote in message news:<················@news.btx.dtag.de>...

> I disagree that the purpose and the way of use of human languages is vastly
> different from programming languages. The purpose is to communicate an idea
> - to other humans _and_ to the computer.
> The way you use it is by dialog or through whole "documents".

Well, maybe not *vastly* different, but telling a computer what to do
and having a conversation with a human being are sufficiently
different that most analogies will be misleading.

> 
> Programming languages are better suited to describe typical programming 
> ideas
> than plain human language, because they are designed and/grown to do 
> better.

Agreed. And I happen to think that making programming languages
context dependent or ambiguous or syntactically similar to human
language would probably not make them any better suited to "describe
typical programming ideas".

> 
> Since the brain is indeed able to cope with context pretty well there the 
> idea
> to make use of this facility is not a bad one.

The brain is able to cope with *a lot*. The question is:
Is introducing context actually going to help humans learn and use CL?
If there is a cost associated with context dependent processing, then
do its supposed benefits outweigh this cost?

--
Grzegorz
From: Jochen Schmidt
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <oprpldfhye20fff3@news.btx.dtag.de>
On 22 May 2003 02:31:06 -0700, Grzegorz Chrupala <········@pithekos.net> 
wrote:

> Jochen Schmidt <···@dataheaven.de> wrote in message 
> news:<················@news.btx.dtag.de>...
>
>> I disagree that the purpose and the way of use of human languages is 
>> vastly
>> different from programming languages. The purpose is to communicate an 
>> idea
>> - to other humans _and_ to the computer.
>> The way you use it is by dialog or through whole "documents".
>
> Well, maybe not *vastly* different, but telling a computer what to do
> and having a conversation with a human being are sufficiently
> different that most analogies will be misleading.

Programming languages are not only meant to communicate to computers.
Programs get read more often by humans than by machines.
What makes Programming languages special is that they can be "understood"
by computers in a straightforward way.

>> Programming languages are better suited to describe typical programming 
>> ideas
>> than plain human language, because they are designed and/grown to do 
>> better.
>
> Agreed. And I happen to think that making programming languages
> context dependent or ambiguous or syntactically similar to human
> language would probably not make them any better suited to "describe
> typical programming ideas".

The "typical programming ideas" are a very fluid and quickly changing
thing. Depending on what you want to accomplish, you need to adapt your
language to your domain to be efficient. What you perceive statically as
"human language" here doesn't make any domain topic easier to talk about
than the right domain language does. When creating such domain
languages, do you really claim that one should stay away from concepts
mainly known from "human languages"? Why?

>> Since the brain is indeed able to cope with context pretty well there 
>> the idea
>> to make use of this facility is not a bad one.
>
> The brain is able to cope with *a lot*. The question is:
> Is introducing context actually going to help humans learn and use CL?
> If there is a cost associated with context dependent processing, then
> do its supposed benefits outweigh this cost?

Concepts like context allow humans to express programs with means they
have already understood in their wetware. We already paid the bill - the
facility is already installed, and it gets used and trained on a daily
basis...

ciao,
Jochen
From: Dorai Sitaram
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <ba2lg8$aae$1@news.gte.com>
In article <··············@thalassa.informatimago.com>,
Pascal Bourguignon  <····@thalassa.informatimago.com> wrote:
>
>I was dubious about Lisp-2 at  first, but finally I've noticed that in
>human languages,  there are  a lot of  instances that show  that we're
>wired for a Lisp-2 rather than a Lisp-1:
>
>    The fly flies.          (FLIES FLY)
>    The flies fly.          (FLY FLIES)

Human-language verbs and nouns correspond to global
function names and global variables.  Common Lisp 
practice doesn't allow you to share names between
these.  For Common Lisp, the right column above should
properly be

                             (FLIES *FLY*)
                             (FLY *FLIES*)

or

                             (FLIES +FLY+)
                             (FLY +FLIES+)
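
[Editorial sketch, not part of the original post: the namespace behavior
under discussion can be seen directly in Common Lisp.  The same symbol
may simultaneously name a function and a variable, so the earmuff
convention Dorai mentions is a matter of style, not a language
requirement.  Names here (FLY, WALLY) are illustrative only.]

```lisp
;; In a Lisp-2, the symbol FLY can live in two namespaces at once.
(defun fly (x) (list 'flying x))   ; FLY in the function namespace
(defvar fly 'wally)                ; FLY in the variable namespace

;; Head position resolves to the function; argument position to the
;; variable -- no earmuffs needed for this to work.
(fly fly)            ;; => (FLYING WALLY)
(funcall #'fly fly)  ;; same call, with each namespace selected explicitly
```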
From: Pascal Bourguignon
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <87fznf10n5.fsf@thalassa.informatimago.com>
····@goldshoe.gte.com (Dorai Sitaram) writes:

> In article <··············@thalassa.informatimago.com>,
> Pascal Bourguignon  <····@thalassa.informatimago.com> wrote:
> >
> >I was dubious about Lisp-2 at  first, but finally I've noticed that in
> >human languages,  there are  a lot of  instances that show  that we're
> >wired for a Lisp-2 rather than a Lisp-1:
> >
> >    The fly flies.          (FLIES FLY)
> >    The flies fly.          (FLY FLIES)
> 
> Human-language verbs and nouns correspond to global
> function names and global variables.  Common Lisp 
> practice doesn't allow you to share names between
> these.  For Common Lisp, the right column above should
> properly be
> 
>                              (FLIES *FLY*)
>                              (FLY *FLIES*)
> 
> or
> 
>                              (FLIES +FLY+)
>                              (FLY +FLIES+)

Please, let me introduce you to Wally. Wally is a fly.  The fly flies.
Do you know Bally?  Bally's a fly too. The flies fly. (Bally and Wally).

    (defvar *Wally* (make-instance 'fly))
    (defvar *Bally* (make-instance 'fly))
    (let ((fly *Wally*)
          (flies (list *Wally* *Bally*)))
        (flies fly)
        (fly flies))
 


-- 
__Pascal_Bourguignon__                   http://www.informatimago.com/
----------------------------------------------------------------------
Do not adjust your mind, there is a fault in reality.
From: Coby Beck
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <ba2tl4$1igv$1@otis.netspace.net.au>
"Pascal Bourguignon" <····@thalassa.informatimago.com> wrote in message
···················@thalassa.informatimago.com...
> ····@goldshoe.gte.com (Dorai Sitaram) writes:
> Please, let me introduce you Wally. Wally is a fly.  The fly flies.
> Do you know Bally?  Bally's a fly too. The flies fly. (Bally and Wally).
>
>     (defvar *Wally* (make-instance 'fly))
>     (defvar *Bally* (make-instance 'fly))
>     (let ((fly *Wally)
>           (flies (list *Wally* *Bally*)))
>         (flies fly)
>         (fly flies))
>

<SPLAT>

C-LUSER -67322 > flies
(*wally*)
From: Dorai Sitaram
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <ba2v8v$ag2$1@news.gte.com>
In article <·············@otis.netspace.net.au>,
Coby Beck <·····@mercury.bc.ca> wrote:
>
>"Pascal Bourguignon" <····@thalassa.informatimago.com> wrote in message
>···················@thalassa.informatimago.com...
>> ····@goldshoe.gte.com (Dorai Sitaram) writes:
>> Please, let me introduce you Wally. Wally is a fly.  The fly flies.
>> Do you know Bally?  Bally's a fly too. The flies fly. (Bally and Wally).

Please watch your attributions.  I would never
introduce flies to people, unless they [1] were really
hungry.

--d

[1] Ambiguous anaphora retained, to show that lexical
variables don't model anaphora, which Pascal B
seems to think they do.
From: ····@sonic.net
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <3EC53BEF.AD86ED20@sonic.net>
Pascal Bourguignon wrote:
> 
> I was dubious about Lisp-2 at  first, but finally I've noticed that in
> human languages,  there are  a lot of  instances that show  that we're
> wired for a Lisp-2 rather than a Lisp-1:
> 
>     The fly flies.          (FLIES FLY)
>     The flies fly.          (FLY FLIES)

In human languages, there is a balance to be achieved. 
A certain amount of imprecision and ambiguity is actually
desirable, even necessary, in human languages.  There are 
many things important to us which we could not discuss at 
all without ambiguity and imprecision.  As somebody famous 
once said, "never express yourself more clearly than you 
think."  

It is not fruitful to generalize too much from what makes 
a good human language to ideas of what makes a good computer
language.  Or vice versa.

			Bear
From: Kent M Pitman
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <sfwllx63c5e.fsf@shell01.TheWorld.com>
[ replying to comp.lang.lisp only
  http://www.nhplace.com/kent/PFAQ/cross-posting.html ]

····@sonic.net writes:

> Pascal Bourguignon wrote:
> > 
> > I was dubious about Lisp-2 at  first, but finally I've noticed that in
> > human languages,  there are  a lot of  instances that show  that we're
> > wired for a Lisp-2 rather than a Lisp-1:
> > 
> >     The fly flies.          (FLIES FLY)
> >     The flies fly.          (FLY FLIES)
> 
> In human languages, there is a balance to be achieved. 
> A certain amount of imprecision and ambiguity is actually
> desirable, even necessary, in human languages.  There are 
> many things important to us which we could not discuss at 
> all without ambiguity and imprecision.  As somebody famous 
> once said, "never express yourself more clearly than you 
> think."  
> 
> It is not fruitful to generalize too much from what makes 
> a good human language to ideas of what makes a good computer
> language.  Or vice versa.

People are NOT generalizing about what makes a good language.
They are using observations about the brain to claim that the
definition of "simple" that is often used by Scheme advocates
("must have a textually shorter formal semantics") is not the
only possible definition of simple.  Some languages are hard to
learn but easy to use; some are easy to learn but hard to use.
It's hard to make a language that is all things to all people.

Languages are implemented more than once (except that Scheme advocates
seem obsessed with the exercise of implementing their own Scheme;
thank whatever deity you have that we made Common Lisp complex enough
that everyone off the street doesn't attempt that wasted exercise),
but programs are written many times.  I want the language implemented
for easy program-writing, not for easy language implementation.  I
want human languages implemented for easy reading of War and Peace,
not for ease of teaching the language and for tripling my reading
pleasure by tripling the length of War and Peace...

If you take it as a given that the brain IS capable, without slowing it
down, of executing multiple rules at the same time (and there is ample
evidence that it is, or else we wouldn't design all human languages that
way, since it would slow down our thinking), then all you are left with
is the question of whether to pretend we have a less powerful processor
available than we do.

It's funny to me how many people in the Scheme community profess a
strong desire for accommodating parallel processing, and yet how many
of those same people reject the possibility that the brain does
either parallelism or sufficiently fast multi-tasking that the 
complexity of resolving context-based unambiguous notations is an
irrelevance.  These same people assert boldly that I must not make
assumptions about the brain, but then themselves go and just as boldly
assert that they have an idea of what it is for something to be simple.
It's just... odd.
From: Franz Kafka
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <Rfhxa.7550$mu5.3696@news01.roc.ny.frontiernet.net>
>
> Languages are implemented more than once (except that Scheme advocates
> seem obsessed with the exercise of implementing their own Scheme;
> thank whatever deity you have that we made Common Lisp complex enough
> that everyone off the street doesn't attempt that wasted exercise),
> but programs are written many times.
>

You don't need to reimplement Common Lisp to get an understanding of how it
works--just implementing a subset of the language would be a better
educational experience than implementing the whole language, unless you want
to be a compiler writer.

I like Lisp because you can embed other languages such as Prolog--which has
been done by LispWorks and in numerous programming books.

When I was first learning Lisp--I wrote a simple assembly language
interpreter in Lisp to get a feel for the language.

It had something like:

(assemble
  '((move r1 5)
    (move r2 7)
    (add r1 r2)
    (sub r2 4)))

I used something like:

(cond
  ;; REG here would be an accessor that looks a register's value up in
  ;; the symbol table; the original setf mistakenly modified the
  ;; instruction itself rather than the register.
  ((eql (first opcode) 'move) (setf (reg (second opcode)) (third opcode)))
  ((eql (first opcode) 'add)  (+ (reg (second opcode)) (reg (third opcode))))
   ...
  ((eql (first opcode) 'sub)  (- (reg (second opcode)) (third opcode))))

I just wrote this off the top of my head; it might be wrong. :)

I did not add jumps in it because I wanted just to get a feel for how Lisp
worked.

I had a symbol-table with move, add, sub, mul, div--and I did not
allow assignment--each line would print what happened:

r1=5 r2=7 r1+r2=12 r2-4=3
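A minimal sketch of such an interpreter along these lines (the register
table, helper names, and output format here are all invented for
illustration--the original code is long gone):

```lisp
;; A tiny register-machine interpreter: MOVE assigns, ADD and SUB
;; only print their result, as described above (no assignment).
(defun assemble (program)
  (let ((regs (list (cons 'r1 0) (cons 'r2 0))))
    (flet ((reg (name) (cdr (assoc name regs))))
      (dolist (opcode program)
        (destructuring-bind (op dst src) opcode
          ;; An operand is either a register name or a literal number.
          (let ((val (if (symbolp src) (reg src) src)))
            (case op
              (move (setf (cdr (assoc dst regs)) val)
                    (format t "~(~a~)=~a " dst val))
              (add  (format t "~(~a~)+~(~a~)=~a " dst src (+ (reg dst) val)))
              (sub  (format t "~(~a~)-~a=~a " dst src (- (reg dst) val))))))))
    regs))

(assemble '((move r1 5)
            (move r2 7)
            (add r1 r2)
            (sub r2 4)))
;; prints: r1=5 r2=7 r1+r2=12 r2-4=3
```

Dispatching on the car of each instruction like this is the whole
trick; jumps would only require holding the program in a vector and
keeping an explicit program counter.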

This taught me a lot--implementing a full assembler would probably have
been a waste of time--it helped me learn how to implement an assembler
in C for a school project--I at least knew the algorithm was correct.

What I am trying to say is that Kent is right: you don't need to
implement a whole language--implementing a part of a language or
instruction set will teach you a lot about how the language or
assembler works.
From: Pascal Costanza
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <costanza-0CE34A.22543516052003@news.netcologne.de>
In article <·················@sonic.net>, ····@sonic.net wrote:

> In human languages, there is a balance to be achieved. 
> A certain amount of imprecision and ambiguity is actually
> desirable, even necessary, in human languages.  There are 
> many things important to us which we could not discuss at 
> all without ambiguity and imprecision.  As somebody famous 
> once said, "never express yourself more clearly than you 
> think."  
> 
> It is not fruitful to generalize too much from what makes 
> a good human language to ideas of what makes a good computer
> language.  Or vice versa.

Polymorphism is all about _making_ code _deliberately_ ambiguous, so 
that it has different semantics in different contexts!

Of course, programming languages are supposed to offer polymorphism in a 
controlled way, but "a certain amount" is available in almost every 
programming language.
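In CLOS terms, that "controlled ambiguity" is generic-function
dispatch: the very same call form means different things depending on
the class of its argument. A minimal sketch (the class and method
names are invented for illustration):

```lisp
;; The same call, (speak x), has different semantics in different
;; contexts -- here, the context is the class of X.
(defclass dog () ())
(defclass cat () ())

(defgeneric speak (animal)
  (:documentation "Context-dependent behaviour via class dispatch."))

(defmethod speak ((a dog)) "woof")
(defmethod speak ((a cat)) "meow")

(speak (make-instance 'dog))   ; => "woof"
(speak (make-instance 'cat))   ; => "meow"
```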


Pascal
From: Shriram Krishnamurthi
Subject: Re: Lisp-2 or Lisp-1
Date: 
Message-ID: <w7dof22y19t.fsf@cs.brown.edu>
Pascal Costanza <········@web.de> writes:

> Polymorphism is all about _making_ code _deliberately_ ambiguous, so 
> that it has different semantics in different contexts!

You must be referring to subtype polymorphism, because this isn't true
of parametric polymorphism.
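The distinction shows up even in untyped Lisp code (a hypothetical
sketch; the function name is invented): a parametrically polymorphic
function has one body that works uniformly over any element type, with
nothing context-dependent about its meaning -- unlike CLOS dispatch,
which is the subtype kind.

```lisp
;; One body, uniform over element types: the code never inspects
;; what the list holds, so its meaning does not vary with context.
(defun swap-pairs (lst)
  (loop for (a b) on lst by #'cddr
        collect b
        collect a))

(swap-pairs '(1 2 3 4))    ; => (2 1 4 3)
(swap-pairs '("x" "y"))    ; => ("y" "x")
```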

[Note fwps.]

Shriram