From: Henry Baker
Subject: "Lisp" is poison for NSF research $$$
Date: 
Message-ID: <hbaker-0206950511260001@192.0.2.1>
From the anonymous reviews of a recent NSF proposal in which Lisp was
mentioned, but only as an incidental tool:

"The LISP environment is really getting out of date as a viable system
environment.  Let us not pursue this line of research any more."

and

"The investment may be a wasteful thing for the taxpayers."

----------------

My thought:

"C & C++ are God's way of telling Lispers that they're too productive".

-- 
www/ftp directory:
ftp://ftp.netcom.com/pub/hb/hbaker/home.html

From: David Neves
Subject: Re: "Lisp" is poison for NSF research $$$
Date: 
Message-ID: <neves-0206950926120001@neves.ils.nwu.edu>
In article <·······················@192.0.2.1>, ······@netcom.com (Henry
Baker) wrote:

:  From the anonymous reviews of a recent NSF proposal in which Lisp was
:  mentioned, but only as an incidental tool:
:  
:  "The LISP environment is really getting out of date as a viable system
:  environment.  Let us not pursue this line of research any more."
Amazing.  Anything should be allowed as an incidental tool.  A researcher
has to pick the best language for his or her group.  Groups put a lot of
effort in developing a good tool set for the language they work with.  For
an external reviewer to base his/her decision on an incidental tool is
stepping out of bounds.  Faulting a dynamic language is being particularly
insensitive to prototyping needs of research.  The reviewer is probably
someone who still views Lisp as the Lisp 1.5 that some programming
language texts cover.
From: Dave Yost
Subject: Lisp considered unfinished
Date: 
Message-ID: <3qnek3$mk@Yost.com>
In article <······················@neves.ils.nwu.edu>,
David Neves <·····@ils.nwu.edu> wrote:
>In article <·······················@192.0.2.1>, ······@netcom.com (Henry
>Baker) wrote:
>
>:  From the anonymous reviews of a recent NSF proposal in which Lisp was
>:  mentioned, but only as an incidental tool:
>:  
>:  "The LISP environment is really getting out of date as a viable system
>:  environment.  Let us not pursue this line of research any more."
>
>Amazing.  Anything should be allowed as an incidental tool.  A researcher
>has to pick the best language for his or her group.  Groups put a lot of
>effort in developing a good tool set for the language they work with.  For
>an external reviewer to base his/her decision on an incidental tool is
>stepping out of bounds.  Faulting a dynamic language is being particularly
>insensitive to prototyping needs of research.  The reviewer is probably
>someone who still views Lisp as the Lisp 1.5 that some programming
>language texts cover.

Denial.

I think Lisp implementers should take this as a wake-up call.

There are other warnings.
  * Lucid went out of business
  * CMUCL was abandoned, and the people are working on Dylan
  * MCL was abandoned for 2 years before being revived
  * The GARNET project has left lisp behind and has gone to C++.
    It's now 3 times faster, and more people are interested in it.
Surely there are many others.

As far as I can tell, ANSI lisp is being treated as a huge
plateau, as if there is nothing interesting left to do, or
as if any further changes would be too hard to negotiate.

What about speed?  size?  C/C++ interoperability?

These issues have been untreated emergencies for some years now.

Dave Yost
    @    .com
From: Philip Jackson
Subject: Re: Lisp considered unfinished
Date: 
Message-ID: <3qpqd9$47a@condor.ic.net>
Dave Yost (····@Yost.com) wrote:
: In article <······················@neves.ils.nwu.edu>,
: David Neves <·····@ils.nwu.edu> wrote:
: >In article <·······················@192.0.2.1>, ······@netcom.com (Henry
: >Baker) wrote:
: >
: >:  From the anonymous reviews of a recent NSF proposal in which Lisp was
: >:  mentioned, but only as an incidental tool:
: >:  
: >:  "The LISP environment is really getting out of date as a viable system
: >:  environment.  Let us not pursue this line of research any more."
: >
: >Amazing.  Anything should be allowed as an incidental tool.  A researcher
: >has to pick the best language for his or her group.  Groups put a lot of
: >effort in developing a good tool set for the language they work with.  For
: >an external reviewer to base his/her decision on an incidental tool is
: >stepping out of bounds.  Faulting a dynamic language is being particularly
: >insensitive to prototyping needs of research.  The reviewer is probably
: >someone who still views Lisp as the Lisp 1.5 that some programming
: >language texts cover.

Since the reviewer was anonymous it is particularly hard to draw conclusions
about whether they view Lisp as Lisp 1.5, or to verify such conclusions.
Possibly the anonymous reviewer had a very clear understanding of the 
current state of Lisp, but was concerned about the issues of viability that
Yost mentions.

The sentence quoted is interesting because it seems to mix two concepts:
"Lisp as a viable system environment" and "Lisp as a line of research".
(Of course, the quoted sentence may really be talking about something else
as a line of research, but it brings to mind the idea that Lisp itself
has been developed as a "research language" -- a language for doing 
research about AI and other topics, and an environment for doing research
about the design of powerful programming languages.)

Possibly the reason Lisp is in the state it is in today is that historically
these concepts have always been mixed, and perhaps not well balanced.
This has led to a language with the perceived efficiency and space
problems that Yost mentions.

: Denial.

: I think Lisp implementers should take this as a wake-up call.

: There are other warnings.
:   * Lucid went out of business
:   * CMUCL was abandoned, and the people are working on Dylan
:   * MCL was abandoned for 2 years before being revived
:   * The GARNET project has left lisp behind and has gone to C++.
:     It's now 3 times faster, and more people are interested in it.
: Surely there are many others.

: As far as I can tell, ANSI lisp is being treated as a huge
: plateau, as if there is nothing interesting left to do, or
: as if any further changes would be too hard to negotiate.

: What about speed?  size?  C/C++ interoperability?

: These issues have been untreated emergencies for some years now.

: Dave Yost


Perhaps the solution is to separate the concepts, not mix them, and
focus development of Lisp along three parallel paths:

1) Lisp as a viable system development platform. A subset of CLOS
that can be implemented with small memory requirements, compiles just as
efficiently as C, yet retains more value than C or C++ in terms of 
productivity and built-in language constructs.  This could be very
attractive to current systems developers.  It might even resemble
Lisp 1.5 ;)

2) Lisp as a language for research projects.  Perhaps the full CLOS as
we know it, or perhaps CLOS extended along lines that support particular
topics in AI research.

3) Lisp as a language for research about programming languages. 


Phil Jackson
From: Dave Dyer
Subject: Re: Lisp considered unfinished
Date: 
Message-ID: <ddyerD9pqo2.GKx@netcom.com>
I have to agree with Dave Yost; in many respects, modern C/C++/Visual
Basic development environments rival or exceed the best lisp has to
offer.  The underlying language is still crap, but the gloss on top of
it demos really well; and truthfully, goes a long way toward improving
productivity.

Despite many millions that went into Symbolics, LMI, TI and Xerox
(both directly and to their customers) there is not *ONE* really well
known "lisp" success story to point to; and on the flip side,
everybody knows how much was invested in those companies, and where
they are now.

The remaining lisp vendors are locked into survival mode, and don't have
the resources or inclination to undertake anything revolutionary.  The
supply of new blood from the universities is thin - all the up-and-coming
wizards are into networks and multimedia.  

In short, lisp is well on its way back to where it always was; as an
amusing backwater of the computer industry; of interest only to a few
academics trying to do things that are completely impractical anyway.

-- 
---
My home page: ftp://ftp.netcom.com/pub/dd/ddyer/home.html
or try http://www.triple-i.com/~ddyer/home.html
From: Dave Yost
Subject: Re: Lisp considered unfinished
Date: 
Message-ID: <3r20ee$nh@Yost.com>
In article <···············@netcom.com>, Dave Dyer <·····@netcom.com> wrote:
>
>The remaining lisp vendors are locked into survival mode, and don't have
>the resources or inclination to undertake anything revolutionary.

This appears not to apply to Harlequin at all.
The revolutionary thing they're undertaking is to develop Dylan.

Dave Yost
    @    .com
From: David B. Lamkins
Subject: Re: Lisp considered unfinished
Date: 
Message-ID: <3r3enk$sgp@maureen.teleport.com>
····@Yost.com (Dave Yost) wrote:
>In article <···············@netcom.com>, Dave Dyer <·····@netcom.com> wrote:
>>
>>The remaining lisp vendors are locked into survival mode, and don't have
>>the resources or inclination to undertake anything revolutionary.
>
>This appears not to apply to Harlequin at all.
>The revolutionary thing they're undertaking is to develop Dylan.

If you believe Harlequin's own promotional materials, the sale of development
systems is not their primary business.  To them, Lisp seems to be a tool that
supports other aspects of their business (large custom systems) -- the fact
that they can sell some copies is probably icing on the cake, assuming that
support costs don't become prohibitive...

Dave
http://www.teleport.com/~dlamkins
---
CPU Cycles: Use them now, while you still have them.
From: Barry Margolin
Subject: Re: Lisp considered unfinished
Date: 
Message-ID: <3r7qh7$m88@tools.near.net>
In article <·········@Yost.com> ····@Yost.com (Dave Yost) writes:
>In article <···············@netcom.com>, Dave Dyer <·····@netcom.com> wrote:
>>
>>The remaining lisp vendors are locked into survival mode, and don't have
>>the resources or inclination to undertake anything revolutionary.
>
>This appears not to apply to Harlequin at all.
>The revolutionary thing they're undertaking is to develop Dylan.

Harlequin also does other stuff besides Lisp and Dylan.  I suspect the
electronic publishing side of the company is carrying the language side.
Also, if the electronic publishing software is written in CL, it justifies
the language side even if LispWorks isn't making money on its own.  This is
sort of the opposite of Lucid's situation: the Lisp business lives or dies
based on the success of the rest of the company.

Franz seems to be the only big player where Lisp is the main business.
-- 
Barry Margolin
BBN Planet Corporation, Cambridge, MA
······@{bbnplanet.com,near.net,nic.near.net}
Phone (617) 873-3126 - Fax (617) 873-5124
From: ········@iexist.flw.att.com
Subject: Re: Lisp considered unfinished
Date: 
Message-ID: <D9rr51.MwM@ssbunews.ih.att.com>
I agree with the warnings sent by Dave Dyer and most
comments thereafter. However, I think one number is off:
at least one language costs a lot more money to develop than this...

In article <···············@netcom.com>, ·····@netcom.com (Dave Dyer) writes:
|> Despite many millions that went into Symbolics, LMI, TI and Xerox
|> (both directly and to their customers) there is not *ONE* really well
|> known "lisp" success story to point to; and on the flip side,
|> everybody knows how much was invested in those companies, and where
|> they are now.
-- 
----------------
Olivier Clarisse	     "Languages are not unlike living organisms
Member of Technical Staff     can they adapt and improve to survive?"
AT&T Bell Laboratories
From: Martin Brundage
Subject: Re: Lisp considered unfinished
Date: 
Message-ID: <3r0jvo$50s@ixnews5.ix.netcom.com>
In <···············@netcom.com> ·····@netcom.com (Dave Dyer) writes: 
[...]
>In short, lisp is well on its way back to where it always was; as an
>amusing backwater of the computer industry; of interest only to a few
>academics trying to do things that are completely impractical anyway.

How about rapid prototyping? It seems unlikely that C++ will ever meet that 
need. C and C++ seem to be more accidents of history than anything else. Does 
it ever seem strange to you that nearly everybody is writing apps, both large 
and small, in a portable assembler language? Hardware is decreasing in cost 
exponentially but software costs (probably) rise. Where do you think the 
industry is going to find additional productivity? Delphi?

--
Marty
······@ix.netcom.com
From: Dave Dyer
Subject: Re: Lisp considered unfinished
Date: 
Message-ID: <ddyerD9qLJs.3Gw@netcom.com>
Don't doubt that I'm on lisp's side, just as you all are.  I'm just
agreeing with and amplifying the idea that our side isn't winning.
The arguments you are making are well known to me, and frequently made
by me; but the fact remains that many $ were spent in pursuit of
lisp's supposed virtues, and the wave is visibly receding.

Saying "but we've got it right" to each other 1e6 more times isn't
going to convince anyone new.

-- 
---
My home page: ftp://ftp.netcom.com/pub/dd/ddyer/home.html
or try http://www.triple-i.com/~ddyer/home.html
From: Erik Naggum
Subject: Re: Lisp considered unfinished
Date: 
Message-ID: <19950606T083841Z@naggum.no>
[Dave Dyer]

|   Saying "but we've got it right" to each other 1e6 more times isn't
|   going to convince anyone new.

but it might convince a person or two to try it out and bring some ideas
back to the other cavemen.

if it weren't for the fact that numerous excellent Lisp systems exist for
my SPARCstation, I would probably have sold the machine and gone fishing
instead of suffering C++ and the general cluelessness of the PC industry.
every week or so, another student at the Department of Informatics at the U
of Oslo shows interest in Emacs Lisp and Lisp programming because some of
us keep posting neat functions in Lisp.  there are those who are attracted
to the elegance of Lisp even though there is little paid work to get using
it.  come to think of it, this is how many of our best programmers approach
_programming_, not just programming languages.

besides, I couldn't have worked with C++ for a year without Emacs Lisp to
help me write in that language.

#<Erik 3011416721>
--
NETSCAPISM /net-'sca-,pi-z*m/ n (1995): habitual diversion of the mind to
    purely imaginative activity or entertainment as an escape from the
    realization that the Internet was built by and for someone else.
From: Oliver Laumann
Subject: Re: Lisp considered unfinished
Date: 
Message-ID: <3r1a9d$qqv@news.cs.tu-berlin.de>
In article <···············@netcom.com>, Dave Dyer <·····@netcom.com> wrote:
> Don't doubt that I'm on lisp's side, just as you all are.  I'm just
> agreeing with and amplifying the idea that our side isn't winning.

I don't believe C/C++ will `win' in the sense that Lisp as an
implementation language will be displaced by C or C++.

Instead, we will see an increasing number of `hybrid' applications
that are written in a mixture of Lisp (or actually a dialect thereof,
such as Scheme) and C or C++.

And it's about time; Emacs has been around for many years, and it
works and has been successful.

We should probably invest more energy in improving language
_integration_ rather than fighting C++ or attempting to develop the
`ultimate' programming language (see the recent thread on the ``Grand
Challenge in Programming Languages'' in comp.lang.functional).
From: Luigi Semenzato
Subject: Love, Religion, and Programming Technologies
Date: 
Message-ID: <3r2nfe$d8n@fido.asd.sgi.com>
(was: Lisp considered unfinished)

We Lisp users are fighting more than a battle between
technologies: it is a struggle of giant viruses that take hold
of our brains and grow within us in irreversible symbiosis.

How often have you asked yourself `is Lisp really that good
or have I been zombified by using it this long?'  How often
have you seen someone talk about Prolog with the same inspired
expression you use when you describe some Lisp feature, and
you thought `how can he be so blind?'

Given enough time, people will like even something like C++.
By then, their thinking processes will be so enmeshed with
C++ that not liking C++ would mean not liking their own brain.

I know this happens and it worries me because I am afraid
it's something I cannot rationalize.  For instance, I see
people more and more often put C and C++ in the same ballpark.
How can they be so blind?  To me C is a small language, with
serious limitations but a precise rationale: easy to learn,
and, within that rationale, quite consistent.  C++ on the
other hand feels like the result of carrying that same 
rationale into a realm where it no longer makes sense.  
But I have programmed in C a lot more than C++.  So what 
do I know?

I say this because that's how people on the other side see us.
To them we look just as blind as they to us.  This is the
fundamental obstacle.  It's not object orientation or image
size or programming environment.  It's whatever you learn first.

So the only way to make Lisp succeed is to work with Lisp
haters.  Learn their language, write good code in it, keep
telling them how much easier this would be in Lisp, sneak
a Scheme interpreter in through the back door.  I think we'll
make it.    ---Luigi
From: David B. Lamkins
Subject: Re: Love, Religion, and Programming Technologies
Date: 
Message-ID: <3r3g6a$d6@maureen.teleport.com>
··@barracuda.engr.sgi.com (Luigi Semenzato) wrote:
>(was: Lisp considered unfinished)
>
>We Lisp users are fighting more than a battle between
>technologies: it is a struggle of giant viruses that take hold
>of our brains and grow within us in irreversible symbiosis.
>
>How often have you asked yourself `is Lisp really that good
>or have I been zombified by using it this long?'  How often
..
>
>Given enough time, people will like even something like C++.
>By then, their thinking processes will be so enmeshed with
>C++ that not liking C++ would mean not liking their own brain.
..

Yes, there's a strong leaning toward one's favorite language, and
yes it does color one's perceptions of other languages, particularly
those that are a sufficient "distance", cognitively, from familiar
territory.

What's really frustrating is that the cross-pollination of ideas is
so pathetically slow, mostly because people's natural tendency is to 
block out anything far from their self-perceived norm...  That's why
C++ users absolutely fawn over new clever ways of writing data structures
like (to quote an example from a magazine that crossed my desk today)
"arrays of arrays" -- a big yawn to Lispers.  On the other side, I'd
kill for a really natural interface to the underlying OS, or a GUI
framework as robust as those taken for granted by C++ users.

Even languages as conceptually close as Common Lisp and Scheme suffer
from the cognitive gap.  And as a Lisper, how well do you think Dylan's
infix syntax (an idea being reborn for the Nth time in a Lispy language)
will fare with die-hard Lispers?

>I say this because that's how people on the other side see us.
>To them we look just as blind as they to us.  This is the
>fundamental obstacle.  It's not object orientation or image
>size or programming environment.  It's whatever you learn first.
>
>So the only way to make Lisp succeed is to work with Lisp
>haters.  Learn their language, write good code in it, keep
>telling them how much easier this would be in Lisp, sneak
>a Scheme interpreter in through the back door.  I think we'll
>make it.    ---Luigi

I don't think Lisp will fade away.  It hasn't so far.  Neither has
FORTRAN or COBOL.  I think it may be a mistake to compare Lisp to 
the commercial language du jour.  These things go in cycles.  How
long ago was it that you could find a thriving Pascal marketplace?
The real question, IMO, is whether Lisp can survive as a viable
business for the remaining vendors.  To do this, we need to talk
more about our own use of Lisp in revenue-producing products.

Dave
http://www.teleport.com/~dlamkins
---
CPU Cycles: Use them now, while you still have them.
From: Bradford Miller
Subject: Re: Love, Religion, and Programming Technologies
Date: 
Message-ID: <miller-0706951101160001@127.0.0.1>
In article <·········@maureen.teleport.com>, "David B. Lamkins"
<········@teleport.com> wrote:

> What's really frustrating is that the cross-pollination of ideas is
> so pathetically slow, mostly because people's natural tendency is to 
> block out anything far from their self-perceived norm...  That's why
> C++ users absolutely fawn over new clever ways of writing data structures
> like (to quote an example from a magazine that crossed my desk today)
> "arrays of arrays" -- a big yawn to Lispers.  On the other side, I'd
> kill for a really natural interface to the underlying OS, or a GUI
> framework as robust as those taken for granted by C++ users.

Don't look at it as something frustrating, look at it as a career opportunity.
Yes, you can take those 20 year old results from Lisp, and republish them
as "new" C++ results! Academic careers are made in this fashion (hint: you
don't have to be the author of the original result, just able to read it
and translate to the language du jour).

Look, for instance, at all the "numerical methods for C" type books that
are republications of code written originally in Fortran in the 60s....

Here's the trick to maintain academic honesty while seeming completely original:
write a TR with full references to the lisp literature, then write your paper
for whatever C++ conference you like, but only reference your TR. Nobody will
ever bother getting the TR but other university types, and WHO CARES, you're 
after the real job in industry that pays 6 figures, not a publish or perish
rathole that barely covers rent. Of course, even if you want to go in that
direction, there's only about 45 years of lisp results waiting to be
republished in C++ journals.

Remember that nobody reads anything that isn't "in their language", be it
computer programming or political spectrum. There's millions of ideas most
people will never bother trying to adapt to their own technology, because
they can't be bothered to learn to look under the surface syntax.

That's the secret to good research methodology, btw, think of how the same
problem might have come up in a different area, then see how they solved it.
From: William Paul Vrotney
Subject: Re: Love, Religion, and Programming Technologies
Date: 
Message-ID: <vrotneyD9uCox.45q@netcom.com>
> In article <·······················@127.0.0.1> ······@cs.rochester.edu (Bradford Miller) writes:
> 
>    In article <·········@maureen.teleport.com>, "David B. Lamkins"
>    <········@teleport.com> wrote:
> 
>    > What's really frustrating is that the cross-pollination of ideas is
>    > so pathetically slow, mostly because people's natural tendency is to 
>    > block out anything far from their self-perceived norm...  That's why
>    > C++ users absolutely fawn over new clever ways of writing data structures
>    > like (to quote an example from a magazine that crossed my desk today)
>    > "arrays of arrays" -- a big yawn to Lispers.  On the other side, I'd
>    > kill for a really natural interface to the underlying OS, or a GUI
>    > framework as robust as those taken for granted by C++ users.
> 
>    Don't look at it as something frustrating, look at it as a career opportunity.
>    Yes, you can take those 20 year old results from Lisp, and republish them
>    as "new" C++ results! Academic careers are made in this fashion (hint: you
>    don't have to be the author of the original result, just able to read it
>    and translate to the language du jour).
>
>    ... etc

Language du jour indeed.  Brad makes some interesting points here.  If I
know Brad (hi Brad :o)), I think he intends them with a twist of sarcasm
while cutting through to the actual truth.  I would like to add another
dimension to the language du jour concept.

It seems like not even a year has passed and here we are again with the
great "Lisp versus C debate", just different debaters.  I remember a time
when it was Lisp versus Pascal and before that Lisp versus Algol for
algorithm publications and even a time when it was Lisp versus Fortran
because those were the only two languages way back then.  And now it seems
to be Lisp versus C++.  And in the future if it turns out to be Lisp versus
x, perhaps Lisp versus Dylan, then perhaps there is a pattern emerging here.
Does Lisp keep getting rediscovered?  Could McCarthy have uncovered the last
programming language and the world just hasn't accepted it yet?

PL/I advocates thought they had the answer.  I remember going to a lecture
way back then where the speaker actually announced, "PL/I is the last
programming language you will ever have to learn".  I remember the Pascal
advocates thought they had the answer with structured programming, "Enforces
program correctness".  And now the C++ advocates are more or less implying
that statically typed objects are the answer, "The compiler will prevent any
object type errors".  Meanwhile little old Lisp just refuses to die.  Could
the implementation problems of Lisp just be a "side effect" of our primitive
technologies of the 1900s?

In the language wars Lisp represents elegance.  Don't let the world sell
elegance short.  It seems to be an enduring substance of the formulae of
science.  From time to time when the Lisp versus C debate has reappeared I
have posted this quote which I believe has something to say about the
world's acceptance of Lisp:

 "Over the period A.D. 1000-1500 the Hindu-Arab number system co-existed in
 western Europe with the Greek and Roman numerals.  Oriental science was
 brought to the west by Italian merchants who traveled in the East.  One of
 them was Leonardo of Pisa (Fibonacci, `son of Bonaccio') who wrote a famous
 mathematical treatise (in 1202), the "Liber Abaci", which was influential in
 introducing Arabic numerals to the West.  However, on the whole, people
 preferred the Roman numerals with which they were familiar; they had learned
 to do sums with them quite rapidly using an abacus (a counting board with
 movable counters, still in use in some parts of the world).  The public
 disliked Hindu-Arab numerals because they were strange and difficult to
 read, and the authorities opposed them because they were too easily forged.
 In 1299 Florentine merchants were forbidden to use Arabic numerals in
 book-keeping; 200 years later Roman numerals had disappeared entirely from
 the books of the Medici." 

   [Numbers and Infinity, E.Sondheimer and A.Rogerson]

Are there parallels here?  Could history be repeating itself with Lisp?

-- 

William P. Vrotney - ·······@netcom.com
From: Robert Elton Maas, 14 yrs LISP exp, unemployed 3.7 years
Subject: Re: Love, Religion, and Programming Technologies
Date: 
Message-ID: <3rertg$pf@openlink.openlink.com>
(Flushed some off-topic newsgroups, but kept comp.lang.lisp.x because
that's where I'm following this thread even though it's slightly off topic.)

<<How often have you asked yourself `is Lisp really that good or have I
been zombified by using it this long?' ... It's not object orientation
or image size or programming environment. It's whatever you learn
first.>>

I disagree. I learned "Fortran with Format" (what I now call Fortran 1)
and SPS (IBM 1620 assembly language) first. Then we got a disk and had
"Fortran 2d" which was better than the earlier Fortran but I still used
SPS for some tasks. Then I got access to IBM 360 assembly language and
WATFOR. Then I got access to SAIL (a Stanford-modified version of Algol
60) and preferred it to all the previous and used it for about 5 years.
Meanwhile I got access to Stanford LISP 1.6 but didn't use it much for
various reasons. Then I got access to UCI LISP and started really using
it in parallel with SAIL. Then I got access to MacLISP and found it
MUCH better than SAIL or UCI LISP or anything else I had previously
used, and it became my primary programming language whenever possible.
I suffered a temporary detour when I lost TIP access to InterNet and
couldn't connect to MIT-MC to use MacLISP and had to suffer PSL
(Portable Standard LISP) for a few years, but that was still better
than anything else available. Now I've been using Common LISP since
1989, best language I ever used. No, it's not what I learned first, or
I'd still be on the Fortrash trail, using Fortran 77 or whatever they
have now. It's not even the first reasonably good language I used, or
I'd still be using SAIL or MainSAIL now.

Too bad it's no longer possible to earn a living doing rapid
prototyping using LISP. (All the ads I've seen on LISP-JOBS are either
for C/C++ programmers who happen to know a little LISP on the side, or
for A.I. Ph.D. researchers who happen to know LISP. Nothing at all in
the past 3 years for plain vanilla LISP-based applications, sigh.)

<<So the only way to make Lisp succeed is to work with Lisp haters.
Learn their language, write good code in it, keep telling them how much
easier this would be in Lisp, sneak a Scheme interpreter in through the
back door.>>

An alternative idea is to create LISP-based applications that provide
network services such as information retrieval. Customers who submit
service requests by e-mail and get results by e-mail don't care whether
the code was written in LISP or C++ or even LOGO, but after they pass
the word and you get lots of customers, a few might be curious what
language you used to get such fine results, then word of LISP might get
around favourably. (I'm planning to try that myself as soon as I catch
up with some more urgent matters and have some blocks of time to
concentrate on programming again.)
From: Rich Parker
Subject: Re: Lisp considered unfinished
Date: 
Message-ID: <3r7ccl$pkc@moa1.moa.com>
·····@netcom.com (Dave Dyer) wrote:
>Don't doubt that I'm on lisp's side, just as you all are.  I'm just
>agreeing with and amplifying the idea that our side isn't winning.
>The arguments you are making are well known to me, and frequently made
>by me; but the fact remains that many $ were spent in pursuit of
>lisp's supposed virtues, and the wave is visibly receding.
>
>Saying "but we've got it right" to each other 1e6 more times isn't
>going to convince anyone new.

I absolutely agree. I _love_ MCL and am looking forward to seeing v3.0,
but it's going to take a back seat to C++ or Smalltalk environments
_unless_ it features a small footprint in compiled apps and has a good
application framework and interface builder capability.

-rich-
From: Rich Parker
Subject: Re: Lisp considered unfinished
Date: 
Message-ID: <3r7c8h$pkc@moa1.moa.com>
······@ix.netcom.com (Martin Brundage) wrote:
>How about rapid prototyping? It seems unlikely that C++ will ever meet that 
>need.

I beg to differ. Visual C++ (for Windows) and Symantec C++ (for Mac) both offer
quite rapid prototyping environments. The amount of code that you have to write
to get a user interface up and running is really minimal (if any). That's
usually the biggest job in most any application. The computational
algorithms needed to complete the job are going to be pretty much the same in
any environment.

-rich-
From: Patrick Logan
Subject: Re: Lisp considered unfinished
Date: 
Message-ID: <3r9ulh$6hr@ornews.intel.com>
Rich Parker (·······@moa.com) wrote:
: ······@ix.netcom.com (Martin Brundage) wrote:
: >How about rapid prototyping? It seems unlikely that C++ will ever meet that 
: >need.

: I beg to differ. Visual C++ and Symantec C++ both offer
: quite rapid prototyping environments. 
: The computational
: algorithms needed to complete the job are going to be pretty much the same in
: any environment.

I have used both Visual C++ and several Lisp/Smalltalk, etc. environments.
From my experience, in no way is the "computational algorithm" development
the same in all these environments.

The closest experience I have had with a C/C++ environment being close to
a Lisp/Smalltalk environment is Zeta-C for Lisp Machines and ObjectCenter
for Unix. (ObjectCenter is currently available.)

Even so, there is a *big*, *Big*, *BIG* gap between the Lisp/Smalltalk
environments and ObjectCenter.

--
···············@ccm.jf.intel.com
Intel/Personal Conferencing

"Form follows function." -Le Corbusier
From: Barry Margolin
Subject: Re: Lisp considered unfinished
Date: 
Message-ID: <3r0v3d$bvp@tools.near.net>
In article <···············@netcom.com> ·····@netcom.com writes:
>Despite many millions that went into Symbolics, LMI, TI and Xerox
>(both directly and to their customers) there is not *ONE* really well
>known "lisp" success story to point to; and on the flip side,
>everybody knows how much was invested in those companies, and where
>they are now.

I believe that quite a bit of the space station code was being developed in
Lisp.  At least it was a few years ago; that may have changed.

Another Lisp success story was Desert Storm.  Much of the logistics was
done using Symbolics Lisp.

Part of the problem is that Lisp is best suited to large, dynamic
applications like these.  For such applications, the overhead that's often
associated with Lisp is not noticeable, while the power and flexibility of
Lisp are incredibly helpful.  But before programmers can use Lisp for large
applications they need to get their feet wet on small ones, and Lisp
usually isn't the appropriate language for little applications (the 5MB
"hello world" binary is the usual example).  Using Lisp for a little
application is like using a jack hammer when you just need a screwdriver;
but using C for a large application is like trying to break up a sidewalk
with a screwdriver and hammer.
-- 
Barry Margolin
BBN Planet Corporation, Cambridge, MA
······@{bbnplanet.com,near.net,nic.near.net}
Phone (617) 873-3126 - Fax (617) 873-5124
From: John Doner
Subject: Re: Lisp considered unfinished
Date: 
Message-ID: <3r5ads$60l@news.aero.org>
In article <··········@tools.near.net>,
Barry Margolin <······@nic.near.net> wrote:
>But before programmers can use Lisp for large
>applications they need to get their feet wet on small ones, and Lisp
>usually isn't the appropriate language for little applications (the 5MB
>"hello world" binary is the usual example).

Hear, hear.  I would like to add the perspective of a professor who
would like to teach Lisp dialects to students of math and computer
science, but who has trouble making it work.  An important part of
the problem is the size and complexity of the environment within
which the learner must learn.  Since students' computers are apt to
be small, maybe slow, this is a big obstacle.  So is cost.  (Gambit
scheme solves the cost problem, maybe the size problem, and part of
the complexity problem, so I do use that.)  But there are other
serious obstacles.

The first is comprehensibility.  In the case of Lisp, there is the
sheer size of the language.  It prevents all but a few from ever
getting a feeling that they have really mastered it; in fact, it
leaves most with the sense that they have only scratched the
surface.  Scheme is much better here.

Then there is the problem of abstractions.  Lambda, one of the key
things that makes Lisp & friends so nifty (you can make functions
on the fly, have functions returning other functions, create
closures, etc.) is a DIFFICULT CONCEPT.  Look folks, we have
trouble teaching them what ordinary mathematical functions are, let
alone functions of higher type (a.k.a functionals or operators).
(Actually, I have hopes that teaching a Lisp dialect will help
students learn mathematical concepts!  So I'm not objecting to
abstractions, I'm only pointing out that for the beginning
programmer they're a source of difficulty and complexity that must
be addressed.)

Another aspect of comprehensibility: In my view, most people reach
an understanding of how a programming language "works" by
constructing a mental model of the computer and how it executes the
statements of the language.  C is close to an assembly language, so
the mental models (which change as learning occurs) reach a good
approximation of reality in a relatively short time, and thus serve
as reliable predictors of what a particular piece of code is going
to do.  The reality of Lisp implementations is relatively
complicated, and the mental model must also be more complicated.  I
spend some effort explaining "how it all works" to students, but
however useful this may be, it does add an extra layer of
complexity.

There is the problem of efficiency.  I don't mean whether the
compiler can generate fast code.  The problem is seeing whether one
has written good code or not.  It is hard to tell.  I have a friend
who maintains a large Lisp program written years ago by others.
Often, he comments on what poor code is there, and he improves it.
But the original coders were hardly Lisp neophytes; why didn't they
see how bad the stuff they were writing really was?

Program size: as Barry notes, even "hello world" can be huge.  How
important do you suppose it is that student programmers and other
learners be able to create small programs that they can pass around
to friends?  Some don't care but many do.  If I write a piece of
Lisp code for some simple string manipulation, I may find it very
useful myself, but I might actually be embarrassed to offer the
complete stand-alone application to anyone unfamiliar with Lisp.

Accessibility: Good C compilers are available, are cheap, and fit
on relatively small student-owned machines.  Having your own system
on your own machine is a big plus for anyone learning a language.
It isn't a requirement, but it helps.

Debugging: Most Lisps have pretty serviceable debugging tools.  But
what if your code breaks something at a low level?  It's pretty
easy to tell what piece of C source some assembly code corresponds
to, but not so for Lisp.  And there are other difficult situations:
I've been scratching my head for a while over one where the
compiler chokes on code produced by several layers of interacting
macros.  It is bewildering trying to figure out where this code
originally came from!

John Doner
From: Robert Futrelle
Subject: Re: Lisp considered unfinished
Date: 
Message-ID: <3r5kfn$4nh@camelot.ccs.neu.edu>
In article <··········@news.aero.org> ·····@aero.org (John Doner) writes:
>In article <··········@tools.near.net>,
>Barry Margolin <······@nic.near.net> wrote:
>>But before programmers can use Lisp for large
>>applications they need to get their feet wet on small ones, and Lisp
>>usually isn't the appropriate language for little applications (the 5MB
>>"hello world" binary is the usual example).

I am a professor who has taught lisp to students and it works.
>
>Hear, hear.  I would like to add the perspective of a professor who
>would like to teach Lisp dialects to students of math and computer
>science, but who has trouble making it work.  An important part of
>the problem is the size and complexity of the environment within
>which the learner must learn.  Since students' computers are apt to
>be small, maybe slow, this is a big obstacle.  So is cost.  (Gambit
>scheme solves the cost problem, maybe the size problem, and part of
>the complexity problem, so I do use that.)  But there are other
>serious obstacles.

MCL isn't very big.  It will fit easily onto all the machines 
in our Mac labs.  The very fact that the listener is there makes 
the environment trivial.  Type in whatever and it compiles and
executes and gives you an answer (defs are added to the running
environment).

>The first is comprehensibility.  In the case of Lisp, there is the
>sheer size of the language.  It prevents all but a few from ever
>getting a feeling that they have really mastered it; in fact, it
>leaves most with the sense that they have only scratched the
>surface.  Scheme is much better here.

The language is only as big as you make it.  If the sample code you
give the students to guide them in their work only includes a handful
of functions, growing to 40 or so, then why should they be confused?
Don't just hand them Steele and say, this is it.  There are numerous
texts that only teach a core set of functionality.

>Then there is the problem of abstractions.  Lambda, one of the key
>things that makes Lisp & friends so nifty (you can make functions
>on the fly, have functions returning other functions, create
>closures, etc.) is a DIFFICULT CONCEPT.  Look folks, we have
>trouble teaching them what ordinary mathematical functions are, let
>alone functions of higher type (a.k.a functionals or operators).
>(Actually, I have hopes that teaching a Lisp dialect will help
>students learn mathematical concepts!  So I'm not objecting to
>abstractions, I'm only pointing out that for the beginning
>programmer they're a source of difficulty and complexity that must
>be addressed.)

Lambdas are a separate topic from returning functions, creating
closures and the like.  They can and should be treated separately.
You don't give "beginning" programmers exercises that involve these
things.

>Another aspect of comprehensibility: In my view, most people reach
>an understanding of how a programming language "works" by
>constructing a mental model of the computer and how it executes the
>statements of the language.  C is close to an assembly language, so
>the mental models (which change as learning occurs) reach a good
>approximation of reality in a relatively short time, and thus serve
>as reliable predictors of what a particular piece of code is going
>to do.  The reality of Lisp implementations is relatively
>complicated, and the mental model must also be more complicated.  I
>spend some effort explaining "how it all works" to students, but
>however useful this may be, it does add an extra layer of
>complexity.

The reality you speak of really isn't there.  Modern CPUs and compilers
do exotic things to your code.  What about conditionals and case
statements? -- they certainly compile to odd things with gotos and
worse at the assembler level.  Is that the model you want?

Lisp has an elegant and simple model.  Arguments are evaluated and then
functions are applied to them and the value returned.  Data and function forms
have the identical syntax.  Could things be simpler?  As far as mental
models go, this takes a lot off my mind.
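For instance, typed at a listener (Common Lisp or Scheme alike, a two-line
sketch):

    (* (+ 1 2) (- 7 4))    ; arguments evaluate to 3 and 3, then * is applied => 9
    '(* (+ 1 2) (- 7 4))   ; the same form quoted is plain data: a list of symbols and lists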

>There is the problem of efficiency.  I don't mean whether the
>compiler can generate fast code.  The problem is seeing whether one
>has written good code or not.  It is hard to tell.  I have a friend
>who maintains a large Lisp program written years ago by others.
>Often, he comments on what poor code is there, and he improves it.
>But the original coders were hardly Lisp neophytes; why didn't they
>see how bad the stuff they were writing really was?

Since the beginning of programming, people have been commenting
on the poor quality of code that they are given to change/maintain.
This has _nothing_ to do with Lisp.  As for what is good style,
the same rule applies as applies to writing prose -- you have to
read good things in order to learn how to write good things.
(Peter Norvig makes that remark explicitly in his Paradigms book
and I seconded it in my AI Magazine review.)

>Program size: as Barry notes, even "hello world" can be huge.  How
>important do you suppose it is that student programmers and other
>learners be able to create small programs that they can pass around
>to friends?  Some don't care but many do.  If I write a piece of
>Lisp code for some simple string manipulation, I may find it very
>useful myself, but I might actually be embarrassed to offer the
>complete stand-alone application to anyone unfamiliar with Lisp.

Students, like professors, when it comes to programming, are more
interested in source code than in compiled applications.  The source
code for "hello world" in Lisp is (princ "hello world").  They 
can give that to their friends and their friends can compile it.

>Accessibility: Good C compilers are available, are cheap, and fit
>on relatively small student-owned machines.  Having your own system
>on your own machine is a big plus for anyone learning a language.
>It isn't a requirement, but it helps.

Digitool is now offering MCL 3.0 orderable directly from them
with student pricing.  It's not $39.95, but you get a lot for
your money.
>
>Debugging: Most Lisps have pretty serviceable debugging tools.  But
>what if your code breaks something at a low level?  It's pretty
>easy to tell what piece of C source some assembly code corresponds
>to, but not so for Lisp.  And there are other difficult situations:
>I've been scratching my head for a while over one where the
>compiler chokes on code produced by several layers of interacting
>macros.  It is bewildering trying to figure out where this code
>originally came from!

Lisp doesn't tend to break on "low-level" things because there
are reliable parts of the Lisp system that handle pointers and memory
allocation for you.  If you write several layers of interacting
macros, you could be making trouble for yourself.  See Paul Graham's
book On Lisp for discussions of how you can hang yourself writing
macros.
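One small, standard example of the sort of trouble meant here is variable
capture; the SWAP below is a plausible first sketch (the names are mine) that
quietly does nothing whenever a caller happens to use a variable named TMP:

    (defmacro swap (a b)
      `(let ((tmp ,a))       ; captures any TMP in the caller's code
         (setf ,a ,b)
         (setf ,b tmp)))
    ;; (swap tmp x) expands so that the caller's TMP is shadowed and nothing
    ;; gets swapped; a GENSYM for the temporary is the usual cure.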

>John Doner

  Bob Futrelle
-- 
Prof. Robert P. Futrelle | Biological Knowledge Lab, College of CS
Office: (617)-373-2076   | Northeastern University, 161CN
Fax:    (617)-373-5121   | 360 Huntington Ave.
········@ccs.neu.edu     | Boston, MA 02115
From: Dave Yost
Subject: Re: Lisp considered unfinished
Date: 
Message-ID: <3r76ob$l1@Yost.com>
In article <··········@camelot.ccs.neu.edu>,
Robert Futrelle <········@ccs.neu.edu> wrote:
>In article <··········@news.aero.org> ·····@aero.org (John Doner) writes:
>>How important do you suppose it is that student programmers and other
>>learners be able to create small programs that they can pass around
>>to friends?  Some don't care but many do.  If I write a piece of
>>Lisp code for some simple string manipulation, I may find it very
>>useful myself, but I might actually be embarrassed to offer the
>>complete stand-alone application to anyone unfamiliar with Lisp.
>
>Students, like professors, when it comes to programming, are more
>interested in source code than in compiled applications.  The source
>code for "hello world" in Lisp is (princ "hello world").  They 
>can give that to their friends and their friends can compile it.

I think the point here is that a language/environment
is much more fun and interesting if it makes it easy
to generate a complete, polished, runnable thingy
for the platform that isn't unduly large and that
is easy to modify or integrate into other code.

On the Mac, for example, this would mean that you could
give someone a tiny double-clickable application that put
up a "Hello, folks of World!" window.  This can't be done
on MCL because there is no shared runtime.  Is there yet
a lisp implementation that facilitates this?

The best is to be able to have your cake and eat it too--
i.e. give out a file that is both a compiled app and a
repository for its source code.

The best example of this I've seen is a thing called an
AppleScript "applet" on the Mac.  When you double-click
it, it runs.  When you drag it onto the Script Editor
application, the source code is revealed and you can
mess with it.  Of course, the AppleScript runtime
library is not carried with the applet, so it's tiny.
There's now an even better development environment for
AppleScript called FaceSpan on the Mac that melds a UI
builder with an AppleScript programming environment.
It can produce an applet with a significant UI.

Lisp is definitely behind in this area, at least on the Mac.

Dave
From: 55437-olivier clarisse(haim)463
Subject: Re: Lisp considered unfinished
Date: 
Message-ID: <D9wzuF.Kq9@ssbunews.ih.att.com>
In article <·········@Yost.com>, ····@Yost.com (Dave Yost) writes:
|> The best is to be able to have your cake and eat it too--
|> i.e. give out a file that is both a compiled app and a
|> repository for its source code.
|> 
Let's check our computer history. In the early '80s, on the Xerox LM,
the Interlisp-D "OS" handled each function as a small object holding
both compiled and source code. You could also pull up any such
code object in its own editor any time and "mess with it".
You could even *advise* (modify) any system code function.

|> The best example of this I've seen is a thing called an
|> AppleScript "applet" on the Mac.  When you double-click
|> it, it runs.  When you drag it onto the Script Editor
|> application, the source code is revealed and you can
|> mess with it.  Of course, the AppleScript runtime
|> library is not carried with the applet, so it's tiny.
|> There's now an even better development environment for
|> AppleScript called FaceSpan on the Mac that melds a UI
|> builder with an AppleScript programming environment.
|> It can produce an applet with a significant UI.
|> 
|> Lisp is definitely behind in this area, at least on the Mac.
|>
And what if MCL *were* the OS on your Mac?
Then every MCL object, function or method would be both a native
"script" (is lisp the 1st scripting language or what?) and would also
inherently run compiled. [Lisp *was* ahead indeed - 10 years ago.]


-- 
----------------
Olivier Clarisse	     "Languages are not unlike living organisms
Member of Technical Staff     can they adapt and improve to survive?"
AT&T Bell Laboratories
From: Robert Elton Maas, 14 yrs LISP exp, unemployed 3.7 years
Subject: Re: Lisp considered unfinished
Date: 
Message-ID: <3rfs1r$3pf@openlink.openlink.com>
<<On the Mac, for example, this would mean that you could give someone
a tiny double-clickable application that put up a "Hello, folks of
World!" window.  This can't be done on MCL because there is no shared
runtime.>>

I don't quite see how a shared runtime would help the recipient if that
recipient had NOTHING whatsoever previously on his/her machine having
to do with LISP. What would be needed would be a link-loader that
selected all needed routines from some library and loaded them into the
runnable application being created.

LISP and HyperCard share this problem of ordinarily needing to have the
master application around to run any program, because programs are
documents not applications. The difference between MCL (and older MACL)
and HyperCard is that MCL/MACL are expensive while HyperCard used to be
free for a while. But XLISP is free even now, so if a LISP program can
run in XLISP, then pass the LISP document (source or compiled) plus a
pointer to where XLISP can be found on the net if the recipient doesn't
already have it. Then the recipient should be able to double-click on
the document to start the master XLISP application and run the program,
right?

Unfortunately I've played with XLISP only a little to date, so I don't
yet know how easy it is using XLISP to write a simple put-up-window
"Hello" program (with a button for closing the window and quitting the
program presumably).

P.S. I'm reading this thread on comp.lang.lisp.x, hoping to see some
active discussion of XLISP, but seeing mostly general LISP stuff
cross-posted, including most of this thread (but I put in a token
mention of XLISP, ha ha).
From: Rich Parker
Subject: Re: Lisp considered unfinished
Date: 
Message-ID: <3r7cv7$pkc@moa1.moa.com>
········@ccs.neu.edu (Robert Futrelle) wrote:
>MCL isn't very big.  It will fit easily onto all the machines 
>in our Mac labs.  The very fact that the listener is there makes 
>the environment trivial.  Type in whatever and it compiles and
>executes and gives you an answer (defs are added to the running
>environment).

I absolutely agree. MCL is really tops when it comes to ease of use and
friendliness. I ran all of Norvig's code (Paradigms of Artificial Intelligence
Programming, Peter Norvig, Morgan Kaufmann publishers) from within the MCL
environment and learned a lot about many of the gnarly features of closures
and functions returning functions, etc.

>The language is only as big as you make it.  If the sample code you
>give the students to guide them in their work only includes a handful
>of functions, growing to 40 or so, then why should they be confused.
>Don't just hand them Steele and say, this is it.  There are numerous
>texts that only teach a core set of functionality.

Absolutely agree. You can't hand a student CLtL2 and expect them to learn
Lisp. A book like Norvig's is ideal for teaching Lisp.

-rich-
From: William D Clinger
Subject: Re: Lisp considered unfinished
Date: 
Message-ID: <3r7k7a$dgg@camelot.ccs.neu.edu>
I am posting this followup only to comp.lang.lisp.

In article <··········@camelot.ccs.neu.edu> ········@ccs.neu.edu (Robert Futrelle) writes:
>The language is only as big as you make it.  If the sample code you
>give the students to guide them in their work only includes a handful
>of functions, growing to 40 or so, then why should they be confused.

Because they will stumble over the rest of the language as they try
to use the simple core.

As a COBOL programmer, for example, I had to consult a list of more
than 700 reserved words every time I chose the name for a variable.
It was ridiculous.

Common Lisp has about as many reserved words as COBOL.  (This is the
effect of CLtL2 p 260.)  Unlike COBOL, where the compiler reliably
complains if you misuse a reserved word, the effect of misusing a
reserved word in Common Lisp is implementation-dependent, and you
might not even get a warning message.
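For example, nothing in the standard obliges an implementation to complain
about a definition like the following sketch, even though its consequences
are undefined:

    (defun first (x)      ; FIRST is a standard Common Lisp function; redefining
      (car (last x)))     ; it is undefined behavior, and you may get no warning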

Not all Lisps have this large a problem.  IEEE Scheme has only 19
reserved words.  With the R4RS macro system, Scheme has no reserved
words at all.

But Scheme, Common Lisp, and C++ share the problem of a syntax in
which a high percentage of the plausible-looking expressions are
syntactically legal (and will survive the limited type-checking in
C++).  This is a very significant problem for students.  It means
you have to teach parts of the language that you don't want students
to use just so they can recognize symptoms of this kind of mistake.
Examples:

    ;; Scheme: the test (null? x) lacks its own parentheses, yet this is legal.
    (cond (null? x) (write 0) (else (write 1)))
    ;; Common Lisp: calling the parameter F requires FUNCALL, yet this is legal.
    (defun map1 (f x)
      (if (null x) '() (cons (f (car x)) (map1 f (cdr x)))))
    /* C: "=" where "==" was meant, and the trailing ";" leaves the if empty. */
    if (*p++ = *q++ & isalpha(*p));

>Lambdas are a separate topic from returning functions, creating
>closures and the like.  They can and should be treated separately.
>You don't give "beginning" programmers exercises that involve these
>things.

I can understand this attitude with regard to Common Lisp, because
you don't want to have to explain to a novice why the definition of
map1 above doesn't work.  That example doesn't contain any lambda
expressions, however, so I don't think you should blame lambda for
the problem.
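For the record, one Common Lisp version of map1 that does work calls the
parameter through FUNCALL:

    (defun map1 (f x)
      (if (null x)
          '()
          (cons (funcall f (car x)) (map1 f (cdr x)))))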

>Lisp has an elegant and simple model.  Arguments are evaluated and then
>functions are applied to them and the value returned.  Data and function forms
>have the identical syntax.  Could things be simpler?  As far as mental
>models go, this takes a lot off my mind.

What you say is true for Scheme, though false for Common Lisp (unless
you meant only to say that both data and function forms use Cambridge
Polish).

The real problem, I believe, is that Scheme, Common Lisp, and their ilk
are at a higher level than the programming languages that most people
are used to.  In particular, people who are accustomed to understanding
programs by thinking about the sequence of states that they induce
within a von Neumann computer are not going to understand Lisp programs
very well.  (Why?  Because there is a substantial semantic gap between
Lisp and machine instructions.  This gap is bridged by highly
implementation-dependent machinery, the variety of which tends to
confuse novices who think they would understand Lisp if they only knew
"what really goes on at the machine level".)

It is true that people who have a problem with high-level languages
aren't likely to be hotshot C or C++ programmers either, but they can
at least understand small pieces of C and sometimes of C++ by thinking
at a very low level, and that may well be good enough to get them
through an undergraduate CS curriculum and to achieve recognition as
an average C programmer.

I am not saying we should aim the CS curriculum at students who have
trouble with abstraction.  What I am saying is that many schools do
precisely that, and it seems a little unfair, having attracted these
students by using low-level languages, to require them to think in a
manner that might make them more productive.  It's easier to leave
that kind of education to industry.

In article <··········@news.aero.org> ·····@aero.org (John Doner) writes:
>>There is the problem of efficiency.  I don't mean whether the
>>compiler can generate fast code.  The problem is seeing whether one
>>has written good code or not.  It is hard to tell.  I have a friend
>>who maintains a large Lisp program written years ago by others.
>>Often, he comments on what poor code is there, and he improves it.
>>But the original coders were hardly Lisp neophytes; why didn't they
>>see how bad the stuff they were writing really was?

Futrelle responds:
>Since the beginning of programming, people have been commenting
>on the poor quality of code that they are given to change/maintain.
>This has _nothing_ to do with Lisp...

I think it does.  In languages with an Algol-like syntax, the infix
operators usually take constant time, and the procedure call syntax
is used mainly for non-trivial operations.  This makes it easier to
focus one's attention on the time-consuming operations.  Since Lisp
uses the procedure call syntax for almost all operations, trivial
or not, it isn't quite so easy to see the time-consuming operations.

To see what I mean, you might ask a group of novice Lisp programmers
why the following Scheme code is inefficient:

    (define (replace a b x)
      (cond ((= (length x) 0)
             '())
            ((= (length x) 1)
             (if (eqv? a (list-ref x 0))
                 (list b)
                 (list (list-ref x 0))))
            (else
             (append (replace a b (list (list-ref x 0)))
                     (replace a b (cdr x))))))

You might hear about the common subexpressions (length x) or
(list-ref x 0), the use of append instead of cons, and someone might
even suggest a type-dispatch on the first argument so eq? could be
used instead of eqv?, but there's a good chance that no one will
notice the real problem.

I once reduced the running time of a real-world application from
2 hours to 3 seconds by repairing inefficiencies such as this, so
experienced programmers can make this kind of mistake too.
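
The real problem, presumably, is that length itself takes time proportional
to the length of the list, so the code above is quadratic.  A sketch of a
linear version, which also replaces the append of one-element lists with a
simple cons:

    (define (replace a b x)
      (if (null? x)
          '()
          (cons (if (eqv? a (car x)) b (car x))
                (replace a b (cdr x)))))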

It is worth noting that syntactically familiar operators in C++ do
not necessarily take constant time, since they can be overloaded,
so this particular objection to Lisp's syntax applies also to C++.

Doner observes:
>>Program size: as Barry notes, even "hello world" can be huge.  How
>>important do you suppose it is that student programmers and other
>>learners be able to create small programs that they can pass around
>>to friends?  Some don't care but many do.  If I write a piece of
>>Lisp code for some simple string manipulation, I may find it very
>>useful myself, but I might actually be embarrassed to offer the
>>complete stand-alone application to anyone unfamiliar with Lisp.

It is easy to write an effective selective linker for Scheme, and
I wrote one for MacScheme+Toolsmith in 1985.  The situation for
Common Lisp is rather different, because the X3J13 committee made
a semi-conscious decision that selective linking was less important
than the ability to pass symbols (as opposed to procedures) as
arguments to procedures like APPLY.  Consequently selective linking
for Common Lisp is impossible in theory and difficult in practice.
This is an area in which Common Lisp deserves its reputation, which
has affected the entire Lisp family by association.
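
To see why, note that a symbol computed at run time can name any function,
so no function can be proved unreachable.  A two-line sketch (the function
name here is made up):

    (defun call-by-name (name &rest args)
      (apply (find-symbol (string-upcase name)) args))

    ;; (call-by-name "list" 1 2 3)  =>  (1 2 3)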

William D Clinger
From: ········@iexist.flw.att.com
Subject: Re: Lisp considered unfinished
Date: 
Message-ID: <D9x205.LLr@ssbunews.ih.att.com>
In article <··········@camelot.ccs.neu.edu>, ····@ccs.neu.edu (William D Clinger) writes:
[...]
|> Common Lisp has about as many reserved words as COBOL.  (This is the
|> effect of CLtL2 p 260.)  Unlike COBOL, where the compiler reliably
|> complains if you misuse a reserved word, the effect of misusing a
|> reserved word in Common Lisp is implementation-dependent, and you
|> might not even get a warning message.
|>
(defpackage "EMPTY" (:use))
(in-package "EMPTY")	;Where Lisp has no reserved work.

Lisp has only as many "reserved words" as you choose to include.
I have taught Lisp by starting in a package "SMALL" that contained
only 16 symbols imported from COMMON-LISP (similar to "EMPTY" above).
I used SMALL to teach a small subset of important concepts, showed
students how you could rebuild all of Lisp within itself from SMALL,
and then introduced "COMMON-LISP" as a library package.
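
A sketch of such a starting package (the particular 16 symbols here are my
reconstruction, not the original course's list):

(defpackage "SMALL"
  (:use)
  (:import-from "COMMON-LISP"
   "DEFUN" "LAMBDA" "QUOTE" "IF" "CONS" "CAR" "CDR" "NULL"
   "EQ" "ATOM" "LIST" "FUNCALL" "APPLY" "LET" "T" "NIL"))
(in-package "SMALL")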

There are NO more reserved words in Lisp than in any other language,
and there are only ~27 special forms in Common Lisp (and you can
eliminate all those you don't need).  In CL, at least, you have packages
to organize large software systems into manageable units.

If cond is confusing, don't include it in SMALL; if a function
does not work for weird reasons, don't teach it until students have
learned to produce small, elegant functions that do useful things
(there are thousands of examples in CL books).

-- 
----------------
Olivier Clarisse	     "Languages are not unlike living organisms
Member of Technical Staff     can they adapt and improve to survive?"
AT&T Bell Laboratories
From: Martin Cracauer
Subject: Re: Lisp considered unfinished
Date: 
Message-ID: <1995Jun9.085359.11701@wavehh.hanse.de>
········@ccs.neu.edu (Robert Futrelle) writes:

>The language is only as big as you make it.  If the sample code you
>give the students to guide them in their work only includes a handful
>of functions, growing to 40 or so, then why should they be confused.
>Don't just hand them Steele and say, this is it.  There are numerous
>texts that only teach a core set of functionality.

I object to this view.  I couldn't get productive in Lisp with the
functions mentioned in Winston/Horn and Koschman.  Simulating existing
functions by combining simpler ones is a waste of time.  The worst
thing was that I couldn't understand other people's code because I
didn't know many functions/macros.

It took reading CLtL2 to become productive.  By productive I mean a
portable application with acceptably fast I/O, optimized inner parts,
and use of standard CL elements where they exist (so the code stays
readable for others, and for me at a later time).

[...]

>Lisp doesn't tend to break on "low-level" things because there
>are reliable parts of the Lisp system that handle pointers and memory
>allocation for you.  If you write several layers of interacting
>macros, you could be making trouble for yourself.  See Paul Graham's
>book On Lisp for discussions of how you can hang yourself writing
>macros.

For me, Common Lisp systems break relatively often, more often than my
(GNU) C compiler.

This causes me to do my Lisp work on UNIX, where I have several
compilers available to check against each other (and against my
insufficient knowledge of Common Lisp, which causes many programs to
fail, too).

The nature of working in Lisp, in an integrated environment where a
crash in the compiler or debugger/inspector/whatever terminates all
editing sessions too, makes the situation worse than in C, where a
crashing compiler doesn't harm other emacs buffers (of course, that
doesn't apply to Borland C++ :-).

Martin
-- 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Martin Cracauer <········@wavehh.hanse.de>. No NeXTMail, please.
 Norderstedt/Hamburg, Germany. Fax +49 40 522 85 36. This is a 
 private address. At (netless) work programming in data analysis.
From: Frank Adrian
Subject: Re: Lisp considered unfinished
Date: 
Message-ID: <3r9l2r$71@atheria.europa.com>
John Doner (·····@aero.org) wrote:
: Another aspect of comprehensibility: In my view, most people reach
: an understanding of how a programming language "works" by
: constructing a mental model of the computer and how it executes the
: statements of the language.  C is close to an assembly language, so
: the mental models (which change as learning occurs) reach a good
: approximation of reality in a relatively short time, and thus serve
: as reliable predictors of what a particular piece of code is going
: to do.  The reality of Lisp implementations is relatively
: complicated, and the mental model must also be more complicated.  I
: spend some effort explaining "how it all works" to students, but
: however useful this may be, it does add an extra layer of
: complexity.

I wonder if this is a problem of how people understand or how they are
taught?  Operational models are useful.  However, I fail to see why an
operational model at the level of registers, memory cells, and increments
is more comprehensible than one at the level of functions, bindings,
and recursion, unless the student's mind has already been polluted by
earlier exposure to these concepts.  If programming is to advance beyond
its current (rather pathetic) level, we have to start losing our
fascination with the implementation layers.
___________________________________________________________________________
Frank A. Adrian           ancar technology            Object Analysis,
······@europa.com         PO Box 1624                   Design,
                          Portland, OR 97207              Implementation,
                          Voice: (503) 281-0724             and Training...
                          FAX: (503) 335-8976
From: Simon Brooke
Subject: Re: Lisp considered unfinished
Date: 
Message-ID: <D9ynpq.6qz@rheged.dircon.co.uk>
In article <··········@news.aero.org>, John Doner <·····@aero.org> wrote:
>In article <··········@tools.near.net>,
>Barry Margolin <······@nic.near.net> wrote:
>>But before programmers can use Lisp for large
>>applications they need to get their feet wet on small ones, and Lisp
>>usually isn't the appropriate language for little applications (the 5MB
>>"hello world" binary is the usual example).
>
>Hear, hear.  I would like to add the perspective of a professor who
>would like to teach Lisp dialects to students of math and computer
>science, but who has trouble making it work.  An important part of
>the problem is the size and complexity of the environment within
>which the learner must learn.  Since students' computers are apt to
>be small, maybe slow, this is a big obstacle.  So is cost.  (Gambit
>scheme solves the cost problem, maybe the size problem, and part of
>the complexity problem, so I do use that.)  But there are other
>serious obstacles.

This is because we conflate 'LisP' with 'Common LISP'. Common LISP is
a very big language. That doesn't mean LisP has to be a very big
language. I learned LisP on Auntie Beeb's Micro, which had a maximum
32Kb of RAM, into which all the operating system buffers and the
screen had to be fitted, as well as your LisP program. It was a very
nice system, complete with an in-core structure editor and an easy to
use break package. I and my immediate colleagues continued to rapid
prototype algorithms on our BBC Micros long after we had Xerox
Dandelions to play with.

A good small LisP may be better for teaching than a good big one,
because there's so much less to distract the learner.


-- 
------- ·····@rheged.dircon.co.uk (Simon Brooke)

			;; I'd rather live in sybar-space
From: John Doner
Subject: Re: Lisp considered unfinished
Date: 
Message-ID: <3r5bbs$6f8@news.aero.org>
In article <··········@tools.near.net>,
Barry Margolin <······@nic.near.net> wrote:
>But before programmers can use Lisp for large
>applications they need to get their feet wet on small ones, and Lisp
>usually isn't the appropriate language for little applications (the 5MB
>"hello world" binary is the usual example).

Hear, hear.  I would like to add the perspective of a professor who
would like to teach Lisp dialects to students of math and computer
science, but who has trouble making it work.  An important part of
the problem is the size and complexity of the environment within
which the learner must learn.  Since students' computers are apt to
be small, maybe slow, this is a big obstacle.  So is cost.  (Gambit
scheme solves the cost problem, maybe the size problem, and part of
the complexity problem, so I do use that.)  But there are other
serious obstacles.

The first is comprehensibility.  In the case of Lisp, there is the
sheer size of the language.  It prevents all but a few from ever
getting a feeling that they have really mastered it; in fact, it
leaves most with the sense that they have only scratched the
surface.

Then there is the problem of abstractions.  Lambda, one of the key
things that makes Lisp & friends so nifty (you can make functions
on the fly, have functions returning other functions, create
closures, etc.) is a DIFFICULT CONCEPT.  Look folks, we have
trouble teaching them what ordinary mathematical functions are, let
alone functions of higher type (a.k.a functionals or operators).
(Actually, I have hopes that teaching a Lisp dialect will help
students learn mathematical concepts!  So I'm not objecting to
abstractions, I'm only pointing out that for the beginning
programmer they're a source of difficulty and complexity that must
be addressed.)

Yet another comprehensibility issue: I believe that most people
reach an understanding of how a programming language "works" by
constructing a mental model of the computer and how it responds to
language constructs.  This model changes as learning occurs.  In
the case of C, the language is close to the hardware, and it is
relatively easier to arrive at a mental model that is a serviceable
approximation to reality.  But the reality of Lisp implementations
is much more complicated, and calls for a more complicated mental
model.  So students spend more time blundering around, wondering
why things happen the way they do.  I spend some time explaining
"how it all works", but there's little doubt that this adds to the
perceived complexity of the whole experience.

There is the problem of efficiency.  I don't mean whether the
compiler can generate fast code.  The problem is seeing whether one
has written good code or not.  It is hard to tell.  I have a friend
who maintains a large Lisp program written years ago by others.
Often, he comments on what poor code is there, and he improves it.
But the original coders were hardly Lisp neophytes; why didn't they
see how bad the stuff they were writing really was?

Program size: as Barry notes, even "hello world" can be huge.  How
important do you suppose it is that student programmers and other
learners be able to create small programs that they can pass around
to friends?  Some don't care but many do.  If I write a piece of
Lisp code for some simple string manipulation, I may find it very
useful myself, but I might actually be embarrassed to offer the
complete stand-alone application to anyone unfamiliar with Lisp.

Accessibility: Good C compilers are available, are cheap, and fit
on relatively small student-owned machines.  Having your own system
on your own machine is a big plus for anyone learning a language.
It isn't a requirement, but it helps.

Debugging: Most Lisps have pretty serviceable debugging tools.  But
what if your code breaks something at a low level?  It's pretty
easy to tell what piece of C source some assembly code corresponds
to, but not so for Lisp.  And there are other difficult situations:
I've been scratching my head for a while over one where the
compiler chokes on code produced by several layers of interacting
macros.  It is bewildering trying to figure out where this code
originally came from!

John Doner
From: David B. Lamkins
Subject: Re: Lisp considered unfinished
Date: 
Message-ID: <3r5un0$t24@maureen.teleport.com>
·····@aero.org (John Doner) wrote:
[...]
>Accessibility: Good C compilers are available, are cheap, and fit
>on relatively small student-owned machines.  Having your own system
>on your own machine is a big plus for anyone learning a language.
>It isn't a requirement, but it helps.

cost(Lisp environment) < cost(C environment) : usually true.

size(Lisp environment) < size(C environment) : debatable.

On my Mac, MCL 2.0 takes up 5.7 MB, fully loaded.  Symantec THINK C++ 7.0
takes up 15.5 MB of disk, fully loaded.  MCL will run comfortably in 3-5 MB
of physical memory.  THINK C++ requires 8-12 MB.

At work, the ratios are similar when comparing ALC/Windows 2.0 and MS VC++ 2.1.

>Debugging: Most Lisps have pretty serviceable debugging tools.  But
>what if your code breaks something at a low level?  It's pretty
>easy to tell what piece of C source some assembly code corresponds
>to, but not so for Lisp.

This may be an issue of familiarity with language implementation issues, as
suggested earlier in your post (not quoted here).  A Lisp disassembly, at
lower optimization levels, tends to name a lot of primitive operations that
pretty clearly indicate what's going on.

>  And there are other difficult situations:
>I've been scratching my head for a while over one where the
>compiler chokes on code produced by several layers of interacting
>macros.  It is bewildering trying to figure out where this code
>originally came from!

How about macroexpand-1?
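
That is, peel the macro layers off one at a time to see where the
generated code comes from.  A tiny sketch with a made-up macro:

(defmacro with-trace (form)
  `(progn (format t "~&calling ~S~%" ',form) ,form))

(macroexpand-1 '(with-trace (+ 1 2)))
;; => (PROGN (FORMAT T "~&calling ~S~%" '(+ 1 2)) (+ 1 2)), T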

Dave
http://www.teleport.com/~dlamkins
---
CPU Cycles: Use them now, while you still have them.
From: David B. Lamkins
Subject: Re: Lisp considered unfinished
Date: 
Message-ID: <3r73t8$aos@maureen.teleport.com>
"David B. Lamkins" <········@teleport.com> wrote:
>·····@aero.org (John Doner) wrote:
>[...]
>>Accessibility: Good C compilers are available, are cheap, and fit
>>on relatively small student-owned machines.  Having your own system
>>on your own machine is a big plus for anyone learning a language.
>>It isn't a requirement, but it helps.
>
>cost(Lisp environment) < cost(C environment) : usually true.
>
>size(Lisp environment) < size(C environment) : debatable.

Of course, the inequalities go the wrong way in my original message...
I meant to say:

cost(Lisp environment) > cost(C environment) : usually true.

size(Lisp environment) > size(C environment) : debatable.

>
>On my Mac, MCL 2.0 takes up 5.7 MB, fully loaded.  Symantec THINK C++ 7.0
>takes up 15.5 MB of disk, fully loaded.  MCL will run comfortably in 3-5 MB
>of physical memory.  THINK C++ requires 8-12 MB.
>
>At work, the ratios are similar when comparing ALC/Windows 2.0 and MS VC++ 2.1.

Dave
http://www.teleport.com/~dlamkins
---
CPU Cycles: Use them now, while you still have them.
From: Rich Parker
Subject: Re: Lisp considered unfinished
Date: 
Message-ID: <3r7d8e$pkc@moa1.moa.com>
"David B. Lamkins" <········@teleport.com> wrote:
>On my Mac, MCL 2.0 takes up 5.7 MB, fully loaded.  Symantec THINK C++ 7.0
>takes up 15.5 MB of disk, fully loaded.  MCL will run comfortably in 3-5 MB
>of physical memory.  THINK C++ requires 8-12 MB.

I am also a user of Symantec C++/VA/TCL, but the main difference between
it and the MCL environment is the size of compiled applications.
A decent C++/TCL-based application can be about 800KB in size.  MCL apps
tend to be 3 to 4 times that size.  That's the main problem, IMO.

-rich-
From: Dave Yost
Subject: Lisp considered too hard
Date: 
Message-ID: <3r20a2$mp@Yost.com>
Here's another problem that needs solving:

Lisp is considered too hard for most programmers,
too elegant, too complex, too formal, etc.
A Vice President of a major computer industry company
said this (more or less) to me just the other day.

I think this problem could be solved by
the right book, targeted directly at C programmers.

Lisp is as great as it is because there are many, many
things about Lisp that don't come easily to a C programmer.
Most of them I think are ramifications of the fact that
Lisp is a dynamic environment.

Perhaps someone could survey people who still
remember what they went through to transition from C to
Lisp and collect a list of things they found difficult
or hard to get used to.

Dave Yost
    @    .com
From: Patrick Logan
Subject: Re: Lisp considered too hard
Date: 
Message-ID: <3r289d$jup@ornews.intel.com>
Dave Yost (····@Yost.com) wrote:

: Lisp is considered too hard for most programmers,

Guess what, from what I've seen during twelve years in the industry,
*programming* is too hard for most programmers.

I am not kidding. Any language. Most can't do it very well,
and I'm tired of dealing with those who can't.

--
···············@ccm.jf.intel.com
Intel/Personal Conferencing

"Form follows function." -Le Corbusier
From: Dave Yost
Subject: Re: Lisp considered too hard
Date: 
Message-ID: <3r3995$1dn@Yost.com>
In article <··········@ornews.intel.com>,
Patrick Logan <·······@ornews.intel.com> wrote:
>Dave Yost (····@Yost.com) wrote:
>
>: Lisp is considered too hard for most programmers,
>
>Guess what, from what I've seen during twelve years in the industry,
>*programming* is too hard for most programmers.

  :-)  There is truth in what you say.

>I am not kidding. Any language. Most can't do it very well,
>and I'm tired of dealing with those who can't.

There is another side to this story, though.
There are programmers that are so smart and so fast,
that they have little regard for leaving something
clean and readable behind for themselves or others
to develop further.  Would you say that programming
is too hard for such a person?

Maybe we should shame all programmers into taking on lisp,
as a test of mental strength, and if they can't hack it
maybe they'll give up and try another line of work.
Hey, maybe management!    ;-)

Dave
From: Geoffrey Clements
Subject: Re: Lisp considered too hard
Date: 
Message-ID: <gclements-0706950901150001@mac21_58.keps.com>
In article <··········@Yost.com>, ····@Yost.com (Dave Yost) wrote:

> In article <··········@ornews.intel.com>,
> Patrick Logan <·······@ornews.intel.com> wrote:
> >Dave Yost (····@Yost.com) wrote:
> >
> >: Lisp is considered too hard for most programmers,
> >
> >Guess what, from what I've seen during twelve years in the industry,
> >*programming* is too hard for most programmers.
> 
>   :-)  There is truth in what you say.
> 

I'd say.

I've programmed in a few different languages and worked with a bunch of
different programmers. I find that too many of those programmers use the
"try stuff until it works" debugging method. They try something. OK, that
doesn't work. They try something else. OK, that doesn't work. So they try
something else. That works. Bug fixed. This makes a huge mess of a piece
of software because it turns a relatively straightforward piece of code,
which may not have covered every contingency, into a patchwork of partial
fixes.

I absolutely hate coming onto a project and seeing code like this. In most
cases I end up rewriting huge portions of the code. I usually finish
before someone who is just fixing things, because I turn the messy
patchwork into clean, simple code while they just fight the patchwork.

> >I am not kidding. Any language. Most can't do it very well,
> >and I'm tired of dealing with those who can't.
> 

Me too.

> There is another side to this story, though.
> There are programmers that are so smart and so fast,
> that they have little regard for leaving something
> clean and readable behind for themselves or others
> to develop further.  Would you say that programming
> is too hard for such a person?
> 

The reason someone is fast is that they are smart. They think about why
a piece of code is not working and then fix it. That way they only have to
fix the problem once. This makes them fast. They don't use clever hacks,
because those are too hard to understand at a glance. They don't use fancy
constructs, because those are too hard to understand at a glance. They use
the simplest code that solves the problem.

I don't find programming in any one language harder than any other. I find
programming in a particular implementation harder than in another. I also
find that some problems take longer to solve in one language than in another.
I find having to deal with all of the details of memory management in C to
be a pain in the butt. Sometimes I feel like I'm writing the same code
over and over. On the other hand trying to poke around in the internals of
the OS from Lisp is a pain in the butt. (Some Lisps are better at this
than others but doing it in C is easier. At least I find it so.)

> Maybe we should shame all programmers into taking on lisp,
> as a test of mental strength, and if they can't hack it
> maybe they'll give up and try another line of work.
> Hey, maybe management!    ;-)
> 

Actually, I found learning Lisp a lot easier than C++. Not because learning
the syntax of C++ is difficult, but because it took me a while to really
get the mindset for working with objects. (Just compiling C code with your
C++ compiler doesn't count as programming in C++.) Someone told me to go
program in Smalltalk for a while and objects would fall into place. I
didn't listen, and so I had to struggle longer than necessary.

I've found Lisp one of the easier languages to work with. Some of the
reason is the excellent environment that comes with Macintosh Common Lisp,
and some of it is that I don't find myself programming details as much
as I do in C.

> Dave

-- 
geoff                                           Geoffrey P. Clements
Senior Software Engineer                            Mac Software Guy
Kodak Electronic Printing Systems                         KEPS, Inc.
·········@keps.com                             Voice: (508) 670-6812
From: Henry Baker
Subject: Re: Lisp considered too hard
Date: 
Message-ID: <hbaker-0706951733440001@192.0.2.1>
In article <··························@mac21_58.keps.com>,
·········@keps.com (Geoffrey Clements) wrote:

> In article <··········@Yost.com>, ····@Yost.com (Dave Yost) wrote:
> 
> > In article <··········@ornews.intel.com>,
> > Patrick Logan <·······@ornews.intel.com> wrote:
> > >Dave Yost (····@Yost.com) wrote:
> > >
> > >: Lisp is considered too hard for most programmers,
> > >
> > >Guess what, from what I've seen during twelve years in the industry,
> > >*programming* is too hard for most programmers.

> I've programmed in a few different languages and worked with a bunch of
> different programmers. I find that too many of those programmers use the
> "try stuff until it works" debugging method. They try something. OK, that
> doesn't work. They try something else. OK, that doesn't work. So they try
> something else. That works. Bug fixed. This makes a huge mess of a piece
> of software because it turns a relatively straightforward piece of code,
> which may not have covered every contingency, into a patchwork of partial
> fixes.
> 
> I absolutely hate coming onto a project and seeing code like this. In most
> cases I end up rewriting huge portions of the code. I usually finish
> before someone who is just fixing things, because I turn the messy
> patchwork into clean, simple code while they just fight the patchwork.
> 
> > >I am not kidding. Any language. Most can't do it very well,
> > >and I'm tired of dealing with those who can't.
> 
> Me too.
> 
> > There is another side to this story, though.
> > There are programmers that are so smart and so fast,
> > that they have little regard for leaving something
> > clean and readable behind for themselves or others
> > to develop further.  Would you say that programming
> > is too hard for such a person?

What?  You don't like program development by debugging a blank sheet of
paper (or a blank screen)?  Neither did many of the people that saw Lisp
in the late 1970's and early 1980's.  Such development didn't follow
established practice, it didn't conform to the 'waterfall' model,
and it didn't use 'static typing', etc.

So where are we today?  People _love_ development by debugging blank
screens, so the newest tools for 'Visual Cxx' support this mode.  This
mode has become almost classical, now that it has the fashionable name
of 'fast prototyping'.  So the bullshit reasons why people didn't like
Lisp are just plain _wrong_, because nearly every one of these ideas is
heavily used today.

No, I think the real reason why people don't like Lisp is that people
(especially researchers at large companies and large universities) don't
like to admit that they were wrong, and that some snot-nosed kids got it
right the first time, perhaps decades before they did.  Therefore, after
5-10 years and a blizzard of technical papers, they change the names of
things and start touting the same things that Lisp had first.

Remember garbage collection?  Remember interpreters and incremental
debugging?  Remember incremental compilers?  Remember incremental loading?
(I could go on for _days_.)

People complain about 5 Megabyte 'hello world' programs.  Have you ever
measured the size of a RISC 'hello world' program that runs on a GUI
interface these days, with X-Windows (or equivalent), etc., etc.?  You
might be shocked and amazed.

The 8 Mbyte main memories required to run Lisp Machine Lisp grossed everyone
out in 1980.  But look at what it takes to run MS Windows these days!
16 Mbytes!  So it isn't the _language_ that cost all of the memory, but
the _functionality_.  And I'll bet that Lisp was at least an order of
magnitude more efficient in its use of that memory than MS Windows is.

-- 
www/ftp directory:
ftp://ftp.netcom.com/pub/hb/hbaker/home.html
From: 55437-olivier clarisse(haim)463
Subject: Re: Lisp considered too hard
Date: 
Message-ID: <D9vnFI.DBA@ssbunews.ih.att.com>
In article <·······················@192.0.2.1>, ······@netcom.com (Henry Baker) writes:
[...]
|> Remember garbage collection?  Remember interpreters and incremental
|> debugging?  Remember incremental compilers?  Remember incremental loading?
|> (I could go on for _days_.)
|> 
|> People complain about 5 Megabyte 'hello world' programs.  Have you ever
|> measured the size of a RISC 'hello world' program that runs on a GUI
|> interface these days, with X-Windows (or equivalent), etc., etc.?  You
|> might be shocked and amazed.
|> 
|> The 8 Mbyte main memories required to run Lisp Machine Lisp grossed everyone
|> out in 1980.  But look at what it takes to run MS Windows these days!
|> 16 Mbytes!  So it isn't the _language_ that cost all of the memory, but
|> the _functionality_.  And I'll bet that Lisp was at least an order of
|> magnitude more efficient in its use of that memory than MS Windows is.
|> 
Way to go!  You're right on target.  I remember the 7-MByte Lisp images
on Xerox Dandelions 11 years ago containing all the functionality
of today's MS Windows, plus most of its applications and a lot more
(including the ability to crash).  These had Ethernet, TCP/IP, FTP, WYSIWYG
text editors, and PostScript printing (actually called Interpress?).

So who needed to invent CORBA (today) when you could distribute objects
(agents) on a network of Lisp machines 10 years ago, plus use
TeleRaid and friends to remotely debug OO software distributed across
the network?  I really thought these concepts would take over the
world of computing... And yes, they did.  I have seen every single successful
software vendor steal these concepts one by one, cannibalize them, and
bloat them ever since.

Heck, Xerox invented fast windows on CPUs that probably were
20 times slower than the Pentium, at a time when having a 20-MByte hard
drive was a luxury!  And look what happened to X Windows and MS Windows.
When I look at the computer on my desk with 10 years of evolution behind it,
I see:

"Monster inside"

Written all over it!
Q: How can an industry go so wrong?
A: With lots more money, zero sense of Ethics and zero Knowledge
of its own history.

"That was me speaking, Not It."
-- 
----------------
Olivier Clarisse	     "Languages are not unlike living organisms
Member of Technical Staff     can they adapt and improve to survive?"
AT&T Bell Laboratories
From: Philip Jackson
Subject: Re: Lisp considered too hard
Date: 
Message-ID: <3r814e$3vo@condor.ic.net>
Dave Yost (····@Yost.com) wrote:
: Here's another problem that needs solving:

: Lisp is considered too hard for most programmers,
: too elegant, too complex, too formal, etc.
: A Vice President of a major computer industry company
: said this (more or less) to me just the other day.

: I think this problem could be solved by
: the right book, targeted directly at C programmers.

Perhaps the target for "ease of programming" should not be C (or C++)
programmers, but Visual Basic... A Lisp environment that was as easy
as Visual Basic, and provided the same level of support for GUI
development, OS access, and integration with other applications,
could be very attractive... and perhaps not too far from where 
Lisp already is...

Phil Jackson
From: Richard A. O'Keefe
Subject: Re: Lisp considered too hard
Date: 
Message-ID: <3rma9n$kfs@goanna.cs.rmit.edu.au>
····@Yost.com (Dave Yost) writes:

>Here's another problem that needs solving:

>Lisp is considered too hard for most programmers,
>too elegant, too complex, too formal, etc.
>A Vice President of a major computer industry company
>said this (more or less) to me just the other day.

Read this and weep:
    'If you are like most C programmers, you use C not only for its power
     and flexibility but also because the language itself represents an
     almost intangible [sic], formal beauty that can be appreciated for
     its own sake.  In fact, C is often referred to as "elegant" because
     of its consistency and purity.  ...
     The C language is one of the most theoretically consistent
     computer languages ever developed.'

    Herb Schildt, in 'The Craft of C'.

If C is elegant, then I guess Lisp *is* 'too elegant'.  (Mind you, I wonder
how Schildt manages to touch beauty...)

By the way, I bought the book on sale because the cover said it contained a
C interpreter.  It doesn't.  It contains an interpreter for a toy language
vaguely resembling C, and _unbelievably_, it not only keeps on reparsing all
the time, it keeps on retokenising!  This is the kind of elegance that sells
a million books, people!

(Walks away sobbing and looking for a basin.)
-- 
"The complex-type shall be a simple-type."  ISO 10206:1991 (Extended Pascal)
Richard A. O'Keefe; http://www.cs.rmit.edu.au/~ok; RMIT Comp.Sci.
From: Marco Antoniotti
Subject: Re: "Lisp" is poison for NSF research $$$
Date: 
Message-ID: <MARCOXA.95Jun3120220@mosaic.nyu.edu>
In article <······················@neves.ils.nwu.edu> ·····@ils.nwu.edu (David Neves) writes:


   From: ·····@ils.nwu.edu (David Neves)
   Newsgroups:
   comp.lang.lisp,comp.lang.lisp.franz,comp.lang.lisp.x,comp.lang.clos 
   Date: Fri, 02 Jun 1995 09:26:12 -0500
   Organization: The Institute for the Learning Sciences
   Lines: 16
   Distribution: inet
   References: <·······················@192.0.2.1>
   Xref: cmcl2 comp.lang.lisp:18263 comp.lang.lisp.franz:439 comp.lang.lisp.x:1495 comp.lang.clos:3027

   In article <·······················@192.0.2.1>, ······@netcom.com (Henry
   Baker) wrote:

   :  From the anonymous reviews of a recent NSF proposal in which Lisp was
   :  mentioned, but only as an incidental tool:
   :  
   :  "The LISP environment is really getting out of date as a viable system
   :  environment.  Let us not pursue this line of research any more."
   Amazing.  Anything should be allowed as an incidental tool.  A researcher
   has to pick the best language for his or her group.  Groups put a lot of
   effort in developing a good tool set for the language they work with.  For
   an external reviewer to base his/her decision on an incidental tool is
   stepping out of bounds.  Faulting a dynamic language is being particularly
   insensitive to prototyping needs of research.  The reviewer is probably
   someone who still views Lisp as the Lisp 1.5 that some programming
   language texts cover.

I would say *almost all* the programming language books.  If you fish
around for "programming language books", very few even *acknowledge*
the existence of Common Lisp, and - when they are up to it - they use some
subset of Scheme to show the power of recursion and list
processing.  The fact that both CL and Scheme have arrays and vectors
doesn't even occur to the author(s).  Think what happens to
CLOS. :(

Cheers



--
Marco G. Antoniotti - Resistente Umano
-------------------------------------------------------------------------------
Robotics Lab		| room: 1220 - tel. #: (212) 998 3370
Courant Institute NYU	| e-mail: ·······@cs.nyu.edu

...e` la semplicita` che e` difficile a farsi.
...it is simplicity that is difficult to make.
				Bertholdt Brecht
From: Ken Anderson
Subject: Re: Lisp considered unfinished
Date: 
Message-ID: <KANDERSO.95Jun4194001@bitburg.bbn.com>
In article <·········@Yost.com> ····@Yost.com (Dave Yost) writes:
   >Amazing.  Anything should be allowed as an incidental tool.  A researcher
   >has to pick the best language for his or her group.  Groups put a lot of
   >effort in developing a good tool set for the language they work with.  For
   >an external reviewer to base his/her decision on an incidental tool is
   >stepping out of bounds.  Faulting a dynamic language is being particularly
   >insensitive to prototyping needs of research.  The reviewer is probably
   >someone who still views Lisp as the Lisp 1.5 that some programming
   >language texts cover.

   Denial.

   I think Lisp implementers should take this as a wake-up call.

Everyone, not just implementors, needs to hear this.

   There are other warnings.
     * Lucid went out of business

Why was that?  The hearsay says the C++ side brought the house down.

     * CMUCL was abandoned, and the people are working on Dylan

Perhaps they did not believe that Lisp had the seeds for its own renewal.

     * MCL was abandoned for 2 years before being revived
     * The GARNET project has left lisp behind and has gone to C++.
       It's now 3 times faster, and more people are interested in it.

This "3" is an interesting number, where does it come from?  While people
often mention a factor of 10 difference between Lisp and C, the Lisp
version is usually junk (sorry i only have two examples of this).  The
analysis i've done on admitedly small programs, suggests reasonable factors
are less than 1.5.  Garnet is a sizable chunk of code, so as an example of
a real application, understanding the performance and other differences
between the Lisp and C++ versions would be very valuable to us.  I can find
the Garnet Lisp code from the Lisp Faq (ftp a.gp.cs.cmu.edu
/usr/garnet/garnet), but where is the C++ code?

   Surely there are many others.

1.  Any expert system shell that is now in C or C++.

2.  Anyone who would rather be programming in Lisp than what they are
programming in.  Here are two of the projects i'm working on:

2P1: Coding in C++, even though i'm implementing a very simple Scheme-like
parallel language (without Scheme syntax).  I prototype in Lisp and hand
code a threaded interpreter that uses C++ objects at runtime.

2P2: Coding in Scheme, although Lisp would be more effective.  This is
because Lisp is "tainted" (which was Henry's point), and Scheme can be hidden
inside "a C application".  Also, the Scheme is flexible enough for those
who realize they want it on the inside.

   As far as I can tell, ANSI lisp is being treated as a huge
   plateau, as if there is nothing interesting left to do, or
   as if any further changes would be too hard to negotiate.

   What about speed?  size?  C/C++ interoperability?

I think users and vendors working together can do a lot here.

   These issues have been untreated emergencies for some years now.

It is easy to say "we did that 10 years ago in Lisp".  However, it doesn't
mean much if we can't deliver it to the people who want it now.  We need to
collect and focus our capabilities so we can.
--
Ken Anderson 
Internet: ·········@bbn.com
BBN ST               Work Phone: 617-873-3160
10 Moulton St.       Home Phone: 617-643-0157
Mail Stop 6/4a              FAX: 617-873-2794
Cambridge MA 02138
USA
From: John Atwood
Subject: Re: Amulet 3 times faster than Garnet
Date: 
Message-ID: <3qvuft$fdj@engr.orst.edu>
In article <·····················@bitburg.bbn.com>,
Ken Anderson <········@bitburg.bbn.com> wrote:
>In article <·········@Yost.com> ····@Yost.com (Dave Yost) writes:
>     * The GARNET project has left lisp behind and has gone to C++.
>       It's now 3 times faster, and more people are interested in it.
>
>This "3" is an interesting number, where does it come from?  While people
>often mention a factor of 10 difference between Lisp and C, the Lisp
>version is usually junk (sorry i only have two examples of this).  The
>analysis i've done on admitedly small programs, suggests reasonable factors
>are less than 1.5.  Garnet is a sizable chunk of code, so as an example of
>a real application, understanding the performance and other differences
>between the Lisp and C++ versions would be very valuable to us.  I can find
>the Garnet Lisp code from the Lisp Faq (ftp a.gp.cs.cmu.edu
>/usr/garnet/garnet), but where is the C++ code?

The number 3 comes from the Garnet/Amulet development team.
In the Garnet FAQ
(http://www.cs.cmu.edu/afs/cs.cmu.edu/project/garnet/garnet/FAQ),
they answer the question "Why switch to C++?", listing political and
technical reasons.  One technical reason:

* Speed: We spend 5 years and lots of effort optimizing our Lisp code,
but it was still pretty slow on "conventional" machines.  The initial
version of the C++ version, with similar functionality, appears to be
about THREE TIMES FASTER than the current Lisp version without any
tuning at all.


The C++ code is now available (Amulet alpha 0.2) at:
http://www.cs.cmu.edu/afs/cs/project/amulet/www/amulet-home.html


John
-- 
_________________________________________________________________
Office phone: 503-737-5583 (Batcheller 349);home: 503-757-8772
Office mail:  303 Dearborn Hall, OSU, Corvallis, OR  97331
_________________________________________________________________
From: Martin Cracauer
Subject: Re: Amulet 3 times faster than Garnet
Date: 
Message-ID: <1995Jun8.074837.4764@wavehh.hanse.de>
·······@ada.CS.ORST.EDU (John Atwood) writes:

>In the Garnet FAQ 
>* Speed: We spend 5 years and lots of effort optimizing our Lisp code,
>but it was still pretty slow on "conventional" machines.  The initial
>version of the C++ version, with similar functionality, appears to be
>about THREE TIMES FASTER than the current Lisp version without any
>tuning at all.

I don't think that is a meaningful number for comparing the speed of
Common Lisp and C++ in general.  Amulet is the second system and probably
has a cleaner and tighter implementation.

Additionally, in some places C++ *requires* faster coding techniques
where a Lisp solution may be more elegant.  In Amulet, formulas are
mapped to ordinary functions in constant space.  This is ugly, and the
Lisp version was more elegant (but slower) in this regard.

>The C++ code is now available (Amulet alpha 0.2) at:
>http://www.cs.cmu.edu/afs/cs/project/amulet/www/amulet-home.html

Just for the record, I had a look at the docs and ran some code, and I have
to say this is a nice toolkit, powerful and easy to understand.
Congratulations.

Martin
-- 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Martin Cracauer <········@wavehh.hanse.de>. No NeXTMail, please.
 Norderstedt/Hamburg, Germany. Fax +49 40 522 85 36. This is a 
 private address. At (netless) work programming in data analysis.
From: Brad Myers
Subject: Re: Amulet 3 times faster than Garnet
Date: 
Message-ID: <3rq9nb$rat@cantaloupe.srv.cs.cmu.edu>
It has just come to my attention that a discussion of Garnet and
Amulet was going on here.  I no longer follow these newsgroups, so I
was not aware of it.

The Garnet group put a significant emphasis on performance and spent
many man-months optimizing the code using all available performance
measuring tools.  Unlike many of the usual benchmark comparisons of C
and Lisp, Type declarations showed no noticeable performance
improvements for Garnet.  The biggest performance gains resulted from,
of course, changing algorithms.  The same (final) algorithms are
mostly being used in Amulet.  My biggest complaints about performance
in Lisp are that:

1) It is generally impossible to tell which Lisp functions and features
   will be slow and which will be fast, and furthermore it differs
   enormously among compilers.
2) Many of the standard functions cause consing.
3) The syntax for declarations is really awful and it is very
   easy to miss some of the places you need to put them.

In C++ we find that what seems like it will be efficient usually is,
and vice versa, and the "obvious" way to code an algorithm usually
results in quite efficient code, which is certainly not true in Lisp.

While our coding productivity is probably down somewhat in C++, I feel that
since we don't have to go back and spend time optimizing the code as
much, we might end up with about the same overall time to deliver a
system with a useful level of performance.

The list of reasons we switched from Lisp to C++ are number [12] in:
http://www.cs.cmu.edu/afs/cs.cmu.edu/project/garnet/garnet/FAQ

By the way, to answer one question, we have been able to retain many of
Garnet's dynamic features in C++, including dynamic slot typing and
generic Get and Set functions, using some of the advanced overloading
capabilities of C++.  Amulet is now in alpha release because the
documentation is not quite done.  We expect a real release by the end
of the month. See http://www.cs.cmu.edu/~amulet for more information.

Brad A. Myers
Computer Science Department
Carnegie Mellon University
5000 Forbes Avenue
Pittsburgh, PA  15213-3891
(412) 268-5150
FAX: (412) 268-5576
···@cs.cmu.edu
http://www.cs.cmu.edu/~bam
From: Marty Hall
Subject: Re: Lisp considered unfinished
Date: 
Message-ID: <D9pBuK.2v6@aplcenmp.apl.jhu.edu>
In article <·····················@bitburg.bbn.com> 
········@bitburg.bbn.com (Ken Anderson) writes:
[...]
>     * Lucid went out of business
>
>Why was that?  The hearsay says the C++ side brought the house down.

I think this is more than hearsay. The Lisp side of the house was
still making money, but they went far into debt to get into the
Energize and C++ business. When these businesses didn't make a profit,
the investors pulled the plug. One of their senior guys referred to it
as "Albatrossergize".

Be that as it may, I agree very much with Dave Yost that we should
take very seriously the real and perceived problems with Lisp, both on
the technical and political ends. The issues Dave mentioned should not
just be brushed aside in a defend-Lisp-at-all-costs fervor, as
sometimes happens on this group.

On a more upbeat note, here at the Johns Hopkins University
Applied Physics Lab, we have several Navy, ARPA, and Army projects
that make significant use of Lisp. The role of Lisp has gone over
pretty well with internal management and sponsors, even for the newly
starting programs. But JHU/APL is not mainstream industry. More
stories are needed like AT&T using Lisp (and AllegroStore) for their
Global BroadBand 2000 switching system. 
						- Marty
(proclaim '(inline skates))
From: Ken Anderson
Subject: Re: Amulet 3 times faster than Garnet
Date: 
Message-ID: <KANDERSO.95Jun19095421@bitburg.bbn.com>
In article <··········@cantaloupe.srv.cs.cmu.edu> ····@cs.cmu.edu (Brad Myers) writes:

Sorry if this is a duplicate, it looks like my initial reply didn't get out.

   It has just come to my attention that a discussion of Garnet and
   Amulet was going on here.  I no longer follow these newsgroups, so I
   was not aware of it.

   The Garnet group put a significant emphasis on performance and spent
   many man-months optimizing the code using all available performance
   measuring tools.  Unlike many of the usual benchmark comparisons of C
   and Lisp, Type declarations showed no noticeable performance

Unfortunately, a brief (about 2 hours) look at Garnet suggests that the type
declarations were either

1. missing: for example, there are no fixnum declarations in any of the
arithmetic, and no array declarations for the aref's (although there were
some svref's); or

2. mostly useless: for example, a high-level function declares an
argument to be a string and passes it to lower-level functions that have
no type information at all (see the sketch below).

This suggests that there is plenty of room for further performance
improvement.
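
A made-up illustration of the second case: the string declaration in the
caller buys nothing, because the helper it calls is compiled with no type
information at all.

(defun fast-header (s)
  (declare (string s))        ; declaration here ...
  (first-word s))

(defun first-word (s)         ; ... but fully generic code is compiled here
  (subseq s 0 (or (position #\Space s) (length s))))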

   improvements for Garnet.  The biggest performance gains resulted from,
   of course, changing algorithms.  The same (final) algorithms are
   mostly being used in Amulet.  My biggest complaints about performance
   in Lisp are that:


   1) It is generally impossible to tell which Lisp functions and features
      will be slow and which will be fast, and furthermore it differs
      enormously among compilers.
   2) Many of the standard functions cause consing.

I presume you mean "cause unnecessary consing".  I don't know which
functions you are referring to, but using proper declarations can reduce
number consing.  I also avoid functions like read-line and string-trim in
critical places in the code.

   3) The syntax for declarations is really awful and it is very
      easy to miss some of the places you need to put them.

These are all fair comments.  However, to compensate for these problems
Lisp compilers provide advice about declarations that would improve
performance.  CMUCL is particularly good in this regard.  There are also
some good profilers, which i find extremely effective at identifying where
optimization effort should go.  A day or so of profiling every once in a
while should be all you need.  Did you use such tools?

   In C++ we find that what seems like it will be efficient usually is,
   and vice versa, and the "obvious" way to code an algorithm usually
   results in quite efficient code, which is certainly not true in Lisp.

   While our coding productivity is probably down somewhat in C++, I feel that
   since we don't have to go back and spend time optimizing the code as
   much, we might end up with about the same overall time to deliver a
   system with a useful level of performance.

People sell profilers for C++ too, so i suspect optimization in C++ isn't
completely obvious.

   The list of reasons we switched from Lisp to C++ are number [12] in:
   http://www.cs.cmu.edu/afs/cs.cmu.edu/project/garnet/garnet/FAQ

While the reasons are clearly stated, i think the following one on speed is
misleading:

"
* Speed: We spend 5 years and lots of effort optimizing our Lisp code,
but it was still pretty slow on "conventional" machines.  The initial
version of the C++ version, with similar functionality, appears to be
about THREE TIMES FASTER than the current Lisp version without any
tuning at all.
"

As you pointed out, much of that 5 years would have been spent making
optimizations you would have made in C++ as well, i.e., algorithm changes.
Since undeclared arithmetic is 10 times slower than declared arithmetic, it
is unfair to say that the C++ version had "similar functionality" unless
you implemented the same arithmetic mode in C++.

Since Amulet and Garnet are sizable chunks of code, it would be quite
valuable if you could make your "about THREE TIMES FASTER" benchmark
available to us.

   By the way, to answer one question, we have been able to retain many of
   Garnet's dynamic features in C++, including dynamic slot typing and
   generic Get and Set functions, using some of the advanced overloading
   capabilities of C++.  Amulet is now in alpha release because the
   documentation is not quite done.  We expect a real release by the end
   of the month. See http://www.cs.cmu.edu/~amulet for more information.

If so, then perhaps profiling and optimizing Garnet would be a good thing
for Amulet.

k
--
Ken Anderson 
Internet: ·········@bbn.com
BBN ST               Work Phone: 617-873-3160
10 Moulton St.       Home Phone: 617-643-0157
Mail Stop 6/4a              FAX: 617-873-2794
Cambridge MA 02138
USA
From: Fernando Mato Mira
Subject: Re: Amulet 3 times faster than Garnet
Date: 
Message-ID: <3s6d92$hqo@disunms.epfl.ch>
In article <······················@bitburg.bbn.com>, ········@bitburg.bbn.com (Ken Anderson) writes:

> 1. Missing.  For example, there are no fixnum declarations in any
> arithmetic or array declarations for aref's (although there were some
> svref's). 

And SVREFs are not a good idea, as the element type of simple vectors is T.
It's better to declare the arrays as one-dimensional simple arrays with a
specific element type and use AREF (I remember Allegro would open-code
AREFs but not SVREFs).
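
A sketch of that declaration style (the function and element type here are
my own choices, not from Garnet):

(defun sum-singles (v)
  (declare (type (simple-array single-float (*)) v))
  (let ((s 0.0f0))
    (declare (single-float s))
    (dotimes (i (length v) s)
      (incf s (aref v i)))))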

--
F.D. Mato Mira           http://ligwww.epfl.ch/matomira.html                  
Computer Graphics Lab    ········@epfl.ch 
EPFL                     FAX: +41 (21) 693-5328
From: Ken Anderson
Subject: Re: Things like (floor (/ (- fixed-width comp-width) 2)) are a performance
Date: 
Message-ID: <KANDERSO.95Jun19093508@bitburg.bbn.com>
In article <··········@engr.orst.edu> ·······@ada.CS.ORST.EDU (John Atwood) writes:

   Regarding the effiency of Lisp expressions,
   Ken Anderson <········@bitburg.bbn.com> wrote:
   >
   >2. Things like (floor (/ (- fixed-width comp-width) 2)) are a performance
   >jokes.
   >

   I'll bite.  What would be the proper way to code this?

This is an example where Lisp provides a correct result, but with a perhaps
unexpected performance hit.  The performance depends on the type of the
input arguments, fixed-width and comp-width.  Let's assume they are fixnums
and that their difference is a fixnum.  Unfortunately, the (/ ... 2)
can produce a ratio, which is then converted into a fixnum by floor.  This
is a common mistake because in C, / of two ints produces an int (by
truncation).  The right way to do this in Lisp is to use the two-argument
version of floor.  The following table shows that there can be a factor of
7 difference in performance for the Lisp on my desk:

(defun f1 (a b) (floor (/ (- a b) 2)))
(defun f2 (a b) (floor (- a b) 2))
(defun f3 (a b) (declare (fixnum a b)) (floor (the fixnum (- a b)) 2))

What      time (microsec) +/-
(f1 20 5) 34.36351   0.41772142
(f1 20 6)  5.7358184 0.20296581
(f2 20 5)  4.845907  0.6062079
(f2 20 6)  4.83785   0.6236771
(f3 20 5)  4.760563  0.20169255
(f3 20 6)  4.747239  0.0

k
--
Ken Anderson 
Internet: ·········@bbn.com
BBN ST               Work Phone: 617-873-3160
10 Moulton St.       Home Phone: 617-643-0157
Mail Stop 6/4a              FAX: 617-873-2794
Cambridge MA 02138
USA
From: Kelly Murray
Subject: Re: Things like (floor (/ (- fixed-width comp-width) 2)) are a performance
Date: 
Message-ID: <1995Jun20.192838.1833@franz.com>
In article <······················@bitburg.bbn.com>, ········@bitburg.bbn.com (Ken Anderson) writes:
>> In article <··········@engr.orst.edu> ·······@ada.CS.ORST.EDU (John Atwood) writes:
>> 
>>    Regarding the effiency of Lisp expressions,
>>    Ken Anderson <········@bitburg.bbn.com> wrote:
>>    >2. Things like (floor (/ (- fixed-width comp-width) 2)) are a performance
>>    >jokes.
>>    I'll bite.  What would be the proper way to code this?
>> This is an example where Lisp provides a correct result, but with a perhaps
>> unexpected performance hit.  The performance depends on the type of the
>> input arguments, fixed-width and comp-width.  Lets assume they are fixnums
>> and that their difference is a fixnum.  Unfortunately, the (/ ... 2)
>> can produce a ratio which is then converted into a fixnum by floor.  This
>> is a common mistake because in C, / of two int's produces an int (by
>> truncation?).  The right way to do this in Lisp is to use the two argument
>> version of floor.  The following table shows that there can be a factor of
>> 7 difference in performance for the Lisp on my desk:
>> (defun f1 (a b) (floor (/ (- a b) 2)))
>> (defun f2 (a b) (floor (- a b) 2))
>> (defun f3 (a b) (declare (fixnum a b)) (floor (the fixnum (- a b)) 2))

Good analysis, but the fastest method is to use arithmetic shift instead of floor:

(defun f4 (a b) (ash (the fixnum (- a b)) -1))

At least in AllegroCL on the SPARC, this compiles into a few inline assembly instructions:

   0:	save	#x-68,%o6
   4:	tsubcc	%i1,%i0,%o4  ;; do the subtract
   8:	bvs,a	48           ;; branch out if they were not fixnums
  12:	move.l	%i0,%o0
lb1:
  16:	asr.l	#x1,%o4    ;; do the shift by 1 to divide
  20:	move.l	#x3,%o3
  24:	andn	%o3,%o4    ;; patch back type
  28:	move.l	%o4,%o0
  32:	move.l	#x1,%g3
  36:	jmpl	8(%i7),%g0  ;; return from function
  40:	restore	%g0,%o0
  44:	move.l	%i0,%o0

-Kelly Murray  (···@franz.com) <a href="http://www.franz.com"> Franz Inc. </a>
"Those who can see the invisible can do the impossible" - Carl Mays
From: David Neves
Subject: Re: Things like (floor (/ (- fixed-width comp-width) 2)) are a performance
Date: 
Message-ID: <neves-2106952353560001@neves.ils.nwu.edu>
In article <·····················@franz.com>, ···@math.ufl.edu (Kelly
Murray) wrote:

:  In article <······················@bitburg.bbn.com>,
········@bitburg.bbn.com (Ken Anderson) writes:
...
:  >> (defun f3 (a b) (declare (fixnum a b)) (floor (the fixnum (- a b)) 2))
:  
:  Good analysis, but the fastest method is to use arithmetic shift
instead of floor:
:  
:  (defun f4 (a b) (ash (the fixnum (- a b)) -1))
I would hope that writing "f3" would cause the compiler to compile it as
"f4".  "f3" is certainly clearer to the reader.  If not, the user could
also write a compiler macro to turn an "f3" into an "f4".
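
A minimal sketch of what such a compiler macro could look like (the
FLOOR-BY-2 helper is made up for the example; portable code shouldn't put
a compiler macro on FLOOR itself, since that's a symbol in the COMMON-LISP
package, and the fixnum assumption is left to the caller):

(defun floor-by-2 (x)
  "Quotient of X by 2, rounded toward negative infinity; one value only."
  (values (floor x 2)))

(define-compiler-macro floor-by-2 (x)
  ;; Open-code the common case as an arithmetic shift.
  ;; Assumes X evaluates to a fixnum, as in f3/f4 above.
  `(ash (the fixnum ,x) -1))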
-David
From: Kelly Murray
Subject: Re: Things like (floor (/ (- fixed-width comp-width) 2)) are a performance
Date: 
Message-ID: <1995Jun22.203546.21091@franz.com>
In article <······················@neves.ils.nwu.edu>, ·····@ils.nwu.edu (David Neves) writes:
>> In article <·····················@franz.com>, ···@math.ufl.edu (Kelly
>> Murray) wrote:
>> 
>> :  In article <······················@bitburg.bbn.com>,
>> ········@bitburg.bbn.com (Ken Anderson) writes:
>> ...
>> :  >> (defun f3 (a b) (declare (fixnum a b)) (floor (the fixnum (- a b)) 2))
>> :  
>> :  Good analysis, but the fastest method is to use arithmetic shift
>> instead of floor:
>> :  
>> :  (defun f4 (a b) (ash (the fixnum (- a b)) -1))

>> I would hope that writing "f3" would cause the compiler to compile it as
>> "f4".  "f3" is certainly cleared to the reader.  If not, the user could
>> also write a compiler macro to turn an "f3" into an "f4".
>> -David

I agree that using ash is not as clear as floor.  And in fact, AllegroCL will
transform a call to (floor (the fixnum x) 2) into a right-shift 
when speed > safety and the second return value from floor is ignored
(see details below.)

However, if you are writing performance-critical code (the 10% taking up 90%
of the time), I think it's best not to rely too much on hope, especially if you
want good portability, as some compilers might not do the transform.
 
In general, I think there is a tendency to expect too much from a Lisp compiler.
Maybe the original programmer thought the system could transform 
the (floor (/ (- x y) 2)) into the f4 version?
No C programmer would ever expect this kind of magic from their compiler.

The Franz compiler guy (Duane Rettig) works hard to include many of
these optimizations, but you can't expect the system to transform
everything that is theoretically possible while also demanding that
the Lisp image be as small as Basic
(actually Visual Basic's footprint is bigger than AllegroCL's :-)

Here's what Duane says:

"There are two reasons f3 does not expand into f4 as is; one generic
and one specific to Allegro CL (I can't speak for the other lisps):

1. (generic) The call to FLOOR is in tail position, and FLOOR returns
   two values.  Thus, if FLOOR is going to be inlined, it had better
   give both values.  ASH doesn't quite do the job.  To get it to
   return only the first value, enclose it in a VALUES form.

2. Allegro CL does not trust declarations, by default, unless the
   speed optimization quality is higher than safety.

So with the following tweaks, f3 does essentially become f4:

user(7): (defun f3 (a b)
           (declare (optimize speed) (fixnum a b))
           (values (floor (the fixnum (- a b)) 2)))
f3
user(8): (compile 'f3)
f3
nil
nil
user(9): (disassemble 'f3)
;; disassembly of #<Function f3>
;; formals: a b

;; code start: #x6ea7e4:
   0:	save	%o6, #x-68, %o6
   4:	cmp	%g3, #x2
   8:	tne	%g0, #x10
  12:	taddcctv	%g0, %g1, %g0
  16:	sub	%i0, %i1, %o4
  20:	sra	%o4, #x3, %o4
  24:	sll	%o4, #x2, %l0
  28:	mov	%l0, %o0
  32:	mov	#x1, %g3
  36:	jmp	%i7 + 8
  40:	restore	%o0, %g0, %o0
user(10): 

Some of the other cruft in the f3 function (like arg-count and
interrupt checking) can be killed by pulling out the stops and
compiling with safety=0 and debug=0. "



-Kelly Murray  ···@franz.com   <a href="http://www.franz.com"> Franz Inc. </a>
From: Pierpaolo Bernardi
Subject: Re: Things like (floor (/ (- fixed-width comp-width) 2)) are a performance
Date: 
Message-ID: <3se7hj$11uc@serra.unipi.it>
David Neves (·····@ils.nwu.edu) wrote:
: In article <·····················@franz.com>, ···@math.ufl.edu (Kelly
: Murray) wrote:

: :  In article <······················@bitburg.bbn.com>,
: ········@bitburg.bbn.com (Ken Anderson) writes:
: ...
: :  >> (defun f3 (a b) (declare (fixnum a b)) (floor (the fixnum (- a b)) 2))
: :  
: :  Good analysis, but the fastest method is to use arithmetic shift
: instead of floor:
: :  
: :  (defun f4 (a b) (ash (the fixnum (- a b)) -1))
: I would hope that writing "f3" would cause the compiler to compile it as
: "f4".  "f3" is certainly cleared to the reader.  If not, the user could
: also write a compiler macro to turn an "f3" into an "f4".
: -David

Since f3 and f4 are not equivalent, I doubt any compiler would do this:
floor returns two values, so f3 would have to drop the second one to be
equivalent to f4.
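
A quick illustration at the listener (standard CL behaviour):

(floor 15 2)           ; =>  7 and 1   (quotient and remainder)
(values (floor 15 2))  ; =>  7         (second value dropped)
(ash 15 -1)            ; =>  7         (one value only)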

  Pierpaolo
From: Richard M. Alderson III
Subject: Re: Things like (floor (/ (- fixed-width comp-width) 2)) are a performance
Date: 
Message-ID: <aldersonDAJnAz.DvD@netcom.com>
In article <·····················@franz.com> ···@math.ufl.edu (Kelly Murray)
writes:

>Good analysis, but the fastest method is to use arithmetic shift instead of
>floor:

>(defun f4 (a b) (ash (the fixnum (- a b)) -1))

AARRRRRRRGGGGGGGGGGHHHHHHHHHHHHHHHHHHHHHHHHHHHHH!

To quote from the PDP-10 hardware reference manual (OK, technically, this
edition is the _DECsystem-10/DECSYSTEM-20 Processor Reference Manual, 1982):

	An arithmetic right shift truncates a negative result differently
	from IDIV if 1s are shifted out.  The result of the shift is more
	negative by 1 than the quotient of IDIV.  Hence shifting -1 (all
	1s) gives -1 as a result.
								[page 2-41]
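
In Common Lisp terms the same distinction is FLOOR versus TRUNCATE; a
quick illustration (ASH tracks FLOOR, not the IDIV-style TRUNCATE):

(truncate -1 2)   ; =>  0 and -1   (rounds toward zero, like IDIV)
(floor -1 2)      ; => -1 and  1   (rounds toward negative infinity)
(ash -1 -1)       ; => -1          ("more negative by 1" than the IDIV quotient)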

I checked HAKMEM, since I thought this was in there, but it's not:  I seem to
recall that Steele or one of the other MIT hackers did a short report on this
in the late 60's, since people were in the habit of encoding division by 2 in
this very way.

Does anyone remember when, and who, first committed this to writing?
-- 
Rich Alderson   You know the sort of thing that you can find in any dictionary
                of a strange language, and which so excites the amateur philo-
                logists, itching to derive one tongue from another that they
                know better: a word that is nearly the same in form and meaning
                as the corresponding word in English, or Latin, or Hebrew, or
                what not.
                                                --J. R. R. Tolkien,
········@netcom.com                               _The Notion Club Papers_
From: Holger Duerer
Subject: Re: Things like (floor (/ (- fixed-width comp-width) 2)) are a performance
Date: 
Message-ID: <HOLLY.95Jun23131107@random.pc-labor.uni-bremen.de>
>>>>> On Thu, 22 Jun 1995 20:35:46 GMT, ···@franz.com (Kelly Murray) said:
	Murray> [...]

	Murray> In general, I think there is a tendency to expect too much from a Lisp compiler.
	Murray> Maybe the original programmer thought the system could transform 
	Murray> the (floor (/ (- x y) 2)) into the f4 version?
	Murray> No C programmer would ever expect this kind of magic from their compiler.

Well, it's not a fair comparison, since (as your later comments
explained) no C construct does as much as the Lisp one (i.e. no
multiple-value return, no safety checks).

Still for a simple expression like (x-y)/2 (x,y as int) I *do* expect
my compiler to generate the same code as (x-y)>>1, at least for
optimized compilations.  This is no big demand on the compiler really.
(I just checked and my gcc does it even without any optimization flags
turned on.)

For the same reason I would also expect a Lisp compiler to do the same
(i.e. with speed opt. set and when it can be guaranteed that only the
first value is used).

Lisp is supposed to be for easy prototyping.  Code like (ash <expr> -1)
does not belong in such a language (unless you really *mean*
shifting and not arithmetic).

	Holger
--
------------------------------------------------------------------------------
Holger D"urer                                          Tel.: ++49 421 218-2452
Universit"at Bremen                                    Fax.: ++49 421 218-2720
Zentrum f. Kognitionswissenschaften und    
FB 3  --  Informatik     
Postfach 330 440                        <·············@PC-Labor.Uni-Bremen.DE>
D - 28334 Bremen                 <http://www.uni-bremen.de/Duerer/Holger.html>
From: Kevin Gallagher
Subject: Re: Things like (floor (/ (- fixed-width comp-width) 2)) are a performance
Date: 
Message-ID: <3sf5d1$kj4@kernighan.cs.umass.edu>
Holger Duerer writes:
>Still for a simple expression like (x-y)/2 (x,y as int) I *do* expect
>my compiler to generate the same code as (x-y)>>1, at least for
>optimized compilations.  This is no big demand on the compiler really.
>(I just checked and my gcc does it even without any optimization flags
>turned on.)

So, what would you expect to be the value of (10 - 17) / 2 ?
-3, -4, -3.5 (hah!) -- or one of -3 or -4 depending on the compiler?

C has chosen efficiency over accuracy.  This is a perfectly
reasonable design decision, entirely within the C tradition.  Good C
programmers know this; bad ones just have inexplicable, intermittent
bugs in their programs.

Common Lisp has chosen accuracy over efficiency while, at the same
time, giving you the tools to get efficiency when you say what you
want.
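
A quick look at the same expression at a CL listener, purely for
illustration:

(/ (- 10 17) 2)         ; => -7/2       (an exact rational)
(floor (- 10 17) 2)     ; => -4 and 1   (round toward negative infinity)
(truncate (- 10 17) 2)  ; => -3 and -1  (round toward zero, the usual C answer)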

Kevin Gallagher
From: Michael Hosea
Subject: Re: Things like (floor (/ (- fixed-width comp-width) 2)) are a performance
Date: 
Message-ID: <3sf7gu$693@watson.math.niu.edu>
In article <··········@kernighan.cs.umass.edu>,
Kevin Gallagher <·····@cs.umass.edu> wrote:
>
>C has chosen efficiency over accuracy.

"Accuracy" isn't the right word.  In any language there are pitfalls
and good programming practices to avoid them.  You even allude to this
in the context of C, but you don't mention that there are pitfalls to
avoid in LISP programming as well.  Perhaps "elegance" would be a
better word.  I'd agree with the statements:  "The design of C favors
efficiency over elegance," and "The (traditional) design of LISP favors
elegance over efficiency."

Regards,

Mike Hosea       (815) 753-6740     Department of Mathematical Sciences
http://www.math.niu.edu/~mhosea     Northern Illinois University
······@math.niu.edu                 DeKalb, IL  60115, U.S.A.
From: Ken Anderson
Subject: Re: Things like (floor (/ (- fixed-width comp-width) 2)) are a performance
Date: 
Message-ID: <KANDERSO.95Jun24123514@bitburg.bbn.com>
In article <·················@hobbes.ISI.EDU> ···@ISI.EDU (Thomas A. Russ) writes:

   In article <...> ·····@random.pc-labor.uni-bremen.de (Holger Duerer) writes:
    > Still for a simple expression like (x-y)/2 (x,y as int) I *do* expect
    > my compiler to generate the same code as (x-y)>>1, at least for
    > optimized compilations.  This is no big demand on the compiler really.

Be careful! This optimization may not be happening the way you expect
it to.  To see this try compiling the following with gcc -O2 -S:

long f5 (long a, long b) { return (a-b)/2; }
long f6 (long a, long b) { return (a-b)>>1; }
unsigned long f7 (unsigned long a, unsigned long b) { return (a-b)/2; }

   Except that this is the wrong answer for Common Lisp.  It is only valid
   as long as (x-y) is evenly divisible by 2.  In Common Lisp, the division
   of two integers can (and does) result in a rational number.  For
   example, in CL:  (/ (- 10 7) 2)  ==>   3/2
     whereas in C:  (10-7)/2  ==> 1

   That's why the entire discussion using floor came in.

Another difference between Lisp and C is that in C, functions like floor or
ceiling are not defined for integers.  Writing such a function is trickier
than it might seem since "If either operand is negative, then the choice
(of how to choose the integer closest to the quotient) is left to the
discretion of the implementor." [S.P. Harbison, G.L. Steele Jr., C a
Reference Manual, Prentice Hall, NJ, 1991 p. 187]

    > (I just checked and my gcc does it even without any optimization flags
    > turned on.)
    > 
    > For the same reason I would also expect a Lisp compiler to do the same
    > (i.e. with speed opt. set and when it can be guaranteed that only the
    > first value ist used).

   With floor instead of /, agreed.  Numeric optimization is not one of
   most lisp implementation's strong points  [CMUCL excepted, perhaps],
   which can be really annoying, particularly for the fixnum case.  The
   effect is that many functions called on fixnums do not have special
   handling (for example oddp, evenp!!!).

While CMUCL is quite noteworthy, i don't think this is fair to the other
Lisp implementations out there.  From what i've seen, they do try to make
numeric optimization a strong point, and even handle evenp.  Of course, we
should keep prodding them to do better.  Perhaps we should develop a
benchmark set that identifies Lisps that aren't doing all they could.  I
enclose a benchmark for floor below.

   Of course, another source of confusion is that CL has the integer type,
   but since that includes fixnums (what most programmers really think of
   when they say "integer") and bignums, declaring something to be of type
   "integer" doesn't let the system do much in the way of optimization at
   all.

Thanks to everyone for responding to this thread.  In my original post, i
presented some relative timings and forgot to put a VALUES in F3.  Luckily,
this omission brought in discussions of other subtleties.  Here are some
timings for several Lisps that are within arm's reach.  The times are for
the body of the functions (the function call and benchmarking overhead has
been removed).  So for example, the Allegro time for F3 is essentially the
time for 3 instructions, CMUCL takes several more instructions, and Lucid
calls an internal routine.  Interestingly, the C version of F3, called f5
above, takes 4 instructions.

(defun f1 (a b) (floor (/ (- a b) 2)))
(defun f2 (a b) (floor (- a b) 2))
(defun f3 (a b) (declare (fixnum a b)) (values (floor (the fixnum (- a b)) 2)))
(defun f4 (a b) (declare (fixnum a b)) (ash (the fixnum (- a b)) -1))

;;; KRA 24JUN95: Bitburg, Sparc 10,
  microsec.  %err  What     
    34.280   0.3 ; (F1 20 5)
     5.595   0.4 ; (F1 20 6)
     4.718   0.2 ; (F2 20 5)
     4.709   0.4 ; (F2 20 6)
     0.076   0.2 ; (F3 20 5)
     0.074   0.4 ; (F3 20 6)

cmucl 17f 
    42.280   3.8 ; (F1 20 5)
     6.576   0.3 ; (F1 20 6)
     4.702   0.3 ; (F2 20 5)
     3.667   0.1 ; (F2 20 6)
     0.158   0.2 ; (F3 20 5)
     0.154   0.2 ; (F3 20 6)
     0.155   0.1 ; (F4 20 5)
     0.157   0.4 ; (F4 20 6)

Lucid 4.1
    29.089   0.2 ; (F1 20 5)
     9.745   1.6 ; (F1 20 6)
     2.203   0.8 ; (F2 20 5)
     2.155   0.4 ; (F2 20 6)
     3.595   0.1 ; (F3 20 5)
     3.548   0.9 ; (F3 20 6)
     0.085   0.5 ; (F4 20 5)
     0.085   0.1 ; (F4 20 6)
--
Ken Anderson 
Internet: ·········@bbn.com
BBN ST               Work Phone: 617-873-3160
10 Moulton St.       Home Phone: 617-643-0157
Mail Stop 6/4a              FAX: 617-873-2794
Cambridge MA 02138
USA
From: Bulent Murtezaoglu
Subject: Re: Lisp considered unfinished
Date: 
Message-ID: <MUCIT.95Jun5181518@vein.cs.rochester.edu>
>>>>> "Dave" == Dave Dyer <·····@netcom.com> writes:

    Dave> I have to agree with Dave Yost; In many respects, modern
    Dave> C/C++/Visual Basic development environments rival or exceed
    Dave> the best lisp has to offer.  The underlying language is
    Dave> still crap, but the gloss on top of it demos really well;
    Dave> and truthfully, goes a long way toward improving
    Dave> productivity.

"Demoing well" does not translate into productivity.  People usually don't
complain about Lisp development environments (which are very good actually), 
but about efficiency/image size/common GUI/resource needs etc.  Given the 
memory/CPU power available on low end machines, and the increasing 
sophistication of off-the-shelf PC operating systems (OS/2 and windows 
promises), it might only be a matter of time before someone comes out with 
a Turbo-Lisp at a reasonable price ($200 or so).  Note, though, that 
visual basic is not really in the same league.  The way I have seen it used
is for cute and flashy windows programs that don't do anything requiring
remotely sophisticated or unusual algorithms.  A turbo-Lisp could do all 
visual Basic could do and more, but most people would not bother to learn 
Lisp when they can do all they want in Basic.  If this is a problem at 
all, it has more to do with the local (US) pop computer/PC culture than Lisp.  

    Dave> Despite many millions that went into Symbolics, LMI, TI and
    Dave> Xerox (both directly and to their customers) there is not
    Dave> *ONE* really well known "lisp" success story to point to;
    Dave> and on the flip side, everybody knows how much was invested
    Dave> in those companies, and where they are now. [...]

Hmmm.  Off the top of my head I'd say Emacs, AutoCAD, and symbolic math
systems with Lispy engines inside.  Sure, MS Word and Excel aren't written
in Lisp, and they sell well and serve their intended purpose, but when
you try to do anything "unusual" with them you realize how crippled they
really are underneath that polished look.  On the language vendor side, both
Franz and Harlequin seem to be doing well and according to the rumors on the
net it wasn't the Lisp business that brought about Lucid's demise.

cheers,

BM
From: Rich Parker
Subject: Re: Lisp considered unfinished
Date: 
Message-ID: <3r7c0q$pkc@moa1.moa.com>
·····@cs.rochester.edu (Bulent Murtezaoglu) wrote:
>Hmmm.  Off the top of my head I'd say Emacs, AutoCAD, and symbolic math
>systems with Lispy engines inside.  Sure, MS Word and Excel aren't written
>in Lisp, and they sell well and serve their intended purpose, but when
>you try to do anything "unusual" with them you realize how crippled they
>really are underneath that polished look.  On the language vendor side, both
>Franz and Harlequin seem to be doing well and according to the rumors on the
>net it wasn't the Lisp business that brought about Lucid's demise.

Also, Interleaf Publisher has a Lispy engine, to my knowledge. For a while it
was a popular multi-platform page-layout program, but FrameMaker has
largely taken over Interleaf's market, and Frame is written in MPW C & C++.

A smaller-footprint compiled Lisp (without eval) is really needed in order to
compete with modern development environments. The product should also feature
an interface builder for every platform that it supports (at _least_ the Mac)
<g>.

I would also look forward to a _real_ application framework, such as modern
C++ environments provide.

FWIW,

-rich-
From: Richard M. Alderson III
Subject: Re: Lisp considered too hard
Date: 
Message-ID: <aldersonD9rMA3.5Hs@netcom.com>
In article <·········@Yost.com> ····@Yost.com (Dave Yost) writes:

>Perhaps someone could do a survey of people who still remember what they went
>through to transition from C to Lisp and collect a list of things they found
>difficult or hard to get used to.

Or whatever language(s) you knew prior to learning Lisp; some of us still don't
know C well enough to generate it, though we may be able to read it.

By the time I learned Lisp, I had already worked out all the difficulties with
recursion and pointers by (a) having worked with PL/1 area variables and the
like, and (b) having written a recursive-descent compiler for Pascal.  I'd
already been writing 360/370 assembler for years, so had little difficulty with
the SET vs. SETQ concepts; PDP-10 assembler clarified CAR/CDR *implementation*,
but the concepts were already there.

The biggest problem was using Weissman's book, and the McCarthy manual, with a
PDP-10 version of Standard Lisp (before I graduated to MACLISP):  Quoting top-
level forms wasn't necessary in the first Lisp I used (LISP 360).

Actually, the biggest problem was using Weissman's book at all.  All that time
spent on dotted-pair notation, instead of on application of Lisp itself.  It
only stopped being a toy for me when the first edition of Winston & Horn came
into the Math library.

So I don't think the problems are with C vs. Lisp, but with the usual level of
preparation given to programmers with respect to data structures at all.

Just my not so damned humble opinion, of course.
-- 
Rich Alderson   You know the sort of thing that you can find in any dictionary
                of a strange language, and which so excites the amateur philo-
                logists, itching to derive one tongue from another that they
                know better: a word that is nearly the same in form and meaning
                as the corresponding word in English, or Latin, or Hebrew, or
                what not.
                                                --J. R. R. Tolkien,
········@netcom.com                               _The Notion Club Papers_
From: E. Handelman
Subject: Re: Lisp considered too hard
Date: 
Message-ID: <eliot-0906950003320001@slip53.dialup.mcgill.ca>
····@Yost.com (Dave Yost) writes:


>Perhaps someone could do a survey of people who still remember what they went
>through to transition from C to Lisp and collect a list of things they found
>difficult or hard to get used to.

How's about the other way around? I've written huge incomprehensible
self-mutating lisp programs that do nothing and yet I can't figure out the
first thing about C. What boggles the mind is that c programs,  as I
understand it, are actually intended to do things. Am I alone in 
being unable to grasp this idea?
From: Martin Brundage
Subject: Re: Lisp considered too hard
Date: 
Message-ID: <3s5ka2$3vv@ixnews3.ix.netcom.com>
In <······················@slip53.dialup.mcgill.ca> 
·····@sunrise.cc.mcgill.ca (E. Handelman) writes: 
>····@Yost.com (Dave Yost) writes:
>>Perhaps someone could do a survey of people who still remember what they
>>went through to transition from C to Lisp and collect a list of things
>>they found difficult or hard to get used to.
>How's about the other way around? I've written huge incomprehensible
>self-mutating lisp programs that do nothing and yet I can't figure out
>the first thing about C. What boggles the mind is that c programs, as I
>understand it, are actually intended to do things. Am I alone in
>being unable to grasp this idea?

This is a very interesting comment that I seem to have heard more than once. 
Based on my exposure to both C and Lisp, I think maybe the explanation 
for this phenomenon (i.e., skilled Lisp programmers having difficulty 
grasping C, at least as an applications language) is the level of 
insulation from the hardware provided by Lisp, due to its high level of 
abstraction. C, in contrast, has a very low level of abstraction, being 
little more than a portable assembler. C would be most comprehensible to 
programmers programming "down to the metal" and intermixing C and 
assembler, because it follows the hardware so closely.

As a hardware engineer familiar with assembler programming, my 
experience with "learning C" was that I really didn't have to learn it: 
I just started using it, with an occasional reference to a manual for 
things like syntax. Likewise, I found "learning Lisp" to be equally 
intuitive and obvious, probably because of my previous exposure to 
high-level languages and C. I strongly suspect that doing the process 
backwards, i.e., going in reverse of the levels of abstraction, would 
not only be extremely difficult, but painful as well.

For programmers to say "Lisp is hard" and "C is easy" reminds me of the 
occasional programmer (usually an amateur) who swears that assembler is 
the best and easiest way to go, regardless of the size of the program! 
Yes, assembler's easier from the standpoint of the simplicity of the 
programming language, but that's as far as it goes. I think this is 
equivalent to the discussion of the difficulty of C vs. Lisp, raised one 
level of abstraction. It's fascinating to speculate that the popularity 
(if not the very existence) of C++ may be based on the fallacy of C 
being "easier" than Lisp, in view of the possible basis for this belief. 

--
Marty
······@ix.netcom.com
·······@datum.com
Datum Inc, Bancomm Div. "Masters of Time"
From: Ken Anderson
Subject: Re: Amulet 3 times faster than Garnet
Date: 
Message-ID: <KANDERSO.95Jun7122504@bitburg.bbn.com>
In article <··········@engr.orst.edu> ·······@ada.CS.ORST.EDU (John Atwood) writes:

   the number 3 comes from the Garnet/Amulet development team.
   In the Garnet FAQ 
   (http://www.cs.cmu.edu/afs/cs.cmu.edu/project/garnet/garnet/FAQ),
   in answer to the question "Why switch to C++?", they list political & 
   technical reasons.  One technical reason:

   * Speed: We spend 5 years and lots of effort optimizing our Lisp code,
   but it was still pretty slow on "conventional" machines.  The initial

I suspect they spent 5 years developing and some of that optimizing.  Their
change log only mentions optimization 20 times.  

The mention of "conventional" machines suggests that Garnet may have
started on Lisp machines.  When porting to a conventional machine you need
to use declarations wisely, and make other changes to your code.  A quick
grep through the Garnet sources suggests that there is plenty of room for
improvement.  For example:

1.  Of the 1201 declarations there is only 1 fixnum declaration, even though
i suspect a graphics application does a lot of fixnum arithmetic (there are
at least 2500 uses of "(+ ", for example).

2. Things like (floor (/ (- fixed-width comp-width) 2)) are a performance
jokes.

3. There are integer declarations, but they are likely to be useless.

Profiling is required to show what optimizations are important, but this
suggests there is plenty of room for improvement.

   version of the C++ version, with similar functionality, appears to be
   about THREE TIMES FASTER than the current Lisp version without any
   tuning at all.

It would be interesting to see how they got similar functionality in C++.
For example, it looks like the kr language (ie, kr-send, g-value) makes
heavy use of binding special variables, which is not a fast operation.
They probably did that some other way in C++.  Perhaps the Lisp version
should take advantage of this too.

   The C++ code is now available (Amulet alpha 0.2) at:
   http://www.cs.cmu.edu/afs/cs/project/amulet/www/amulet-home.html

Thanks, i'll take a look.
--
Ken Anderson 
Internet: ·········@bbn.com
BBN ST               Work Phone: 617-873-3160
10 Moulton St.       Home Phone: 617-643-0157
Mail Stop 6/4a              FAX: 617-873-2794
Cambridge MA 02138
USA
From: 55437-olivier clarisse(haim)463
Subject: Re: Amulet 3 times faster than Garnet
Date: 
Message-ID: <D9v0uK.Mq3@ssbunews.ih.att.com>
The case of AMULET versus GARNET is becoming quite entertaining.
Would anyone knowledgeable about these systems care to comment?

In article <·····················@bitburg.bbn.com>, ········@bitburg.bbn.com (Ken Anderson) writes:
|> In article <··········@engr.orst.edu> ·······@ada.CS.ORST.EDU (John Atwood) writes:
|> 
|> I suspect they spent 5 years developing and some of that optimizing.  Their
|> change log only mentions optimization 20 times.

1. What percentage of the total GARNET project resources were spent
   optimizing (for speed) versus discovering and implementing
   new concepts of constraint based VP for example?
   
2. What tools were used to do code profiling and metering on CMU-CL
   for GARNET?

3. Were courses taught on CL optimization and programming CL
   for speed at CMU during the GARNET project? etc.

|>[...] A quick
|> grep through the Garnet sources suggests that there is plenty of room for
|> improvement.  For example:
|> 
|> 1.  Of the 1201 declarations there is only 1 fixnum declaration, even though
|> i suspect a graphics application does a lot of fixnum arithmetic (there are
|> at least 2500 uses of "(+ ", for example).
|> 
|> 2. Things like (floor (/ (- fixed-width comp-width) 2)) are a performance
|> jokes.
|> 
|> 3. There are integer declarations, but they are likely to be useless.
|> 
|> Profiling is required to show what optimizations are important, but this
|> suggests there is plenty of room for improvement.
|> 
I agree, and unless developers from the original GARNET team can comment
to counter or confirm the above, these observations indicate
that the GARNET speed optimization effort was a *joke*.
This suffices to justify a rewrite of GARNET in whatever language
the next team of developers is more familiar with.

I suppose the Web pages on GARNET and AMULET will soon
be updated to correct the misinformation they are causing
(right?).
|> 
|>    version of the C++ version, with similar functionality, appears to be
|>    about THREE TIMES FASTER than the current Lisp version without any
|>    tuning at all.
|> 
It changes the conclusions I would draw from this last statement:
"The GARNET project used Lisp to discover new principles of
GUI and VP that would have been very hard to conceptualize using other
environments [at the time]; now a new project called AMULET is using
these results to reimplement a product in C++ based on these findings."
This is perfectly fine, and is a great way to get project funding at this time.

Do we see a trend elsewhere in this industry where good new ideas have
often emerged from Lisp projects to be later productized in C and C++?

Perhaps if Lisp and CLOS were taught with clear emphasis on performance
optimization tools and techniques, productizing in another language
would not be necessary. But then again, rewriting is always so
much easier (and rewarding to the rewriters) than inventing...

-- 
----------------
Olivier Clarisse	     "Languages are not unlike living organisms
Member of Technical Staff     can they adapt and improve to survive?"
AT&T Bell Laboratories
From: John Atwood
Subject: Things like (floor (/ (- fixed-width comp-width) 2)) are a performance
Date: 
Message-ID: <3rqc23$irt@engr.orst.edu>
Regarding the efficiency of Lisp expressions,
Ken Anderson <········@bitburg.bbn.com> wrote:
>
>2. Things like (floor (/ (- fixed-width comp-width) 2)) are a performance
>jokes.
>

I'll bite.  What would be the proper way to code this?

-- 
_________________________________________________________________
Office phone: 503-737-5583 (Batcheller 349);home: 503-757-8772
Office mail:  303 Dearborn Hall, OSU, Corvallis, OR  97331
_________________________________________________________________
From: Steve Haflich
Subject: Re: Lisp considered too hard
Date: 
Message-ID: <SMH.95Jun6210656@vapor.Franz.COM>
In article <··················@netcom.com> ········@netcom.com (Richard M. Alderson III) writes:

   In article <·········@Yost.com> ····@Yost.com (Dave Yost) writes:

   >Perhaps someone could do a survey of people who still remember what they went
   >through to transition from C to Lisp and collect a list of things they found
   >difficult or hard to get used to.

I would love to help, but C did not yet exist when many of us made the
transition from C to Lisp.  I do believe knowledge of Lisp did not
hinder my acquisition of C.

   Or whatever language(s) you knew prior to learning Lisp; some of us still don't
   know C well enough to generate it, though we may be able to read it.

C aside, I have still not been able to learn C++ well enough to
generate it.  The fundamental concepts underlying C++ cleverly avoid
any number of fundamental non-problems, but the bizarre lexography
mostly keeps that cleverness hidden.
From: Chris Reedy
Subject: Re: Lisp considered too hard
Date: 
Message-ID: <creedy-1406951118240001@128.29.151.94>
In article <················@vapor.Franz.COM>, ···@Franz.COM (Steve
Haflich) wrote:

> In article <··················@netcom.com> ········@netcom.com (Richard
> M. Alderson III) writes:
> 
>    In article <·········@Yost.com> ····@Yost.com (Dave Yost) writes:
> 
>    >Perhaps someone could do a survey of people who still remember what
>    >they went through to transition from C to Lisp and collect a list of
>    >things they found difficult or hard to get used to.
> 
> I would love to help, but C did not yet exist when many of us made the
> transition from C to Lisp.  I do believe knowledge of Lisp did not
> hinder my acquisition of C.
> 
>    Or whatever language(s) you knew prior to learning Lisp; some of us
>    still don't know C well enough to generate it, though we may be able
>    to read it.
> 

As someone who knows a fairly large number of programming languages (Lisp,
C, C++, Pascal, Ada, 3 assembly languages, a little Smalltalk ...) and who
knew most of them before coming to Lisp, here, off the top of my head, are
some of the things I found myself stumbling over.

1.  The size of the language.

Yes, just about everything I've ever wanted is there.  The problem is
that, even after having read through Steele, I find myself forgetting that
the language has certain capabilities.  I frequently find myself having to
look up how to use those capabilities.

If I was trying to teach this as an initial programming language, I would
definitely want to find a small subset that illustrates the key concepts.

2.  CLOS and Generic Functions

Interestingly enough, the generic function approach to objects took some
getting used to, primarily due to lack of prior examples (neither C++ nor
Smalltalk does it this way).  In the long run I am happier with this
approach than the others, especially since it solves one of the pains of
C++ which is that I can't (in C++) create methods on built-in types.

3.  Garbage collection

A person coming from C/C++ has to break themselves of the habit of
worrying about "ownership" concepts, since ownership is the usual approach
to keeping your sanity in a programming language where you have to perform
explicit deallocation.  Lisp code that embodies this concept looks rather
ugly.

P.S. I have anecdotal evidence that this is one of the major pains
associated with trying to convert a Lisp application to C/C++.

4.  Multiple ways to do the same thing

For example:
  car/cdr versus first/rest
  lists versus vectors versus structures versus classes
  setf versus setq

And in some cases the (almost) same way to do very different things, for
example:

  (setq x y) versus (setq x 'y)

This may also be a partial reiteration of number 1.

5.  Macros

In my opinion this is one of the most important capabilities in Lisp. 
When I am using Lisp I want to be able to define "my own language" and use
it.  However, backquote notation takes a little getting used to, and you
have to think about the fact that a macro is a function returning "source
code" for a while before the beauty of the concept sinks in (a small
illustration follows this list).

6.  The lack of separation between compile, load, and execute

With most other languages (Smalltalk excepted), these are separate steps. 
The fact that which modules are loaded when I attempt a compilation can
significantly impact the results of that compilation is something I'm
still not comfortable with (even though I use it to my advantage, see the
prior comment), especially since ...

7.  Lack of a standard defsystem and dependency analysis tools

When I'm doing C/C++ I have make utilities.  For the more advanced users
there are dependency analysis tools which generate the make dependencies
(the most error prone part) for you.  The more advanced environments on
PCs and Macs provide this capability as a part of the environment so I
don't even have to worry about it.  I still don't know the right way to go
about organizing a large lisp system.
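
To make point 5 a little more concrete, here is the kind of tiny backquote
macro I have in mind (an illustration only; the name and behavior are made
up for the example):

(defmacro my-unless (test &body body)
  "Evaluate BODY only when TEST is false; otherwise return NIL."
  `(if ,test
       nil
       (progn ,@body)))

;; MACROEXPAND-1 shows the "source code" the macro function returns:
;; (macroexpand-1 '(my-unless (zerop n) (/ 1 n)))
;; => (IF (ZEROP N) NIL (PROGN (/ 1 N)))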

> C aside, I have still not been able to learn C++ well enough to
> generate it.  The fundamental concepts underlying C++ cleverly avoid
> any number of fundamental non-problems, but the bizarre lexography
> mostly keeps that cleverness hidden.

I do generate C++.  It's an ugly language.  After I (finally, really)
understood instance initialization in C++, I decided I definitely prefer
Lisp.  The problem with instance initialization in C++ is that it is easy
to do the easy examples and a major pain in the *** to do the complex
examples.

Hope that helps.

  Chris

The above opinions are my own and not MITRE's.
Chris Reedy, Open Systems Center, Z667
The MITRE Corporation, 7525 Colshire Drive, McLean, VA 22102-3481
Email: ······@mitre.org  Phone: (703) 883-7183  FAX: (703) 883-6991
From: Scott Wheeler
Subject: Re: Lisp considered too hard
Date: 
Message-ID: <jyndnl@bmtech.demon.co.uk>
In Article <·······················@128.29.151.94> Chris Reedy writes:
>As someone who knows a fairly large number of programming languages (Lisp,
>C, C++, Pascal, Ada, 3 assembly languages, a little Smalltalk ...) and who
>knew most of them before coming to Lisp, here, off the top of my head, are
>some of the things I found myself stumbling over.

I have a similar language background but I don't know Lisp in any 
detail, although in general I try to have a reading knowledge of most 
common languages in order to evaluate their strengths. I thought it 
might be useful to comment why I haven't found it easy to get into.

>1.  The size of the language.
>
>Yes, just about everything I've ever wanted is there.  The problem is
>that, even after having read through Steele, I find myself forgetting 
>that the language has certain capabilities. 

Absolutely. From the outside (i.e. having read about 1/3 of Steele 
and browsed the rest, with some other books), it looks like a kitchen 
sink language. Probably there are some unifying concepts in there, but 
they are much harder to find than in something like C++ or Eiffel. 
After working out a way to do something, I'm not confident that it's 
the *right* way.

>3.  Garbage collection
>
>A person coming from C/C++ has to break themselves of the habit of
>worrying about "ownership" concepts, since ownership is the usual 
>approach to keeping your sanity in a programming language where you 
>have to perform explicit deallocation.

I find it awkward to bear in mind that something returned from a 
function is the thing itself, not a copy. In practice it doesn't 
usually make a difference, but it always feels unsafe. I'm left with 
the uneasy feeling that if I modify a variable it could have side 
effects I haven't anticipated. Not being used to GC languages, I prefer 
the C++ version where you can decide whether to take a copy or a 
reference to the original. The worst (i.e. hardest for me personally to 
work with) version seems to be Eiffel, where copy or reference 
semantics are specified on a class by class basis.

>The problem with instance initialization in C++ is that it is easy
>to do the easy examples and a major pain in the *** to do the complex
>examples.

Interesting. I'd have said that constructors and destructors in C++ 
(particularly the use for globals and class data members) were a major 
strong point compared to most OO languages.

Ok, flamebait section. It looks to me as though most of CLOS is 
designed to support CS research, and is inappropriate elsewhere because 
it takes too long to learn and work out which bits you didn't need to 
know about. I work on the software end of industrial research programs. 
Commercial programming is generally simple if you do it right. I've got 
55000 (used to be 85000) lines of C++ in one program, with only one 
"clever" bit in it (implementing a dynamic constructor, i.e. making an 
object whose class is decided at run-time by the data). It uses only 
single inheritance (apart from the standard streams library). If I were 
to implement in Lisp, I could do it in a subset of Scheme. I don't need 
CL or CLOS, and I find it difficult to see for what sort of commercial 
project I'd need that expressive power.

As I understand it, effort is going into further extensions of the 
language, e.g. the loop construct. If it is intended that the language 
should be used as a general-purpose language (and any comparison with 
C++ surely implies this), these extensions seem to be heading in the 
wrong direction. As a technical manager, I do make an honest attempt to  
identify the "best" current language, but in the case of CL and CLOS I 
pretty well have to guess at its utility because I just don't have the 
time to explore it. [Please don't email back with statistics - it's not 
that I don't believe them or that they're not interesting, but 
partisans of any Holy Language have similar stats, and I prefer to use 
my own performance on the Profane Languages as a basemark].

Anyway, as you fill your napalm tanks, please bear in mind that I am
browsing the Lisp groups with an open mind. I'm not interested in 
converting you to C++, I just thought it might be useful to explain why 
I don't use CL.

Scott
From: Stefan Monnier
Subject: Re: Lisp considered too hard
Date: 
Message-ID: <3t8kjm$qkk@info.epfl.ch>
In article <······@bmtech.demon.co.uk>,
Scott Wheeler  <······@bmtech.demon.co.uk> wrote:
] I find it awkward to bear in mind that something returned from a 
] function is the thing itself, not a copy. In practice it doesn't 
] usually make a difference, but it always feels unsafe. I'm left with 
] the uneasy feeling that if I modify a variable it could have side 
] effects I haven't anticipated. Not being used to GC languages, I prefer 
] the C++ version where you can decide whether to take a copy or a 
] reference to the original. The worst (i.e. hardest for me personally to 
] work with) version seems to be Eiffel, where copy or reference 
] semantics are specified on a class by class basis.

As a big GC fan I must say it is the first time someone argues against GC based
not on slowness/non-real-timeness/etc... but on semantics. I'm puzzled: I hope
your criticism only goes to CL and its all-pointer based view of the world,
because else you can always choose between reference or copy semantics. 
Also, as far as I know (my Eiffel experience is limited and getting old), Eiffel
doesn't use classes for copy/reference semantics: it's more like C++ where the
choice is based on the variable's declaration (with a slight difference: instead
of having "object vs. object pointer" you have "expanded object vs. object" (the
default is reference semantics)).

] Anyway, as you fill your napalm tanks, please bear in mind that I 
] browsing the Lisp groups with an open mind. I'm not interested in 
] converting you to C++, I just thought it might be useful to explain why 
] I don't use CL.

I do use CL but I must admit that CLOS, though nifty, is a dog: I always find
myself using defstruct to overcome the performance problems of defclass.


	Stefan
From: Scott Wheeler
Subject: Re: Lisp considered too hard
Date: 
Message-ID: <jyploy@bmtech.demon.co.uk>
In Article <··········@info.epfl.ch> Stefan Monnier writes:
>In article <······@bmtech.demon.co.uk>,
>Scott Wheeler  <······@bmtech.demon.co.uk> wrote:
>] I find it awkward to bear in mind that something returned from a
>] function is the thing itself, not a copy. In practice it doesn't
>] usually make a difference, but it always feels unsafe. I'm left with
>] the uneasy feeling that if I modify a variable it could have side
>] effects I haven't anticipated. Not being used to GC languages, I prefer
>] the C++ version where you can decide whether to take a copy or a
>] reference to the original. The worst (i.e. hardest for me personally to
>] work with) version seems to be Eiffel, where copy or reference
>] semantics are specified on a class by class basis.
>

> As a big GC fan I must say it is the first time someone argues 
> against GC based not on slowness/non-real-timeness/etc... but on 
> semantics. I'm puzzled: I hope your criticism only goes to CL and its 
> all-pointer based view of the world, because else you can always 
> choose between reference or copy semantics.
> Also, as far as I know (my Eiffel experience is limited and getting 
> old), Eiffel doesn't use classes for copy/reference semantics: it's 
> more like C++ where the choice is based on the variable's declaration 
> (with a slight difference: instead of having "object vs. object 
> pointer" you have "expanded object vs. object" (the default is > 
reference semantics)).

Firstly, I'm not arguing against GC. For some uses, I'm sure it's the 
correct choice, at least on convenience grounds when you are used to 
it. I was talking purely about what I found difficult myself (the 
subject matter of the original question), with no implication that this 
made the language wrong in some way. 

On Eiffel, by the way, I'd guess you are used to Eiffel 2. In Eiffel 3, 
classes can be defined as "expanded", having copy semantics. Hence 
INTEGER is derived from INTEGER_REF, with no additional features, but 
with the "expanded" tag. Perhaps I am unjust, but this looks like 
creeping perfectionism - "Everything must be an object, everything must 
behave the same" - while trying to get numerics to behave sensibly. 
By the way, you are correct in saying that you can also add "expanded" 
to a single variable. 

Scott
From: Richard Urwin
Subject: Re: Lisp considered too hard
Date: 
Message-ID: <804021896snx@soronlin.demon.co.uk>
> is <······@bmtech.demon.co.uk> Scott Wheeler <······@bmtech.demon.co.uk>
>> is <·······················@128.29.151.94> Chris Reedy

>>As someone who knows a fairly large number of programming languages (Lisp,
>>C, C++, Pascal, Ada, 3 assembly languages, a little Smalltalk ...) and 
>>who knew most of them before coming to Lisp, here, off the top of my head, 
>>are some of the things I found myself stumbling over.

>I have a similar language background but I don't know Lisp in any 
>detail, although in general I try to have a reading knowledge of most 
>common languages in order to evaluate their strengths. I thought it 
>might be useful to comment why I haven't found it easy to get into.

I also have a similar background, but I have a third point of view. I
program in C to make money, except for one recent project in C++. I
program in XLisp for fun. Contrary to one of the earlier posts in this
thread I find C++ horrendously inelegant. C is a rough workhorse, and I
would not call it elegant, but C++ has so many fudges to avoid problems
created by the C mindset that it is much harder to program than XLisp.
(I cannot comment on CLOS.)

>>1.  The size of the language.
>After working out a way to do something, I'm not confident that it's 
>the *right* way.

Me too. However, if I had the same number of years experience and the odd
course of tuition as I do for C, this would not be the case. There are
plenty of books out there that can teach the *right* way.

>>3.  Garbage collection
>I find it awkward to bear in mind that something returned from a 
>function is the thing itself, not a copy.

This is not a problem, unless you play about with the dangerous functions
(rplaca etc.).  Lisp is designed to make this distinction invisible.

>Ok, flamebait section. It looks to me as though most of CLOS is 
>designed to support CS research, and is inappropriate elsewhere...

>I've got 
>55000 (used to be 85000) lines of C++ in one program, with only one 
>"clever" bit in it (implementing a dynamic constructor, i.e. making an 
>object whose class is decided at run-time by the data).

Which is trivial in XLisp of course.

>If I were 
>to implement in Lisp, I could do it in a subset of Scheme.  I don't need 
>CL or CLOS, and I find it difficult to see for what sort of commercial 
>project I'd need that expressive power.

I work with real-time control and graphics, and I can see a use for at
least dynamic construction. I really had problems with C++'s strict
typing. I wanted a list of unrelated objects: trivial in XLisp,
impossible (or badly error-prone, because it forgets how big objects
are) in C++.
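
For what it's worth, the mixed list really is a one-liner.  A tiny
illustration in plain Common Lisp (the element values are arbitrary
examples):

(defparameter *things*
  (list 42 "a string" 'a-symbol #'sin (list 1 2 3)))

(dolist (x *things*)
  (format t "~S is of type ~S~%" x (type-of x)))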

But the power of C is in its library, and the same problem exists here.
The reference book is two inches thick and nothing tells you how to do
things. Consider, for example converting a string to a float. How do I
find atof() in the manual? Or is sscanf("%g") as good? Why did I choose
to write my own function when I had to do it? (And I did make the right
decision.)

C also has excessive power for any given project area. A given project
area will not use all of ioctl(), strrchr(), tanhl(), atan2(),
spawnlpe(), intdosx() and inp().

>I just don't have the time to explore it.

Would you expect anyone to know anything about C++ having only used
Pascal before? 

Get XLisp or CLisp for free, buy a book or two and practice. It will
pay you back. When you decide that you like it you can buy a compiler
version.

 / ____Richard Urwin____     | | Space is Big. Really Big...You \
/ ···@soronlin.demon.co.uk   | | may think it's a long way down  \
\ Birmingham, United Kingdom | | the road to the chemists, but   /
 \                           |_| that's just peanuts to space.  /
From: Scott Wheeler
Subject: Re: Lisp considered too hard
Date: 
Message-ID: <jyntyi@bmtech.demon.co.uk>
In Article <············@soronlin.demon.co.uk> Richard Urwin  writes:
>> ...
[I'm replying in comp.lang.lisp if you're interested, but I think the 
thread is moribund here.]
From: Scott Wheeler
Subject: Re: Lisp considered too hard
Date: 
Message-ID: <jyntyd@bmtech.demon.co.uk>
In Article <············@soronlin.demon.co.uk> Richard Urwin  writes:
>>>3.  Garbage collection
>>I find it awkward to bear in mind that something returned from a
>>function is the thing itself, not a copy.
>
>This is not a problem, unless you play about with the dangerous functions
>(rplaca etc.).  Lisp is designed to make this distinction invisible.

Which is why I went on to say that it doesn't usually matter in 
practice. By the way, most of your answer is missing the fundamental 
point: the discussion is about why CL is not popular, and the 
difficulties that people have in adapting to it. While I appreciate 
your advice, it doesn't alter the point that these are stumbling-blocks 
to anyone from a mainly non-GC background.

>I work with real time control and graphics, and I can see a use for at
>least dynamic construction.

Yes, but do you want/need it in the language? I'm happy with 
implementing it in a vanilla OO language (Eiffel, C++ etc.).

> I really had problems with C++'s strict
>typing. I wanted a list of unrelated objects, trivial in XLisp,
>impossible, (or badly error prone because it forgets how big objects
>are,) in C++.

Of course you could now use RTTI in ANSI-draft C++. I'm puzzled as to 
how one would use a list of completely unrelated objects though.

>But the power of C is in its library, and the same problem exists here.
>The reference book is two inches thick and nothing tells you how to do
>things. Consider, for example converting a string to a float. How do I
>find atof() in the manual? Or is sscanf("%g") as good? Why did I choose
>to write my own function when I had to do it? (And I did make the right
>decision.)

Buy a different reference book, you've got a dud :-). PJ Plauger's 
"Standard C library" is about 3/4" thick, contains complete source code 
and hints on appropriateness of use, and it would take me about 15s to 
find the functions - although I'd actually use the online help from my 
compiler. Anyway, that's by the by. More relevant is that C etc. use 
libraries, whereas Lisp generally uses language extensions. Yes, I know 
you can implement them in Lisp, but in practice they look like extra 
bits of the language, carrying new syntax with them: this makes the 
language harder to learn. My guess is that this tradition arose partly 
because CL antedates CLOS.

Take the loop construct as an example. In C++, or most OO languages, if 
you had a new idea for a fancy iteration structure, you'd probably 
build an iterator class and an associated cursor class - both possibly 
parametrised. In CL, you extend the language. Seen from the outside, it 
has a very COBOLish feel just because of this. Of course this ability 
to extend the language is a major strength of Lisp in the research 
environment, but generally it's the last thing I'd want showing up in 
code in a commercial project that has to be maintained by someone other 
than the author.

>Would you expect anyone to know anything about C++ having only used
>Pascal before?

What has that to do with the price of tobacco? My interest is in 
whether one can gain a working knowledge of the language in a 
reasonable evaluation period, then having gained that knowledge judge 
whether the language is useful to us. I've done this many times. It's 
easiest with languages that have a strict division between syntax and 
library, and hardest with languages from the AI community. CL has to be 
the largest language that I've come across. By the way, we've got a 
semi-tame Lisp hacker with no verbal off switch (hi Jon), so it's 
getting a better crack of the whip than say Eiffel, for which I know no 
users.

>Get XLisp or CLisp for free, buy a book or two and practice. It will
>pay you back. When you decide that you like it you can buy a compiler
>version.

Thanks, but I've had XLisp for years, and I've got Allegro\PC at the 
moment. Evaluating XLisp to decide whether to use CL is about as useful 
as evaluating C in order to decide on C++ - they're not the same 
language.

Lest anyone miss the point - I'm *not* anti-CL or particularly fanatic 
about C++. My impression is that it's a nice language for a full-time 
single-seat programmer, but not particularly useful commercially 
(considering languages such as Eiffel and Smalltalk as alternatives), 
and I've tried to give some idea of the reasons for that. Anyway, have 
fun with it (drat, I'm supposed to be a suit) - sorry, have a 
productive work session.

Scott

[by the way, I've pruned the newgroups down a bit]
From: Erik Naggum
Subject: Re: Lisp considered too hard
Date: 
Message-ID: <19950626T153740Z@naggum.no>
[Scott Wheeler]

|   Of course you could now use RTTI in ANSI-draft C++. I'm puzzled as to 
|   how one would use a list of completely unrelated objects though.

the typical "stack" is such a list.  if we consider "list" a little
relaxedly, so is the typical "file system".  of course, "completely" may
have to be relaxed, too.

|   Buy a different reference book, you've got a dud :-). PJ Plauger's
|   "Standard C library" is about 3/4" thick, contains complete source code
|   and hints on appropriateness of use, and it would take me about 15s to
|   find the functions - although I'd actually use the online help from my
|   compiler.

I have actually timed myself (aren't computers great?) when I need to look
something up in CLtL2.  I average 11 seconds to find what I'm looking for,
usually by going through the very good index.  15 seconds sounds excessive.
man, it adds up to _days_ during a lifetime.

|   More relevant is that C etc. use libraries, whereas Lisp generally uses
|   language extensions.

Bjarne Stroustrup doesn't think those two are such opposites as you imply.
(no, he's not my hero, he's a very smart guy gone very berserk.  doesn't mean
he won't do a lot of good in between.)

|   Take the loop construct as an example.

the loop construct is quite atypical, but I assume you know that.

|   Of course this ability to extend the language is a major strength of
|   Lisp in the research environment, but generally it's the last thing I'd
|   want showing up in code in a commercial project that has to be
|   maintained by someone other than the author.

a new class extends the C++ type system, Scott, including operator
overloading, conversion of objects of various types, etc, etc.  much worse
than Lisp, IMNSHO.  people seem to want them in commercial projects all the
time.  (strictly speaking, I don't know whether they are maintained. :)

it actually seems that language evolution is now in vogue.  all my favorite
tools have cancer.  there's a new syntax for one of them every other day,
and what do you know?  people are actually jumping up and down screaming
for _more_ new syntaxes and _more_ hard-to-learn things in those languages.
Lisp is to blame because it did all this _years_ before these guys picked
it up, so the language evolution in Lisp requires people to study a lot and
know a lot of weird science before they can usefully extend the language.
take a look at the Scheme crowds (plural!).  they have "language extension"
written all over them these days.  Lisp is to blame because it only let a
few arrogant know-it-alls do it on their own so the new kid on the chip
couldn't put his favorite construct in there.  no wonder he doesn't want to
play.

seriously, this too, Lisp did before everybody else.  I sometimes wonder if
there is a cosmic constant for how many times something must be reinvented
before it is considered fully invented.  or maybe it's just the old adage
about pioneers never getting to reap the fruits of their labor.

#<Erik 3013169860>
-- 
NETSCAPISM /net-'sca-,pi-z*m/ n (1995): habitual diversion of the mind to
    purely imaginative activity or entertainment as an escape from the
    realization that the Internet was built by and for someone else.
From: HStearns
Subject: Re: Lisp considered too hard
Date: 
Message-ID: <3tmpqo$kd4@newsbf02.news.aol.com>
Scott, you've said that your "interest is in whether one can gain a
working knowledge of the language in a reasonable evaluation period" and
you cite loop constructs as an example of a core part of the language
which makes it hard for you to do this. 

I agree that the syntactical differences in loop from  other parts of the
language can be confusing.  I am curious, though, what made you feel that 
loop was a core part of the language which needed to be studied in order
to gain a "working knowledge of the language."  

The way I think about it is this:

The basic rules for Lisp are very few, and very consistent -- especially
as compared to C.   (The price for this is a syntax that many people don't
like, but which others, such as myself, don't mind.  C'est la vie.)   As a
result, Lisp can be taught in an hour, and often is.  

It also happens that the expressive power of Lisp can be used to create a
lot of very handy utilities -- some of which use different programming
styles.  One can do well with just the basics learned in an hour, happily
creating whatever tools one needs to get the job done.  In some cases, one
might wonder if someone hasn't already solved some particular problem, and
might look through Steele and other texts for a solution.  In the case of
iteration, that solution might be found in a (perhaps) dizzying array of
tools including: loop, mapping utilities, do, sequencers, iterators,
streams and even goto.   For me, these options don't make Lisp harder to
learn, just easier to use.
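
To make that concrete, here is a minimal sketch (function names are mine,
purely for illustration) of one trivial task -- summing the squares of a
list of numbers -- in three of those styles; none is more "core" than the
others:

  ;; with the loop macro
  (defun sum-of-squares-loop (numbers)
    (loop for n in numbers sum (* n n)))

  ;; with a mapping utility and reduce
  (defun sum-of-squares-map (numbers)
    (reduce #'+ (mapcar (lambda (n) (* n n)) numbers)))

  ;; with plain do, the sort of thing learned in the first hour
  (defun sum-of-squares-do (numbers)
    (do ((rest numbers (cdr rest))
         (sum 0 (+ sum (* (car rest) (car rest)))))
        ((null rest) sum)))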

Unfortunately, I believe your viewpoint is not unique.  Many people look
at the entire contents of the ANSI standard as being the "core" language. 
Can you shed any light on why this perception exists, and what Lisp
providers and educators might do to let users just learn the fundamentals
and go have fun?
From: Marc Wachowitz
Subject: Re: Lisp considered too hard
Date: 
Message-ID: <3to8bn$1t5@trumpet.uni-mannheim.de>
HStearns (········@aol.com) wrote:
> Many people look
> at the entire contents of the ANSI standard as being the "core" language. 
> Can you shed any light on why this perception exists, and what Lisp
> providers and educators might do to let users just learn the fundamentals
> and go have fun?

I think EuLisp provides some good hints about this: Structure the language
into levels and libraries, such that lower levels and non-library aspects,
as well as the separate libraries, can be learned mostly in isolation, but
of course in design consider the seamless interaction of those pieces. For
more information about EuLisp, look at "ftp://ftp.bath.ac.uk:/pub/eulisp".
(Btw, does anyone have information about the progress of EuLisp? Since 93,
there's version 0.99 of the language definition on that server.)

------------------------------------------------------------------------------
   *   wonder everyday   *   nothing in particular   *   all is special   *
                Marc Wachowitz <··@ipx2.rz.uni-mannheim.de>
From: Simon Brooke
Subject: Re: Lisp considered too hard
Date: 
Message-ID: <DBIFFr.2BB@rheged.dircon.co.uk>
In article <··········@newsbf02.news.aol.com>,
HStearns <········@aol.com> wrote:
>Scott, you've said that your "interest is in whether one can gain a
>working knowledge of the language in a reasonable evaluation period" and
>you cite loop constructs as an example of a core part of the language
>which makes it hard for you to do this. 

And lots of other sensible things, ending up with:

>Unfortunately, I believe your viewpoint is not unique.  Many people look
>at the entire contents of the ANSI standard as being the "core" language. 
>Can you shed any light on why this perception exists, and what Lisp
>providers and educators might do to let users just learn the fundamentals
>and go have fun?
     ^^^^^^^^^^^^

I think this may actually be the key point. LisP (in any of its
varieties) is a tool for people for whom computers are *fun* *things*;
for people who play. Don't for a moment think that this implies I
don't think LisP is a serious programming language: far from it, all
the best learning and all the best creative work are carried forward
in a spirit of play: a joyful experimentation, a lack of fear of
failure.

Unfortunately many people in Western societies have lost the
confidence to play. They fear to fail, and surround their work with
elaborate structures of safety nets. They use only tools they
believe they understand fully -- 

this is nonsense of course. Any modern computer system is way too
complex for anyone to understand fully. Like any other work of Magick,
you have to put some faith in the competence of the other wizards.

-- and consequently they'll never learn LisP. They'll never achieve
much, either. It isn't because they won't learn LisP that they won't
achieve much: it's because they've forgotten how to play.

-- 
------- ·····@rheged.dircon.co.uk (Simon Brooke)

	my other car is #<Subr-Car: #5d480>
				;; This joke is not funny in emacs.
From: Scott Wheeler
Subject: Re: Lisp considered too hard
Date: 
Message-ID: <jyrcww@bmtech.demon.co.uk>
In Article <··········@rheged.dircon.co.uk> Simon Brooke writes:
>...
>Unfortunately many people in Western societies have lost the
>confidence to play. They fear to fail, and surround their work with
>elaborate structures of safety nets. They use only tools they
>believe they understand fully --


Whurgh! FLAME BAIT! I wasn't going to continue in this thread again as 
I thought I'd covered everything it was useful to say. But may you 
attempt to work with looped structures in a reference-counted 
environment for your sins.

Now look, I love hacking. I'm presently (slowly) crufting up an occam 
compiler in Eiffel for the fun of it. I've written a couple of OO 
varieties of C (before C++ was commonly available), one of which looked 
vaguely CLOSish with multiple dispatch. But this *doesn't* mean I'm 
going to take chances on a different language when there are jobs at 
stake just because it might be fun, particularly when our existing C++ 
is also fun. What I'm trying to do is to find out what may be a 
successor to C++ 3-5 years from now, and decide whether we should be 
doing any pilot work in it now. My best guesses are Smalltalk and 
Eiffel at the moment, though it's far from clear. Now *personally*, I 
happen to like hacking in languages like Icon, but as a suit, I'd have 
to be stupid to commit to that platform with only one supplier, and 
that freeware.

>
>this is nonsense of course. Any modern computer system is way too
>complex for anyone to understand fully. Like any other work of Magick,
>you have to put some faith in the competence of the other wizards.

What a load of dingo's kidneys. Look, go over to comp.lang.eiffel, and 
see the fuss there about the instability of the PEW Eiffel compiler. 
Suppose I'd trusted Meyer (high priest of reliable software) to 
produce - I'd really be stuffed now. I don't trust *anyone* unless I 
can check their work myself, and I'm always edgy unless I have back 
doors. I don't buy this "specialisation" rubbish either. I may not keep 
everything swapped in, but anything short of the details of CPU chip 
architecture is easy enough to find out about, and easy enough to 
understand.

>-- and consequently they'll never learn LisP. They'll never achieve
>much, either. It isn't because they won't learn LisP that they won't
>achieve much: it's because they've forgotten how to play.

Scruttocks, my dear chap. There's been some interesting work done in 
Lisp, but there's hardly a monopoly. And naff-all commercial software, 
though that's not necessarily something to hold against the language.

Scott
From: Harley Davis
Subject: Re: Lisp considered too hard
Date: 
Message-ID: <DAVIS.95Jul12105507@passy.ilog.fr>
In article <··········@trumpet.uni-mannheim.de> ··@ipx2.rz.uni-mannheim.de (Marc Wachowitz) writes:

   HStearns (········@aol.com) wrote:
   > Many people look
   > at the entire contents of the ANSI standard as being the "core" language. 
   > Can you shed any light on why this perception exists, and what Lisp
   > providers and educators might do to let users just learn the fundamentals
   > and go have fun?

   I think EuLisp provides some good hints about this: Structure the
   language into levels and libraries, such that lower levels and
   non-library aspects, as well as the separate libraries, can be
   learned mostly in isolation, but of course in design consider the
   seamless interaction of those pieces. For more information about
   EuLisp, look at "ftp://ftp.bath.ac.uk:/pub/eulisp".  (Btw, does
   anyone have information about the progress of EuLisp? Since 93,
   there's version 0.99 of the language definition on that server.)

EuLisp itself has not undergone very much evolution since 93.  (Some,
but not much.)  The EuLisp committee no longer has EEC financing and
so work has slowed down.  The good news is that implementation work
continues.  Julian Padget's group at the University of Bath is still
working on FEEL, the public domain implementation of EuLisp, and Ilog
is, of course, still doing Ilog Talk, our implementation of the
proposed ISO Lisp standard with many EuLisp-based extensions.  For
information on Ilog Talk, please contact ····@ilog.com or visit our
Web server at <URL:http://www.ilog.com/>.

-- Harley Davis 
-- 

-------------------++** Ilog has moved! **++----------------------------
Harley Davis                            net: ·····@ilog.fr
Ilog S.A.                               tel: +33 1 49 08 35 00
9, rue de Verdun, BP 85                 fax: +33 1 49 08 35 10
94253 Gentilly Cedex, France            url: http://www.ilog.com/
From: Marcus Daniels
Subject: Re: Lisp considered too hard
Date: 
Message-ID: <MARCUS.95Jul12120154@sayre.sysc.pdx.edu>
>>>>> "Scott" == Scott Wheeler <······@bmtech.demon.co.uk> writes:

> In Article <··········@rheged.dircon.co.uk> Simon Brooke
> writes:

Simon>  this is nonsense of course. Any modern computer system is way too
Simon> complex for anyone to understand fully. Like any other work of
Simon> Magick, you have to put some faith in the competence of the other
Simon> wizards.

Scott> What a load of dingo's kidneys. Look, go over to
Scott> comp.lang.eiffel, and see the fuss there about the instability
Scott> of the PEW Eiffel compiler.  Suppose I'd trusted Meyer (high
Scott> priest of reliable software) to produce - I'd really be stuffed
Scott> now. I don't trust *anyone* unless I can check their work
Scott> myself, and I'm always edgy unless I have back doors. I don't
Scott> buy this "specialisation" rubbish either. I may not keep
Scott> everything swapped in, but anything short of the details of CPU
Scott> chip architecture is easy enough to find out about, and easy
Scott> enough to understand.

There is a leap-of-faith when you use any software.  Learning your way
around any large language package is a big investment of time.  Hell,
learning what constitutes correct behavior is a big investment of
time.  Typically, it is impossible to justify this cost.  So only
the people who accept it as play will evolve skills.  

In business, the concern is security.  One way to have security is to
make sure nothing goes wrong: reliable software.  Of course, the usual
way is to have insurance and lawyers.  The OO authorities merely
exist to validate to execs the fact that getting software right isn't
easy.  Spend money, and be responsible.

If the methodologies the OO authorities offer are really so powerful, they
should implement them, and write reliable programs.  BUT OBVIOUSLY,
the reason people don't use their stultifying ideas is because it
inhibits synthesis.  The point is that suits can utilize these
design-minded people and find someone to blame.  Perfect sense!

...he said "some faith", not "blind faith".  One shouldn't be
in the position of being inhibited (or paralyzed) from fixing a bug.
From: Scott Wheeler
Subject: Re: Lisp considered too hard
Date: 
Message-ID: <jyrhqa@bmtech.demon.co.uk>
>There is a leap-of-faith when you use any software.  Learning your way
>around any large language package is a big investment of time.  Hell,
>learning what constitutes correct behavior is a big investment of
>time.  Typically, it is impossible to justify this cost.  So only
>the people who accept it as play will evolve skills.

Not entirely true. You need one or two "pioneers" in a department to 
explore a new field, typically in their own time (which is what I do). 
Having done that, some others are going to "evolve skills" in a 
successful area purely because they've been told to.

>In business, the concern is security.  One way to have security is to
>make sure nothing goes wrong: reliable software.  Of course, the usual
>way, is to have insurance and lawyers.  

[checks - yup, this is coming from a US address]

>The OO authorities merely exist to validate to execs. the fact that 
>getting software right isn't easy.  

Codswallop.

>Spend money, and be responsible.
>
>If methodologies the OO authorities offer are really so powerful, they
>should implement them, and write reliable programs.  

At first sight, this is reasonable. I tend to find that a lot of the 
"names" in the field are dreadful programmers - have a look at the 
implementation of Wirth's Oberon, for instance. Yet Wirth is still 
worth listening to, providing you take him with a pinch of salt. Even 
more so, Meyer is worth looking at. By the way, how did we get on to 
"methodologies"? I didn't mention them - I don't even use the word 
since it's pidgin English ("methodology"="study of methods", not 
"method")

>BUT OBVIOUSLY,
>the reason people don't use their stultifying ideas is because it
>inhibits synthesis.  The point is that suits can utilize these
>design minded people and find someone to blame.  Perfect sense!

By 'eck, you're even more cynical than me! Some of these ideas *are* 
used successfully. I've seen some very impressive results with 
Shlaer-Mellor, for instance, which we used to make sense of a huge 
project for industrial design software. While I'm not keen on them for 
use in every case or even most, your "inhibits synthesis" sounds very 
like the "constrains creativity" squeals that accompanied structured 
programming.

Scott
From: Scott Wheeler
Subject: Re: Lisp considered too hard
Date: 
Message-ID: <jynynm@bmtech.demon.co.uk>
In Article <················@naggum.no> Erik Naggum writes:
>|   Of course this ability to extend the language is a major strength 
>|   of Lisp in the research environment, but generally it's the last 
>|   thing I'd want showing up in code in a commercial project that has 
>|   to be maintained by someone other than the author.
>
>a new class extends the C++ type system, Scott, including operator
>overloading, conversion of objects of various types, etc, etc.  much 
>worse than Lisp, IMNSHO.  people seem to want them in commercial 
>projects all the time.  (strictly speaking, I don't know whether they 
>are maintained. :)

My first reaction was - no, you're wrong, Lisp's language extensions go 
much further, and there's a much bigger gap between someone designing 
an extension, and the user of that extension, than there is for a class 
designer and class user. However I think you do have a point. I very 
rarely define operators or type conversions for my C++ classes, and I 
think it has helped in maintaining the code. One advantage of learning 
C at an early age and getting bitten when I crammed too much into the 
control statement of a for() loop - one learns fast: "Don't Be 
Clever!".

>it actually seems that language evolution is now in vogue.  all my 
>favorite tools have cancer.  there's a new syntax for one of them 
>every other day, and what do you know?  people are actually jumping up 
>and down screaming for _more_ new syntaxes and _more_ hard-to-learn 
>things in those languages.

On the other hand, there is strong interest in minimal OO languages 
with the work done in standard class libraries, particularly Eiffel. It 
will be interesting to see if any take off.

>Lisp is to blame because it did all this _years_ before these guys 
>picked it up, so the language evolution in Lisp requires people to 
>study a lot and know a lot of weird science before they can usefully 
>extend the language. take a look at the Scheme crowds (plural!).  they 
>have "language extension" written all over them these days.  Lisp is 
>to blame because it only let a few arrogant know-it-alls do it on 
>their own so the new kid on the chip couldn't put his favorite 
>construct in there.  no wonder he doesn't want to play.

Err, would you mind standing a bit further away? I'd rather not get hit 
by the napalm. Anyway, I'd disagree - CL at least must be one of the 
languages with the most support for putting your favourite construct 
in, which is a big reason why I think it a bit of a risk in the 
commercial environment. The only one I can think of that offers stronger 
support is the Poplog environment.

Scott
From: Harley Davis
Subject: Re: Lisp considered too hard
Date: 
Message-ID: <DAVIS.95Jul4112334@passy.ilog.fr>
In article <······@bmtech.demon.co.uk> Scott Wheeler <······@bmtech.demon.co.uk> writes:

   In Article <··········@info.epfl.ch> Stefan Monnier writes:
   >In article <······@bmtech.demon.co.uk>,
   >Scott Wheeler  <······@bmtech.demon.co.uk> wrote:
   >] I find it awkward to bear in mind that something returned from a
   >] function is the thing itself, not a copy. In practice it doesn't
   >] usually make a difference, but it always feels unsafe. I'm left with
   >] the uneasy feeling that if I modify a variable it could have side
   >] effects I haven't anticipated. Not being used to GC languages, I
   >] prefer the C++ version where you can decide whether to take a copy
   >] or a reference to the original. The worst (i.e. hardest for me
   >] personally to work with) version seems to be Eiffel, where copy or
   >] reference semantics are specified on a class by class basis.

The original objection seems a little odd for someone used to
programming in Lisp.  Most Lisp libraries do not document the
internals of an object, and there is no equivalent to C/C++ header
files.  All access to objects is via functions (although the use of
setf with an accessor-like function certainly provides a hint that an
object may be modified).  In any case, and in any language, it is
certainly up to the library designer to decide what objects should and
shouldn't be modified --- not up to the user to decide if he "takes" a
copy or pointer; after all, what if some object is *intended* to be
modified? It is therefore also the library designer's responsibility
to document any potential side effects that might crop up.

In Lisp, primary application objects don't seem to have this problem
very often since their high-level API provides the necessary security
to avoid unwanted low-level modifications.  However, sometimes there
is a temptation on the part of a library designer to expose an
internal list maintained by the library for efficiency reasons, with
big warnings not to destructively modify the list.  Unfortunately,
beginners don't always know when they are destructively modifying
lists and this can lead to very obscure bugs.  So most mature Lisp
libraries never expose internal structures and bugs of this sort are
avoided.
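
For instance (a purely hypothetical registry, invented here just to show
the shape of the problem):

  ;; risky: hands callers the very list the library keeps internally,
  ;; so a destructive sort or nreverse of the returned list silently
  ;; rearranges the library's own state
  (defvar *registry* '())

  (defun registered-items-unsafe ()
    *registry*)

  ;; safer: return a copy, so destructive operations on the result
  ;; cannot corrupt the library's internals
  (defun registered-items ()
    (copy-list *registry*))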

I don't see how this issue connects much with GC.

-- Harley Davis
-- 

------------------------------------------------------------------------------
Harley Davis                            net: ·····@ilog.fr
Ilog S.A.                               tel: +33 1 46 63 66 66
2 Avenue Galliéni, BP 85                fax: +33 1 46 63 15 82
94253 Gentilly Cedex, France            url: http://www.ilog.com/
From: Robert Elton Maas, 14 yrs LISP exp, unemployed 3.8 years
Subject: Re: Lisp considered too hard
Date: 
Message-ID: <3to8pf$fnn@openlink.openlink.com>
Why is this thread cross-posted to comp.lang.lisp.x, where I'm seeing
it, since it has nothing specifically to do with XLISP?? Shouldn't this
thread be posted ONLY to the general LISP advocacy/discussion
newsgroup? If you-all agree, could future followups omit
comp.lang.lisp.x and any other inappropriate newsgroups?
From: Ken Anderson
Subject: Re: Amulet 3 times faster than Garnet
Date: 
Message-ID: <KANDERSO.95Jun8120402@bitburg.bbn.com>
In article <····················@wavehh.hanse.de> ········@wavehh.hanse.de (Martin Cracauer) writes:

   ·······@ada.CS.ORST.EDU (John Atwood) writes:

   >In the Garnet FAQ 
   >* Speed: We spend 5 years and lots of effort optimizing our Lisp code,
   >but it was still pretty slow on "conventional" machines.  The initial
   >version of the C++ version, with similar functionality, appears to be
   >about THREE TIMES FASTER than the current Lisp version without any
   >tuning at all.

   I don't think that is a meaningful number to compare the speed of
   Common Lisp and C++ in general. Amulet is the second system and has
   probably a cleaner and tighter implementation.

It certainly isn't meaningful in this case, since the Lisp version has no
useful declarations that I could see.  To get meaningful numbers you need
to do a careful analysis.  If you port something from Lisp to C++, you
should also port it back to Lisp.  The other problem is that when people
see a slow Lisp application, it is easy to blame the language.

   Additionally, in some places C++ *requires* faster coding techniques
   where a Lisp solution may be more elegant. In Amulet, formulas are
   mapped to ordinary functions in constant space. This is ugly and the
   Lisp version was more elegant -but slower- in this regard.

Possibly.  It is easy for people to believe that every high level feature
in Lisp has a negative performance impact.  That need not be the case.

   >The C++ code is now available (Amulet alpha 0.2) at:
   >http://www.cs.cmu.edu/afs/cs/project/amulet/www/amulet-home.html

   Just for note, I had a look at the docs and ran some code and have to
   say this is a nice toolkit, powerful and easy to understand. 
   Congratulations. 

--
Ken Anderson 
Internet: ·········@bbn.com
BBN ST               Work Phone: 617-873-3160
10 Moulton St.       Home Phone: 617-643-0157
Mail Stop 6/4a              FAX: 617-873-2794
Cambridge MA 02138
USA
From: Fernando Mato Mira
Subject: Re: Amulet 3 times faster than Garnet
Date: 
Message-ID: <3r7d48$q11@disunms.epfl.ch>
In article <·····················@bitburg.bbn.com>, ········@bitburg.bbn.com (Ken Anderson) writes:

>    >The C++ code is now available (Amulet alpha 0.2) at:
>    >http://www.cs.cmu.edu/afs/cs/project/amulet/www/amulet-home.html
> 
>    Just for note, I had a look at the docs and ran some code and have to
>    say this is a nice toolkit, powerful and easy to understand. 

Didn't have that luck. I tried to compile with SGI's CC, but it doesn't work (GCC is not an option):

"amulet/src/gem/gemX_styles.cc", line 174: error(3390): 
          more than one instance of constructor "Am_Style::Am_Style" matches
          the argument list:
            function "Am_Style::Am_Style(float, float, float, short,
                      Am_Line_Cap_Style_Flag, Am_Join_Style_Flag,
                      Am_Line_Solid_Flag, const char *, int,
                      Am_Fill_Solid_Flag, Am_Fill_Poly_Flag, Am_Image_Array)"
            function "Am_Style::Am_Style(const char *, short,
                      Am_Line_Cap_Style_Flag, Am_Join_Style_Flag,
                      Am_Line_Solid_Flag, const char *, int,
                      Am_Fill_Solid_Flag, Am_Fill_Poly_Flag, Am_Image_Array)"
    Am_Style style (0, 0, 0, thickness);

BTW, is the Motif look and feel simulated, or does it use Xm widgets?

PS:
  Some trivial fixes for those with the time to look into this:

Makefile.vars.CC.SGI:
FLAGS  = -I$(AMULET_DIR)/include -DNEED_BOOL -DNEED_BSTRING -DNEED_UNISTD -DDEBUG

gemX.h:
#ifdef NEED_BSTRING   
#include <bstring.h>
#endif

gemX_windows.cc:
#ifdef NEED_UNISTD
#include <unistd.h>
#endif

-- 
F.D. Mato Mira           http://ligwww.epfl.ch/matomira.html                  
Computer Graphics Lab    ········@epfl.ch 
EPFL                     FAX: +41 (21) 693-5328
From: Stuart Watt
Subject: Re: Lisp considered unfinished
Date: 
Message-ID: <S.N.K.Watt-1406951455530001@uu-stuart-mac.open.ac.uk>
In article <··········@news.aero.org>, ·····@aero.org (John Doner) wrote:

> Debugging: Most Lisps have pretty serviceable debugging tools.  But
> what if your code breaks something at a low level?  It's pretty
> easy to tell what piece of C source some assembly code corresponds
> to, but not so for Lisp.  And there are other difficult situations:
> I've been scratching my head for a while over one where the
> compiler chokes on code produced by several layers of interacting
> macros.  It is bewildering trying to figure out where this code
> originally came from!

IMHO debugging in Lisp hasn't changed at all significantly since the Lisp
Machine days. There are at least two people who have looked at proper
source level debugging recently, and as I am one of them, I would really
like implementors to wake up on this. I get really fed up with the
retrogressive environments which are still using framed windows and,
basically, still trying to be Lisp Machines. Don't get me wrong: the Lisp
Machines were great for the early 1980s and are still a lot better than
most Lisp environments today. That in itself shows the stasis in the
field. We could do a lot better if we tried. 

But the language debate is mostly cultural, not technical, a point
forcefully made by Richard Gabriel in his "Good News, Bad News, How to Win
Big" paper in 1990. It doesn't really matter which language is *better*,
because that isn't the most important criterion. There are already Lisps
which generate '.o' files, which are almost as fast as optimised C code,
and which have great environments. Does this make any difference?
Apparently not. Lisp will survive because, for many of us, it is a faster
and better environment, and that is enough. Trying to persuade others to
use it is completely ineffectual in my experience. It is better (and
sometimes a lot more fun) just to sit back and watch them take six
months to build systems we can build in six weeks. I think we might as well give
up being evangelists until we've something new to say, and just get on
with making Lisp better in the meantime. 

Regards, Stuart

-- 
Stuart Watt; Human Cognition Research Laboratory, Open University,
Walton Hall, Milton Keynes, MK7 6AA, UK. 
Tel: +44 1908 654513; Fax: +44 1908 653169. 
WWW: http://kmi.open.ac.uk/~stuart/stuart.html
From: Dave Yost
Subject: Re: Lisp considered unfinished - needs a great debugger
Date: 
Message-ID: <3s9kk9$l2@Yost.com>
In article <···························@uu-stuart-mac.open.ac.uk> ··········@open.ac.uk (Stuart Watt) writes:
>
> IMHO debugging in Lisp hasn't changed at all significantly since the Lisp
> Machine days. There are at least two people who have looked at proper
> source level debugging recently, and as I am one of them, I would really
> like implementors to wake up on this.

Anyone in a position to improve lisp's debugging tools
*must* see this:

http://lieber.www.media.mit.edu/people/lieber/Lieberary/ZStep/ZStep.html

It's a paper (with QuickTime animation) given at CHI '95
on how to greatly improve the programming user experience.
Do check it out.

Dave
From: Chris Page
Subject: Re: Lisp considered unfinished - needs a great debugger
Date: 
Message-ID: <chris_page-2306951826290001@d148.claris.com>
In article <·········@Yost.com>, ····@Yost.com (Dave Yost) wrote:

> Anyone in a position to improve lisp's debugging tools
> *must* see this:
> 
> http://lieber.www.media.mit.edu/people/lieber/Lieberary/ZStep/ZStep.html

You're right. This is a very educational paper. I've been thinking about
these kinds of problems for years now. It's amazing how primitive our
debugging tools are.

-- 
Chris Page                      | Internet junk mail, advertisements,
Software Wrangler               | chain letters, and SPAMs bite...
Claris Corporation              |
··········@powertalk.claris.com |          "Cut it out! :-P"

Disclaimer: opinions are not necessarily those of my employer.
From: Bjørn Remseth
Subject: Re: "Lisp" is poison for NSF research $$$
Date: 
Message-ID: <RMZ.95Jun2201736@solva.ifi.uio.no>
> "C & C++ are God's way of telling Lispers that they're too productive".

(< ((Y (lambda (c) (++ c))) C)
   lisp)                                    ;-)

--

                                                    (Rmz)

Bj\o rn Remseth   !Institutt for Informatikk    !Net:  ···@ifi.uio.no
Phone:+47 22855802!Universitetet i Oslo, Norway !ICBM: N595625E104337
From: Michael McIlrath
Subject: Re: "Lisp" is poison for NSF research $$$
Date: 
Message-ID: <3qpqr7$ae1@life.ai.mit.edu>
······@netcom.com (Henry Baker) wrote:
>From the anonymous reviews of a recent NSF proposal in which Lisp was
>mentioned, but only as an incidental tool:
>
>"The LISP environment is really getting out of date as a viable system
>environment.  Let us not pursue this line of research any more."
>
>and
>
>"The investment may be a wasteful thing for the taxpayers."
>

Well, I have a simple solution for that, I just don't tell them what my
"incidental tools" are going to be.

If you are in engineering, you might be able to get some leverage out of the
fact that the CAD Framework Initiative standard extension language is Scheme.

Michael McIlrath <···@mit.edu>
From: Kenneth D. Forbus
Subject: Re: Lisp really isn't poison at NSF
Date: 
Message-ID: <3r2ojm$gn9@movietone.ils.nwu.edu>
My experience was different.  I recently received funding for a large
education project from NSF.  Here is how I described my computing
plans:


"We plan to use Pentium PC�s for all development purposes,
 for several reasons.  First, although they are powerful enough
 to serve as a development platform, we expect machines such as
 these will be widespread in in high schools within five years.
  Second, our collaborators at Oxford have easy access to 486�s 
and Pentiums, and both universities can, if early experiments are
 successful, justify outfitting additional laboratory machines with
 the additional RAM and peripherals required to run our software.
  Third, there is a greater wealth of software libraries and toolkits
 on this platform that can be used to build high-quality interfaces,
 visualization tools, and the other types of necessary software far
 more easily than we could otherwise  And finally, we have found
 software tools that allows us to produce royalty-free runtime
 systems, which simplifies shipping experimental software to our
 collaborators at Oxford and allows us to test our software more
 widely than would otherwise be legally possible.

We will use Franz Common Lisp for Windows, with the Common Lisp
 Interface Manager (CLIM), for much of our programming  work.
 The continued use of high-level tools when possible will maximize
 our productivity, and we can port our workstation-based software
 most easily that way.   We will use C++ for numerically-intensive
 portions of the virtual laboratories to maximize performance.  For
 interface and visualization tool development we will use a
 combination of Visual Basic, C++, and CLIM, so that we can gain
 leverage from the use of third-party libraries, including C++
 libraries (such as the Quinn-Curtis graphing & plotting library)
 and the plethora of Visual Basic-based interface building kits.
 We also will 
 investigate integration with commercial symbolic mathematics
 packages, particularly Matlab because it is currently used at
 Northwestern in teaching control theory, to supply analysis tools
 for one version of the Feedback Articulate Virtual Laboratory."  

Reviewers were sympathetic to the use of Common Lisp for prototyping
and rapid development.  They also thought that relying on third-party
software was a good idea (why write surface plotters, for instance,
when you can buy them?).  I'm developing as much as possible in
ACLPC, using its DLL & DDE facilities to access 3rd party stuff.
The ability to make stand-alone executables without royalties is,
for my purposes, a major win.

So, Lisp isn't necessarily poison at NSF.
There is always a lot of variance
in reviewers...

	Ken


······@netcom.com (Henry Baker) wrote:
>
> From the anonymous reviews of a recent NSF proposal in which Lisp was
> mentioned, but only as an incidental tool:
> 
> "The LISP environment is really getting out of date as a viable system
> environment.  Let us not pursue this line of research any more."
> 
> and
> 
> "The investment may be a wasteful thing for the taxpayers."
From: Thomas A. Russ
Subject: Re: Things like (floor (/ (- fixed-width comp-width) 2)) are a performance
Date: 
Message-ID: <TAR.95Jun23111433@hobbes.ISI.EDU>
In article <...> ·····@random.pc-labor.uni-bremen.de (Holger Duerer) writes:
 > Still for a simple expression like (x-y)/2 (x,y as int) I *do* expect
 > my compiler to generate the same code as (x-y)>>1, at least for
 > optimized compilations.  This is no big demand on the compiler really.

Except that this is the wrong answer for Common Lisp.  It is only valid
as long as (x-y) is evenly divisible by 2.  In Common Lisp, the division
of two integers can (and does) result in a rational number.  For
example, in CL:  (/ (- 10 7) 2)  ==>   3/2
  whereas in C:  (10-7)/2  ==> 1

That's why the entire discussion using floor came in.

 > (I just checked and my gcc does it even without any optimization flags
 > turned on.)
 > 
 > For the same reason I would also expect a Lisp compiler to do the same
 > (i.e. with speed opt. set and when it can be guaranteed that only the
 > first value is used).

With floor instead of /, agreed.  Numeric optimization is not one of
most Lisp implementations' strong points [CMUCL excepted, perhaps],
which can be really annoying, particularly for the fixnum case.  The
effect is that many functions called on fixnums do not have special
handling (for example oddp, evenp!!!).


Of course, another source of confusion is that CL has the integer type,
but since that includes fixnums (what most programmers really think of
when they say "integer") and bignums, declaring something to be of type
"integer" doesn't let the system do much in the way of optimization at
all.
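
As a sketch of what one might write (whether a given compiler actually
turns it into a subtract and an arithmetic shift is another matter):

  ;; floor division by 2 on declared fixnums; only the first return
  ;; value (the quotient) is passed along
  (defun half-difference (x y)
    (declare (type fixnum x y)
             (optimize (speed 3) (safety 0)))
    (values (floor (- x y) 2)))

The catch, of course, is that (- x y) on two fixnums can still overflow
into a bignum, so without further range information the compiler may not
be able to emit straight fixnum arithmetic anyway.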

--
Thomas A. Russ,  USC/Information Sciences Institute          ···@isi.edu