From: John Flynn
Subject: Philosophical Bee in my Bonnet
Date: 
Message-ID: <35wM6.2646$mA4.74902@ozemail.com.au>
I'm mulling over what seems to be a paradox. I'm putting it out here as a
sincere, if provocative, question because I can't come up with convincing
answers.

After a few weeks of study, I can see no obvious reason why Lisp can't be an
extremely useful and practical general-purpose language. It seems quite
simple in essence, not so difficult to learn, obviously very versatile, has
plenty of power to express whatever ideas we need to express, doesn't limit
our thinking in any obvious way, is plenty fast enough, runs on any machine
that matters, and has matured during 40-odd years of demanding use by some
of the brightest people in the field. And I get real pleasure and
satisfaction from using it. That's good enough for me.

But when I look around at the software I use every day, everything from the
simple stuff: text editors, mail & news programs, word processors,
spreadsheets right down to the stuff that's not so simple: compilers, web
browsers, operating systems, RDBMSs, window systems, opengl, communications
infrastructure, multimedia facilities - it's virtually all in bare-bones C
(interspersed with some pre-standard-library C++).

And it's mostly pretty _good_ software. It's reliable, fast, easy to use,
gets the job done well.

Until six years ago, I hadn't touched a computer. I wasn't around when all
this stuff was being developed. A lot of you guys were. So I ask you: why is
it that the 'industry' as a whole chose to do its most practical work in a
bare bones language? I understand that the situation is different in
academia, but why did the people who actually _built_ the stuff that I'm
using today (even the academics among them) choose to do it in a language
that, to all appearances, seems relatively poorly equipped for application
programming? Was it a (series of) historical accident(s) and little more?

I suppose one reaction might be: Why does it matter? If you know that Lisp
is capable of doing anything that can be done in C or C++, requires you to
waste less time reinventing wheels, solves all tricky memory management
problems, requires fewer lines of code, and allows you to think at a higher
level without sacrificing too much efficiency, why are you concerned about
it? Well, that's precisely _why_ I'm concerned about it. I don't understand
it. I don't understand why C++ hackers are building whole desktop
environments, office suites, multimedia tools, and more, with simple
command-line compilers, while Lisp programmers who own commercial Lisp tools
are talking about hooking up CORBA bridges to Java to give an application a
simple GUI.

But I'm less interested here in practical issues than in what (if anything)
underlies them.

When I watch my uncle sketching with a pencil, I always get a thrill to see
him bring a scene or a person alive with a few bold strokes. It's the same
when I hear my cousin playing a guitar. I could play the same notes on the
most finely crafted instrument, and make the same sounds, but not the music.

Give my uncle a fine set of paints and brushes; you would not improve his
art. Give my cousin a better guitar, and you'll get better sound but not
better music.

I wonder whether something similar happens with programming languages?

A couple of years ago I read "Object Oriented Software Construction" by
Bertrand Meyer, designer of the Eiffel language. Throughout the book, he
makes strong arguments for applying semi-formal quality assurance techniques
to software development, and implies that producing high quality software is
virtually impossible in low level languages like C and C++. He patiently
(though distastefully patronisingly, IMHO) explains the rationale for the
notation he's invented to facilitate the techniques he teaches.

His arguments are quite convincing, for the most part. The reader is
gradually introduced, piece by piece, to a language that is clean, spare,
simple, object oriented, elegant, offers a particularly hassle-free
implementation of multiple inheritance, constrained genericity, "design by
constract" and all kinds of other facilities that help to build reliable
software. The back of the book contains a CD demo of an Eiffel development
environment. Software development "done right".

And it is the crappiest, ugliest, most fault-ridden piece of software I have
_ever_ used.

I honestly don't know what's behind this (if anything). But all the best
software I've used is written in languages that make good software
impossible. Good products are being turned out rapidly in languages that
take forever to produce them.

Now I'm not implying that better programming languages actually make worse
programmers and worse software. (Or, heaven forbid, that "worse" programmers
are naturally drawn to "better" languages) ;-)

I honestly don't know why the "better" languages are not the ones being used
to make the (visibly, palpably, 'purchaseably') best software in widespread
use today. (And neither do I doubt that Common Lisp is actually,
objectively, _better_ than C++).

At bottom, what I'm wondering is whether, if a pencil is not good enough, no
expensive paints and brushes are going to be good enough either. And if a
pencil _is_ good enough, perhaps the more advanced tools will be
unnecessarily large and unwieldy.

I dunno. It just seems to me that the quantity (and sometimes quality) of
software is inversely proportional to the ... ? ... ? ... quality ... ? ...
? ... ? of the languages used to create it, and I can't see why.

So I'd be curious to hear any opinions on (a) why my assumptions are all
wrong and reality is not that way at all, or (b) why it is so.

From: James Hague
Subject: Re: Philosophical Bee in my Bonnet
Date: 
Message-ID: <3b02963c_1@newsfeeds>
John Flynn wrote:
>
>I understand that the situation is different in
> academia, but why did the people who actually _built_ the stuff that I'm
> using today (even the academics among them) choose to do it in a language
> that, to all appearances, seems relatively poorly equipped for application
> programming? Was it a (series of) historical accident(s) and little more?

Fifteen years ago, the average desktop computer was an 80286 running at
12MHz or less.  I bought an 8MHz 8088-based bargain machine in January 1988.
640K was the maximum amount of memory on such machines, and MS-DOS was the
norm.  The overhead of a language like Lisp was pretty steep on that kind of
hardware, just as the overhead of running a GUI-based environment--without
any hardware assistance for graphics--was.  But with desktop computers
thousands of times faster, with 32-bit addressing and having 64MB or more of
memory, the overhead has disappeared in most cases.  But most of the key
software is still from the days when you had no choice but to use C or
assembly.

(That's the desktop situation, anyway.  The much larger embedded systems
field, where cost and power consumption are major concerns, still has to
deal with writing software for 8-bit microcontrollers where you can't code
in C++ or Lisp.)

James




From: Kent M Pitman
Subject: Re: Philosophical Bee in my Bonnet
Date: 
Message-ID: <sfwpud99udr.fsf@world.std.com>
"John Flynn" <··········@yahoo.com.au> writes:

> I'm mulling over what seems to be a paradox. I'm putting it out here as a
> sincere, if provocative, question because I can't come up with convincing
> answers.
> 
> After a few weeks of study, I can see no obvious reason why Lisp can't be an
> extremely useful and practical general-purpose language. It seems quite
> simple in essence, not so difficult to learn, obviously very versatile, has
> plenty of power to express whatever ideas we need to express, doesn't limit
> our thinking in any obvious way, is plenty fast enough, runs on any machine
> that matters, and has matured during 40-odd years of demanding use by some
> of the brightest people in the field. And I get real pleasure and
> satisfaction from using it. That's good enough for me.

And for many of us.

> But when I look around at the software I use every day, everything from the
> simple stuff: text editors, mail & news programs, word processors,
> spreadsheets right down to the stuff that's not so simple: compilers, web
> browsers, operating systems, RDBMSs, window systems, opengl, communications
> infrastructure, multimedia facilities - it's virtually all in bare-bones C
> (interspersed with some pre-standard-library C++).

Some of the reasons are technical, some are not.  I'll hit just a few.

Lisp has been around for a long time.  This means that in the early
days it had some properties that it was branded with even though it
outgrew them.  Even some very early Lisps had compilers, but since it was
possible to write a Lisp without a compiler, Lisp was branded an
"interpreted" language.  Many Lisps didn't have an array datatype, and
some of those that did did not have a string datatype.  A whole
generation of teachers and administrators grew up having tried Lisp
in its early days and promoting falsehoods about the language (that it
is not a compiled language, that it is slow, that it has lists only
and no modern data types) in spite of Lisp's having outgrown those problems.

Organizations hate change.  People learn comfortable truths about what 
works and what doesn't, and are disoriented and challenged when they learn
there are "new ways" about.  It suggests whatever comfortable nitch they've
built for themselves will be overturned.  Lisp offered change (and so scared
people) both in terms of what people would have to know and in terms of how
many people would be needed.  Mid-80's ads for Lisp talked about how 1 person
could do the work of 10.  It turned out managers didn't like an ad that said
"Your organization could be a tenth its present size."

Industry has a cyclic nature of investing hugely in the hot technology of
the day.  You've seen the era of AI, of the PC, of the GUI, of the Web (just
now ending), and so on.  The AI era was among the earliest, and I think was
not widely recognized to be one that would end.  Its end was widely heralded
as "AI Winter" or "the death of AI".  In fact, I think all of these eras stop
after 3-5 years, with industry expecting businesses they invested in wildly
to have commoditized the product, so that unlimited funds are no longer
needed.  The Lisp industry did not plan for the end of the AI era, and, worse,
was widely offered as a scapegoat by AI managers (as if, I always add,
had C++ only been used in AI projects instead of Lisp, AI winter could have
been avoided--yeah, right).  

What hurt Lisp more than anything else during what I guess one must call
AI Summer was the unbounded amount of money available.  Too much money is never
a good thing because it means you don't have to stay close to the ear of
the market.  You can plan for the long term.  And the short term can catch
you by surprise, as I think it did the Lispers.  During Lisp's wild success
of the early 1980's, various important things happened in other languages
that Lisp did not stay connected with.

Lisp has always had a different way of delivering product, but the market
pretty much standardized on the "linked library" as a delivery component
in those days, so that different modules of a program could be linked together
into a single executable.  Lisp, at that time, mostly could not do that.
Lisp has scrambled to get back to that, working on more limited funds now,
since seeing the importance of that to the market.  Increasingly you find
Lisps now capable of supporting compile-to-dll kinds of things, but that's
very late in coming.  Lisp always assumed it would be the dominant element
and that other languages would link into it; it did not position itself for
being a component of a larger system, and this situation hurt it.

Lisp spent a lot of work evolving hardware to match it, which while
incredibly cool stuff, turned out to be something the market didn't
much care about.  Companies like old-Symbolics failed to understand
that people bought its products for the software, not the hardware, to
the point that when the old company was going bankrupt, it continued
to focus overly on how to continue its hardware business and to rescue
its operating system, rather than to port its valued tools as
standalone commodities.  (Its assets were later bought and are
resold by a new Symbolics Technologies, Inc.)

Lisp was also hit hard by Java because although the Java language is
like programming in a straight-jacket, Java hit hard on one of the few
things that all languages had neglected to date except for Lisp: the
packaging of lots of cool tools into pre-packaged things that you
didn't have to code up from scratch.  Java mostly seriously sucks as a
programming language, IMO, but it has the one advantage that it
managed to convince the public that it would be the canonical source
of tons of libraries.  Lisp should have done this, or should have
moved more quickly to annex this.  It still hurts for not having easy,
reliable access to all that Java has.

And there has been a fair amount of politics in the area of datatypes,
too.  I've been involved in language standards a fair amount of my
career but I've said many times that had I that to do over, I'd
involve myself not in language standards but datatype standards.
Datatypes are what form the flow glue between modules.  If you and I
don't agree on datatypes, it means the action of "marshalling" data
(converting it to an external representation) or "unmarshalling it"
(converting it back) while calling between programs is not a no-op,
and so it's very expensive.  Lisp has a lot of datatypes it considers
critical that other languages don't recognize.  And this makes it hard
to call in and out.  We have packages for ameliorating some of those
effects, but it would help a lot if other languages could primitively
understand a "symbol" or a "list" or an "arbitrary precision integer".
The idea that other languages have settled on "integers probably mod
something" as a datatype (usually appropriately called by the
truncated name "int") is really awful.  I can't imagine doing regular
work in a language that is willing to randomly truncate my results
just because they got too large, according to some arbitrarily
platform-determined notion of "too large".  The choice of language should
be a private choice, but it cannot be so long as the choice of language
reveals the choice of data.  You and I can exchange data on a business
project by using common devices like forms, money, and so on which represent
common interchange data.  You don't know if I think in English or Spanish.
That's my personal choice.  All you know is how I communicate to you.
But programs aren't like that.  That's sad.  And it's made worse by the
accidental fact that many languages do coincidentally choose more similar
datatypes than Lisp does.  (Though if you look close there are details like
on the Macintosh long ago, and perhaps today--I haven't checked recently--where
people pass either "Pascal strings" or "C strings" because they know they
are not the same.  Mostly, though, languages other than Lisp just mirror
the hardware, and so without cooperating arrive at the same types.  Since
lists, arbitrary precision integers, and symbols aren't in hardware, most
languages don't handle them.  And so special compatibility packages and
special training are required for others to use things that are perfectly
natural to us.)
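
(To make the arbitrary-precision point concrete -- this is standard Common
Lisp at any listener, nothing vendor-specific; the => line shows what the
listener would print:

  ;; Integer results never silently wrap around a machine word;
  ;; fixnums overflow transparently into bignums.
  (expt 2 100)
  => 1267650600228229401496703205376

In most other languages you have to reach for a special bignum library, and
then agree with your caller on how to pass its values around.)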

Part of this problem, of course, is that Sun sells iron while it gives
away Java.  Lisp hasn't the resources of some other commodity working
toward feeding its free distribution (I'd almost say "dumping" but I
don't want to get too harsh here) of other technologies like Sun has.
Java and other nearly-free languages dictate the price point for
commodity languages.  Lisp used to cost $100,000 per seat in the early
80's and people willingly paid it.  But as more and more functionality
comes out for free in Java, the price of Lisp is driven steadily down and
this makes it tough for the language to be supported by commercial entities.
Increasingly, those entities seek "business solution" sales, not language
sales, since they can still command high prices for that.

You'll note that very little of anything I've said has anything at all
to do with the choice of data structure or how comfortable anyone feels
programming in it or what kind of product can be produced.  These items
are, at the level of business, very close to irrelevant except insofar as
someone uses them in a much more strategic way than the Lisp community ever
has in order to snare a new market.

What is lamentable in all this is that now is the time Lisp was designed
for.  Lisp is a dynamic language, which has always worked against
it in sales.  Someone could always show some benchmark that proved (or
more likely hinted) that you could get an extra memory cycle or two out of
other languages that you couldn't get out of Lisp.  And there was a time
when that kind of thing mattered.  But now most people have so much compute
power available to them that it's largely wasted, and people build products
that are enormously wasteful of speed and size because they know that next
year's computers will be ever faster and more loaded with memory and disk
so that none of that matters.  Lisp used to be cited for being too large,
but most Lisp systems took the issue so seriously they stopped growing in
image size around 1990 and have held constant to 4-16MB since then, which
seems tiny for a modern day program doing one task, much less a program
like Lisp which usually in that memory size has a full language, development
environment with multiple browsers, compiler, editor, and so on. (Still,
people remember the lemma: "lisp is big, c is small".  It doesn't matter
it's no longer true--it's drilled into them.)  But back to my topic sentence
for this paragraph, the thing is that Lisp was designed for dynamicity.
We paid a price in speed, probably a factor of two (even though on a 
case-by-case basis you can usually squeeze it back out if you really have
to), in order to have most aspects of the language be indirect links, so that
all kinds of things could be snapped in and out dynamically at runtime.
Other languages, in the search for those extra machine cycles, refused to
give up that extra cycle and are fragile by comparison.  This is the time,
when the web is connecting everything, when the world is changing daily,
when new datatypes, protocols, file formats, etc. are invented moment 
by moment, when a dynamic language like Lisp should shine.  But making that
case takes marketing dollars, and the limited resources of Lisp are spent
on technology.  So the marketing doesn't get done and others beat us out.

Fortunately, Lisp has ideas that are not easily killed and that are not 
copied in other languages.  If they were, we'd probably just use those other
languages.  Most of us are after the capability, and don't stand on ceremony
about what you call the language.  So Lisp survives in spite of itself, and
in spite of often-poor marketing, and so on.  Because there are important
ideas many of us know we dare not lose, as it would be too long before they
were reinvented.

This is just my personal take on it all.  You can probably tell nearly
everything I've said here is a subjective assessment, and many are things
other people would assess differently.  Perhaps others will join with 
some useful counterpoint to correct what I've said or flesh it out.
 --Kent
From: Biep @ http://www.biep.org/
Subject: Re: Philosophical Bee in my Bonnet
Date: 
Message-ID: <9e08u5$b523$1@ID-63952.news.dfncis.de>
"Kent M Pitman" <······@world.std.com> wrote in message
····················@world.std.com...
> Part of this problem, of course, is that Sun sells iron while it gives
> away Java.  Lisp hasn't the resources of some other commodity working
> toward feeding its free distribution (I'd almost say "dumping" but I
> don't want to get too harsh here) of other technologies like Sun has.

Of course the PLT people are doing a good job with DrScheme and friends,
and their outreach to high schools might even raise a set of people who don't
think that speed = 1/#parens.
Now if they had a nice source of incoming money backing them up...

--
Biep
Reply via http://www.biep.org
From: Joel Ray Holveck
Subject: Re: Philosophical Bee in my Bonnet
Date: 
Message-ID: <y7c1ypnz3q8.fsf@sindri.juniper.net>
> Lisp was also hit hard by Java because although the Java language is
> like programming in a straight-jacket, Java hit hard on one of the few
> things that all languages had neglected to date except for Lisp: the
> packaging of lots of cool tools into pre-packaged things that you
> didn't have to code up from scratch.  Java mostly seriously sucks as a
> programming language, IMO, but it has the one advantage that it
> managed to convince the public that it would be the canonical source
> of tons of libraries.  Lisp should have done this, or should have
> moved more quickly to annex this.  It still hurts for not having easy,
> reliable access to all that Java has.

It's hard for me to read this consistently.  In your first sentence,
you imply that Lisp did not neglect the cool tools.  However, in the
next-to-last, you say that Lisp did not move quickly enough to address
this need.

Could you perhaps clarify this?

Thanks,
joelh
From: Kent M Pitman
Subject: Re: Philosophical Bee in my Bonnet
Date: 
Message-ID: <sfwpud740qy.fsf@world.std.com>
Joel Ray Holveck <·····@juniper.net> writes:

> 
> > Lisp was also hit hard by Java because although the Java language is
> > like programming in a straight-jacket, Java hit hard on one of the few
> > things that all languages had neglected to date except for Lisp: the
> > packaging of lots of cool tools into pre-packaged things that you
> > didn't have to code up from scratch.  Java mostly seriously sucks as a
> > programming language, IMO, but it has the one advantage that it
> > managed to convince the public that it would be the canonical source
> > of tons of libraries.  Lisp should have done this, or should have
> > moved more quickly to annex this.  It still hurts for not having easy,
> > reliable access to all that Java has.
> 
> It's hard for me to read this consistently.  In your first sentence,
> you imply that Lisp did not neglect the cool tools.

Lisp treated "cool tools" almost as a trademarked phrase which if once true
was always true, requiring no ongoing investment/maintenance.  Java took
it on as a "way of life".

> However, in the
> next-to-last, you say that Lisp did not move quickly enough to address
> this need.

The need to see this as a dynamic, ongoing need for investment.
Lisp had seen it as an issue, but not as a moving target--more as a solved
problem.
 
> Could you perhaps clarify this?

When I was in school (late 70's), you learned about things like hash
tables and indirect arrays in algorithms class and when you got to
programming you wanted them, but you had to implement them first in
languages like C.  But by contrast, Lisp had them pre-packaged and you
just said "I want to use that."  Lisp had, comparatively, a lot of
prepackaged stuff for a long time.  Lisp Machines had pre-packaged network
functionality so that you could easily build a connected, distributed system.
Other commercial lisps like Franz, Lucid and Harlequin offered less than
what the LispM had, but still it was a lot easier to use than a lot of what 
C provided.  The idea that you could just start a Lisp and write a couple
of simple calls to mp:process-run-function and mp:process-wait rather than
lots of more complicated code in other languages was a major step up, for
example.  Ditto for listening on network ports.  So Lisp was way ahead
for a while.
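
To give a flavor of what I mean, here's roughly what that looks like -- only
a sketch, in the Allegro-ish style, since the exact names and argument
conventions vary from vendor to vendor (which is itself part of the problem);
COMPUTE-SOMETHING stands in for whatever work you want done:

  ;; Kick off the work in its own lightweight process.
  (defvar *answer* nil)
  (mp:process-run-function "worker"
    (lambda () (setf *answer* (compute-something))))  ; compute-something is a stand-in

  ;; Block the current process until the worker has produced a result.
  (mp:process-wait "waiting for worker" (lambda () *answer*))

A couple of lines, and no thread-library plumbing to write first.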

Then Java came along and started offering all manner of packages for 
data structures, networks, databases, xml, http, font metrics, graphics,
etc.   And Lisp per se had no answer.  Some Lisp implementations had
some of these, but most are not portable among vendors--whereas Java is
"write once, run anywhere".  (Laugh as you might, a lot of people do believe
that slogan.  And there is at least enough truth to it that it's not 
utter idiocy to work with the theory and just patch problems as they come up).
Lisp's story of having ready-made process-run-function and 
wait-for-connection-on-port kinds of things was suddenly impoverished
compared to the richness of the Java libraries being cranked out.
A story that had held Lisp in good stead over a long period was suddenly
not good enough, and worse--gets worse every year.  Lisp companies eventually
realized they needed to add a few things to catch up, but I still think they
are not responding in a way that recognizes how fast new Java stuff is
coming out.  If they were, they would be rushing to annex all of Java 
directly by making an RMI interface be a featured centerpiece.  Franz
and Harlequin *did* add CORBA, but they made the mistake of making you buy
it, which meant most users didn't, and so it didn't really act as a solution
to the problems it could have solved.  It was just one more thing most people
didn't have access to.

When I did my Fortran->Lisp translator for Maclisp, years ago, I translated
the IMSL fortran library for Macsyma.  The calling sequence was kind of dumb
because fortran libraries often take "workspace arrays" as arguments, and
my translator couldn't tell which these were, so you actually had to pass
fully-consed Lisp arrays to the Lispy-Fortran interface which it would then
discard to allocate a fixed size area in Fortran-compatibility memory.  This
was wasteful, but worked.  And it felt dopey from Lisp.  But again, it solved
a problem that was often important enough that no one cared.  The critical part
was that all of IMSL could instantly and automatically be made available
to Macsyma customers.  For a few things people used a lot, I hand-crafted 
better interfaces, and that sufficed to keep everyone happy.  I would suggest
an analogy to Lisp and Java.  Lisp needs instant, ready-made interfaces to
all of Java, just so it can say "Java is not ahead of us--Java is part of us".
Then Lisp can make either native implementations of some Java things people
use a lot, or can make better-than-canned interfaces to calling Java stuff,
or Lisp can call some other language that does a Java-like thing.  But the
important point is keeping pace.  One has to not just "have what it has now"
but "explain how one is going to keep pace in the future".  If one has a 
function that maps Java to Lisp automatically, that's an easy explanation.  If 
one has to explain that a fixed finite team of Lisp implementors is going to
keep pace with a growing army of Java coders world-wide, that's a hard case
to make.

Of course, an alternate approach is for the Lisp community not to wait on
the vendors and to put together ever-growing armies of people like the Java
community did, writing libraries and building our community.  But while that's 
a more possible solution than relying on vendors to just hire enough coders,
it requires a lot of trust, time, and leaps of faith...

So the different parts of my paragraph were really referring to the before
and after picture, where Lisp was telling the same story all the time, but
where it sounds less credible today than it used to.  Lisp used to be the
only one with the story, and it worked well. Then Java came along and told
the story.  And Java's story sounds more credible than Lisp's.  So it's time
to play catch-up again.

Java is a painful language for writing anything.  But people seem
willing to endure that pain.  I dunno.  Maybe it feels macho to endure
all that work we're trying to have people avoid.  One way or another,
it's a force to be reckoned with.  And, I think, a force that even
today has not been adequately reckoned with.  Or so say I.

Does this make any more sense?
From: Joel Ray Holveck
Subject: Re: Philosophical Bee in my Bonnet
Date: 
Message-ID: <y7cwv7f80j2.fsf@sindri.juniper.net>
>>> moved more quickly to annex this.  It still hurts for not having easy,
>>> reliable access to all that Java has.
>> It's hard for me to read this consistently.  In your first sentence,
>> you imply that Lisp did not neglect the cool tools.
> Lisp treated "cool tools" almost as a trademarked phrase which if once true
> was always true, requiring no ongoing investment/maintenance.  Java took
> it on as a "way of life".

As has Perl, in recent times.  I stay out of the Java world, but keep
an ear on Perl for just that reason.  :-/

>> However, in the
>> next-to-last, you say that Lisp did not move quickly enough to address
>>  this need.
> The need to see this as a dynamic, ongoing need for investment.
> Lisp had seen it as an issue, but not as a moving target--more as a solved
> problem.

Not that this is what you're saying, but I have this picture in my
head of one Lisp implementor telling another: "So we can reduce the
problem of implementing these libraries to functions on the standard
library, so it's solved!"

Maybe I've been examining topology too much, or shouldn't post at 1:30
right after banging my head against CMUCL's FFI bugs...

> A story that had held Lisp in good stead over a long period was suddenly
> not good enough, and worse--gets worse every year.

Both because it's standing nearly still while Java and Perl are
leaping ahead, and because users are adding new demands.

> Lisp companies eventually realized they needed to add a few things
> to catch up, but I still think they are not responding in a way that
> recognizes how fast new Java stuff is coming out.  If they were,
> they would be rushing to annex all of Java directly by making an RMI
> interface be a featured centerpiece.  Franz and Harlequin *did* add
> CORBA, but they made the mistake of making you buy it, which meant
> most users didn't, and so it didn't really act as a solution to the
> problems it could have solved.  It was just one more thing most
> people didn't have access to.

Never learned CORBA.  For now, I've filed it under "buzzword to learn
when I get the time", which is admittedly unfair, but I haven't had a
need for it yet.

It's possible that there are others with the same mindset-- which
means that for them, CORBA isn't going to cut it, since they don't see
that as a solution to the library problem.

> I would suggest an analogy to Lisp and Java.  Lisp needs instant,
> ready-made interfaces to all of Java, just so it can say "Java is
> not ahead of us--Java is part of us".  Then Lisp can make either
> native implementations of some Java things people use a lot, or can
> make better-than-canned interfaces to calling Java stuff, or Lisp
> can call some other language that does a Java-like thing.  But the
> important point is keeping pace.  One has to not just "have what it
> has now" but "explain how one is going to keep pace in the future".
> If one has a function that maps Java to Lisp automatically, that's
> an easy explanation.  If one has to explain that a fixed finite
> team of Lisp implementors is going to keep pace with a growing army
> of Java coders world-wide, that's a hard case to make.

Do you know if anybody's looked at the question of how hard that
problem is, using whatever user-accessible glue the Lisp
implementation provides?  That is, does it effectively have to come
from the vendors, or can it come from the community?  I've never tried
to communicate or link with Java, so I don't know what's involved
here.

> Of course, an alternate approach is for the Lisp community not to
> wait on the vendors and to put together ever-growing armies of
> people like the Java community did, writing libraries and building
> our community.

I think part of it may have to do with organization, as well.  Perl
has CPAN.  Java probably has something similar.  Lisp
has... well... as far as I can tell, various little bits of
collections scattered around the net, frequently disorganized and
rarely documented.

CMU's AI repository has a lot of good stuff, but in the "What's New"
section it lists R4RS and "the Mosaic interface" to the repository.
Not the place if you need buzzword-compliant interfaces.

CLiki seems to have helped that, but it still only lists two dozen or
so packages, most of which duplicate each others' efforts.  Are we all
reinventing each others' wheels these days?

I'm not talking about expert system shells or Macsyma here, and I'm
not saying there should be more code being given away.  (Whether I
believe it or not is irrelevant to my point.)  I just don't see the
Lisp libraries anywhere, from vendors or the community.

Am I missing something?  Are we really reinventing each others' wheels
every day?  This isn't just rhetoric, I seriously want to know if
there are code collections-- free or commercial-- that I'm missing out
on.

> But while that's a more possible solution than relying on vendors to
> just hire enough coders, it requires a lot of trust, time, and leaps
> of faith...

Eh?  The time I can see, but what trust and faith does it require?

> Does this make any more sense?

It does, and it really upsets me too.  I've been trying this whole
time to avoid getting into a pointless rant, with (as you can see) a
limited degree of success.  Apologies all around.

But thanks, you've helped me get some perspective on what's happened
to Lisp while I've been away.

Cheers,
joelh
From: Paolo Amoroso
Subject: Re: Philosophical Bee in my Bonnet
Date: 
Message-ID: <ME8FO4K6GB1V7UkV3ChNIpUxl0Pg@4ax.com>
On 18 May 2001 02:33:05 -0700, Joel Ray Holveck <·····@juniper.net> wrote:

> I think part of it may have to do with organization, as well.  Perl
> has CPAN.  Java probably has something similar.  Lisp
> has... well... as far as I can tell, various little bits of
> collections scattered around the net, frequently disorganized and
> rarely documented.

Lisp has a bunch of volunteers working on CCLAN, and your help is welcome.
See the relevant page at CLiki.


> CMU's AI repository has a lot of good stuff, but in the "What's New"
> section it lists R4RS and "the Mosaic interface" to the repository.
> Not the place if you need buzzword-compliant interfaces.

A repository which is no longer being maintained is hardly a good place to
look at for buzzword-compliant interfaces.


> CLiki seems to have helped that, but it still only lists two dozen or
> so packages, most of which duplicate each others' efforts.  Are we all

Note that CLiki lists only open-source material for Lisp under Unix. There
is other useful software which is commercial, or freely available--for
appropriate values of free--but not open-source. Not that this is a huge
amount of material, but it's still more than you can find at CLiki alone.

And besides, not all open-source Lisp software gets listed at CLiki, just
the packages that volunteers explicitly add to the list. At times not even
the authors themselves mention their material at CLiki or other Lisp
venues. Again, your help for adding more links is welcome.


Paolo
-- 
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://cvs2.cons.org:8000/cmucl/doc/EncyCMUCLopedia/
From: Thom Goodsell
Subject: Re: Philosophical Bee in my Bonnet
Date: 
Message-ID: <7vheyi1sxy.fsf@shalott.cra.com>
Paolo Amoroso <·······@mclink.it> writes:

> On 18 May 2001 02:33:05 -0700, Joel Ray Holveck <·····@juniper.net> wrote:
>
> Lisp has a bunch of volunteers working on CCLAN, and your help is welcome.
> See the relevant page at CLiki.

And that relevant page is? I couldn't find it anyplace I thought it
should be (Development Aids, Library Packages, Networking, Web).

Thanks,
Thom 

-- 
Thom Goodsell                           ···@cra.com
Scientist                       (617) 491-3474 x574
Charles River Analytics         http://www.cra.com/
From: Christophe Rhodes
Subject: Re: Philosophical Bee in my Bonnet
Date: 
Message-ID: <sqlmnu60m5.fsf@lambda.jesus.cam.ac.uk>
Thom Goodsell <···@cra.com> writes:

> Paolo Amoroso <·······@mclink.it> writes:
> 
> > On 18 May 2001 02:33:05 -0700, Joel Ray Holveck <·····@juniper.net> wrote:
> >
> > Lisp has a bunch of volunteers working on CCLAN, and your help is welcome.
> > See the relevant page at CLiki.
> 
> And that relevant page is? I couldn't find it anyplace I thought it
> should be (Development Aids, Library Packages, Networking, Web).

http://ww.telent.net/cliki/cclan

It's true that this isn't immediately obvious -- feel free to link it
in to somewhere you might have expected it to be. :-)

Cheers,

Christophe
-- 
Jesus College, Cambridge, CB5 8BL                           +44 1223 524 842
http://www-jcsu.jesus.cam.ac.uk/~csr21/                  (defun pling-dollar 
(str schar arg) (first (last +))) (make-dispatch-macro-character #\! t)
(set-dispatch-macro-character #\! #\$ #'pling-dollar)
From: Chris Double
Subject: Re: Philosophical Bee in my Bonnet
Date: 
Message-ID: <wkd795621t.fsf@double.co.nz>
Joel Ray Holveck <·····@juniper.net> writes:

> Do you know if anybody's looked at the question of how hard that
> problem is, using whatever user-accessible glue the Lisp
> implementation provides?  That is, does it effectively have to come
> from the vendors, or can it come from the community?  I've never
> tried to communicate or link with Java, so I don't know what's
> involved here.

I believe Allegro Common Lisp can talk to Java via some sort of
bridge. I think the Java runs in a separate process and Lisp talks to
it via sockets (I could be wrong here though...).

I've had Corman Lisp talking to Java using JNI (Java Native
Interface). Basically JNI provides a DLL that allows you to call Java
functions. I was able to create Java instances, call methods, display
Swing user interfaces, etc all from Lisp. It was fun seeing the
HotJava browser come up as part of my lisp program. I still have the
code...but never got beyond prototyping so it's quite rough. I must
make it available some day!
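
For the curious, the general shape is roughly this (not my actual code; the
Lisp-side names here -- CREATE-JAVA-VM, FIND-JAVA-CLASS, NEW-JAVA-OBJECT,
CALL-JAVA-METHOD -- are hypothetical thin wrappers over the real JNI entry
points JNI_CreateJavaVM, FindClass, NewObject and CallVoidMethod, reached
through the FFI):

  ;; Bring up a JVM inside the Lisp image (wraps JNI_CreateJavaVM).
  (defvar *jvm* (create-java-vm :classpath "."))

  ;; Roughly: new JFrame("Hello from Lisp").setVisible(true)
  (let* ((frame-class (find-java-class *jvm* "javax/swing/JFrame"))
         (frame (new-java-object frame-class "(Ljava/lang/String;)V"
                                 "Hello from Lisp")))
    (call-java-method frame "setVisible" "(Z)V" t))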

Chris.
-- 
http://www.double.co.nz/cl
From: Marco Antoniotti
Subject: Re: Philosophical Bee in my Bonnet
Date: 
Message-ID: <y6c1ypm6au4.fsf@octagon.mrl.nyu.edu>
Kent M Pitman <······@world.std.com> writes:

	...

> Then Java came along and started offering all manner of packages for 
> data structures, networks, databases, xml, http, font metrics, graphics,
> etc.   And Lisp per se had no answer.  Some Lisp implementations had
> some of these, but most are not portable among vendors--whereas Java is
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

> "write once, run anywhere".

End of the argument.

Cheers

-- 
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group        tel. +1 - 212 - 998 3488
719 Broadway 12th Floor                 fax  +1 - 212 - 995 4122
New York, NY 10003, USA                 http://bioinformatics.cat.nyu.edu
                    "Hello New York! We'll do what we can!"
                           Bill Murray in `Ghostbusters'.
From: Sashank Varma
Subject: Re: Philosophical Bee in my Bonnet
Date: 
Message-ID: <sashank.varma-1805011736320001@129.59.212.53>
In article <···············@world.std.com>, Kent M Pitman
<······@world.std.com> wrote:

[snip]
>Lisp companies eventually
>realized they needed to add a few things to catch up, but I still think they
>are not responding in a way that recognizes how fast new Java stuff is
>coming out.  If they were, they would be rushing to annex all of Java 
>directly by making an RMI interface be a featured centerpiece.
[snip]
>The critical part
>was that all of IMSL could instantly and automatically be made available
>to Macsyma customers.  For a few things people used a lot, I hand-crafted 
>better interfaces, and that sufficed to keep everyone happy.  I would suggest
>an analogy to Lisp and Java.  Lisp needs instant, ready-made interfaces to
>all of Java, just so it can say "Java is not ahead of us--Java is part of us".
>Then Lisp can make either native implementations of some Java things people
>use a lot, or can make better-than-canned interfaces to calling Java stuff,
>or Lisp can call some other language that does a Java-like thing.  But the
>important point is keeping pace.  
[snip]

This seems like the most realistic solution to the problem of the
ever-widening divide between the standard libraries of Java and 
Common Lisp.  That is, it seems we should leverage the great amounts
of personpower the Java folks bring to the endless creation of new
libraries by becoming parasites on their work.  This would solve
the problem of incompatibilities in how the various Common Lisp 
vendors implement non-standard functionality (e.g., networking,
multiprocessing).  It would also make Common Lisp more familiar to
Java programmers looking for something different, in the same way
that the Java developers adopted a syntax superficially similar to
C to ease the transition of C programmers to Java.

What is the best way to pursue this goal?

Sashank
From: Michael Parker
Subject: Re: Philosophical Bee in my Bonnet
Date: 
Message-ID: <EF1C8DCC90B87732.5FBF2E88786772E5.45C5F786EC438338@lp.airnews.net>
Sashank Varma wrote:
> 
> In article <···············@world.std.com>, Kent M Pitman
> <······@world.std.com> wrote:
> 
> [snip]
> >Lisp companies eventually
> >realized they needed to add a few things to catch up, but I still think they
> >are not responding in a way that recognizes how fast new Java stuff is
> >coming out.  If they were, they would be rushing to annex all of Java
> >directly by making an RMI interface be a featured centerpiece.
> [snip]
> >The critical part
> >was that all of IMSL could instantly and automatically be made available
> >to Macsyma customers.  For a few things people used a lot, I hand-crafted
> >better interfaces, and that sufficed to keep everyone happy.  I would suggest
> >an analogy to Lisp and Java.  Lisp needs instant, ready-made interfaces to
> >all of Java, just so it can say "Java is not ahead of us--Java is part of us".
> >Then Lisp can make either native implementations of some Java things people
> >use a lot, or can make better-than-canned interfaces to calling Java stuff,
> >or Lisp can call some other language that does a Java-like thing.  But the
> >important point is keeping pace.
> [snip]
> 
> This seems like the most realistic solution to the problem of the
> ever-widening divide between the standard libraries of Java and
> Common Lisp.  That is, it seems we should leverage the great amounts
> of personpower the Java folks bring to the endless creation of new
> libraries by becoming parasites on their work.  This would solve
> the problem of incompatibilities in how the various Common Lisp
> vendors implement non-standard functionality (e.g., networking,
> multiprocessing).  It would also make Common Lisp more familiar to
> Java programmers looking for something different, in the same way
> that the Java developers adopted a syntax superficially similar to
> C to ease the transition of C programmers to Java.

I've been thinking about this for a while myself, mostly about
doing a Java-import that would basically translate a java .class
file to a .lisp equivalent, that would present the class as a package
with a CLOS class & generic functions.  The trick it seems is really
how to deal with Java FFI -- most of the interesting bits are in C
after all.  Unfortunately as my current lisp system is an XL1200,
there isn't an existing Java port to leverage off of, so I figured
on having to rewrite the basic native classes in lisp.

Still, given that we've got the native lisp compiler to compile
and optimize the code, it seems that such a .class->.ibin (or .o )
compiler could deliver enough performance to make such a thing
feasible, possibly even comparable to the current class of batch-mode
java compilers?
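
To make the idea concrete, the generated Lisp for something like
java.util.Stack might look roughly like this (purely hypothetical output;
JCALL stands in for whatever low-level invocation hook the FFI or a
reimplemented runtime ends up providing):

  (defpackage "JAVA.UTIL.STACK"
    (:use "COMMON-LISP")
    (:export "STACK" "PUSH-ITEM" "POP-ITEM" "EMPTY-P"))
  (in-package "JAVA.UTIL.STACK")

  ;; One CLOS class per Java class, holding a handle on the underlying object.
  (defclass stack ()
    ((jobject :initarg :jobject :reader jobject)))

  ;; One generic function per Java method (renamed to dodge CL:PUSH etc.).
  (defgeneric push-item (stack item))
  (defgeneric pop-item (stack))
  (defgeneric empty-p (stack))

  ;; JCALL is a hypothetical stand-in for the low-level invocation mechanism.
  (defmethod push-item ((s stack) item) (jcall "push" (jobject s) item))
  (defmethod pop-item ((s stack)) (jcall "pop" (jobject s)))
  (defmethod empty-p ((s stack)) (jcall "empty" (jobject s)))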

Just a thought.
From: Kent M Pitman
Subject: Re: Philosophical Bee in my Bonnet
Date: 
Message-ID: <sfwg0e1ke2r.fsf@world.std.com>
·············@vanderbilt.edu (Sashank Varma) writes:

> ... it seems we should leverage the great amounts
> of personpower the Java folks bring to the endless creation of new
> libraries by becoming parasites on their work.  This would solve
> the problem of incompatibilities in how the various Common Lisp 
> vendors implement non-standard functionality (e.g., networking,
> multiprocessing).  It would also make Common Lisp more familiar to
> Java programmers looking for something different, in the same way
> that the Java developers adopted a syntax superficially similar to
> C to ease the transition of C programmers to Java.
> 
> What is the best way to pursue this goal?

I suppose a Java metaclass, a standard way of calling Java from Lisp,
and a Lisp-based JVM wouldn't hurt... especially if it was contributed
code that the whole community could use so we could rely on it in
portable code.

I never looked at the JVM so have no idea how big it is or how hard it is
to write one.

Old Symbolics wrote some cool stuff with its Statice database code
but found that because not everyone had a Statice license, it had to
double-implement everything--once for using Statice and once for not,
in case the delivery environment had no Statice license.  Eventually
it got smart and made a distinction between development (expensive)
Statice licences and runtime (free) licences, which meant you could
finally rely on delivering applications.  Stupid marketing error.

Likewise, Java access would be similar for the CL community.  It's of
enormously less value if not everyone has access, since you can't
build anything you want everyone to use on top of Java unless the Java
part is free; some people will refuse to take things that are
not free.
From: Chris Double
Subject: Re: Philosophical Bee in my Bonnet
Date: 
Message-ID: <wkpud5a2kd.fsf@double.co.nz>
Kent M Pitman <······@world.std.com> writes:

> I never looked at the JVM so have no idea how big it is or hard it
> is to write one.

A Smalltalk system called Frost did this. They wrote a JVM in
Smalltalk which effectively allowed them to run Java applications,
classes, etc from inside Smalltalk.

A Lisp implementation of the JVM would allow the same of course.

See: http://wiki.cs.uiuc.edu/cs497rej/Frost+-+merging+Java+and+Smalltalk

Chris.
-- 
http://www.double.co.nz/cl
From: Donald Fisk
Subject: Re: Philosophical Bee in my Bonnet
Date: 
Message-ID: <3B0A92D2.9AAB8891@enterprise.net>
Kent M Pitman wrote:
> 
> "John Flynn" <··········@yahoo.com.au> writes:
> 
> > I'm mulling over what seems to be a paradox. I'm putting it out here as a
> > sincere, if provocative, question because I can't come up with convincing
> > answers.
> >
> > After a few weeks of study, I can see no obvious reason why Lisp can't be an
> > extremely useful and practical general-purpose language. It seems quite
> > simple in essence, not so difficult to learn, obviously very versatile, has
> > plenty of power to express whatever ideas we need to express, doesn't limit
> > our thinking in any obvious way, is plenty fast enough, runs on any machine
> > that matters, and has matured during 40-odd years of demanding use by some
> > of the brightest people in the field. And I get real pleasure and
> > satisfaction from using it. That's good enough for me.
> 
> And for many of us.
> 
> > But when I look around at the software I use every day, everything from the
> > simple stuff: text editors, mail & news programs, word processors,
> > spreadsheets right down to the stuff that's not so simple: compilers, web
> > browsers, operating systems, RDBMSs, window systems, opengl, communications
> > infrastructure, multimedia facilities - it's virtually all in bare-bones C
> > (interspersed with some pre-standard-library C++).
> 
> Some of the reasons are technical, some are not.  I'll hit just a few.
> 
> Lisp has been around for a long time.  This means that in the early
> days it had some properties that it was branded with even though it
> outgrew them.  Even some very early Lisps had compilers, but since it was
> possible to write a Lisp without a compiler, Lisp was branded an
> "interpreted" language.  Many Lisps didn't have an array datatype, and
> some of those that did did not have a string datatype.  A whole
> generation of teachers and administrators grew up having tried Lisp
> in its early days and promoting falsehoods about the language (that it
> is not a compiled language, that it is slow, that it has lists only
> and no modern data types) in spite of Lisp's having outgrown those problems.

I haven't read every article in this thread, but no one seems to have
mentioned Paul Graham's article "Being Popular".   This is well worth a
read and goes a long way towards explaining why C and Perl are popular
with hackers but Lisp isn't.   It's on his web site.

But there's a general point, which Fritz Kunze [1] once made to me -- not to
assume that just because I (and others who have exposed themselves to a
variety of programming languages and paradigms) think highly of something
that my taste will be shared by more than a small minority.   If you want
your programming language to be popular, you have to appeal to the majority
of people who lack your discerning taste.   You may think that picture of
the Chinese girl with the green face (by Tretchikoff) is hideous, but it
hangs on more walls than the collected works of Turner.   You may disdain
the novels of Barbara Cartland, but she's sold more than any other English
language author.   It applies to programming languages too.   Do we want
Lisp to be popular (X)or to be good?

> Organizations hate change.  People learn comfortable truths about what
> works and what doesn't, and are disoriented and challenged when they learn
> there are "new ways" about.  It suggests whatever comfortable niche they've
> built for themselves will be overturned.  Lisp offered change (and so scared
> people) both in terms of what people would have to know and in terms of how
> many people would be needed.  Mid-80's ads for Lisp talked about how 1 person
> could do the work of 10.  It turned out managers didn't like an ad that said
> "Your organization could be a tenth its present size."
> 
> Industry has a cyclic nature of investing hugely in the hot technology of
> the day.  You've seen the era of AI, of the PC, of the GUI, of the Web (just
> now ending), and so on.  The AI era was among the earliest, and I think was
> not widely recognized to be one that would end.  Its end was widely heralded
> as "AI Winter" or "the death of AI".  In fact, I think all of these eras stop
> after 3-5 years, with industry expecting businesses they invested in wildly
> to have commoditized the product, so that unlimited funds are no longer
> needed.  The Lisp industry did not plan for the end of the AI era, and, worse,
> was widely offered as a scapegoat by AI managers (as if, I always add,
> had C++ only been used in AI projects instead of Lisp, AI winter could have
> been avoided--yeah, right).
> 
> What hurt Lisp more than anything else during what I guess one must call
> AI Summer was the unbounded amount of money available.  Too much money is never
> a good thing because it means you don't have to stay close to the ear of
> the market.  You can plan for the long term.  And the short term can catch
> you by surprise, as I think it did the Lispers.  During Lisp's wild success
> of the early 1980's, various important things happened in other languages
> that Lisp did not stay connected with.
> 
> Lisp has always had a different way of delivering product, but the market
> pretty much standardized on the "linked library" as a delivery component
> in those days, so that different modules of a program could be linked together
> into a single executable.  Lisp, at that time, mostly could not do that.
> Lisp has scrambled to get back to that, working on more limited funds now,
> since seeing the importance of that to the market.  Increasingly you find
> Lisps now capable of supporting compile-to-dll kinds of things, but that's
> very late in coming.  Lisp always assumed it would be the dominant element
> and that other languages would link into it; it did not position itself for
> being a component of a larger system, and this situation hurt it.
> 
> Lisp spent a lot of work evolving hardware to match it, which while
> incredibly cool stuff, turned out to be something the market didn't
> much care about.  Companies like old-Symbolics failed to understand
> that people bought its products for the software, not the hardware, to
> the point that when the old company was going bankrupt, it continued
> to focus overly on how to continue its hardware business and to rescue
> its operating system, rather than to port its valued tools as
> standalone commodities.  (Its assets were later bought and are
> resold by a new Symbolics Technologies, Inc.)
> 
> Lisp was also hit hard by Java because although the Java language is
> like programming in a straight-jacket, Java hit hard on one of the few
> things that all languages had neglected to date except for Lisp: the
> packaging of lots of cool tools into pre-packaged things that you
> didn't have to code up from scratch.  Java mostly seriously sucks as a
> programming language, IMO, but it has the one advantage that it
> managed to convince the public that it would be the canonical source
> of tons of libraries.  Lisp should have done this, or should have
> moved more quickly to annex this.  It still hurts for not having easy,
> reliable access to all that Java has.

Java has been relentlessly hyped and is loved by suits more than
hackers.   It's also the self-styled language of the WWW.   It may
suffer, or even get some of the blame heaped on it, during any forthcoming
WWW winter, when people realize that the sort of applications delivered
in it are not that different from the ones built in RPG3 twenty
years ago -- forms-based interfaces to commercial DP systems.

It also looks like it has been designed for other people to use.
I wonder if Guy Steele prefers it to Common Lisp or Scheme.   Sometimes
I wonder what he sees in it at all.

The other point I might make about Java is that those vastly bloated
libraries do things which are either built into Lisp (e.g. java.lang.String),
not needed (java.util.Vector and other types of collections), or overcome
language design deficiencies (iterators).   I once used Java in
preference to Lisp, because it's free, runs on Windows (so no XLib/CLX),
and comes with support for graphics.   I now regret that decision.

> And there has been a fair amount of politics in the area of datatypes,
> too.  I've been involved in language standards a fair amount of my
> career but I've said many times that had I that to do over, I'd
> involve myself not in language standards but datatype standards.
> Datatypes are what form the flow glue between modules.  If you and I
> don't agree on datatypes, it means the action of "marshalling" data
> (converting it to an external representation) or "unmarshalling it"
> (converting it back) while calling between programs is not a no-op,
> and so it's very expensive.  Lisp has a lot of datatypes it considers
> critical that other languages don't recognize.  And this makes it hard
> to call in and out.  We have packages for ameliorating some of those
> effects, but it would help a lot if other languages could primitively
> understand a "symbol" or a "list" or an "arbitrary precision integer".
> The idea that other languages have settled on "integers probably mod
> something" as a datatype (usually appropriately called by the
> truncated name "int") is really awful.  I can't imagine doing regular
> work in a language that is willing to randomly truncate my results
> just because they got too large, according to some arbitrarily
> platform-determined notion of "too large".  The choice of language should
> be a private choice, but it cannot be so long as the choice of language
> reveals the choice of data.  You and I can exchange data on a business
> project by using common devices like forms, money, and so on which represent
> common interchange data.  You don't know if I think in English or Spanish.
> That's my personal choice.  All you know is how I communicate to you.
> But programs aren't like that.  That's sad.  And it's made worse by the
> accidental fact that many languages do coincidentally choose more similar
> datatypes than Lisp does.  (Though if you look close there are details like
> on the Macintosh long ago, and perhaps today--I haven't checked recently--where
> people pass either "Pascal strings" or "C strings" because they know they
> are not the same.  Mostly, though, languages other than Lisp just mirror
> the hardware, and so without cooperating arrive at the same types.  Since
> lists, arbitrary precision integers, and symbols aren't in hardware, most
> languages don't handle them.  And so special compatibility packages and
> special training are required for others to use things that are perfectly
> natural to us.)
> 
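To make the arbitrary-precision point concrete, a minimal sketch (any
conforming Common Lisp; FACTORIAL is just an illustrative definition,
not something from a library):

  ;; CL integers are arbitrary precision by default, so this returns the
  ;; exact 33-digit answer instead of wrapping around at whatever the
  ;; platform happens to consider "too large".
  (defun factorial (n)
    (if (zerop n) 1 (* n (factorial (1- n)))))
  (factorial 30)   ; => 265252859812191058636308480000000

A fixed-width "int" overflows long before n reaches 30.
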
> Part of this problem, of course, is that Sun sells iron while it gives
> away Java.  Lisp hasn't the resources of some other commodity working
> toward feeding its free distribution (I'd almost say "dumping" but I
> don't want to get too harsh here) of other technologies like Sun has.
> Java and other nearly-free languages dictate the price point for
> commodity languages.  Lisp used to cost $100,000 per seat in the early
> 80's and people willingly paid it.  But as more and more functionality
> comes out for free in Java, the price of Lisp is driven steadily down and
> this makes it tough for the language to be supported by commercial entities.
> Increasingly, those entities seek "business solution" sales, not language
> sales, since they can still command high prices for that.
> 
> You'll note that very little of anything I've said has anything at all
> to do with the choice of data structure or how comfortable anyone feels
> programming in it or what kind of product can be produced.  These items
> are, at the level of business, very close to irrelevant except insofar as
> someone uses them in a much more strategic way than the Lisp community ever
> has in order to snare a new market.
> 
> What is lamentable in all this is that now is the time for which Lisp was
> designed.  Lisp is a dynamic language, which has always worked against
> it in sales.  Someone could always show some benchmark that proved (or
> more likely hinted) that you could get an extra memory cycle or two out of
> other languages that you couldn't get out of Lisp.  And there was a time
> when that kind of thing mattered.  But now most people have so much compute
> power available to them that it's largely wasted, and people build products
> that are enormously wasteful of speed and size because they know that next
> year's computers will be ever faster and more loaded with memory and disk
> so that none of that matters.  Lisp used to be cited for being too large,
> but most Lisp systems took the issue so seriously they stopped growing in
> image size around 1990 and have held constant to 4-16MB since then, which
> seems tiny for a modern day program doing one task, much less a program
> like Lisp which usually in that memory size has a full language, development
> environment with multiple browsers, compiler, editor, and so on. (Still,
> people remember the lemma: "lisp is big, c is small".  It doesn't matter
> that it's no longer true--it's drilled into them.)  But back to my topic sentence
> for this paragraph, the thing is that Lisp was designed for dynamicity.
> We paid a price in speed, probably a factor of two (even though on a
> case-by-case basis you can usually squeeze it back out if you really have
> to), in order to have most aspects of the language be indirect links, so that
> all kinds of things could be snapped in and out dynamically at runtime.
> Other languages, in the search for those extra machine cycles, refused to
> give up that extra cycle and are fragile by comparison.  This is the time,
> when the web is connecting everything, when the world is changing daily,
> when new datatypes, protocols, file formats, etc. are invented moment
> by moment, when a dynamic language like Lisp should shine.  But making that
> case takes marketing dollars, and the limited resources of Lisp are spent
> on technology.  So the marketing doesn't get done and others beat us out.
> 
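A small sketch of what that dynamicity buys in practice (assuming a
running Common Lisp image with a listener; PRICE and REPORT are made-up
names):

  (defun price (x) (* x 100))
  (defun report (x) (format t "~&price: ~D~%" (price x)))
  (report 3)                    ; prints: price: 300
  (defun price (x) (* x 110))   ; redefine while the system is live
  (report 3)                    ; prints: price: 330 -- no rebuild, no restart

Because REPORT reaches PRICE through the function's name at run time,
the new definition is picked up immediately.
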
> Fortunately, Lisp has ideas that are not easily killed and that are not
> copied in other languages.  If they were, we'd probably just use those other
> languages.  Most of us are after the capability, and don't stand on ceremony
> about what you call the language.  So Lisp survives in spite of itself, and
> in spite of often-poor marketing, and so on.  Because there are important
> ideas many of us know we dare not lose, as it would be too long before they
> were reinvented.
> 
> This is just my personal take on it all.  You can probably tell nearly
> everything I've said here is a subjective assessment, and many are things
> other people would assess differently.  Perhaps others will join with
> some useful counterpoint to correct what I've said or flesh it out.
>  --Kent

[1] Paul Graham pokes fun at him because his CV doesn't mention Lisp.
And Allegro Common Lisp documentation doesn't have "Lisp" on the cover --
it's Allegro CL, and something about dynamic objects, which has been
rumoured to have been bought by some people who thought they were buying
C++.   Lisp is indeed a four letter word.   But like a certain other
four letter word, it is fun.

-- 
Le Hibou
"The reasonable man adapts himself to the world; the unreasonable
one persists in trying to adapt the world to himself. Therefore
all progress depends on the unreasonable man."  -- G.B. Shaw
From: Paolo Amoroso
Subject: Re: Philosophical Bee in my Bonnet
Date: 
Message-ID: <LcYCO1N2ee3mWsVyCXQPGMGtJAiq@4ax.com>
On Thu, 17 May 2001 00:41:36 +1000, "John Flynn" <··········@yahoo.com.au>
wrote:

> this stuff was being developed. A lot of you guys were. So I ask you: why is
> it that the 'industry' as a whole chose to do its most practical work in a
> bare bones language? I understand that the situation is different in

I cannot answer your question, but I can point you to some background
material, i.e. what two well-known Lisp experts wrote about the issues you
raise:

  Richard Gabriel (see the "worse is better" saga)
  http://www.dreamsongs.com/WorseIsBetter.html

  Paul Graham (see the papers section)
  http://www.paulgraham.com/


Paolo
-- 
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://cvs2.cons.org:8000/cmucl/doc/EncyCMUCLopedia/
From: DJC
Subject: Re: Philosophical Bee in my Bonnet
Date: 
Message-ID: <3b03b425.5516750@news.blueyonder.co.uk>
On Thu, 17 May 2001 00:41:36 +1000, "John Flynn"
<··········@yahoo.com.au> wrote:

>Until six years ago, I hadn't touched a computer. I wasn't around when all
>this stuff was being developed. A lot of you guys were. So I ask you: why is
>it that the 'industry' as a whole chose to do its most practical work in a
>bare bones language? I understand that the situation is different in
>academia, but why did the people who actually _built_ the stuff that I'm
>using today (even the academics among them) choose to do it in a language
>that, to all appearances, seems relatively poorly equipped for application
>programming? Was it a (series of) historical accident(s) and little more?

I think the PC was an important factor in the rise of C. More than a
dozen years ago, not only were the capacities of the machines
relatively limited, they virtually all depended on an operating system
(MS-DOS) that provided a very poor interface to the hardware. Writing
usable programs for the PC meant bypassing the OS and going directly to
the hardware. C, as a glorified version of assembly language, designed
for the writing of operating systems, was suitable for the purpose. Too
suitable: the habit has stuck.

I think also that computing has long been dominated by its
mathematico-logical and positivist foundations; the predominance of a
low-level, inside-out view that confuses algorithms with programs. The
result is a narrowly focused concern with making a machine work. We
need to think more of computing as a literature, that is, using
language to create something intangible that relates to people, not
machinery. In such a context LISP, as a high-level expressive language,
has more to give. But just as most writing is not great literature, I
suspect great programming and languages such as LISP will continue to
be rare.
 

-- 
David Clark
<http://www.orditur-telas.com/>
From: David Thornley
Subject: Re: Philosophical Bee in my Bonnet
Date: 
Message-ID: <s6VM6.1249$Dd5.818865@ruti.visi.com>
In article <················@news.blueyonder.co.uk>,
DJC <·····@lostcause.co.uk> wrote:
>On Thu, 17 May 2001 00:41:36 +1000, "John Flynn"
><··········@yahoo.com.au> wrote:
>
>>Until six years ago, I hadn't touched a computer. I wasn't around when all
>>this stuff was being developed. A lot of you guys were. So I ask you: why is
>>it that the 'industry' as a whole chose to do its most practical work in a
>>bare bones language? I understand that the situation is different in
>>academia, but why did the people who actually _built_ the stuff that I'm
>>using today (even the academics among them) choose to do it in a language
>>that, to all appearances, seems relatively poorly equipped for application
>>programming? Was it a (series of) historical accident(s) and little more?
>
I'm going to give you my take on it.

Academia is very important in language choices.  I'm not referring to
any sort of theory or principles here, but students learn stuff that
works and stick to it.

At colleges and universities, you get a lot of highly competent
but inexperienced people with quite a bit of time and very little
money.  This means that they will tend to get fairly inexpensive
equipment and try to get relatively cheap software that they can
change themselves with reasonable effort.

This is how Unix came along.  It was passed from university to
university, and it could be installed and enhanced with student
labor.  Its native language was C.  Once you learned C, you could
theoretically do anything on the machine, and this reduced the
popularity of other languages.

Read Stroustrup's "Design and Evolution of C++".  (Do it.  It
has a great deal of bearing on this topic.)  One of his goals
was to provide a language that could do anything, so that it would
be necessary only to provide one language.  Another was to
provide a language that could be dropped in to an environment
without a whole lot of support.  CL doesn't do well on these.
CL has only been the systems programming language on rather
expensive hardware, and generally doesn't produce small
executables.  (Neither do lots of things nowadays, but
the tradition continues of calling Lisp big and slow.)

Another thing to consider is that the universities got varied
hardware, and it was desirable to pass programs around.  The
early spread of Unix is tied in with "pcc" (the "portable C
compiler") and later on gcc became widespread.

Right now, I maintain an open source project.  It's written in
C.  The reason is that it is multiplatform, and I can count on
enough people to have C compilers.  A similar project is being
written in Common Lisp, and it's not going to be nearly as
portable.  There's lots of platforms without a decent CL
compiler, and some of the others are somewhat restricted.
For example, Macintosh Common Lisp costs about $1000 for
the same level of shippability you get with Metrowerks Codewarrior
(C/C++/Java) for $400.  Obviously, if you're writing programs
for a living, it's a good deal.  It's something of an entry
barrier for hobby programmers writing open source stuff.

That's one strong reason why many things are written in C:
portability.  Nothing else is anywhere near as portable.
When the program is something that has to run the same on a
wide variety of machines (like anything to do with networking),
it's a real advantage to use C.  It's harder to write anything
that actually works in C than in CL, but it's usually easier to
port to a new system.  If many more people port than write,
there's a clear advantage to C.

>I think the PC was an important factor in the rise of C. More than a
>dozen years ago not only were the capacities of the machines
>relatively limited, they virtualy all depended on an operating system
>(MS-DOS) that provided a very poor interface to the hardware.

I don't think this is sufficient to explain why C took over
instead of Pascal.  Remember also that very many PC programmers
learned assembly language, because the screwy architecture of
Intel chips makes it difficult to generate efficient compiled code for.
I've seen arguments that assembly was more portable than C,
the arguer listing various MS-DOS-based products that would
run an assembly program.

>I think also that computing has long been dominated by its
>mathematico-logical and positivist foundations; the predominance of a
>low-level inside out view that confuses algorithms with programs. The
>result is a short-focused concern with making a machine work. We need
>to think more of computing as a literature, that is, using language to
>create something intangible that relates to people not machinery.

Yup.  I've seen lots of very intelligent people saying things like
that for quite a few years.  They're still saying it, because
it still needs saying.  I don't think that is changing any time
soon.

The C/C++ mindset is heavily based on microefficiency and staying
close to the machine.  (See Koenig, "Ruminations on C++".  In
one of the early chapters, he discusses why not all functions should
be automatically virtual.  The first reason why not, and the most
discussed, is microefficiency.  The other two reasons related to
semantics, and were good reasons.)


--
David H. Thornley                        | If you want my opinion, ask.
·····@thornley.net                       | If you don't, flee.
http://www.thornley.net/~thornley/david/ | O-
From: Lars Lundback
Subject: Re: Philosophical Bee in my Bonnet
Date: 
Message-ID: <3b03bfb6.66983406@news.ericsson.se>
John,

The title of your subject is so good! In a recent thread ("Uses of
Lisp") I tried to shift the focus away from the language itself, saying
that "A Common Lisper is a Philosopher and an Artist". There was no
applause ....


On Thu, 17 May 2001 00:41:36 +1000, "John Flynn" 
<··········@yahoo.com.au> wrote:

>I'm mulling over what seems to be a paradox. I'm putting it out here as a
>sincere, if provocative, question because I can't come up with convincing
>answers.

It ceases to be a paradox the moment you put people in the picture.

Common Lispers generally do not excel in building everyday
applications or marketing them. We do not worry too much about GUIs
or connecting to our computing environments, or even the update or
redesign of the 20-year-old Common Lisp standard. It's not that this
cannot be done, it is that we as a community are not that interested,
really, because if we were, the old complaints about Common Lisp would
have been gone long ago. The aficionados, of which I am one, may frown a bit
but that's all.

By now you have probably read the references that Paolo posted, and
Kent's analysis. Maybe Gabriel's last (?) contribution:

  http://www.dreamsongs.com/NewFiles/ProWorseIsBetterPosition.pdf

is to the point. I quote:

"With Lisp and Smalltalk, for example, you are working with their big
abstractions in a ham-fisted way - just fine for prototyping and
understanding larger issues, but not good for minute algorithm and
data structure design. For these we need craftsmen, not storyboard
designers."

>After a few weeks of study, I can see no obvious reason why Lisp can't be an
>extremely useful and practical general-purpose language. It seems quite
>simple in essence, not so difficult to learn, obviously very versatile, has
>plenty of power to express whatever ideas we need to express, doesn't limit
>our thinking in any obvious way, is plenty fast enough, runs on any machine
>that matters, and has matured during 40-odd years of demanding use by some
>of the brightest people in the field. And I get real pleasure and
>satisfaction from using it. That's good enough for me.

Common Lisp _is_ an extremely useful general-purpose language. My view
is simply that Lispers are not interested in delivering that kind of
application. If they were, you would have found them.

>But when I look around at the software I use every day, everything from the
>simple stuff: text editors, mail & news programs, word processors,
>spreadsheets right down to the stuff that's not so simple: compilers, web
>browsers, operating systems, RDBMSs, window systems, opengl, communications
>infrastructure, multimedia facilities - it's virtually all in bare-bones C
>(interspersed with some pre-standard-library C++).
>
>And it's mostly pretty _good_ software. It's reliable, fast, easy to use,
>gets the job done well.

That is because of experience and of feedback from many users.
High-level languages and brilliant programmers are only a good
starting point.

>Until six years ago, I hadn't touched a computer. I wasn't around when all
>this stuff was being developed. A lot of you guys were. So I ask you: why is
>it that the 'industry' as a whole chose to do its most practical work in a
>bare bones language? I understand that the situation is different in
>academia, but why did the people who actually _built_ the stuff that I'm
>using today (even the academics among them) choose to do it in a language
>that, to all appearances, seems relatively poorly equipped for application
>programming? Was it a (series of) historical accident(s) and little more?
>
>I suppose one reaction might be: Why does it matter? If you know that Lisp
>is capable of doing anything that can be done in C or C++, requires you to
>waste less time reinventing wheels, solves all tricky memory management
>problems, requires fewer lines of code, and allows you to think at a higher
>level without sacrificing too much efficiency, why are you concerned about
>it? Well, that's precisely _why_ I'm concerned about it. I don't understand
>it. I don't understand why C++ hackers are building whole desktop
>environments, office suites, multimedia tools, and more, with simple
>command-line compilers, while Lisp programmers who own commercial Lisp tools
>are talking about hooking up CORBA bridges to Java to give an application a
>simple GUI.

CLIM is way too complicated and expensive for many applications. Btw,
when I got interested in Lisp, GUIs were one of _the_ things we wanted to
improve. But few of the Lisp-based user interfaces today (that I have
seen) go beyond ordinary 'icons, buttons and pull-down menus'.

>But I'm less interested here in practical issues that what (if anything)
>underlies them.
>
>When I watch my uncle sketching with a pencil, I always get a thrill to see
>him bring a scene or a person alive with a few bold strokes. It's the same
>when I hear my cousin playing a guitar. I could play the same notes on the
>most finely crafted instrument, and make the same sounds, but not the music.
>
>Give my uncle a fine set of paints and brushes; you would not improve his
>art. Give my cousin a better guitar, and you'll get better sound but not
>better music.
>
>I wonder whether something similar happens with programming languages?
>

Programming is rarely a one-man show. A symphony orchestra does not
'sound' better if all the violinists have a Stradivarius.

>I honestly don't know why the "better" languages are not the ones being used
>to make the (visibly, palpably, 'purchaseably') best software in widespread
>use today. (And neither do I doubt that Common Lisp is actually,
>objectively, _better_ than C++).


>At bottom, what I'm wondering is whether, if a pencil is not good enough, no
>expensive paints and brushes are going to be good enough either. And if a
>pencil _is_ good enough, perhaps the more advanced tools will be
>unnecessarily large and unwieldy.

You are too general. "Advanced", "large", "unwieldy" - related to
what, or for whom? Yourself only, a research team or the programming
departments of large corporations?

>I dunno. It just seems to me that the quantity (and sometimes quality) of
>software is inversely proportional to the ... ? ... ? ... quality ... ? ...
>? ...? of the languages used to create them, and I can't see why.
>
>So I'd be curious to hear any opinions on (a) why my assumptions are all
>wrong and reality is not that way at all, or (b) why it is so.

Because man is the way he is. According to statistics, greying guys
such as me are the safest car drivers, and young people with
lightning-fast reflexes the worst. Why?

Design and implementation of software is basically not different from
other human intellectual activities.  We just wish it were because we
see the fancy tools. Or forget that languages do not build software. 

Regards, Lars
From: John Flynn
Subject: Re: Philosophical Bee in my Bonnet
Date: 
Message-ID: <gI2N6.199$Ld4.6163@ozemail.com.au>
Thanks for all the replies. Together, they've given me a much better
understanding of the history, culture and economics of language choice and
the influence of hardware constraints & costs.

I've read of Lisp's reputation ("big and slow") as a myth (which today it
certainly is). I hadn't considered the significance of this myth actually
being true during the PC software industry's formative years. It explains a
lot.

It now seems that Lisp was born for a time when hardware would be cheap,
disk space irrelevant, processors fast, memory plentiful, and programmers'
mindset no longer constrained by economies of same. That's all here today,
so it seems the conditions for a renaissance are met.

Lars Lundback takes up the psychological subplot:

"Lars Lundback" <·············@era.ericsson.se> wrote in message
······················@news.ericsson.se...

> [...] I tried to shift the focus away from the laguage itself, saying
> that "A Common Lisper is a Philosopher and an Artist". There were no
> applauds ....
[...]
> It ceases to be a paradox the moment you put people in the picture.
>
> Common Lispers generally do not excel in building everyday
> applications or marketing them. [...]
>
> By now you have probably read the references that Paolo posted, and
> Kent's analysis. Maybe Gabriel's last (?) contribution:
>
>   http://www.dreamsongs.com/NewFiles/ProWorseIsBetterPosition.pdf
>
> is to the point. I quote:
>
> "With Lisp and Smalltalk, for example, you are working with their big
> abstractions in a ham-fisted way - just fine for prototyping and
> understanding larger issues, but not good for minute algorithm and
> data structure design. For these we need craftsmen, not storyboard
> designers."

OK. This is interesting. In my (limited and possibly atypical) experience of
working with C and C++ programmers (or, _people_ who _use_ C and C++), I've
seen little evidence of an aesthetic appreciation for finely hand-crafted
algorithms and data structures, with attention to minute detail.

Instead, I've seen an obsession with efficiency, often ill-conceived and
sometimes ludicrously misdirected. I've also seen people try to _create_ big
abstractions in C++ in a "hamfisted way", only to end up with a sprawling
mess of generalities that don't handle the specifics too well. (Witness the
religious adherence to OO doctrine, wherein "information" is hidden for no
real benefit. You find yourself digging through thick layers of wrappers and
interfaces to get at a little piece of data you need, and then find it only
offers you a const reference when you really needed a pointer, so you have
to bypass the safety mechanisms that were put there by the omniscient designer
to protect you from yourself. Bah!).

But Lars, if what you're saying about the psychology of Lispers is correct
(i.e. we/they're overt or closet philosophers and artists), then I think the
passage you've quoted reveals Gabriel's Lispiness more clearly than it
reflects the motivations of Joe C. (Who would appreciate micro-elegance more
than an artist/philosopher? And for its own sake, no less ...).

It's an interesting perspective though. I don't want to misrepresent your
gently humorous observations, but reading between the lines you and Gabriel
seem to be saying that:

* Lisp itself is a masterpiece, but is seldom used to create masterpieces.

* This is not the fault of Common Lisp the language, but a consequence of
its being used by:

(a) People whose heads are high in the clouds;

(b) People who value the excellence of their tools even more than the
pursuit of excellence _with_ them, or ...

(c) People who pursue excellence in directions that lead them away from the
mundane practicalities of mass-market software products.

Well ... in light of some of the answers I've received about why Lisp wasn't
as widely used as it might have been, perhaps you're onto something. People
who stuck with Lisp because they found other popular languages shoddy by
comparison must have been motivated by something deeper than convenience or
fast money. (As it turns out, they no longer have to choose between
affordability and "QWAN", so I hope their patience and perseverance are
rewarded with something more than grey hair).

> Because man is the way he is. According to statistics, greying guys
> such as me are the safest car drivers, and young people with
> lightning-fast reflexes the worst. Why?

It is because Volvos and their silvering cargo travel well together, and
nobody ever dies in a Volvo.

> Design and implementation of software is basically not different from
> other human intellectual activities. We just whish it were because we
> see the fancy tools. Or forget that languages do not build software.

OK. People build software.

When those people use Java, they're not paying attention to minute
details of hand-crafted algorithms and data structures though, are they?
They're building palaces of polystyrene, roads of rubber, fountains of foam,
parks of plastic; not a living thing in sight (in Gabriel's worse-is-better
terms).

Maybe Java isn't a good example though. Besides being bland and featureless,
it doesn't seem to be getting much work done either. I've yet to see a
sizeable - or even simple - proof of concept that's impressive in any way,
in spite of _huge_ amounts of money thrown at it.

In my opinion, the opposite is true of C++. But, suffice to say people are
being forced by the language to pay a lot of attention to detail that is,
for most purposes - you must admit - largely irrelevant.

So it's likely I'm missing your point here.

It seems to me that Common Lisp now "flies at the right height" - as Dennis
Ritchie once said about C. Perhaps, now that Java has broken the shackles of
C++ and failed to go anywhere, Lisp is in a position to attract pragmatic
programmers who want to get some simple, practical work done in a more
pleasant way. Which is, IMO, worth a hell of a lot. (Those whose heads are
in the clouds can always follow the yellow brick road to Mozart (oz), or
something suitably abstract and rarefied. And those who want to do something
_really_ complex already have what they need).

In the end, I have no practical reason to care who uses Lisp and who
doesn't. I was simply curious about why more people aren't taking advantage
of what seems, to me, a stunningly good deal. All of the answers have given
me clues, so thanks to all who took the time...

Regards,
J.
From: mikel evins
Subject: Re: Philosophical Bee in my Bonnet
Date: 
Message-ID: <9e2hq1$ii2@dispatch.concentric.net>
"John Flynn" <··········@yahoo.com.au> wrote in message
·······················@ozemail.com.au...

> Lars Lundback takes up the psychological subplot:

> > Because man is the way he is. According to statistics, greying guys
> > such as me are the safest car drivers, and young people with
> > lightning-fast reflexes the worst. Why?
>
> It is because Volvos and their silvering cargo travel well together, and
> nobody ever dies in a Volvo.

Then how come silvering old guys with shiny new silvery Porsches can get
special insurance deals from certain companies because Porsche drivers get
in fewer accidents than average?

:-)
From: John Markus Bjorndalen
Subject: Re: Philosophical Bee in my Bonnet
Date: 
Message-ID: <hvg0e33rdb.fsf@johnmnb.jmbnet>
"mikel evins" <·······@mikelevins.com> writes:

> Then how coming silvering old guys with shiny new silvery Porsches can get
> special insurance deals from certain companies because Porsche drivers get
> in fewer accidents than average?

If you want an attempt at a serious answer: 

Quick reflexes are only going to help you when you get into
trouble. The trick is to avoid the trouble in the first place, which
is one of the things that a slightly more defensive driving style and
experience would help you with (actually, it's a question of
controlling the time you are exposing yourself to risk and the
type/level of risk you expose yourself to).

Another problem with reflexes is that you need to train them, which
isn't too hard for simple things (rear wheels slipping for
instance). More complex things, or things happening less frequently
can be a bit more difficult to manage, and hopefully you won't have to
train those reflexes in dangerous situations such as dense traffic ;-)

-- 
	// John Markus Bjørndalen
From: Thaddeus L Olczyk
Subject: Re: Philosophical Bee in my Bonnet
Date: 
Message-ID: <3b05ef5e.78796218@nntp.interaccess.com>
On 18 May 2001 07:08:49 GMT, "mikel evins" <·······@mikelevins.com>
wrote:

>
>"John Flynn" <··········@yahoo.com.au> wrote in message
>·······················@ozemail.com.au...
>
>> Lars Lundback takes up the psychological subplot:
>
>> > Because man is the way he is. According to statistics, greying guys
>> > such as me are the safest car drivers, and young people with
>> > lightning-fast reflexes the worst. Why?
>>
>> It is because Volvos and their silvering cargo travel well together, and
>> nobody ever dies in a Volvo.
>
>Then how coming silvering old guys with shiny new silvery Porsches can get
>special insurance deals from certain companies because Porsche drivers get
>in fewer accidents than average?
>
That's because Porsches are expensive. Whenever I see a Porsche, Jaguar,
or Corvette in front of or behind me I drive more carefully because
financially it would hurt to hit one of those. So not only do these
drivers benefit from their own driving ability, they benefit from
others around them driving more carefully.
From: George Neuner
Subject: Re: Philosophical Bee in my Bonnet
Date: 
Message-ID: <3b180eab.171433628@helice>
On Sat, 19 May 2001 04:00:36 GMT, ······@interaccess.com (Thaddeus L
Olczyk) wrote:

>On 18 May 2001 07:08:49 GMT, "mikel evins" <·······@mikelevins.com>
>wrote:
>
>>Then how coming silvering old guys with shiny new silvery Porsches can get
>>special insurance deals from certain companies because Porsche drivers get
>>in fewer accidents than average?
>>
>That's because Porches are expensive. Whenever I see a Porches Jaguar,
>Corvette in front or behind me I drive more carefully because
>financially it would hurt to hit one of those. So not only do these
>drivers have benifit from their own driving ability they benifit from
>others around them driving more carefully.

Hmm ... must be Porsche-specific or maybe location?  My beamer cost a
lot more than the average car on the road and nobody seems to give it
any special consideration.  

Can't say I have noticed other expensive cars getting any either.  And
come to think of it, I haven't noticed anybody backing off the few
Porsches I do see.


George
From: John Flynn
Subject: Re: Philosophical Bee in my Bonnet
Date: 
Message-ID: <9SnN6.724$Ld4.32698@ozemail.com.au>
"mikel evins" <·······@mikelevins.com> wrote in message
···············@dispatch.concentric.net...
>
> Then how coming silvering old guys with shiny new silvery Porsches can get
> special insurance deals from certain companies because Porsche drivers get
> in fewer accidents than average?
>
> :-)

It is because relatively few accidents occur while posing for photographs
and admiring one's reflection in the polish. (Most of 'em happen on the
road, I hear).

;-)
From: mikel evins
Subject: Re: Philosophical Bee in my Bonnet
Date: 
Message-ID: <9e53me$cfl@dispatch.concentric.net>
"John Flynn" <··········@yahoo.com.au> wrote in message
························@ozemail.com.au...
>
> "mikel evins" <·······@mikelevins.com> wrote in message
> ···············@dispatch.concentric.net...
> >
> > Then how coming silvering old guys with shiny new silvery Porsches can
get
> > special insurance deals from certain companies because Porsche drivers
get
> > in fewer accidents than average?
> >
> > :-)
>
> It is because relatively few accidents occur while posing for photographs
> and admiring one's reflection in the polish. (Most of 'em happen on the
> road, I hear).
>
> ;-)

Whoops! I've been going about this all wrong. Better not let my insurance
agent catch me on the road -- or see the bird poop I haven't cleaned off...

:-)

--me
From: Kent M Pitman
Subject: Re: Philosophical Bee in my Bonnet
Date: 
Message-ID: <sfwofsr3zew.fsf@world.std.com>
"John Flynn" <··········@yahoo.com.au> writes:

> It now seems that Lisp was born for a time when hardware would be cheap,
> disk space irrelevant, processors fast, memory plentiful, and programmers'
> mindset no longer constrained by economies of same. That's all here today,
> so it seems the conditions for a renaissance are met.

Maybe.  But industry is full of cute little pat phrases that substitute for
reasoning.

 - "Been there, done that."  [I tried Lisp and it didn't work for me.
   Therefore, it never could.]

 - "There are more modern things." [Only new technology can be smart.
   No one ever did anything smart in the past.  Maturity has no value.
   Follow only trendy stuff.]

 ...

Industry loves new and novel.  Investors are shy about investing in
things that have failed (by their metric of failed, which is usually
"didn't yield tenfold returns inside of 3 years") before.  They'd 
often rather invest in a complete unknown.

Some have suggested the greatest barrier to Lisp's acceptance at this
point is its name.  That is, that people recognize it as something
they have seen before.  On the Drew Carey (comedy) tv show this
evening, Drew's pals were talking about seeing an old friend and they
were salivating at meeting her, too.  He said the woman already knew
them, but they insisted "no, our strength is that we're forgettable.
so every few years we get to try again".  One of his pals proudly
related the story of having dated someone who told him about her worst
date, not ever realizing the story was about him!  Lisp is not thus
forgettable, and often doesn't get that second chance to make a first
impression.

So we could change the name.  And we'd have to hide the parens.  Some
have tried that--Dylan was such an example.  New name.  Parens gone.
Many otherwise-similar ideas.  But the parens have a purpose and
getting rid of them hairs up a lot of things.  Dylan also lost most of
its support base by alienating Lispers, and then failed to attract the
audience it sought instead.  (Also due to Java, probably, though in
very different ways.  Dylan expected C++ to die of its own weight and
was going to clean up on refugees.  But Java grabbed that niche by making
its syntax more compatible, by hitting the market at the right moment,
by co-marketing with the hot topic of the day - web tools, etc.)
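
To illustrate what the parens buy: because code is itself nested lists,
a macro can take code apart and put it back together with ordinary list
operations.  A toy sketch -- WITH-TIMING is a hypothetical macro, not
part of the standard:

  (defmacro with-timing (&body body)
    `(let ((start (get-internal-real-time)))
       (prog1 (progn ,@body)
         (format t "~&took ~D ticks~%"
                 (- (get-internal-real-time) start)))))

  (with-timing (expt 2 10000))   ; runs the form, then reports elapsed ticks

Syntaxes that hide that tree structure make this sort of thing much more
awkward, which is part of what got haired up in Dylan.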

> reading between the lines [Lars] and Gabriel seem to be saying that:
> 
> * Lisp itself is a masterpiece, but is seldom used to create masterpieces.

How often are masterpieces created?  I might prefer to say Lisp has a 
conceptually ergonomic design.   It's at least pleasant to use.  And its
programs might grow up to be masterpieces, even if not every one does.

> * This is not the fault of Common Lisp the language, but consequence of
> being used by:
> 
> (a) People whose heads are high in the clouds;

Well, the language allows you more rope to hang yourself.  Some languages
are more restrictive, and this leads to more normalization of design
(though sometimes that normalization amounts to declaring certain things
impossible).
Business does like things that keep people in line, though, even when it
costs more to do.  Management is no small, discountable aspect of process.
It's possible Lisp is just a niche language for people who are capable of
usefully being free thinkers.
 
> (b) People who value the excellence of their tools even more than the
> pursuit of excellence _with_ them, or ...

I'm sure there is some of this in any community.  Surely there must be C
programmers equally guilty of this.  I'd ignore this.
 
> (c) People who pursue excellence in directions that lead them away from the
> mundane practicalies of mass-market software products.

Probably some.
 
> Well ... in light of some of the answers I've received about why Lisp wasn't
> as widely used as it might have been, perhaps you're onto something. People
> who stuck with Lisp because they found other popular languages shoddy by
> comparison 

Well, exactly.  For commodity problems, commodity solutions suffice and
are often commercially preferred.  Lispers tend to be, not coincidentally,
people who like solving problems that others have declared impossible. 
Often because, with commodity/commercial tools, they are impossible.

Remember that Lisp was designed originally to meet the needs of AI researchers
concentrating on the hardest of problems: planning, vision systems, language
processing, and so on, dating back decades.  Languages like C and Java were not
designed and tuned specifically for such problems--they were designed and
tuned for specific, more short-term goals.  So Lisp has its continued niche
with the really hard stuff.

BUT again, human behavior plays in. 

 Sales person:  Want to buy Lisp?

 Customer: Do I really need it?

 Sales person:  Not today maybe, but tomorrow.

 Customer: Then I'll buy C today.

 [Time passes.]

 Customer: This is painful.

 Lisper: You should use Lisp.

 Customer: I would, but I'm used to C.  Maybe some other time I'll
   look at Lisp.  I'm sure I can stretch this C thing a little farther.

> Maybe Java isn't a good example though. Besides being bland and featureless,
> it doesn't seem to be getting much work done either. I've yet to see a
> sizeable - or even simple - proof of concept that's impressive in any way,
> in spite of _huge_ amounts of money thrown at it.
> 
> In my opinion, the opposite is true of C++. But, suffice to say people are
> being forced by the language to pay a lot of attention to detail that is,
> for most purposes - you must admit - largely irrelevant.
> 
> So it's likely I'm missing your point here.

Maybe it just hasn't played out yet.  I don't think the investment in C++
libraries is adequate.  C++ is also generally believed by many to be its
own worst enemy.  C is a much more stable long-term language.  But neither
has the investment in libraries really.  Maybe C# will be a player.


 
> It seems to me that Common Lisp now "flies at the right height" - as Dennis
> Ritchie once said about C. Perhaps, now that Java has broken the shackles of
> C++ and failed to go anywhere, Lisp is in a position to attract pragmatic
> programmers who want to get some simple, practical work done in a more
> pleasant way.

I think the thing still standing in the way of this is the willingness of
Lisp marketeers to take a risk.  The big companies seem risk-averse, which
I suspect will make it hard for them to grab market turf.  And getting someone
to invest $$ in being aggressive may be hard.  Maybe the little Lisp vendors
like Corman will be more brave in aggressive pricing and positioning, etc.,
having less to lose in some ways.  Or having at least fewer people to 
convince...  I think Franz could have gone head to head with Visual Basic
a few years ago -- I don't know if it still could, since I perceive VB as
somewhat having lost its punch.  But I think Franz hasn't the will to take
the necessary risks to break into this market.

> Which is, IMO, worth a hell of a lot. (Those whose heads are
> in the clouds can always follow the yellow brick road to Mozart (oz), or
> something suitably abstract and rarefied. And those who want to do something
> _really_ complex already have what they need).
> 
> In the end, I have no practical reason to care who uses Lisp and who
> doesn't.

Enough people have to use it for ongoing implementations, whether commercial
or community maintained, to continue to be developed.  Otherwise, investment
in writing code in it is wasted.  I think that's why most people care. 
There is a critical mass issue.  For now, critical mass is satisfied.  But
it doesn't hurt to keep up the influx of new entrants to the community to
keep it healthy.

> I was simply curious about why more people aren't taking advantage
> of what seems, to me, a stunningly good deal. All of the answers have given
> me clues, so thanks to all who took the time...
From: James Hague
Subject: Re: Philosophical Bee in my Bonnet
Date: 
Message-ID: <3b053c75_3@newsfeeds>
Kent M Pitman wrote:
>
> Maybe the little Lisp vendors
> like Corman will be more brave in aggressive pricing and positioning,
etc.,
> having less to lose in some ways.

How so?  It's free for personal use, comes with full source code (even for
the C parts), and costs $200 for the registered version.  Maybe $200 is a
bit steep for some, but commercial Lisps for Windows start at $799 (and go
through the roof after that) so I don't have a problem with it.

I'm a registered Corman Lisp user, partially because I find it comfortable
and partially because I can't bring myself to plunk down the substantial
fees for ACL.

> convince...  I think Franz could have gone head to head with Visual Basic
> a few years ago -- I don't know if it still could, since I perceive VB as
> somewhat having lost its punch.  But I think Franz hasn't the will to take
> the necessary risks to break into this market.

Wasn't the personal edition of ACL priced at $500 a few years ago?  I keep
thinking that it was, but I don't have any old ads to look at.  In a couple
of years, if Corman Lisp keeps improving, I could see the difference between
it and a $2500 product being difficult to justify.  But right now there's no
doubt that ACL is an order of magnitude better in the code generation
department.

James

From: Paolo Amoroso
Subject: Re: Philosophical Bee in my Bonnet
Date: 
Message-ID: <sEIFO47OEgtmjQY8bBxRxJI0m7vf@4ax.com>
On Fri, 18 May 2001 07:10:47 GMT, Kent M Pitman <······@world.std.com>
wrote:

> How often are masterpieces created?  I might prefer to say Lisp has a 
> conceptually ergonomic design.   It's at least pleasant to use.  And its
> programs might grow up to be masterpieces, even if not every one does.

It's funny that while there seems to be interest in ergonomics in the
workplace, much of the effort is put on things like placement and quality
of furniture, office size, lighting conditions and other practical issues,
but not much is done for important intellectual tools such as programming
languages.

If a workplace is noisy or dirty, workers are even willing to go on strike
and their complaints often get media coverage. But when it comes to bad
software tools, nobody seems to complain.


Paolo
-- 
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://cvs2.cons.org:8000/cmucl/doc/EncyCMUCLopedia/
From: ·@b.c
Subject: Re: Philosophical Bee in my Bonnet
Date: 
Message-ID: <skccgt0nc8umct20aqjsu7jlctlu96gnho@4ax.com>
On Fri, 18 May 2001 18:41:56 +0200, Paolo Amoroso <·······@mclink.it> wrote:

>If a workplace is noisy or dirty, workers are even willing to go on strike
>and their complaints often get media coverage. But when it comes to bad
>software tools, nobody seems to complain.

Noise and dirt affect your health.  Bad software tools affect your employer's
health.
From: James Hague
Subject: Re: Philosophical Bee in my Bonnet
Date: 
Message-ID: <3b0536b4$1_4@newsfeeds>
John Flynn wrote:
>
> This is interesting. In my (limited and possibly atypical) experience of
> working with C and C++ programmers (or, _people_ who _use_
> C and C++), I've seen little evidence of an aesthetic appreciation
> for finely hand-crafted algorithms and data structures, with
> attention to minute detail.
>
> Instead, I've seen an obsession with efficiency, often ill-conceived
> and sometimes ludicrously misdirected.

Often it is more of an indirect obsession with efficiency, meaning that
there is an obsession of sorts, but it isn't reflected in the code that is
written.  There is the assumption that C++ is fast and efficient, therefore
programs written in C++ are fast and efficient.  This belief is often clung
to even if the C++ code is so layered and full of abstraction and poor
algorithms that it is difficult to get any kind of feel for performance.

To be fair, the one fault of Lisp in performance is that implementations can
have hotspots that are outside the control of the programmer.  For example,
there was a comp.lang.lisp post from 1999 containing benchmark results for
three Windows Lisps.  On the BOYER benchmark, LispWorks was 12 times slower
than ACL.  On the TRIANG benchmark, LispWorks was 3.8 times *faster* than
ACL.  On that same benchmark, Corman Lisp was 28 times slower than
LispWorks.  C++ compiler benchmarks are rarely that wacky.
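
For anyone who wants to check their own implementation for such
hotspots, the standard TIME macro is usually enough.  A sketch, assuming
you have the Gabriel benchmark sources handy; RUN-BOYER is a made-up
entry point, not the suite's real name:

  (load (compile-file "boyer.lisp"))  ; compile first, or you time the interpreter
  (time (run-boyer))                  ; prints timing info (and, in most Lisps, consing)

Running the same form across implementations is the quickest way to find
out whether you have landed on one of those wacky cases.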

James

From: Raffael Cavallaro
Subject: Why Lisp languishes
Date: 
Message-ID: <raffael-CC90CB.00035122052001@news.ne.mediaone.net>
In article <··················@ozemail.com.au>, "John Flynn" 
<··········@yahoo.com.au> wrote:

>* Lisp itself is a masterpiece, but is seldom used to create masterpieces.
>
>* This is not the fault of Common Lisp the language, but consequence of
>being used by:
>
>(a) People whose heads are high in the clouds;
>
>(b) People who value the excellence of their tools even more than the
>pursuit of excellence _with_ them, or ...
>
>(c) People who pursue excellence in directions that lead them away from the
>mundane practicalies of mass-market software products.

A more sobering view is that Lisp, being the most advanced language, was 
first to arrive at the ultimate truth about software:

There are no truly interesting applications to be written. Anything 
truly creative, truly interesting, would be AI complete, and hence, for 
the present, unattainable in software.

What's left are rehashes of stuff that was done, or at least roughed 
out, twenty or thirty years ago - that is to say, solved problems. C/C++ 
and other lower forms of computer life are good at grinding out resource 
efficient solutions to already solved problems. Lisp is better for 
solving them in the first place. But in a world where there are no more 
interesting problems to solve (save the one paramount one), Lisp 
languishes.

Raf

-- 

Raffael Cavallaro, Ph.D.
·······@mediaone.net
From: glauber
Subject: Re: Why Lisp languishes
Date: 
Message-ID: <892f97d1.0105220808.35885e7c@posting.google.com>
Raffael Cavallaro <·······@mediaone.net> wrote in message news:<·····························@news.ne.mediaone.net>...

> A more sobering view is that Lisp, being the most advanced language, was 
> first to arrive at the ultimate truth about software:
> 
> There are no truly interesting applications to be written. Anything 
> truly creative, truly interesting, would be AI complete, and hence, for 
> the present, unattainable in software.
> 
> What's left are rehashes of stuff that was done, or at least roughed 
> out, twenty or thirty years ago - that is to say, solved problems. C/C++ 
> and other lower forms of computer life are good at grinding out resource 
> efficient solutions to already solved problems. Lisp is better for 
> solving them in the first place. But in a world where there are no more 
> interesting problems to solve (save the one paramount one), Lisp 
> languishes.


A very interesting point of view. However, new generations of
programmers need to tackle both the important and unimportant things.
Looking at the state of the programming world, the first true AI will
be done in Java (not long ago I would have said Perl!), not in Lisp.
Later it will be commercialized in C# (ouch!).

g
From: Donald Fisk
Subject: Re: Why Lisp languishes
Date: 
Message-ID: <3B0A8738.FC1696A7@enterprise.net>
glauber wrote:

> A very interesting point of view. However, new generations of
> programmers need to tackle both the important and unimportant things.
> Looking at the state of the programming world, the first true AI will
> be done in Java (not long ago i would have said Perl!), not in Lisp.
> Later it will be commercialized in C# (ouch!).

I cannot see any way of building an innovative AI system in a language
as inexpressive as Java.   I'm sorry, but the human wave approach is
about as likely to work as monkeys with typewriters.

The only way it could be done is by writing a Lisp or Prolog interpreter
in Java (you at least don't need to worry about garbage collection)
and then using that to build your AI.   Or use one that already exists,
such as Kawa.   But that would be cheating.
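
The core of such an interpreter is tiny in any host language.  A toy
sketch in Common Lisp (the Java version has the same shape, only
longer), handling nothing but numbers, variables, QUOTE, IF and
one-argument LAMBDA application, with no error handling at all:

  (defun toy-eval (form env)
    (cond ((numberp form) form)
          ((symbolp form) (cdr (assoc form env)))
          ((eq (first form) 'quote) (second form))
          ((eq (first form) 'if)
           (if (toy-eval (second form) env)
               (toy-eval (third form) env)
               (toy-eval (fourth form) env)))
          (t   ; ((lambda (x) body) arg)
           (let ((fn (first form))
                 (arg (toy-eval (second form) env)))
             (toy-eval (third fn)
                       (acons (first (second fn)) arg env))))))

  (toy-eval '((lambda (x) (if x 'yes 'no)) 1) '())   ; => YES

Everything beyond that -- a reader, macros, proper environments, garbage
collection -- is where the real work is, which is why using one that
already exists, such as Kawa, is the sensible route.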

> g

-- 
Le Hibou
"The reasonable man adapts himself to the world; the unreasonable
one persists in trying to adapt the world to himself. Therefore
all progress depends on the unreasonable man."  -- G.B. Shaw
From: glauber
Subject: Re: Why Lisp languishes
Date: 
Message-ID: <892f97d1.0105250634.5869e51e@posting.google.com>
Donald Fisk <················@enterprise.net> wrote in message news:<·················@enterprise.net>...
> glauber wrote:
> 
> > A very interesting point of view. However, new generations of
> > programmers need to tackle both the important and unimportant things.
> > Looking at the state of the programming world, the first true AI will
> > be done in Java (not long ago i would have said Perl!), not in Lisp.
> > Later it will be commercialized in C# (ouch!).
> 
> I cannot see any way how it is capable of building an innovative
> AI sytem in a language as inexpressive as Java.   I'm sorry but the
> human wave approach is about as likely to work as monkeys with
> typewriters.

Yes, give them enough monkeys (programmers), time, money and
typewriters (hardware) and they'll get it done. About 20 years ago in
"Gödel, Escher, Bach", Douglas Hofstadter predicted that we would
never have a computer "smart enough" to play chess better than a human
chess master. He was wrong, because he was thinking that building
subtle intelligence into the chess-playing program was the way to go.
What finally did it was massive parallelism using the same "stupid"
techniques that have always been available. The elegant solution
doesn't always win.

My progression in learning how to program went: Basic -> Pascal -> C
-> Perl -> Java -> Lisp. If i had started with Lisp instead of
finishing with it, things might have been a lot different in my
career.



> The only way it could be done is by writing a Lisp or Prolog interpreter
> in Java (you at least don't need to worry about garbage collection)
> and then using that to build your AI.   Or use one that already exists,
> such as Kawa.   But that would be cheating.


As somebody said, any program of sufficient complexity includes within
it a buggy and incomplete Lisp interpreter!

g
From: Wade Humeniuk
Subject: Re: Why Lisp languishes
Date: 
Message-ID: <9elud7$gkt$1@news3.cadvision.com>
"glauber" <··········@my-deja.com> wrote in message
·································@posting.google.com...
> My progression in learning how to program went: Basic -> Pascal -> C
> -> Perl -> Java -> Lisp. If i had started with Lisp instead of
> finishing with it, things might have been a lot different in my
> career.
>

Interesting progression; mine was
Basic->Fortran->Assembler->Cybil(Pascal)->C->C++->Scheme->Common Lisp.  I
was first exposed to Lisp around the time I was introduced to Basic.  A
seed planted long ago.

> As somebody said, any program of sufficient complexity includes within
> it a buggy and incomplete Lisp interpreter!

That is one of the reasons I wondered why I was not using Lisp.  I was
always implementing lists for my programming jobs.  It got quite annoying.
I even went so far as implementing the basic lisp list functions in a C
library so I had some tools handy.

Wade
From: Donald Fisk
Subject: Re: Why Lisp languishes
Date: 
Message-ID: <3B0D897E.48EC0926@enterprise.net>
Wade Humeniuk wrote:
> 
> "glauber" <··········@my-deja.com> wrote in message
> ·································@posting.google.com...
> > My progression in learning how to program went: Basic -> Pascal -> C
> > -> Perl -> Java -> Lisp. If i had started with Lisp instead of
> > finishing with it, things might have been a lot different in my
> > career.
> >
> 
> Interesting progression, mine was,
> Basic->Fortran->Assembler->Cybil(Pascal)->C->C++->Scheme->Common Lisp.  I
> was first exposed to Lisp around the Basic introduction.  A seed planted
> long ago.

For me, it was Fortran, then Basic, which I used for AI.   Then a friend
told me that if I was doing AI I should learn Lisp.   I bought my copy of
Winston and Horn 1st Edition, and he gave me his password for a machine
at Cambridge which he no longer used.   It was batch processed and I had
to connect to it over the phone line using an acoustic coupler, but it
was a start.   In my first job I tracked down an implementation of PSL
that once ran on a Burroughs mainframe I used but was archived, and got
it put back.   I then alternated between using my own buggy Lisp
interpreter and PS Scheme.   Along with one other person whom I
interviewed and worked with, I may well be the only person to have done
commercial Lisp programming in Hong Kong.   Unless anyone else here
knows better.

> > As somebody said, any program of sufficient complexity includes within
> > it a buggy and incomplete Lisp interpreter!

Philip Greenspun.   There is, though, my own corollary of it: that
any Lisp program of sufficient complexity includes within it a buggy
and incomplete Snobol4 interpreter.

> That is one of the reasons I wondered why I was not using Lisp.  I was
> always implementing lists for my programming jobs.  It got quite annoying.
> I even went so far as implementing the basic lisp list functions in a C
> library so I had some tools handy.

<aol />

> Wade

-- 
Le Hibou
"The reasonable man adapts himself to the world; the unreasonable
one persists in trying to adapt the world to himself. Therefore
all progress depends on the unreasonable man."  -- G.B. Shaw
From: Christopher Stacy
Subject: Re: Why Lisp languishes
Date: 
Message-ID: <u4ru5lxdm.fsf@spacy.Boston.MA.US>
FORTRAN IV, BASIC, APL, Assembler, C, Lisp, Perl, Java,
with a smattering of many others thrown in here and
there along the way.  The ones I programmed in the most,
by far: BASIC, APL, Assembler, Lisp.
From: Nicholas Geovanis
Subject: Re: Why Lisp languishes
Date: 
Message-ID: <Pine.HPX.4.10.10105221035460.3273-100000@merle.acns.nwu.edu>
On Tue, 22 May 2001, Raffael Cavallaro wrote:

> A more sobering view is that Lisp, being the most advanced language, was 
> first to arrive at the ultimate truth about software:
> There are no truly interesting applications to be written. Anything 
> truly creative, truly interesting, would be AI complete, and hence, for 
> the present, unattainable in software.
> 
> What's left are rehashes of stuff that was done, or at least roughed 
> out, twenty or thirty years ago - that is to say, solved problems.

True, but your observation applies to a different domain than that which
the earlier poster spoke of. He was specifically concerned with the
application of lisp to the "solved problems", not to the "Great Unsolved
Problem". In the realm of the "solved problems", the only criterion which
matters is the overall contribution to the sponsoring corporation's
financial well-being, the same criterion which would be applied to a crane
or punch-press or two-way radio. There are no other important 
considerations. In that context, the chosen solution to the "solved
problems" must satisfy sociological and economic constraints which do not
apply to the solution of the "Great Unsolved Problem".

There may come a time when the solution of the "Great Unsolved Problem"
falls within the domain of problems with meaning to the business world (or
when the approximations to the solution become sufficiently accurate
to be useful to the business world, as a result of hardware and/or
software evolution). But clearly, this would be a result of changes
in the boundaries of the sociological and economic constraints. Those
changes would say nothing at all about any intrinsic applicability or
importance of the "Great Unsolved Problem", nor would they say anything
about suitability of implementation languages.

For simplicity, I have intentionally ignored the interests of the
nation-state and its associated military, a serious omission in light of
the history of AI. But I argue that the same considerations apply.

> Raf

* Nick Geovanis
| IT Computing Svcs               Our revenge will be the laughter
| Northwestern Univ                  of our children.
| ··········@nwu.edu                    - Bobby Sands
+------------------->
From: Raffael Cavallaro
Subject: Re: Why Lisp languishes
Date: 
Message-ID: <raffael-E0E17C.23063522052001@news.ne.mediaone.net>
In article 
<········································@merle.acns.nwu.edu>, 
··········@nwu.edu wrote:

>True, but your observation applies to a different domain than that which
>the earlier poster spoke of. He was specifically concerned with the
>application of lisp to the "solved problems", not to the "Great Unsolved
>Problem".

And I was explaining to him that Lisp is best suited to solving the as yet 
"unsolved problems," and languages like C, etc., are better suited to 
crafting these known solutions into resource efficient implementations.

"The application of lisp to the 'solved problems,'" as you put it, is a 
waste of computational resources, because once you know how to solve 
your problem, you can almost always write a faster, and more cheaply 
maintained implementation in C, etc. (because of the abundance of 
inexpensive programmers experienced in such languages).

Unfortunately, the unsolved problems for algorithmic languages are 
pretty much exhausted. Computer scientists, like most naive positivists, 
assumed that cognition could be algorithmically modelled (some still do 
- I saw Marvin Minsky talk a couple of years ago, and he asserted that, 
once we figure it all out, we'll reduce human level intelligence to an 
algorithmic solution that would run just fine on a Pentium 100 speed 
CPU!). They failed to take account of the fact that most of what we do 
cognitively is completely unavailable to us consciously, and hence, far 
from straightforward to put into algorithmic form.

In other words, we (many of us anyway) cherish the illusion that most of 
what we do mentally is within our conscious awareness, even volitional 
control. Nothing could be farther from the truth. Most of our cognition 
takes place outside of, and completely unavailable to, our awareness. We 
(our conscious awareness) are merely presented with its end products. 

It's not at all clear that most of our cognition is even susceptible of 
algorithmic representation (I'm not saying it isn't, but it may very 
well not be). This, IMHO, is why strong AI failed, and why it may never 
succeed. If human level intelligence is  a complex, indeterminate 
interaction of numerous, largely autonomous, evolving, real time 
systems, then a purely algorithmic solution is doomed.

This revelation struck the Lisp community first, because they were at 
the forefront of AI research. The implication for Lisp (apart from its 
being scapegoated for "AI Winter," as Kent has mentioned) is that 
there's a sneaking suspicion, that everything interesting that's 
susceptible of algorithmic representation, has already been done (or at 
least roughed out).

Since solving these problems is Lisp's strength, it has since been 
outstripped by languages that are bad at solving hard problems, but good 
at implementing resource efficient versions of problems that were 
originally solved in dynamic languages, like lisp.

Raf

-- 

Raffael Cavallaro, Ph.D.
·······@mediaone.net
From: John Flynn
Subject: Re: Why Lisp languishes
Date: 
Message-ID: <13HO6.2698$Ld4.120886@ozemail.com.au>
"Raffael Cavallaro" <·······@mediaone.net> wrote in message
··································@news.ne.mediaone.net...
[...]
> Unfortunately, the unsolved problems for algorithmic languages are
> pretty much exhausted. Computer scientists, like most naive positivists,
> assumed that cognition could be algorithmically modelled (some still do
> - I saw Marvin Minsky talk a couple of years ago, and he asserted that,
> once we figure it all out, we'll reduce human level intelligence to an
> algorithmic solution that would run just fine on a Pentium 100 speed
> CPU!). They failed to take account of the fact that most of what we do
> cognitively is completely unavailable to us consciously, and hence, far
> from straightforward to put into algorithmic form.

[ failure of "strong AI" to date ]

> This revelation struck the Lisp community first, because they were at
> the forefront of AI research. The implication for Lisp (apart from it's
> being scapegoated for "AI Winter," as Kent has mentioned) is that
> there's a sneaking suspicion, that everything interesting that's
> susceptible of algorithmic representation, has already been done (or at
> least roughed out).

So the effort to endow machines with full human-like consciousness was as
doomed to failure as the alchemist's dream? Lisp programmers were the first
'alchemists' to realise they couldn't turn base metals into gold? So they
preferred to turn away in disappointment, leaving the development of the
science of 'chemistry' they'd inadvertently spawned to less ... flighty ...
folk?

That's certainly an interesting psychological/historical supplement to the
other answers I received. Thanks! It helps to explain the paucity of
bookkeeping software written in Lisp ;-)

OTOH it doesn't make Lisp any less suited to making such software. If I ever
have to do it (hopefully not), I'd prefer to do it in Lisp than in C any
day - even though it is a well and truly solved problem.

But even if all the interesting computational problems are either already
solved or impossible to solve with current methods, surely there are
still enough unsolved problems in other sciences (esp. biology) that
desperately need advances in computer science to help unravel their
mysteries (and possibly vice versa!). I don't think we're going to run out
of interesting unsolved problems any time soon, and maybe we're as likely to
solve some of them _with_ computers as we are to apply computers to a known
solution.

Aside: I don't know much about AI (but I'd like to). You imply that
consciousness and volition are at the tip of the iceberg; that they depend
on vastly complicated unconscious parallel processes that generate it; we do
not control it, we mostly experience (a tiny fraction of) the end result. I
think that's fairly obvious, and should have been obvious for a long time.

What I'd like to know is: how does this equate to a "solved" or "insoluble"
problem? Is there any evidence to suggest that these processes (if they ARE
processes) can never be simulated with machines? Or are you saying that we
don't or can't know because the hardware required to simulate wetware is
much more than we currently possess?  And/or we still have no idea how to go
about it? Or that it's simply proving to be too vast a problem to
contemplate solving?

Is there ever going to be an "AI Spring" in our lifetimes? If so, where are
the breakthroughs most likely to be sought?
From: Nicholas Geovanis
Subject: Re: Why Lisp languishes
Date: 
Message-ID: <Pine.HPX.4.10.10105230959070.27109-100000@merle.acns.nwu.edu>
On Wed, 23 May 2001, John Flynn wrote:

> Aside: I don't know much about AI (but I'd like to). You imply that
> consciousness and volition are at the tip of the iceberg; that they depend
> on vastly complicated unconscious parallel processes that generate it; we do
> not control it, we mostly experience (a tiny fraction of) the end result. I
> think that's fairly obvious, and should have been obvious for a long time.

You are not the first to come to that conclusion. Donning my flak-jacket
in this bastion of strong-AI :-), I heartily recommend to you the works of
John R. Searle.

> What I'd like to know is: how does this equate to a "solved" or "insoluble"
> problem? Is there any evidence to suggest that these processes (if they ARE
> processes) can never be simulated with machines? Or are you saying that we
> don't or can't know because the hardware required to simulate wetware is
> much more than we currently possess?  And/or we still have no idea how to go
> about it? Or that it's simply proving to be too vast a problem to
> contemplate solving?

IMHO: Yes. A qualitatively different problem, not merely quantitatively
different (though that also). On this particular subject, you might enjoy
the works of mathematician Roger Penrose on consciousness and AI: "The
Emperor's New Mind" and "Shadows of the Mind".

> Is there ever going to be an "AI Spring" in our lifetimes? If so, where are
> the breakthroughs most likely to be sought?

IMHO: There will be no "breakthroughs", only incremental improvements. 
The biggest such increment will occur when the techniques of automated
reasoning (in various logics) become more widely deployed in "everyday"
applications, a process which has already begun. In the words of
Feigenbaum (I think): "Every time we think we've created intelligence, it
turns out that we've merely written a better program".

* Nick Geovanis
| IT Computing Svcs               Our revenge will be the laughter
| Northwestern Univ                  of our children.
| ··········@nwu.edu                    - Bobby Sands
+------------------->
From: David Thornley
Subject: Re: Why Lisp languishes
Date: 
Message-ID: <OhSO6.3387$Dd5.1362315@ruti.visi.com>
In article <·········································@merle.acns.nwu.edu>,
Nicholas Geovanis   <··········@nwu.edu> wrote:
>On Wed, 23 May 2001, John Flynn wrote:
>
>> Aside: I don't know much about AI (but I'd like to). You imply that
>> consciousness and volition are at the tip of the iceberg; that they depend
>> on vastly complicated unconscious parallel processes that generate it; we do
>> not control it, we mostly experience (a tiny fraction of) the end result. I
>> think that's fairly obvious, and should have been obvious for a long time.
>
My personal opinion is that it is possible to write a Lisp program that
simulates human cognition.  I don't know what sort of hardware would
be necessary to run it at real-time speed.  I am completely sure that
studying the code would yield nothing but frustration.  It's a really
hard problem, and I don't think it has any solution significantly
less complicated than the human cerebral cortex.

>You are not the first to come to that conclusion. Donning my flak-jacket
>in this bastion of strong-AI :-), I heartily recommend to you the works of
>John R. Searle.
>
I don't.  Read the paper that introduced the Chinese Room analogy
*very* carefully.  Note how he bases his initial argument on the
claim that anything that understands something must have a component
that understands something.  Note also his response to any objection
to his initial claim:  he carefully analyzes the reasoning (watch
as the syllogisms never leave the hand!) and reduces it to his
first claim, which he claims to have proved.  Realize that there isn't
a valid and relevant argument in the whole paper.

>> What I'd like to know is: how does this equate to a "solved" or "insoluble"
>> problem? Is there any evidence to suggest that these processes (if they ARE
>> processes) can never be simulated with machines?

Nope.  There are wild speculations on both sides.  There is evidence that
they can never be simulated with understandable algorithms, but you
don't need to understand an algorithm to use it.  Consider, for example,
an artificial neural net.  At each node, there will be one or more
values.  These values are usually meaningless in themselves, but the
combination of values, in some way that cannot easily be understood,
does something that is understandable.
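
To make that concrete, here is a toy sketch of my own (not taken from any
real research code) of a single such node in Common Lisp.  The weights
and bias are arbitrary made-up numbers, individually meaningless, yet
together they compute something perfectly understandable -- a two-input
AND gate:

;; One node: weighted sum of the inputs pushed through a hard threshold.
(defun neuron-output (weights bias inputs)
  (if (plusp (+ bias (reduce #'+ (mapcar #'* weights inputs))))
      1
      0))

;; With these arbitrary, hand-picked parameters the node acts as an
;; AND gate -- understandable behaviour from opaque numbers:
;;   (neuron-output '(0.6 0.6) -1.0 '(1 1))  =>  1
;;   (neuron-output '(0.6 0.6) -1.0 '(1 0))  =>  0

Scale that up to thousands of nodes whose weights were found by training
rather than by hand, and the "understandable" part survives only at the
level of the whole network's behaviour.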

>> Or are you saying that we
>> don't or can't know because the hardware required to simulate wetware is
>> much more than we currently possess?  And/or we still have no idea how to go
>> about it?

Bingo.  That is exactly what I think.  We may or may not have adequate
hardware.

>> Or that it's simply proving to be too vast a problem to
>> contemplate solving?
>
Maybe.  So far, that's what it looks like, but last century's
vast scientific problems are today's undergrad work.

>IMHO: Yes. A qualitatively different problem, not merely quantitatively
>different (though that also). On this particular subject, you might enjoy
>the works of mathematician Roger Penrose on consciousness and AI: "The
>Emperor's New Mind" and "Shadows of the Mind".
>
I found Penrose fascinating, although I don't remember him saying
anything definite in TENM.

>> Is there ever going to be an "AI Spring" in our lifetimes? If so, where are
>> the breakthroughs most likely to be sought?
>
>IMHO: There will be no "breakthroughs", only incremental improvements. 

There may be incremental improvements that amount to effective
breakthroughs.  For example, suppose that there are enough improvements
so that a computer can see a picture and describe it.  There are
techniques to do many of the bits of this, and perhaps all that's
really necessary is more such techniques and putting them together.
(Then again, perhaps not.)  If that was possible, it would be a
breakthrough.

>The biggest such increment will occur when the techniques of automated
>reasoning (in various logics) become more widely deployed in "everyday"
>applications, a process which has already begun.

Something like a more widespread use of expert systems?  I don't know
about that (literally).  An expert system is a different sort of
programming language, after all, one that has certain advantages
and likely certain disadvantages.

>In the words of
>Feigenbaum (I think): "Every time we think we've created intelligence, it
>turns-out that we've merely written a better program".
>
So far, he's right.  At this point, it is customary to have a heated
discussion of exactly what intelligence is and can be, which I suggest
we just take as read.


--
David H. Thornley                        | If you want my opinion, ask.
·····@thornley.net                       | If you don't, flee.
http://www.thornley.net/~thornley/david/ | O-
From: Francois-Rene Rideau
Subject: Artificial Intelligence (was: Why Lisp languishes)
Date: 
Message-ID: <87snhw47ce.fsf_-_@Kadath.augustin.thierry>
This is getting off-topic from comp.lang.lisp,
so I'm redirecting toward a more appropriate forum that I know of,
the archived mailing list ············@tunes.org
where the topic has already been discussed
(see archives for september and october 2000):
        http://lists.tunes.org/mailman/listinfo/cybernethics

Followup-To: ············@tunes.org


"John Flynn" <··········@yahoo.com.au> writes on comp.lang.lisp:
> Is there any evidence to suggest that these processes (if they ARE
> processes) can never be simulated with machines?
There are many hard issues to overcome before we can possibly achieve AI:

SPEED DISCREPANCY
One big problem is that you can only define intelligence
in terms of adaptable (or at least adapted) interaction.
Now, how much do you interact with computers,
what's the bandwidth and what's the feedback?
Ever tried talking with someone with a filter
that makes him 1000 times slower?
Whichever way the speed discrepancy goes,
it makes useful interaction of little interest,
since either party will be bored to death before anything comes out.
Thus, even the "right" program that you'd find by miracle
won't be of much use to you if it doesn't run at the right speed.

COMPLEXITY OF THE STATE OF MIND
Now, we can't hope to build the right "mind"
with the right "state of mind" all of a piece:
        If the human mind were simple enough to understand,
        we'd be too simple to understand it.
        	-- Pat Bahn
Thus, we cannot hope to build any kind of explainable algorithm
that "just works" and yields immediate intelligent behavior.
All we can hope for is to factor the problem into general rules
that are simple enough for us to manage, yet are able to gather
external complexity from interaction and/or a database,
and shape it into an internal complexity of its state of mind,
so it can show intelligent behaviour adapted to interacting back.
So we'll have to train and educate an AI just like we do with NI.

SLOW CONVERGENCE OF THE TRAINING PROCESSES
Consider that raising a baby, even the most gifted one,
to the point of expressing intelligence takes quite a few years
(and decades afterwards to get him to be proficient at any valuable job).
Consider that we have long experience of raising babies,
and even then we create perfectly stupid or brainwashed human beings.
Consider that we have no experience whatsoever of successfully raising AIs.
Consider that biological evolution has been competing to create ever more
adaptive behaviour for 15 billion years.
Consider that neither so strong a drive as individual survival,
nor the time needed for it to act as a selective force,
is going to help in creating an AI.

CHAOTIC DIVERGENCE WITH RESPECT TO SEED RULES
Education takes years, but crucially depends on the seed rules
being right from the beginning of the training period.
Yet, getting these rules right is in itself quite a challenge:
under 2% of the genetic rules differ between chimps and humans,
and the genetic factors behind mental insanity amount to much less than that.
Getting one parameter slightly wrong (misordering of rules,
wrong factor, etc.) can yield a critical inability to learn and adapt.
And since it may take years of effort to train and test an AI,
we have little feedback on the long-term effects of seed rules,
so we have essentially no process for getting them right.

LACK OF A SUCCESS CRITERION
So even if we go through all these processes and overcome the difficulties,
how do we know if or when we have succeeded? How can we judge intelligence?
How do you know that you or I or anyone on this forum is intelligent?
Are we intelligent, despite all the nonsense we utter?
This only gets worse if our goal for an AI is
to be "more intelligent" than man.
	The risk is that if, one day, machines become intelligent,
        we mightn't be mentally equipped to notice they are.
		-- Tirésias, in J.-P. Petit, "A quoi rêvent les robots?"


To overcome these difficulties is quite challenging.
Happily, computers also have advantages with respect to biology
that can help us progress toward an AI:
* we can split and factor our system into subsystems
 that we can grow, adapt, or recombine separately,
 with much more flexibility than hereditary evolution has.
* we can save states (for a given set of rules) and restart them,
 or save interaction scenarios and replay them,
 so that when training a new baby AI, we needn't start from scratch.
* once computers get much faster than needed,
 we can replay training periods in accelerated time,
 train several AIs in parallel and have them interact with each other, etc.
* we can use dumber programs to help create, breed, and select
 smarter ones, so progress can be not only incremental,
 but hyperexponential (just like in biology). That's bootstrap theory.

As for the success criterion, well, we'll find out that just like
"intelligence" is ultimately a moot criterion for judging human beings,
it is a moot criterion for judging machines.
Machines, AI or not, autonomous or not, will have to prove
their efficiency through social and economic interaction;
either they are marginally useful to others and will thus survive
by earning their living, or they are not, and will disappear or be replaced.


> Is there ever going to be an "AI Spring" in our lifetimes?
Depends on how long we live, and how much we work toward that goal.
Supposing that we are superrational beings
(in the sense used by Hofstadter in "Metamagical Themas"),
this "we" means you and I and similar-minded persons:
If we don't do it, odds are nobody will;
whereas if we do, odds are lots of like-minded persons will too.

There is at least one institute dedicated to bringing up an AI,
that has very challenging prose on its website:
        http://www.singinst.org/
Check its links, too.


> If so, where are the breakthroughs most likely to be sought?
I think a crucial point in achieving an AI is bootstrap:
getting programs to help us design better programs.
That's where LISP comes into play: it is designed for metaprogramming.
Now, if we want a competitive process where lots of metaprograms
are trained on lots of programs, we'd better lower
the protectionist barriers that prevent us from running
meta^n-programs on meta^(n-1)-programs; I argued in this way in
        http://fare.tunes.org/articles/ll99/index.en.html
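
To give a minimal flavour of what I mean by metaprogramming (a toy
illustration of my own, not something taken from the article above):
a Lisp macro is an ordinary program that receives code as data and
returns new code, so it can write the repetitive parts of other programs
for us.  The name DEFINE-ACCESSORS below is invented for the example:

;; A program that writes programs: expands into one getter per slot.
(defmacro define-accessors (prefix &rest slots)
  `(progn
     ,@(loop for slot in slots
             collect `(defun ,(intern (format nil "~A-GET-~A" prefix slot))
                          (object)
                        (getf object ,(intern (symbol-name slot) :keyword))))))

;; (define-accessors point x y) expands into two DEFUNs, so that
;; (point-get-x '(:x 3 :y 4)) => 3

The same mechanism scales from saving keystrokes to generating, analyzing
and transforming whole programs, which is what the bootstrap idea relies on.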

Another interesting side property of
such a metaprogramming bootstrap process
is that as long as we do get better programs through metalevel automation
(hence, as long as we can understand anything that can be explained
about the way we ourselves design programs),
the process is economically self-sustaining,
so that we actually get a chance to indefinitely pursue this activity
for its positive side-effects in all domains of computing,
whether or not we eventually reach AI in the end.
Such a metaprogramming bootstrap process is what TUNES ought to be about,
with the much more limited initial ambitions
to build a decent computing system.

Yours freely,

[ François-René ÐVB Rideau | Reflection&Cybernethics | http://fare.tunes.org ]
[  TUNES project for a Free Reflective Computing System  | http://tunes.org  ]
Laziness is the mother of Intelligence. The father is Greed.
From: Raffael Cavallaro
Subject: Re: Why Lisp languishes
Date: 
Message-ID: <raffael-D3327A.23481623052001@news.ne.mediaone.net>
In article <·····················@ozemail.com.au>, "John Flynn" 
<··········@yahoo.com.au> wrote:

>OTOH it doesn't make Lisp any less suited to making such software. If I 
>ever
>have to do it (hopefully not), I'd prefer to do it in Lisp than in C any
>day - even though it is a well and truly solved problem.


Actually, Lisp is much better suited to making software, even of already 
solved problems. However, Lisp is worse suited to *maintaining* software 
implementations of already solved problems, since there are many 
thousands of inexpensive C programmers who can be hired when your team 
members move on (personnel turnover is very high in software 
engineering), but inexpensive lisp hackers are not exactly abundant.

Secondly, C has a smaller per-developer cost (often free, and many C 
hackers prefer the free tools). Since we're just making sausage at this 
point (not solving hard problems) the boss says go with the cheap 
solution, and that's why C gets picked.

Raf

-- 

Raffael Cavallaro, Ph.D.
·······@mediaone.net
From: Steven L. Collins
Subject: Re: Why Lisp languishes
Date: 
Message-ID: <9ei1ed$t4k$1@slb7.atl.mindspring.net>
"Raffael Cavallaro" <·······@mediaone.net> wrote in message
··································@news.ne.mediaone.net...
> In article <·····················@ozemail.com.au>, "John Flynn"
> <··········@yahoo.com.au> wrote:
>
> >OTOH it doesn't make Lisp any less suited to making such software. If I
> >ever
> >have to do it (hopefully not), I'd prefer to do it in Lisp than in C any
> >day - even though it is a well and truly solved problem.
>
>
> Actually, Lisp is much better suited to making software, even of already
> solved problems. However, Lisp is worse suited to *maintaining* software
> implementations of already solved problems, since there are many
> thousands of inexpensive C programmers who can be hired when your team
> members move on (personnel turnover is very high in software
> engineering), but inexpensive lisp hackers are not exactly abundant.
>
> Secondly, C has a smaller per developer cost (often free, and many C
> hackers prefer the free tools). Since we're just making sausage at this
> point (not solving hard problms) the boss says go with the cheap
> solution, and that's why C gets picked.

I just read a paper by Richard Gabriel today that makes a few interesting
points about C and software design that seem to be relevant to several
threads going on in c.l.l:
http://www.jwz.org/doc/worse-is-better.html

Steven
From: Raffael Cavallaro
Subject: Re: Why Lisp languishes
Date: 
Message-ID: <raffael-5F0C12.23590323052001@news.ne.mediaone.net>
In article <·····················@ozemail.com.au>, "John Flynn" 
<··········@yahoo.com.au> wrote:

>You imply that
>consciousness and volition are at the tip of the iceberg; that they depend
>on vastly complicated unconscious parallel processes that generate it; we 
>do
>not control it, we mostly experience (a tiny fraction of) the end result. 
>I
>think that's fairly obvious, and should have been obvious for a long time.


Well the subtlety is this: Since we're only aware of what we're aware 
of, we tend to extrapolate from that. Our conscious mind presents us 
with a logical picture of how we've managed to accomplish something. We 
convert this into an algorithm, however complex, and we've created a 
limited bit of artificial intelligence. So we extrapolate that all the 
mental processing we *aren't* aware of is also of the same neat, 
algorithmic kind. Not necessarily so.

Moreover, there's good evidence to indicate that our conscious awareness 
creates just-so stories to make what goes on behind the scenes seem more 
consistent and logical than it really is. So even that which we think is 
logical/consistent/algorithmic may not be so.

So it isn't that people weren't aware that most of cognition is 
unconscious, it's that people weren't aware that most of that 
unconscious cognition isn't immediately amenable  to the sorts of 
logical ordering that we're presented with by verbal consciousness.

It may not be algorithmic. It may be non-determinate. It may depend 
greatly on real time interaction of quasi independent subsystems. It may 
depend heavily on those subsystems co-evolving, ontogenetically since 
conception, and phylogenetically for millions of years. It may depend 
greatly on being in a human body, with human sensations. In other words, 
all sorts of things which get elided in the nice pat representations 
available to verbal consciousness, and upon which we base our 
algorithmic solutions to problems.

Raf

-- 

Raffael Cavallaro, Ph.D.
·······@mediaone.net
From: DJC
Subject: Re: Why Lisp languishes
Date: 
Message-ID: <3b0d6602.301360593@news.blueyonder.co.uk>
On Thu, 24 May 2001 03:59:01 GMT, Raffael Cavallaro
<·······@mediaone.net> wrote:

>So it isn't that people weren't aware that most of cognition is 
>unconscious, it's that people weren't aware that most of that 
>unconscious cognition isn't immediately amenable  to the sorts of 
>logical ordering that we're presented with by verbal consciousness.

To put it another way: conscious, verbal, 'rational' intelligence is
not what comes naturally to people (or any animal); so people who can
manage it have been regarded as intelligent. Now we have machines that are
rather good at some of the things we used to think rather clever. "The
hard things turn out to be easy and the easy things hard."

-- 
David Clark
<http://www.orditur-telas.com/>
From: Raffael Cavallaro
Subject: Re: Why Lisp languishes
Date: 
Message-ID: <raffael-1A9161.00093624052001@news.ne.mediaone.net>
In article <·····················@ozemail.com.au>, "John Flynn" 
<··········@yahoo.com.au> wrote:

>Is there any evidence to suggest that these processes (if they ARE
>processes) can never be simulated with machines? Or are you saying that we
>don't or can't know because the hardware required to simulate wetware is
>much more than we currently possess?

What if it's not just the wetware we need to simulate, but the whole 
environment that wetware develops in? That would mean simulating a body, 
and other intelligences to interact with, and a realistic physical 
environment to interact with. That's a pretty tall order, 
computationally.

If you're willing to have real people stand in for the "other 
intelligences," (since the whole point of this exercise is to create an 
intelligence, you don't yet have one to use as part of your simulation) 
now you're talking about a full time family that devotes as much time 
as real parents, sibs, relatives, playmates, teachers, etc. to 
interacting with a computer simulated child. That's a pretty tall order 
too. It might happen some day, but I'm not holding my breath.

Even worse, what if some processes are so fine grained, that simulating 
them means effectively recapitulating them. In other words, computers 
are powerful because they can have bits and bytes stand in for rather 
more complex real entities, and the symbols can be manipulated much more 
easily than the reality for which they stand. But what if some systems 
don't admit of significant symbolic reduction? What if a substantial 
portion of the elements can't be summarized, reduced, cut down to size, 
without making your simulation invalid, or producing a significantly 
impoverished result?

Raf

-- 

Raffael Cavallaro, Ph.D.
·······@mediaone.net
From: gswork
Subject: Re: Why Lisp languishes - algorithmic cognition
Date: 
Message-ID: <81f33a98.0105240018.551d4104@posting.google.com>
Raffael Cavallaro <·······@mediaone.net> wrote in message news:<·····························@news.ne.mediaone.net>...

> Unfortunately, the unsolved problems for algorithmic languages are 
> pretty much exhausted. Computer scientists, like most naive positivists, 
> assumed that cognition could be algorithmically modelled....

> They failed to take account of the fact that most of what we do 
> cognitively is completely unavailable to us consciously, and hence, far 
> from straightforward to put into algorithmic form.

There is also the issue of how cognitive processes and environmental
influences interact, to take a constructivist stance.

Put simply, our personal sensitivity to the environment and the
experiential PLUS innate mix that fuel our cognitive development is
going to be very hard to emulate - at any meaningful level - in a
machine designed by our minds existing at, as you say, the end result
of those processes.    Searching for an algorithm to 'be' cognitive
may be jumping the gun at this point in time.    We may not have
meaningful feedback systems to complement whatever "bios/rom" we
design.

Anyway...
From: Pierre R. Mai
Subject: Re: Why Lisp languishes
Date: 
Message-ID: <87r8xhl3v7.fsf@orion.bln.pmsf.de>
Raffael Cavallaro <·······@mediaone.net> writes:

> A more sobering view is that Lisp, being the most advanced language, was 
> first to arrive at the ultimate truth about software:
> 
> There are no truly interesting applications to be written. Anything 
> truly creative, truly interesting, would be AI complete, and hence, for 
> the present, unattainable in software.
> 
> What's left are rehashes of stuff that was done, or at least roughed 
> out, twenty or thirty years ago - that is to say, solved problems. C/C++ 
> and other lower forms of computer life are good at grinding out resource 
> efficient solutions to already solved problems. Lisp is better for 
> solving them in the first place. But in a world where there are no more 
> interesting problems to solve (save the one paramount one), Lisp 
> languishes.

Oh, there are many interesting problems to solve, and not even most of
them are AI-complete.  It's just that people have so gotten used to
computers being dysfunctional, dumb and barely as flexible as
punch-card counting machines, that they need real convincing that

a) it is possible to write software that goes beyond pre-coded office
   automation stuff, and

b) it is hence worthwhile to pursue such goals.

Just look at e.g. SAP R/3, which has gotten to the point that complete
businesses will be restructured in order to fit them better to the
processes SAP R/3 implements, rather than vice-versa.

Regs, Pierre.

-- 
Pierre R. Mai <····@acm.org>                    http://www.pmsf.de/pmai/
 The most likely way for the world to be destroyed, most experts agree,
 is by accident. That's where we come in; we're computer professionals.
 We cause accidents.                           -- Nathaniel Borenstein
From: Raffael Cavallaro
Subject: Re: Why Lisp languishes
Date: 
Message-ID: <raffael-4B2BFD.22282422052001@news.ne.mediaone.net>
In article <··············@orion.bln.pmsf.de>, "Pierre R. Mai" 
<····@acm.org> wrote:

>Oh, there are many interesting problems to solve, and not even most of
>them are AI-complete.

I didn't merely say "interesting," I said "truly creative, truly 
interesting."

For many of us, the standard of "truly creative, truly interesting," 
rises a little higher than automating business processes. Automating 
business processes is what I referred to as a "solved problem."

Raf

-- 

Raffael Cavallaro, Ph.D.
·······@mediaone.net
From: Larry Loen
Subject: Re: Why Lisp languishes
Date: 
Message-ID: <3B0BB4EA.A250BEAB@us.ibm.com>
Raffael Cavallaro wrote:

[snip]
> 
> I didn't merely say "interesting," I said "truly creative, truly
> interesting."
> 
> For many of us, the standard of "truly creative, truly interesting,"
> rises a little higher than automating business processes. Automating
> business processes is what I referred to as a "solved problem."
> 
> Raf

There are plenty of truly interesting business process problems
awaiting solution.  Some even are supposedly solved, but aren't.

If you walked over to the nearest University Industrial Engineering
department, they would probably show you PERT or Gantt charts.  
You might have even seen one in your various well-educated 
travels before.  Judging from your posting, 
you would view that as a solved problem for scheduling
business processes.

Well, maybe it is for pencil making or steel making where
the industrial processes are capital intensive and difficult to
change quickly, but in my business, operating system software,
all the existing tools of this kind (and, I've tried several
over several decades) are entirely qualitatively insufficient
to be of much value.  In brief, they don't work.

Yes, they produce the appropriate PERT or Gantt charts,
but the problem is, they are never accurate long enough to
be worth the effort to produce them.

Scheduling and managing software projects therefore remains
a primitive process, difficult to automate.  Most tools are
cumbersome and the information is obsolete before the data 
input completes.  Things change too frequently and, perhaps,
there are just too many things going on in most software
shops.  It takes too long, and too much maintenance, to
put in all the dependencies and, as important, update them.

So, projects live on blackboards, e-mail, and xfig drawings.

There are tracking mechanisms, to be sure, that are 
reasonably effective for individual activities, but true 
dependency management remains elusive IMO, because the 
situation is often very fluid, with rapid changes
of subproject status and the ability to invent "make 
do" methods that change (at least temporarily,
sometimes permanently) dependencies.

Now, maybe this is just plain a data input problem
that is pragmatically unsolvable.  But, maybe it's a 
problem with the underlying implementation languages.

Looks to me like someone ought to find and try a 
language that is capable of rapidly
expressing and implementing relationships between 
entities, especially very dynamic ones.  And,
see if the product is useful.  If it is, the
resulting program could probably make 
some money.
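
To make the idea concrete, here is the sort of thing I have in mind, as a
purely hypothetical Common Lisp sketch (invented names, nothing resembling
a real tool): relationships kept as plain data that can be rewritten in
one line when the situation changes.

;; Hypothetical sketch only: a mutable dependency table plus one query.
(defvar *depends-on* (make-hash-table :test #'eq)
  "Maps a task symbol to the list of tasks it currently waits on.")

(defun set-dependencies (task prerequisites)
  "Declare, or re-declare, what TASK waits on."
  (setf (gethash task *depends-on*) prerequisites))

(defun blocked-p (task done-tasks)
  "TASK is blocked if any prerequisite is not yet among DONE-TASKS."
  (some (lambda (pre) (not (member pre done-tasks)))
        (gethash task *depends-on*)))

;; (set-dependencies 'ship '(test document))
;; (set-dependencies 'ship '(test))   ; a "make do" change, one line
;; (blocked-p 'ship '(test))          ; => NIL

Whether keeping such a table current is any less painful than the data
input problem above is, of course, part of what the experiment would
have to show.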

One might even find, after working on it
a while, that there is something fundamentally
missing from today's theory, but regardless,
the program would be worthwhile if it worked.

Do you know of such a language?


Larry Loen
From: Pierre R. Mai
Subject: Re: Why Lisp languishes
Date: 
Message-ID: <871ypgmoct.fsf@orion.bln.pmsf.de>
Raffael Cavallaro <·······@mediaone.net> writes:

> In article <··············@orion.bln.pmsf.de>, "Pierre R. Mai" 
> <····@acm.org> wrote:
> 
> >Oh, there are many interesting problems to solve, and not even most of
> >them are AI-complete.
> 
> I didn't merely say "interesting," I said "truly creative, truly 
> interesting."

Well, if your definition of truly creative, truly interesting is
"AI-complete", then that isn't an argument, it's a tautology.  So what
is your definition of truly creative, truly interesting?

> For many of us, the standard of "truly creative, truly interesting," 
> rises a little higher than automating business processes. Automating 
> business processes is what I referred to as a "solved problem."

And I'm claiming that it isn't a solved problem, because it requires
programs that are much more intelligent and adaptable than most of the
stuff we have today, because business processes involve people, and
aren't automatable in some modern tayloristic way.  But if this is a
solved problem, pray tell, where is the solution?

Or you may claim that any program that is sufficiently intelligent
and adaptable to solve this is by necessity AI-complete.  But without
supporting evidence I'm not going to believe you, because many
problems have been called AI-complete in the past, that turned out to be
far from requiring human-level intelligence.  Labelling everything
that we don't know how to do today AI-complete seems like a perfect
recipe for shutting down all proper research.

Regs, Pierre.

-- 
Pierre R. Mai <····@acm.org>                    http://www.pmsf.de/pmai/
 The most likely way for the world to be destroyed, most experts agree,
 is by accident. That's where we come in; we're computer professionals.
 We cause accidents.                           -- Nathaniel Borenstein
From: Raffael Cavallaro
Subject: Re: Why Lisp languishes
Date: 
Message-ID: <raffael-EBD294.00400124052001@news.ne.mediaone.net>
In article <··············@orion.bln.pmsf.de>, "Pierre R. Mai" 
<····@acm.org> wrote:

>Or you may claim that any program that is sufficiently intelligent
>and adaptable to solve this is by necessity AI-complete.

Yes, I do. As you yourself stated, business automation involves people. 
When you've created software that has sufficient social intelligence to 
deal with a variety of different people's cognitive and social styles of 
interaction instead of merely pissing them off by acting dorky (i.e., 
how software typically works today) you've pretty much defined human 
level intelligence.

Remember, some of the greatest, if not the greatest, computational 
intelligences are/were autistic. That means that they have essentially 
no social skills whatsoever. They can calculate rings around you, but 
they have no ability at all to engage in simple  social interactions. So 
pretty clearly, what makes us intelligent in the human sense is not 
powerful computational abilities (which both computers and idiot savants 
possess in abundance) but social skills, which computers lack entirely, 
and any reasonably intelligent person possesses.

>But without
>supporting evidence I'm not going to believe you, because many
>problems have been called AI-complete in the past, that turned out to
>far from requiring human-level intelligence.

Social intelligence of the sort you've alluded to is AI complete, 
because it pretty much defines what separates normal humans from both 
computers and idiot savants on the one hand, and non-human animals on 
the other. (Yes I'm aware that some non-human animals have other sorts 
of social skills, but none that we know of begins to compare with the 
complexity and sophistication of human social intelligence). To interact 
with humans successfully (i.e., to not act like a dork) you need human 
level social intelligence. Computers act like dorks.

Note also in this context, that human level social intelligence implies 
the ability to read non-verbal communication cues (some experts put 
non-verbal communication at over half of the bandwidth in a typical 
conversation), so that subsumes human visual acuity (artificial vision). 
And of course, reading prosody (the inflections of the voice, and the 
emotional message that conveys, as distinct from its literal 
meaning), so that includes speech recognition *and* prosody.

>  Labelling everything
>that we don't know how to do today AI-complete seems like a perfect
>recipe for shutting down all proper research.

No, actually, the perfect recipe would be to cut off funding, because 
researchers will always keep looking as long as their project is funded,  
whether they're on track to find an answer or not. ;^)

Raf

-- 

Raffael Cavallaro, Ph.D.
·······@mediaone.net
From: Donald Fisk
Subject: Re: Why Lisp languishes
Date: 
Message-ID: <3B0A8749.82B1222A@enterprise.net>
Raffael Cavallaro wrote:
> 
> In article <··············@orion.bln.pmsf.de>, "Pierre R. Mai"
> <····@acm.org> wrote:
> 
> >Or you may claim that any program that is sufficiently intelligent
> >and adaptable to solve this is by necessity AI-complete.
> 
> Yes, I do. As you yourself stated, business automation involves people.
> When you've created software that has sufficient social intelligence to
> deal with a variety of different people's cognitive and social styles of
> interaction instead of merely pissing them off by acting dorky (i.e.,
> how software typically works today) you've pretty much defined human
> level intelligence.
>
> Remember, some of the greatest, if not the greatest, computational
> intelligences are/were autistic. That means that they have essentially
> no social skills whatsoever. They can calculate rings around you, but
> they have no ability at all to engage in simple  social interactions. So
> pretty clearly, what makes us intelligent in the human sense is not
> powerful computational abilities (which both computers and idiot savants
> posess in abundance) but social skills, which computers lack entirely,
> and any reasonably intelligent person posesses.

These social skills do not, apparently, extend to dealing with people
who lack social skills.   Your typical software manager hasn't a clue
how to manage a hacker (in the Tech Model Railroad Club sense of the
word).   Your typical hacker generally has a better idea how to manage
another hacker, because he (hackers are with few exceptions male)
understands how the hacker thinks.   Still, being a hacker is perceived
as lacking social skills because hackers are in the minority and are
regarded as a problem.

Society needs hackers and others like them to progress technically.
Without them, I suspect the good socializers would probably still be
debating what colour the wheel should be instead of getting on with
making and using them.   And read my .sig.

> Social intelligence of the sort you've alluded to is AI complete,
> because it pretty much defines what separates normal humans from both
> computers and idiot savants on the one hand, and non-human animals on
> the other. (Yes I'm aware that some non-human animals have other sorts
> of social skills, but none that we know of begins to compare with the
> complexity and sophistication of human social intelligence). To interact
> with humans successfully (i.e., to not act like a dork) you need human
> level social intelligence. Computers act like dorks.
> 
> Note also in this context, that human level social intelligence implies
> the ability to read non-verbal communication cues (some experts put
> non-verbal  communication at over half of the bandwith in a typical
> conversation),

Estimated to be something like 87% if my memory serves me correctly.
However, it's clearly bogus.   When first told that, at one of those
corporate staff development events one gets sent on from time to time,
I put my hand up and asked a question -- in Gaelic.   I was not
understood -- hardly surprising because everyone else there was
English.   I repeated myself, and was still not understood.   I then
asked the speaker why, if 87% of information was conveyed to her through
my body language, she hadn't understood me at all.   Furthermore,
had I sent the very same words I spoke by electronic mail to a Gael,
they would have understood my point completely.

One skill -- telling whether someone is lying -- turns out not to
depend on social skills at all but on highly technical ones possessed by
few people outside the intelligence gathering community.   But many
people still think they're good at it.

> so that subsumes human visual acuity (artificial vision).
> And of course, reading prosody (the inflections of the voice, and the
> emotional message that conveys, as distinct from it's literal
> meaning),so that includes speech recognition *and* prosody.
> 
> >  Labelling everything
> >that we don't know how to do today AI-complete seems like a perfect
> >recipe for shutting down all proper research.
> 
> No, actually, the perfect recipe would be to cut off funding, because
> researchers will always keep looking as long as their project is funded,
> whether they're on track to find an answer or not. ;^)

The funding was cut off because certain people (we know who they are)
promised too much and actual results didn't live up to the hype.   And
(bringing the subject back on topic) Lisp and Prolog got the blame, and
suffered badly.   Doug Lenat and Rodney Brooks are about the only people
left doing AI research nowadays.

> Raf

-- 
Le Hibou
"The reasonable man adapts himself to the world; the unreasonable
one persists in trying to adapt the world to himself. Therefore
all progress depends on the unreasonable man."  -- G.B. Shaw
From: Raffael Cavallaro
Subject: Re: Why Lisp languishes
Date: 
Message-ID: <raffael-38F60F.23034124052001@news.ne.mediaone.net>
In article <·················@enterprise.net>, Donald Fisk 
<················@enterprise.net> wrote:

>I then
>asked the speaker why, if 87% of information was conveyed to her through
>my body language, that she hadn't understood me at all.   Furthermore,
>had I sent the very same words I spoke by electronic mail to a Gael,
>they would have understood my point completely.

Because 

1. the content of non-verbal communication is social, and unless you are 
discussing social matters, entirely independent of the semantic content 
of your speech. You're dealing with two entirely different channels. Your 
point is like saying that because I can't understand what's going on in 
an episode of Friends dubbed into Gaelic, that the sound content of a TV 
broadcast takes up more bandwidth than the picture (it doesn't).

2. most of the information that is conveyed non-verbally is below 
conscious awareness. You've clearly missed the whole point of this 
thread. The overwhelming majority of human cognition is completely 
unavailable to consciousness.

 

Non-verbal communication includes such information as who is the 
socially dominant person in the exchange, whether any of the parties 
have any sexual interest in each other, whether they bear each other any 
ill will, whether they are dissembling. Is the speaker tired, anxious, 
confused, etc. People, having social skills, read these cues, largely 
unconsciously, some, who are more aware than others, somewhat 
consciously.

None of this information would have been available to your email 
recipient, unless you took the trouble to explicitly discuss these 
matters, and even then, your recipient would have absolutely no way of 
evaluating your truthfulness (which s/he would in a face to face 
conversation).

>One skill -- telling whether someone is lying -- turns out not to
>depend on social skills at all but highly technical ones possessed
>few people outside the intelligence gathering community.   But many
>people still think they're good at it.

No, you're wrong. Some people *are* good at it. Research conducted with 
police officers and taped interviews with suspects shows that some 
officers are better able than others (and much better than chance) at 
assessing the truthfulness of suspects. Some of the taped interviews 
were in languages that the officers did not understand, so they were 
relying in those cases completely on non-verbal cues. They don't rely on 
"highly technical [skills] possessed [by] few people outside the 
intelligence gathering community." When asked, they don't know how they 
can tell, which means, by definition, they are using unconscious, 
non-verbal communication.

Raf

-- 

Raffael Cavallaro, Ph.D.
·······@mediaone.net
From: Donald Fisk
Subject: Re: Why Lisp languishes
Date: 
Message-ID: <3B0C6A08.BFE8A862@enterprise.net>
Raffael Cavallaro wrote:
> 
> In article <·················@enterprise.net>, Donald Fisk
> <················@enterprise.net> wrote:
> 
> >I then
> >asked the speaker why, if 87% of information was conveyed to her through
> >my body language, that she hadn't understood me at all.   Furthermore,
> >had I sent the very same words I spoke by electronic mail to a Gael,
> >they would have understood my point completely.
> 
> Because
> 
> 1. the content of non-verbal communication is social, and unless you are
> discussing social matters, entirely independent of the semantic content
> of your speech. Your dealing with two entirely different channels. Your
> point is like saying that because I can't understand what's going on in
> an episode of Friends dubbed into Gaelic, that the sound content of a TV
> broadcast takes up more bandwidth than the picture (it doesn't).

In the Shannon sense, there might be more information communicated but
almost all of it is wasted bandwidth.   Most of the useful information I
wanted to communicate was in Gaelic, and most of the rest was a flag --
sticking my hand up to get noticed.   The flag can be represented by a
single bit.

Going back to Friends -- if you couldn't understand it, that proves my
point.   There's a very interesting film called "What's up, Tiger Lily".
This was originally a Japanese film which flopped at the box office and
Woody Allen bought the rights to it.   He then dubbed it into English,
completely changing the plot in the process.   This, perhaps
surprisingly, worked, but would have confused just about everyone had
non-verbal communication dominated.

> 2. most of the information that is conveyed non-verbally is below
> conscious awareness. You've clearly missed the whole point of this
> thread. The overwhelming majority of human cognition is completely
> unavailable to consciousness.

So?   That doesn't mean we can't model it and find out if our model
makes interesting predictions.

> Non-verbal communication includes such information as who is the
> socially dominant person in the exchange, whether any of the parties
> have any sexual interest in each other, whether they bear each other any
> ill will, whether they are dissembling. Is the speaker tired, anxious,
> confused, etc. People, having social skills, read these cues, largely
> unconsciously, some, who are more aware than others, somewhat
> consciously.

In most cases these are completely subordinate.   Suppose one wants
to shag a particular girl and conveys this in one's body language.   How
many bits does that represent?   How many degrees of shaggability are
there?   Take the log to the base two of that number to get the
information content in bits -- with, say, eight distinguishable degrees,
that's a mere three bits.   But in the course of talking to her, one
might have sent and received many kilobytes of information.

> None of this information would have been available to your email
> recipient, unless you took to trouble to explictly discuss these
> matters, and even then, your recipient would have absolutely no way of
> evaluating your truthfullness (which s/he would in a face to face
> conversation).
> 
> >One skill -- telling whether someone is lying -- turns out not to
> >depend on social skills at all but highly technical ones possessed
> >few people outside the intelligence gathering community.   But many
> >people still think they're good at it.
> 
> No, you're wrong. Some people *are* good at it. Research conducted with
> police officers and taped interviews with suspects shows that some
> officers are better able than others (and much better than chance) at
> assessing the truthfullness of suspects. Some of the taped interviews
> were in languages that the officers did not understand, so they were
> relying in those cases completely on non-verbal cues. They don't rely on
> "highly technical [skills] possessed [by] few people outside the
> intelligence gathering community." When asked, they don't know how they
> can tell, which means, by definition, they are using unconscious,
> non-verbal communication.

"Professor Paul Ekman is one of the authors of our expert articles. In
his research work he set out to find out how good people were at
spotting liars. 

Non-experts like you and me scored no better than chance, suggesting
that we can't really tell when we're being lied to. He also tested the
experts - the people whose professions rely on them being able to
catch liars - police, trial judges, psychiatrists, and the people who
carry out lie detector tests. They also scored no better than chance. 

It's surprising that none of these people could spot the liars -
but there was one group who could: one
third of secret service agents scored 80%. "

-- From the BBC web site.

> Raf
> 
> --
> 
> Raffael Cavallaro, Ph.D.
> ·······@mediaone.net

-- 
Le Hibou
"The reasonable man adapts himself to the world; the unreasonable
one persists in trying to adapt the world to himself. Therefore
all progress depends on the unreasonable man."  -- G.B. Shaw
From: Raffael Cavallaro
Subject: Re: Why Lisp languishes
Date: 
Message-ID: <raffael-E47157.22185028052001@news.ne.mediaone.net>
In article <·················@enterprise.net>, Donald Fisk 
<··········@enterprise.net> wrote:

>"Professor Paul Ekman is one of the authors of our expert articles. In
>his research work he set out to find out how good people were at
>spotting liars. 
>
>Non-experts like you and me scored no better than chance, suggesting
>that we can't really tell when we're being lied to. He also tested the
>experts - the people whose professions rely on them being able to
>catch liars - police, trial judges, psychiatrists, and the people who
>carry
>out lie detector tests. They also scored no better than chance. 
>
>It's surprising that none of these people could spot the liars -
>but there was one group who could: one
>third of secret service agents scored 80%. "

See the on-line text from ScienceNews below, but first a few notes:

The study alluded to above was certainly a lab test, not real life. The 
problem with lab tests is that researchers pay volunteers to lie to 
other volunteers. But since there's no real pressure (i.e., no 
consequence if they're caught lying) their behaviour is not that of real 
life liars in real situations. As a result, people trying to spot their 
"let's-pretend" lies don't evaluate them properly.

The study below, to which I referred in my previous post, was based on 
real suspect interviews, in a real murder case. The liar's life was on 
the line, so this is a real test, not a no-stakes lab simulation. This 
study is the *first* study to use a real life liar in a real life 
situation, with very real consequences, and the results are quite 
interesting.

And, of course, since only *some* of the officers were good at it, it 
clearly isn't a matter of following some conscious guidelines, otherwise 
every officer on the force would be trained to know what to look for. 
And, as the interviews were in a foreign language, all evaluations were 
based *entirely* on non-verbal cues.

Quotes:

>But some cops can't be fooled, according to a new study. Shown videotapes 
>of an interrogation of a murder suspect speaking a language they didn't 
>understand, some British police officers consistently knew when the man 
>was lying and when he was telling the truth. Other officers detected lies 
>and truths about as well as if they had guessed, and some detected lies 
>less often than if they had guessed, report Aldert Vrij and Samantha Mann, 
>both psychologists at the University of Portsmouth in England.
>
>Their study, published in the March-April Applied Cognitive Psychology, 
>assesses, for the first time, people's ability to size up a highly 
>motivated liar. Earlier deception studies had used people who lied at the 
>behest of experimenters. With little to lose by getting caught, laboratory 
>liars are better able to obscure their falsehoods, Vrij and Mann say.



>Of 65 police officers shown the segments, 18 made no more than one error 
>in detecting lies and truths. Another 36 judged three or four segments 
>correctly, and the remaining 11 identified only one or two segments 
>correctly. Because the words were unrecognizable, they had to detect lies 
>using nonverbal cues and speech intonations.

Source: <http://www.sciencenews.org/20010303/fob3.asp>

Raf

-- 

Raffael Cavallaro, Ph.D.
·······@mediaone.net
From: Donald Fisk
Subject: Re: Why Lisp languishes
Date: 
Message-ID: <3B156913.35BB713B@enterprise.net>
Raffael Cavallaro wrote:

> See the on-line text from ScienceNews below, but first a few notes:
> 
> The study alluded to above was certainly a lab test, not real life. The
> problem with lab tests is that researchers pay volunteers to lie to
> other volunteers. But since there's no real pressure (i.e., no
> consequence if they're caught lying) their behaviour is not that of real
> life liars in real situations. As a result, people trying to spot their
> "let's-pretend" lies don't evaluate them properly.

Except, apparently, intelligence officers -- so it *can* be done.

And remember that, unlike real life, in lab tests truth and falsehood
can be objectively checked.   Precisely because there's nothing at
stake the subjects can be more credible. 

> The study below, to which I referred in my previous post, was based on
> real suspect interviews, in a real murder case. The liar's life was on
> the line, so this is a real test, not a no-stakes lab simulation. This
> study is the *first* study to use a real life liar in a real life
> situation, with very real consequences, and the results are quite
> interesting.
> 
> And, of course, since only *some* of the officers were good at it,

Quite.   Some were, others presumably weren't.   And my original point
was that people, in general, are very poor at it.   Neither you nor
the article you cite has suggested otherwise.

> it
> clearly isn't a matter of following some conscious guidelines, otherwise
> every officer on the force would be trained to know what to look for.

The intelligence officers were able to explain what they noticed.
You're working on the assumption that the criminal justice system is
more efficient than it actually is.  Just because a particular practice
is effective doesn't mean it will be adopted.

> And, as the interviews were in a foreign language, all evaluations were
> based *entirely* on non-verbal cues.

These are police officers, and for them it's a job related skill.   Even
if taken at face value it says nothing about the ability of lay people
to tell whether someone is lying, which was my original point but one
that seems to have been forgotten.

And if these British police officers were any good at doing this,
why is it that, literally every month or so, someone is freed after
being found not to have committed the murder he was convicted of?
God only knows the rate of miscarriages of justice for other, lesser
crimes.   The impression I get here in Britain is that British police
officers are extremely weak at telling who is lying.

And what about showing them an interrogation of an entirely innocent
prisoner (one who has just been arrested because a police officer
mistakenly assumed he'd committed a crime), and asking them when he's
telling lies?   Did they do that?

> Raf

-- 
Le Hibou
"The reasonable man adapts himself to the world; the unreasonable
one persists in trying to adapt the world to himself. Therefore
all progress depends on the unreasonable man."  -- G.B. Shaw
From: Raffael Cavallaro
Subject: Re: Why Lisp languishes
Date: 
Message-ID: <raffael-361993.20072601062001@news.ne.mediaone.net>
In article <·················@enterprise.net>, Donald Fisk 
<················@enterprise.net> wrote:

>Except, apparently, intelligence officers -- so it *can* be done.

But it's a useless skill to be able to tell if someone is lying when 
that person has nothing to lose if caught. The only useful skill is 
being able to tell when someone is lying in a *real life* situation, not 
a silly lab mock up.

Raf

-- 

Raffael Cavallaro, Ph.D.
·······@mediaone.net
From: Donald Fisk
Subject: Re: Why Lisp languishes
Date: 
Message-ID: <3B1B8F49.7CD95AA4@enterprise.net>
Raffael Cavallaro wrote:
> 
> In article <·················@enterprise.net>, Donald Fisk
> <················@enterprise.net> wrote:
> 
> >Except, apparently, intelligence officers -- so it *can* be done.
> 
> But it's a useless skill to be able to tell if someone is lying when
> that person has nothing to lose if caught. The only useful skill is
> being able to tell when someone is lying in a *real life* situation, not
> a silly lab mock up.

Intelligence officers also make use of their skills in situations
where it's useful.   I could tell you more, but then I'd have to
shoot you.

But you have still offered no evidence as to whether the skill of
being able to tell if someone is lying is widespread or limited to
just a few professionals.

> Raf

-- 
Le Hibou
"The reasonable man adapts himself to the world; the unreasonable
one persists in trying to adapt the world to himself. Therefore
all progress depends on the unreasonable man."  -- G.B. Shaw
From: Raffael Cavallaro
Subject: Re: Why Lisp languishes
Date: 
Message-ID: <raffael-F9A37B.20161901062001@news.ne.mediaone.net>
In article <·················@enterprise.net>, Donald Fisk 
<················@enterprise.net> wrote:

>Quite.   Some were, others presumably weren't.   And my original point
>was that people, in general, are very poor at it.   Neither you nor
>the article you cite has suggested otherwise.

We don't evaluate human abilities by the performance of average people.

We don't say "people have no musical ability" because most people can't 
play the piano (they can't). We don't say "people can't do calculus" 
because most people have no knowledge of calculus (they don't). Doing 
calculus was, by the way, the task of one of the pioneering AI programs. 
The fact that the overwhelming majority of people who have ever lived on 
the planet couldn't do calculus didn't stop researchers from using it as 
an example of human intelligence to be simulated by computer.

Rather, we define human abilities by those who have 'em, not those who 
don't. People have social skills to varying degrees. Many of these 
social skills are based on non-verbal communication. Your denials 
notwithstanding, human social skills and non-verbal communication are 
among the things that distinguish human intelligence from machine 
abilities on the one hand, and from non-human animals on the other.

Raf

-- 

Raffael Cavallaro, Ph.D.
·······@mediaone.net
From: Chris Johansen
Subject: Re: Why Lisp languishes
Date: 
Message-ID: <3B1DCD29.64BD7811@main.nc.us>
Raffael Cavallaro wrote:
> 
> In article <·················@enterprise.net>, Donald Fisk
> <··········@enterprise.net> wrote:
> 
> >"Professor Paul Ekman is one of the authors of our expert articles. In
> >his research work he set out to find out how good people were at
> >spotting liars.

> >Of 65 police officers shown the segments, 18 made no more than one error
> >in detecting lies and truths. Another 36 judged three or four segments
> >correctly, and the remaining 11 identified only one or two segments
> >correctly. Because the words were unrecognizable, they had to detect lies
> >using nonverbal cues and speech intonations.
> 
> Source: <http://www.sciencenews.org/20010303/fob3.asp>

Not to diminish serious research in any way, but here is 
an irreverent thought.  Replace "police officers" above 
with "coin tossers."  It is not farfetched that persons 
flipping coins to choose lies/truths could achieve a 
similar Gaussian distribution.  
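A minimal sketch in Lisp, since we're in the right newsgroup for it.
The assumptions are mine, not the article's: five segments per officer
(which the 18/36/11 split above suggests) and an even chance of judging
each segment correctly by pure guessing.

(defun coin-toss-officer (&optional (segments 5))
  "Return how many of SEGMENTS a purely guessing 'officer' judges correctly."
  (loop repeat segments count (zerop (random 2))))

(defun coin-toss-force (&optional (officers 65) (segments 5))
  "Return the list of scores for OFFICERS random guessers."
  (loop repeat officers collect (coin-toss-officer segments)))

(defun tally-scores (scores &optional (segments 5))
  "Print how many guessers achieved each possible score."
  (dotimes (k (1+ segments))
    (format t "~&~D correct: ~D" k (count k scores))))

;; (tally-scores (coin-toss-force))

Run it a few times and compare the spread with the counts quoted above.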

Regards,
--
Chris Johansen            <········@main.nc.us>
From: Donald Fisk
Subject: Re: Why Lisp languishes
Date: 
Message-ID: <3B0D9C61.EF78004D@enterprise.net>
Raffael Cavallaro wrote:
> 
> In article <·················@enterprise.net>, Donald Fisk
> <················@enterprise.net> wrote:
> 
> >I then
> >asked the speaker why, if 87% of information was conveyed to her through
> >my body language, that she hadn't understood me at all.   Furthermore,
> >had I sent the very same words I spoke by electronic mail to a Gael,
> >they would have understood my point completely.
> 
> Because
> 
> 1. the content of non-verbal communication is social, and unless you are
> discussing social matters, entirely independent of the semantic content
> of your speech. You're dealing with two entirely different channels. Your
> point is like saying that, because I can't understand what's going on in
> an episode of Friends dubbed into Gaelic, the sound content of a TV
> broadcast takes up more bandwidth than the picture (it doesn't).

In the Shannon sense, there might be more information communicated but
almost all of it is wasted bandwidth.   Most of the useful information I
wanted to communicate was in Gaelic, and most of the rest was a flag --
sticking my hand up to get noticed.   The flag can be represented by a
single bit.
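
To put a rough number on that -- a sketch only, with illustrative
probabilities of my own choosing: if a hand either goes up or stays
down with roughly even odds, the event carries one bit, however many
pixels or muscle movements it takes to transmit.

(defun self-information (p)
  "Bits of Shannon information carried by an event of probability P."
  (- (log p 2)))

;; (self-information 1/2)  => 1.0    ; a fair hand-up/hand-down flag: one bit
;; (self-information 1/10) => ~3.3   ; a rarer event carries a few more bits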

Going back to Friends -- if you couldn't understand it, that proves my
point.   There's a very interesting film called "What's up, Tiger 
Lily".   This was originally a Japanese film which flopped at the
box office there and Woody Allen bought the rights to it.   He then
dubbed it into English, completely changing the plot in the process.
This, perhaps surprisingly, worked, but would have confused just
about everyone had non-verbal communication dominated.

> 
> 2. most of the information that is conveyed non-verbally is below
> conscious awareness. You've clearly missed the whole point of this
> thread. The overwhelming majority of human cognition is completely
> unavailable to consciousness.
> 
> 
> 
> Non-verbal communication includes such information as who is the
> socially dominant person in the exchange, whether any of the parties
> have any sexual interest in each other, whether they bear each other any
> ill will, whether they are dissembling. Is the speaker tired, anxious,
> confused, etc.? People, having social skills, read these cues largely
> unconsciously; some, who are more aware than others, read them somewhat
> consciously.
> 
> None of this information would have been available to your email
> recipient, unless you took the trouble to explicitly discuss these
> matters, and even then, your recipient would have absolutely no way of
> evaluating your truthfulness (which s/he would in a face-to-face
> conversation).

No they wouldn't (see below for why).

Those social skills you apparently value so much are negatively
correlated with creativity and technical skills -- the first electronic
computer was developed by someone working in complete isolation and
built in his parents' living room.   The inventor, the importance of
whose work wasn't realized until recently, never managed to interest
the authorities in his machine despite the fact that there was a war
on and it had obvious military uses.   The Lisp Machine was invented
by someone *renowned* for his lack of social skills.

Today those social skills are in high demand, which means that those
with creative and technical skills (hereinafter called "hackers") often
don't get their point of view accepted or even heard, and are often
discouraged from even applying for jobs by ads which stipulate "must
be good team player".   (The only sports I've ever been interested into
involve cordite and high velocity lead.)   Office politics tends to
determine what gets done and of course hackers are very weak at it.
And good Lisp hackers, who could produce the output of a small team
if they used the language of choice, don't get to use it.   You have
to work with other people and nobody else uses Lisp, and it's been
no longer taught at universities, and it isn't buzzword compliant,
so it doesn't get used.   The little remaining work on it, at least
in the UK, is on legacy systems started in the 1980s.

The upshot of all this is that innovation in computing has ground
to a halt, and there is no way to get progress back on track that
is apparent to me.

I have three options available:

(1) Hang on in the hope that things will improve.   I can still
hack in other languages.   That is what I'm doing at the moment.

(2) Go back to college and learn a new, and perhaps more socially
useful, skill outside computing.

(3) Do a Stallman.   But unlike in Stallman's case, the culprit would
be almost the entire computer industry rather than just Symbolics,
and who would join me?   Like Stallman with Emacs, I have a product
-- a new computer language which I believe is a worthy successor
to Lisp, by John McCarthy's stated criterion.   There's a
partially working prototype and those few people who have seen
it realize it's not just another language.   I don't want it
GPLed because that would mean that the companies that have made
computing more boring than accountancy, and have stifled
innovation, would be able to benefit from it.

> >One skill -- telling whether someone is lying -- turns out not to
> >depend on social skills at all but highly technical ones possessed
> >few people outside the intelligence gathering community.   But many
> >people still think they're good at it.
> 
> No, you're wrong. Some people *are* good at it. Research conducted with
> police officers and taped interviews with suspects shows that some
> officers are better able than others (and much better than chance) at
> assessing the truthfulness of suspects. Some of the taped interviews
> were in languages that the officers did not understand, so they were
> relying in those cases completely on non-verbal cues. They don't rely on
> "highly technical [skills] possessed [by] few people outside the
> intelligence gathering community." When asked, they don't know how they
> can tell, which means, by definition, they are using unconscious,
> non-verbal communication.

"Professor Paul Ekman is one of the authors of our expert articles. In
his research work he set out to find out how good people were at
spotting liars. 

Non-experts like you and me scored no better than chance, suggesting
that we can't really tell when we're being lied to. He also tested the
experts - the people whose professions rely on them being able to
catch liars - police, trial judges, psychiatrists, and the people who
carry out lie detector tests. They also scored no better than chance.

It's surprising that none of these people could spot the liars -
but there was one group who could: one third of secret service agents
scored 80%."

-- From the BBC web site.

There's an important and obvious point that you don't seem to realize:
if the skill were at all common, and present in people with "good social
skills", there would never be any miscarriages of justice.   We could,
in fact, send people to the gallows for offences which they obviously
committed, safe in the knowledge that they deserved their fate.   We no
longer do, and thank God (and those enlightened MPs who got the death
penalty repealed), because since then we've been able to free literally
dozens of prisoners who would otherwise have met their end at the end
of a rope -- except those who died in prison of natural causes first.

> Raf
> 
> --
> 
> Raffael Cavallaro, Ph.D.
> ·······@mediaone.net

-- 
Le Hibou (the unhappy hacker)
"The reasonable man adapts himself to the world; the unreasonable
one persists in trying to adapt the world to himself. Therefore
all progress depends on the unreasonable man."  -- G.B. Shaw
From: Raffael Cavallaro
Subject: Re: Why Lisp languishes
Date: 
Message-ID: <raffael-8AD820.21474228052001@news.ne.mediaone.net>
In article <·················@enterprise.net>, Donald Fisk 
<················@enterprise.net> wrote:

>In the Shannon sense, there might be more information communicated but
>almost all of it is wasted bandwidth.   Most of the useful information I
>wanted to communicate was in Gaelic,

Most of the useful information that you were *consciously aware of* was 
in Gaelic. But all the useful social information you were unaware of was 
in your facial expression and body language.

You can't use your conscious awareness as a litmus test of what is 
"useful information", since your brain demonstrably processes far more 
information than you are consciously aware of.

Raf

-- 

Raffael Cavallaro, Ph.D.
·······@mediaone.net
From: Raffael Cavallaro
Subject: Re: Why Lisp languishes
Date: 
Message-ID: <raffael-982E99.21564028052001@news.ne.mediaone.net>
In article <·················@enterprise.net>, Donald Fisk 
<················@enterprise.net> wrote:

>Going back to Friends -- if you couldn't understand it it proves my
>point.

No, it proves that verbal semantics is a different channel from visual 
non-verbal communication. I can see very well what people are doing in a 
dubbed show. I can tell if they're happy, surprised, upset, sad, etc. I 
just can't understand their verbal meaning. Again, two different 
channels. One (the non-verbal) carries lots more information in a 
typical conversation. Remember, the typical human conversation is more 
like "What do you want 
to do for dinner?" "Oh, I don't know," than the contents of the Common 
Lisp HyperSpec. Of course we can imagine a conversation where two people 
sit in a darkened room, and one reads the encyclopedia to the other in 
as flat a monotone as he can muster, but this, again, is not what 
typical conversations are like. Unless of course that's what your 
typical conversation is like... that would explain a fair bit.


> There's a very interesting film called "What's up, Tiger 
>Lily".   This was originally a Japanese film which flopped at the
>box office there and Woody Allen bought the rights to it.   He then
>dubbed it into English, completely changing the plot in the process.
>This, perhaps surprisingly, worked, but would have confused just
>about everyone had non-verbal communication dominated.


It wasn't "interesting" it was a very funny comedy, and most of it's 
comic effect relied on the obvious incongruity between the actions and 
body language of the Japanese actors and the obviously bogus dialog 
Woody Allen had dubbed over them. Somehow this seems to have been lost 
on you, but that's little surprise, as you're bent on denying that 
people routinely engage in social exchanges that don't rely on the 
meaning of their words.

Raf

-- 

Raffael Cavallaro, Ph.D.
·······@mediaone.net
From: glauber
Subject: Re: Why Lisp languishes
Date: 
Message-ID: <892f97d1.0105300926.72ccfedc@posting.google.com>
Raffael Cavallaro <·······@mediaone.net> wrote in message news:<·····························@news.ne.mediaone.net>...

[... Tiger Lilly ...]

> It wasn't "interesting" it was a very funny comedy, and most of it's 
> comic effect relied on the obvious incongruity between the actions and 
> body language of the Japanese actors and the obviously bogus dialog 
> Woody Allen had dubbed over them.
[...]

I especially love the bit when somebody asks Woody if he can explain
the plot (which by then had been completely lost). He looks at the
camera and says "no!".

And of course, "two wongs don't make a wight!", which i could see
coming a mile away!

:-)

Still laughing out loud,

g
From: Donald Fisk
Subject: Re: Why Lisp languishes
Date: 
Message-ID: <3B1568ED.999F97EA@enterprise.net>
Raffael Cavallaro wrote:

> No, it proves that verbal semantics is a different channel from visual
> non-verbal communication. I can see very well what people are doing in
> a dubbed show.
> I can tell if they're happy, surprised, upset, sad, etc. I just can't
> understand their verbal meaning. Again, two different channels. One (the
> non-verbal) carries lots more information in a typical conversation.

I'd like to see proof of this other than by repeated assertion.

> Remember, the typical human conversation is more  like "What do you want
> to do for dinner?" "Oh, I don't know," than the contents of the Common
> Lisp HyperSpec. Of course we can imagine a conversation where two people
> sit in a darkened room, and one reads the encyclopedia to the other in
> as flat a monotone as he can muster, but this, again, is not what
> typical conversations are like. Unless of course that's what your
> typical conversation is like... that would explain a fair bit.

Ah, an ad hominem attack.

> It wasn't "interesting" it was a very funny comedy, and most of it's
> comic effect relied on the obvious incongruity between the actions and
> body language of the Japanese actors and the obviously bogus dialog
> Woody Allen had dubbed over them. Somehow this seems to have been lost
> on you, but that's little surprise, as you're bent on denying that
> people routinely engage in social exchanges that don't rely on the
> meaning of their words.

I'm not denying it -- just claiming that the information conveyed
by it has a relatively low bit rate -- and I defy you to prove
otherwise.

> Raf

-- 
Le Hibou
"The reasonable man adapts himself to the world; the unreasonable
one persists in trying to adapt the world to himself. Therefore
all progress depends on the unreasonable man."  -- G.B. Shaw
From: Timothy M. Schaeffer
Subject: Re: Why Lisp languishes
Date: 
Message-ID: <9f83il$1vce$1@allnight.news.cais.net>
> I'm not denying it -- just claiming that the information conveyed
> by it has a relatively low bit rate -- and I defy you to prove
> otherwise.
>

Communication is analog; thus your perceived bit-rate depends on your
sampling rate, doesn't it?
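
Up to a point -- an analog channel still has a finite capacity once
bandwidth and noise are taken into account (Shannon-Hartley). A rough
Lisp sketch, with purely illustrative numbers of my own choosing:

(defun channel-capacity (bandwidth-hz snr)
  "Shannon-Hartley capacity, in bits per second, of an analog channel
with bandwidth BANDWIDTH-HZ and a linear (not dB) signal-to-noise ratio SNR."
  (* bandwidth-hz (log (+ 1 snr) 2)))

;; (channel-capacity 3000 1000) => about 29900 bits/s
;; (roughly a telephone line: ~3 kHz bandwidth, ~30 dB SNR)

Sampling faster raises the measured bit rate only up to that ceiling;
beyond it you're just resampling noise.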


TMS
From: Stig Hemmer
Subject: Re: Philosophical Bee in my Bonnet
Date: 
Message-ID: <ekvsni3gd4g.fsf@proto.pvv.ntnu.no>
I would like to grab hold of something you wrote that nobody else
commented so far:

"John Flynn" <··········@yahoo.com.au> writes:
["Object Oriented Software Construction" by Bertrand Meyer]
> His arguments are quite convincing, for the most part. The reader is
> gradually introduced, piece by piece, to a language that is clean, spare,
> simple, object oriented, elegant, offers a particularly hassle-free
> implementation of multiple inheritance, constrained genericity, "design by
> constract" and all kinds of other facilities that help to build reliable
> software. The back of the book contains a CD demo of an Eiffel development
> environment. Software development "done right".
>
> And it is the crappiest, ugliest, most fault-ridden piece of software I have
> _ever_ used.

This is a very good anecdote, but I would like comments from others
before starting to use it...

Is it really so?

Stig Hemmer,
Jack of a Few Trades.

PS: I could have cross-posted this to comp.lang.eiffel.  I have chosen
   not to, to avoid another meaningless flame war.  Please respect that.
From: Lieven Marchand
Subject: Re: Philosophical Bee in my Bonnet
Date: 
Message-ID: <m3vgmyr4ge.fsf@localhost.localdomain>
Stig Hemmer <····@pvv.ntnu.no> writes:

> I would like to grab hold of something you wrote that nobody else
> commented so far:
> 
> "John Flynn" <··········@yahoo.com.au> writes:
> ["Object Oriented Software Construction" by Bertrand Meyer]
> > His arguments are quite convincing, for the most part. The reader is
> > gradually introduced, piece by piece, to a language that is clean, spare,
> > simple, object oriented, elegant, offers a particularly hassle-free
> > implementation of multiple inheritance, constrained genericity, "design by
> > constract" and all kinds of other facilities that help to build reliable
> > software. The back of the book contains a CD demo of an Eiffel development
> > environment. Software development "done right".
> >
> > And it is the crappiest, ugliest, most fault-ridden piece of software I have
> > _ever_ used.
> 
> This is a very good anecdote, but I would like comments from others
> before starting to use it...
> 
> Is it really so?
> 
> Stig Hemmer,
> Jack of a Few Trades.
> 
> PS: I could have cross-posted this to comp.lang.eiffel.  I have chosen
>    not to, to avoid another meaningless flame war.  Please respect that.
> 

I played with it a bit a couple of years ago when I was reading
OOSC, 2nd Ed. It's not as bad as John says, but it's not very good
either. It compiles Eiffel to C, which is apparently a lot of work, so
it tries to cache a lot of its work, but occasionally it gets
confused. Then you have to delete the cache manually. The environment
is very different from mainstream IDEs and I didn't like it, but
perhaps I haven't used it enough to start appreciating it. I've heard
the same comments about Lispworks, which I like very much.

It's not very stable and most problems seem to lie in the interface
between the Eiffel parts and the native GUI/OS layer. Design by
Contract may be a good idea but neither the Windows nor the Unix
system layer is easy to wrap in such a scheme.

-- 
Lieven Marchand <···@wyrd.be>
Glaðr ok reifr skyli gumna hverr, unz sinn bíðr bana.
From: Greg C
Subject: Re: Philosophical Bee in my Bonnet
Date: 
Message-ID: <62340e2a.0105211115.6d4036a7@posting.google.com>
Stig Hemmer <····@pvv.ntnu.no> wrote in message news:<···············@proto.pvv.ntnu.no>...
> I would like to grab hold of something you wrote that nobody else
> commented so far:
> 
> "John Flynn" <··········@yahoo.com.au> writes:
> ["Object Oriented Software Construction" by Bertrand Meyer]
> > His arguments are quite convincing, for the most part. The reader is
> > gradually introduced, piece by piece, to a language that is clean, spare,
> > simple, object oriented, elegant, offers a particularly hassle-free
> > implementation of multiple inheritance, constrained genericity, "design by
> > constract" and all kinds of other facilities that help to build reliable
> > software. The back of the book contains a CD demo of an Eiffel development
> > environment. Software development "done right".
> >
> > And it is the crappiest, ugliest, most fault-ridden piece of software I have
> > _ever_ used.
> 
> This is a very good anecdote, but I would like comments from others
> before starting to use it...
> 
> Is it really so?
> 
> Stig Hemmer,
> Jack of a Few Trades.

I've been using ISE's version 4.5 for the last six months or so. I'd
consider it an improvement over what is delivered with OOSC2. IMHO,
there are four ugly parts to dealing with it.

1) The lack of current documentation. This makes it difficult to learn
the non-standard interface of the environment.
2) Its reliance on third party C compilers. I think this contributes
enormously to the perceived instability of the product. If the
teensiest little thing goes wrong in the integration of the two
compilers, life gets nasty.
3) Some "standard" features are just not available. For example,
debugging capabilities are limited.
4) Each program generates a huge amount of compiler data. If this data
gets corrupted or out of sync, things can get nasty again.

The IDE _is_ eclectic, no doubt about that. But once you get used to
it, it can be very easy to use and I often wish I could get the same
features in other environments (VC++ class browsing is absolutely
brain-dead by comparison). Also, version 4.5 provides full command
line support for all of its tools, so if you want to do everything
outside of the IDE, you are free to do so. There are a couple of Emacs
bindings for Eiffel available.

And I keep hearing that the Next Release Due Out Any Month Now is a
vast improvement. Better (i.e. "complete") documentation, native
compilation for Eiffel#/.Net, a more fully-featured IDE, significant
improvements in compilation time/overhead. So ISE is gradually closing
the gap between the real and the ideal.
From: John Flynn
Subject: Re: Philosophical Bee in my Bonnet
Date: 
Message-ID: <eynO6.2189$Ld4.97153@ozemail.com.au>
"Greg C" <······@yahoo.com> wrote in message
·································@posting.google.com...

> I've been using ISE's version 4.5 for the last six months or so. I'd
> consider it an improvement over what is delivered with OOSC2. IMHO,
> there are four ugly parts to dealing with it.
>
> 1) The lack of current documentation. This makes it difficult to learn
> the non-standard interface of the environment.
> 2) It's reliance on third party C compilers. I think this contributes
> enormously to the perceived instability of the product. If the
> teensiest little thing goes wrong in the integration of the two
> compilers, life gets nasty.
> 3) Some "standard" features are just not available. For example,
> debugging capabilities are limited.
> 4) Each program generates a huge amount of compiler data. If this data
> gets corrupted or out of sync, things can get nasty again.
>
> The IDE _is_ eclectic, no doubt about that. But once you get used to
> it, it can be very easy to use  [...]

You haven't mentioned the best part yet ;-)

This should give Stig some idea of why I found it to be (almost
unbelievably) lousy. (c.July, 1997?).

Imagine you've bought a new text editor. It handles only one language, but
that's OK - one language is all you need right now.

You're a little surprised that it doesn't seem to be aware of the syntax of
the language. But that's OK too. Real men don't use colour syntax
highlighting. They _do_ indent their code though, and this editor just
doesn't know how to do it.

Well, that's OK too. You just have to hit the tab key more often than you'd
like to. Not a big deal. Maybe when you get sufficiently irritated, you can
write a quick macro that handles the indenting. Bind it to a convenient key
combo like M-j, and all is well.

But you're out of luck, because there's no support for macros or scripting
of any kind. Nor regular expressions, nor anything else you might find in a
programmer's editor. (By now you're starting to wonder what _does_ it do? Is
it just a bare window?).

But what the hell. As long as you can enter plain text into a window, that's
all you really need, right?

Well, unfortunately you can't even do _that_ !

When you get to the bottom of a page, the damn thing doesn't know how to
scroll down! It lets you continue typing lines in the dark until you pick up
the mouse and manually scroll down to see what you've done!

By now you're starting to wonder how they actually implemented this thing.
If they'd just plonked a stock standard Windows edit control into a frame
window (not even a line of code necessary), it would have had automatic
scrolling capabilities. So, what could they have done to fuck it up so
badly?

Well, it seems they've used a high quality portable GUI library, so that it
will behave exactly the same way on Unix.

Because portability is a Good Thing.

Some time after this, I noticed on ISE's web site an advertisement for a new
version of the product. Among other things, it boasted: "AUTOMATIC
SCROLLING". (So, provided the scrolling was implemented correctly, it should
now have risen to the lofty heights of 'notepad').

If this had been the work of my 15-year-old niece, say, her first attempt
at building a portable text editor, I'd still think it was a poor effort -
because she'd have had to go out of her way to make it so bad.

But when you consider that it's not the work of your niece, but the work of
a world-renowned and highly esteemed expert who makes his living authoring
tomes of software engineering lore, lecturing business IT professionals on
quality software engineering techniques, warning them to "beware of C
hackers" because they don't understand OO (and therefore can't write good
modern software), warning managers that languages like Lisp and Smalltalk
aren't up to scratch in today's world because they are not 'type safe' -
then it goes beyond disappointing. It deserves tar and feathers.

This experience with the editor might have caused me to view the rest of the
environment with a jaundiced eye. I was not favourably impressed.

The user interface was indeed eclectic (and a trifle eccentric), but I
didn't mind that. Parts of it struck me as more gimmicky than genuinely
useful (eg. dragging objects up to toolbar buttons instead of just clicking
the toolbar buttons with an object highlighted?), but on the whole it didn't
bother me. (I was hoping to like it, believe me. I _really_ liked the Eiffel
language, and still find a lot of Meyer's ideas quite sound in theory ...).

What DID bother me was that the whole environment seemed to TRY to thwart
the user at every stage. (And everybody I spoke to at my university had the
same impression).

I have only a vague memory now, but I seem to recall having to exit, restart
and touch the "Ace" file every time I started a new project in order to have
it recognise the WEL libraries, even though they were already present in the
Ace file when I elected to create a new Windows project. (I may not
remember the details correctly, but I remember the irritation and hassle
very clearly).

Beyond the fiddly annoyances in getting started, I remember a lot of
inexplicable hangs during compilation, and generally sluggish performance
all round. I often had to kill the process, restart and try again. Very hit
and miss. (Never experienced anything like this with gcc, though if you
believe what you read in OOSC2, this ought to be close to impossible).

So I was not impressed with the product at all. I even installed it several
more times in the weeks ahead, because I couldn't quite believe my first
impressions. But it didn't get any better. It sucked every time.

NB: I don't blame Eiffel for the poor quality of the product, and I
certainly don't believe that Windows and Unix are somehow incompatible with
'Design by Contract'. I think the most likely explanation is that it was
written by a bunch of "software engineers" who were concerned about writing
clean code according to their doctrine, but not very concerned about their
users' experience.

Stig: if you're still reading, I don't see any possible legitimate use of my
opinion. I would strongly recommend that nobody take my word for it. Anyone
who's interested in ISE Eiffel can download a time-limited version of the
product (with four years' worth of enhancements!). Please find out for
yourselves whether it's any good or not.
From: Greg C
Subject: Re: Philosophical Bee in my Bonnet
Date: 
Message-ID: <62340e2a.0106041523.1ce96bb9@posting.google.com>
"John Flynn" <··········@yahoo.com.au> wrote in message news:<····················@ozemail.com.au>...
> "Greg C" <······@yahoo.com> wrote in message
> ·································@posting.google.com...
> 
> > I've been using ISE's version 4.5 for the last six months or so. I'd
> > consider it an improvement over what is delivered with OOSC2. IMHO,
> > there are four ugly parts to dealing with it.
> >
[...]
> 
> You haven't mentioned the best part yet ;-)
> 
> This should give Stig some idea of why I found it to be (almost
> unbelievably) lousy. (c.July, 1997?).
> 
> Imagine you've bought a new text editor. It handles only one language, but
> that's OK - one language is all you need right now.
> 
> You're a little surprised that it doesn't seem to be aware of the syntax of
> the language. But that's OK too. Real men don't use colour syntax
> highlighting. They _do_ indent their code though, and this editor just
> doesn't know how to do it.
[...]

Version 4.5 still doesn't auto-indent while you are typing, but you
can pretty-print files and get colored syntax highlighting, and once a
class is syntactically correct, all the references to other classes
become hyperlinks that you can click on to bring up a window with that
class.

The pretty-print functionality can reformat one class, or you can
point it at a cluster (if I had documentation, I might be able to get
it to do everything in a project -- oh well.)

Perhaps the greatest "improvement" with 4.5? The compiler can be
completely driven from the command line! So if you really want to, you
can set up your own IDE, say inside Emacs, and never look at the ISE
GUI tools.

So yes, older versions were scruddy, yet the newer versions are a
great improvement, and if you believe the rumors about 5.0 and/or 4.6,
the product will continue to improve.

> Stig: if you're still reading, I don't see any possible legitimate use of my
> opinion. I would strongly recommend that nobody take my word for it. Anyone
> who's interested in ISE Eiffel can download a time limited version of the
> product (with four years worth of enhancements!). Please find out for
> yourselves whether it's any good or not.

I've heard rumors that a new version is due out this month. Might be
worth waiting a couple weeks to see if that's true before downloading
something.

Greg