From: Chris Perkins
Subject: history of AI Winter?
Date: 
Message-ID: <6cb6c81f.0304141423.76dd6197@posting.google.com>
I was re-reading PAIP this weekend, admiring Prolog, and looking over
the TI Explorer manuals posted on LemonOdor, when I got to wondering 
"What happened?"  "How did a language like Lisp, with such abstractive
power and productivity fall into disfavor, or get passed by?"

Did Lisp simply never gain a significant 'mindshare'?  Or did it once
have it and then lose it?  If so, how?

I did not study computers in college (in the '80s), where I might have
had an introduction.  I am mostly self taught and all I knew about
Lisp until two years ago was that it was for AI and had lots of
parentheses.

I'm sure many of you have your own Lisp history synopses.  Care to
share?

From: Jochen Schneider
Subject: Re: history of AI Winter?
Date: 
Message-ID: <uhe902di4.fsf@isg.cs.uni-magdeburg.de>
There is a text that might interest you:

Eve M. Phillips, "If It Works, It's Not AI: A Commercial Look at
Artificial Intelligence Startups". MSc thesis done under the
supervision of Patrick Winston.
<http://kogs-www.informatik.uni-hamburg.de/~moeller/symbolics-info/ai-business.pdf>

        Jochen
From: Pascal Costanza
Subject: Re: history of AI Winter?
Date: 
Message-ID: <b7guv6$h48$1@f1node01.rhrz.uni-bonn.de>
Chris Perkins wrote:
> I was re-reading PAIP this weekend, admiring Prolog, and looking over
> the TI Explorer manuals posted on LemonOdor, when I got to wondering 
> "What happened?"  "How did a language like Lisp, with such abstractive
> power and productivity fall into disfavor, or passed by?"

My recent "theory" about this goes like this: Lisp doesn't fit on slides.

Lisp's power comes from the fact that it is a highly dynamic language. 
It's hard to understand many of its features when you actually haven't 
experienced them. It's really hard to explain dynamics on slides and on 
paper (articles and books).
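
A trivial REPL transcript gives at least a taste of what I mean (a toy 
example; the really interesting cases, like CLOS class redefinition and 
the MOP, are exactly the ones that don't reduce to a few lines):

;; Redefine a function while the image keeps running; every caller
;; picks up the new definition immediately -- no recompile, no restart.
(defun greet (name)
  (format nil "Hello, ~A" name))

(defun report (name)
  (greet name))

(report "world")    ; => "Hello, world"

;; Later, with the system still live:
(defun greet (name)
  (format nil "Bonjour, ~A" name))

(report "world")    ; => "Bonjour, world"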

I am convinced that static features are preferred over dynamic features, 
because you can easily explain static features on slides and on paper, 
such that even non-programmers believe they understand them. For 
example, static type systems, especially when they use explicit types. 
UML diagrams. Certain software development processes. You get the idea.

Many decisions about languages and tools are made by people who don't 
actually write programs. The recent trend towards scripting languages is 
a clear sign that programmers want something different. The term 
"scripting languages" is a clever invention because it downplays the 
importance of their features. "Of course, we use a serious language for 
developing our components, we just use a bunch of scripts to glue them 
together."

AspectJ is another such clever invention. Aspect-oriented programming 
takes the ideas of metaobject protocols and turns them into something 
static, into something that fits on slides.

To paraphrase what Richard Gabriel said at a conference a while ago: 
It's time to vote with our feet.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Gorbag
Subject: Re: history of AI Winter?
Date: 
Message-ID: <BAC17C92.3128%gorbagNOSPAM@NOSPAMmac.com>
On 4/15/03 5:46 AM, in article ············@f1node01.rhrz.uni-bonn.de,
"Pascal Costanza" <········@web.de> wrote:

> Chris Perkins wrote:
>> I was re-reading PAIP this weekend, admiring Prolog, and looking over
>> the TI Explorer manuals posted on LemonOdor, when I got to wondering
>> "What happened?"  "How did a language like Lisp, with such abstractive
>> power and productivity fall into disfavor, or passed by?"
> 
> My recent "theory" about this goes like this: Lisp doesn't fit on slides.
> 

I think AI winter had very little to do with Lisp per se, and more to do
with government funding, certain personalities overblowing expectations, and
making folks believe that all this wonderful stuff was intimately tied with
the power of Lisp, PROLOG, and other "AI languages". But believe you me,
Lisp DID "fit on the slides." (DARPA would not have taken such an interest
in Common Lisp if they didn't think they were going to get a substantial
benefit in integrating the output from all the university AI efforts.)

Consider that the end of the Japanese fifth generation project came about in
the late 80s. This project was very much about AI and high productivity
tools like Lisp and PROLOG. The very threat that the Japanese might beat us
to critical applications of these technologies caused the government to fund
very expensive research centers like MCC in Austin, Texas (full disclaimer:
I worked there for a time until they shut down their software programs). MCC
was originally a very high-overhead operation, full of folks, whom I will not
name, many of whom made large, overblown claims as to what they would be able
to do in 10 years (the length of the primary MCC funding). Large
corporations (like GE, etc.) bought $1M shares in the enterprise, funding
the obscene overhead rates of 12-1(!!) just to make these researchers happy.
(While I was not there at the time, I was regaled with tales of wine at
lunch, waiters, free popcorn and soda all day long, "Fajita Fridays", etc.
Not to mention relocation packages that included assistance from the
Governor's office!) Of course, MCC was probably only the most egregious
example, and there were pockets of hyperbole from Boston to Stanford, all
eager to suck at the government teat.

And believe me, all these guys were very very public about how languages
like Lisp were going to make these very hard problems (anyone remember the
MIT summer vision project?) obscenely easy.

Needless to say, things didn't work out that way.

Had folks been more realistic (and I know there were a few sober minds at
the time who were trying to be realistic) there may not have been such a
pump of funds, but the area would not have been set back 10 years either
during the dump. The largest problem was probably not so much that Lisp is
slow, but that the algorithms of the time were not practical on the hardware
of the time. For instance, parsers would typically take anywhere from hours
to days to process a sentence (with the exception of the Tomita parser, but
this was the first in a series of 90% coverage parsers which folks at the
time were not ready to countenance). Researchers using Lisp would extol the
benefits of rapid prototyping with the essential mantra that "first we have
to figure out how to do it at all, and we can figure out how to do it fast
later". True, but beside the point if you are fundamentally in marketing
mode and need to deliver on the hype. Many programs that were incredibly
slow at the time will, with some performance work and still in Lisp, run in
"near real-time" on today's hardware. For instance, parsers that might have
taken hours will now run in less than a second using modern techniques
(everything is not a property) on fast Xeon machines, or even current
generation SPARCstations.
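
To make the "figure out how to do it fast later" point concrete, the kind
of performance work I mean is roughly the memoization trick PAIP applies
to its parsers, shown here on a toy function rather than a real grammar:

;; Wrap an exponential-time recursive function with a memo table.  This,
;; plus type declarations and a decent compiler, is a large part of why
;; old Lisp prototypes run "near real-time" on modern machines.
(defun memo (fn &key (test #'equal))
  "Return a memoizing wrapper around FN, keyed on the whole argument list."
  (let ((table (make-hash-table :test test)))
    (lambda (&rest args)
      (multiple-value-bind (val found-p) (gethash args table)
        (if found-p
            val
            (setf (gethash args table) (apply fn args)))))))

(defun memoize (fn-name)
  "Replace FN-NAME's global definition with a memoized version.
Caveat: a compiler is allowed to treat recursive self-calls specially,
in which case inner calls may bypass the memo table."
  (setf (symbol-function fn-name) (memo (symbol-function fn-name))))

;; Toy stand-in for a slow recognizer: exponential without the cache.
(defun slow-count (n)
  (if (< n 2) 1 (+ (slow-count (- n 1)) (slow-count (- n 2)))))

(memoize 'slow-count)
;; (slow-count 80) now returns at once instead of effectively never.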

So, Lisp and PROLOG got dragged into AI winter because they were so closely
associated with the hype. Companies like Symbolics and TI benefited from all
the corporate and university interest in launching projects to "keep up with
the Japanese" and use expert systems like KEE that ran tolerably slowly on the
Lisp hardware, because soon "everyone would be a knowledge engineer",
"programming was obsolete", etc. These companies went public while the hype
was still hot (though interestingly enough, had probably already peaked).
When the pendulum swung, those who had tied their fortunes to the fad du
jour were caught in the backdraft.

Some folks who were fortunate enough to be associated with pockets of work
that did not overly hype their wares were affected slightly less harshly.
Substantial portions of this work began to concentrate on efforts that were
tied to practical problems where actual results could be shown. Until the
mid 90s, it was common to build entire NLP systems, for instance, that could
only actually handle one particular scripted interaction of text, with the
purpose of showing how a technique could handle some obscure use of
language. In the mid 90s (and I am proud to say that I was associated with
this change), the emphasis became one of general coverage with less perfect
results - we could now actually start to measure coverage, effectiveness,
etc. which was impossible (or more accurately nil) in the old way of
building systems. 

As a result, we are only now starting to see enough momentum that folks
outside of AAAI conferences are beginning to sense that AI is starting to
deliver. A major credit is the application of AI to systems that a lot of
folks get exposure to, such as games. Because about 1/2 the retrenchment has
come from folks who are more interested in engineering than in scientific AI
(that is, they are more interested in creating systems with particular
features or behaviors than in exploring the mechanisms of cognition), most,
though not all, of the work being done today is in non-AI languages. This is
especially true for work that has concentrated on more mathematical or
statistical approaches, such as machine vision, machine learning, and even
recently parsing. Universities also are finding their talent pool is filled
with graduate students who know languages like Java or C++, and this has
also pushed a tendency for new systems to be implemented in these
languages. Lisp is not dead in AI, but it tends to be concentrated in a few
areas, and even these are changing over to other languages. I expect Lisp to
continue to be used but it will not grow as fast as the area (it will
continue to lose market share). Even when I continue to use it in projects,
I usually have to sell it as a "direct executable specification language"
so there is some hope that the effort can be "thrown over the wall" at some
point.

One short perspective from the trenches,
Brad Miller
From: Kenny Tilton
Subject: Re: history of AI Winter?
Date: 
Message-ID: <3E9CA231.9040400@nyc.rr.com>
Very interesting stuff. Someone should write a serious history of this 
language. i think an oral history would be a blast, since so many of the 
pioneers are still with us. any writers out there?

Gorbag wrote:
> On 4/15/03 5:46 AM, in article ············@f1node01.rhrz.uni-bonn.de,
> "Pascal Costanza" <········@web.de> wrote:
> 
> 
>>Chris Perkins wrote:
>>
>>>I was re-reading PAIP this weekend, admiring Prolog, and looking over
>>>the TI Explorer manuals posted on LemonOdor, when I got to wondering
>>>"What happened?"  "How did a language like Lisp, with such abstractive
>>>power and productivity fall into disfavor, or passed by?"
>>
>>My recent "theory" about this goes like this: Lisp doesn't fit on slides.
>>
> 
> 
> I think AI winter had very little to do with Lisp per se, and more to do
> with government funding, certain personalities overblowing expectations, and
> making folks believe that all this wonderful stuff was intimately tied with
> the power of Lisp, PROLOG, and other "AI languages". But believe you me,
> Lisp DID "fit on the slides." (DARPA would not have taken such an interest
> in Common Lisp if they didn't think they were going to get a substantial
> benefit in integrating the output from all the university AI efforts.)
> 
> Consider that the end of the Japanese fifth generation project came about in
> the late 80s. This project was very much about AI and high productivity
> tools like Lisp and PROLOG. The very threat that the Japanese might beat us
> to critical applications of these technologies caused the government to fund
> very expensive research centers like MCC in Austin, Texas (full disclaimer:
> I worked there for a time until they shut down their software programs). MCC
> was originally a very high overhead operation, full of folks whom I will not
> name many of whom made large overblown claims as to what they would be able
> to do in 10 years (the length of the primary MCC funding). Large
> corporations (like GE, etc.) bought $1M shares in the enterprise, funding
> the obscene overhead rates of 12-1(!!) just to make these researchers happy.
> (While I was not there at the time, I was regaled with tales of wine at
> lunch, waiters, free popcorn and soda all day long, "Fahita Fridays" etc.
> Not to mention relocation packages that included assistance from the
> Governor's office!) Of course, MCC was probably only the most egregious
> example, and there were pockets of hyperbole from Boston to Stanford, all
> eager to suck at the government teat.
> 
> And believe me, all these guys were very very public about how languages
> like Lisp were going to make these very hard problems (anyone remember the
> MIT summer vision project?) obscenely easy.
> 
> Needless to say, things didn't work out that way.
> 
> Had folks been more realistic (and I know there were a few sober minds at
> the time who were trying to be realistic) there may not have been such a
> pump of funds, but the area would not have been set back 10 years either
> during the dump. The largest problem was probably not so much that Lisp is
> slow,...

<gasp!> Not "was slow"? Did they have decent compilers back then?

> ... but that the algorithms of the time were not practical on the hardware
> of the time. For instance, parsers would typically take anywhere from hours
> to days to process a sentence (with the exception of the Tomita parser, but
> this was the first in a series of 90% coverage parsers which folks at the
> time were not ready to countenance). Researchers using Lisp would extol the
> benefits of rapid prototyping with the essential mantra that "first we have
> to figure out how to do it at all, and we can figure out how to do it fast
> later". True, but beside the point if you are fundamentally in marketing
> mode and need to deliver on the hype. Many programs that were incredibly
> slow at the time with some performance work and still in Lisp will run in
> "near real-time" on today's hardware. For instance, parsers that might have
> taken hours will now run in less than a second using modern techniques
> (everything is not a property) on fast Xeon machines, or even current
> generation SPARCstations.
> 
> So, Lisp and PROLOG got dragged into AI winter because they were so closely
> associated with the hype. Companies like Symbolics and TI benefited from all
> the corporate and university interest in launching projects to "keep up with
> the Japanese", use expert systems like KEE that ran tolerably slowly on the
> lisp hardware because soon "everyone would be a knowledge engineer" and
> "programming was obsolete" etc. These companies went public while the hype
> was still hot (though interestingly enough, had probably already peaked).
> When the pendulum swung, those who had tied their fortunes to the fad du
> jour were caught in the backdraft.
> 
> Some folks who were fortunate enough to be associated with pockets of work
> that did not overly hype their wares were effected slightly less harshly.
> Substantial portions of this work began to concentrate on efforts that were
> tied to practical problems where actual results could be shown. Until the
> mid 90s, it was common to build entire NLP systems, for instance, that could
> only actually handle one particular scripted interaction of text, with the
> purpose of showing how a technique could handle some obscure use of
> language. In the mid 90s (and I am proud to say that I was associated with
> this change), the emphasis became one of general coverage with less perfect
> results - we could now actually start to measure coverage, effectiveness,
> etc. which was impossible (or more accurately nil) in the old way of
> building systems. 
> 
> As a result, we are only now starting to see enough momentum that folks
> outside of AAAI conferences are beginning to sense that AI is starting to
> deliver. A major credit is the application of AI to systems that a lot of
> folks get exposure to, such as games. Because about 1/2 the retrenchment has
> come from folks who are more interested in engineering than in scientific AI
> (that is, they are more interested in creating systems with particular
> features or behaviors than in exploring the mechanisms of cognition)...

That is my theory on why Cells (and Garnet and COSI) rule but multi-way, 
non-deterministic constraint systems are used only in application 
niches. The simpler, one-way systems were developed in response to 
engineering demands, while the multi-way systems were reaching for the stars 
in understandable enthusiasm over constraints as a paradigm.
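
By "one-way" I mean something as small as this sketch (a toy only,
nothing like the real Cells/Garnet/COSI machinery): a cell is either a
plain input or a cached formula over other cells, and setting an input
just marks its users stale.

;; Minimal one-way "cell": no solver, no multi-way propagation -- which
;; is exactly what keeps these systems simple.  (Transitive invalidation
;; is omitted to keep the toy short.)
(defstruct cell value formula (stale t) (users nil))

(defun cell-ref (c)
  (when (and (cell-formula c) (cell-stale c))
    (setf (cell-value c) (funcall (cell-formula c))
          (cell-stale c) nil))
  (cell-value c))

(defun cell-set (c new-value)
  (setf (cell-value c) new-value)
  (dolist (user (cell-users c))
    (setf (cell-stale user) t)))

;; Usage:
(defparameter *width* (make-cell :value 10 :stale nil))
(defparameter *double-width*
  (make-cell :formula (lambda () (* 2 (cell-ref *width*)))))
(push *double-width* (cell-users *width*))

(cell-ref *double-width*)   ; => 20
(cell-set *width* 21)
(cell-ref *double-width*)   ; => 42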

 >, most,
> though not all of the work being done today is in non-AI languages. This is
> especially true for work that has concentrated on more mathematical or
> statistical approaches, such as machine vision, machine learning, and even
> recently parsing. Universities also are finding their talent pool is filled
> with graduate students who know languages like Java or C++, and this has
> also pushed a new wave tendency for systems to be implemented in these
> languages. Lisp is not dead in AI, but it tends to be concentrated in a few
> areas, and even these are changing over to other languages. I expect Lisp to
> continue to be used but it will not grow as fast as the area (it will
> continue to lose market share). Even when I continue to use it in projects,
> I usually have to sell it as an "direct executable specification language"
> so there is some hope that the effort can be "thrown over the wall" at some
> point.

Hey, maybe you can sell Lisp as a fast Python. :) Talk about coming in 
thru the backdoor. The funny thing is, Norvig already completed the 
C-Python-Lisp bridge with his Python-Lisp paper, and Graham just made 
the connection by talking up Lisp as the language for 2100 (paraphrasing 
somewhat <g>).

Am I the only one who sees here the ineluctable triumph of Lisp?

-- 

  kenny tilton
  clinisys, inc
  http://www.tilton-technology.com/
  ---------------------------------------------------------------
"Everything is a cell." -- Alan Kay
From: Michael D. Sofka
Subject: Re: history of AI Winter?
Date: 
Message-ID: <xui1y02egtz.fsf@mintaka.cct.rpi.edu>
Kenny Tilton <·······@nyc.rr.com> writes:

> Very interesting stuff. Someone should write a serious history of this
> language. i think an oral history would be a blast, since so many of
> the pioneers are still with us. any writers out there?
> 

History of Programming Languages II included an interesting article
by Guy Steele (IIRC---I can't find my copy).

Mike

--
Michael D. Sofka             ······@rpi.edu
C&CT Sr. Systems Programmer    AFS/DFS, email, usenet, TeX, epistemology.
Rensselaer Polytechnic Institute, Troy, NY.  http://www.rpi.edu/~sofkam/
From: Arthur T. Murray
Subject: Re: history of AI Winter?
Date: 
Message-ID: <3e9d6bef@news.victoria.tc.ca>
If said "AI Winter" ever existed, it
is thawing into Spring right now :-)

Mentifex
-- 
http://www.scn.org/~mentifex/theory5.html -- AI4U Theory of Mind;
http://www.scn.org/~mentifex/jsaimind.html -- Tutorial "Mind-1.1" 
http://www.scn.org/~mentifex/mind4th.html -- Mind.Forth Robot AI;
http://www.scn.org/~mentifex/ai4udex.html -- Index for book: AI4U
From: Erann Gat
Subject: Re: history of AI Winter?
Date: 
Message-ID: <gat-1604031005540001@k-137-79-50-101.jpl.nasa.gov>
In article <················@nyc.rr.com>, Kenny Tilton
<·······@nyc.rr.com> wrote:

> Very interesting stuff. Someone should write a serious history of this 
> language. i think an oral history would be a blast, since so many of the 
> pioneers are still with us. any writers out there?

www.dreamsongs.com/NewFiles/Hopl2.pdf

E.
From: Paolo Amoroso
Subject: Re: history of AI Winter?
Date: 
Message-ID: <yH2dPrelu00TpOmJy6A4PVc2zN+r@4ax.com>
On Wed, 16 Apr 2003 00:14:31 GMT, Kenny Tilton <·······@nyc.rr.com> wrote:

> Very interesting stuff. Someone should write a serious history of this 
> language. i think an oral history would be a blast, since so many of the 

You might check "The Brain Makers".


Paolo
-- 
Paolo Amoroso <·······@mclink.it>
From: Carl Shapiro
Subject: Re: history of AI Winter?
Date: 
Message-ID: <ouy1y01eo5v.fsf@panix3.panix.com>
Paolo Amoroso <·······@mclink.it> writes:

> On Wed, 16 Apr 2003 00:14:31 GMT, Kenny Tilton <·······@nyc.rr.com> wrote:
> 
> > Very interesting stuff. Someone should write a serious history of this 
> > language. i think an oral history would be a blast, since so many of the 
> 
> You might check "The Brain Makers".

I certainly wouldn't.  That book has a startling number of factual
errors.
From: Paolo Amoroso
Subject: Re: history of AI Winter?
Date: 
Message-ID: <sa6ePqzB5Ti3R1LjLD+hFLBlh6mu@4ax.com>
On 17 Apr 2003 00:35:56 -0400, Carl Shapiro <·············@panix.com>
wrote:

> Paolo Amoroso <·······@mclink.it> writes:
[...]
> > You might check "The Brain Makers".
> 
> I certainly wouldn't.  That book has a startling number of factual
> errors.

Could you please provide a few examples?


Paolo
-- 
Paolo Amoroso <·······@mclink.it>
From: Jeff Katcher
Subject: Re: history of AI Winter?
Date: 
Message-ID: <1a739260.0304181155.e03a301@posting.google.com>
Carl Shapiro <·············@panix.com> wrote in message news:<···············@panix3.panix.com>...
> Paolo Amoroso <·······@mclink.it> writes:
> 
> > On Wed, 16 Apr 2003 00:14:31 GMT, Kenny Tilton <·······@nyc.rr.com> wrote:
> > 
> > > Very interesting stuff. Someone should write a serious history of this 
> > > language. i think an oral history would be a blast, since so many of the 
> > 
> > You might check "The Brain Makers".
> 
> I certainly wouldn't.  That book has a startling number of factual
> errors.
Can you please cite examples?  I had the pleasure :) of seeing much of
what the author described from a front row seat.  My memory (and it's
been a while since I read it) tells me that it caught the piquancy of
the era pretty well, certainly the people that I knew anyway.

Jeffrey Katcher
From: Paolo Amoroso
Subject: Re: history of AI Winter?
Date: 
Message-ID: <nX2dPiaVFXzJmm5jxOY3oJ6CZLA+@4ax.com>
On Tue, 15 Apr 2003 09:08:34 -0700, Gorbag <············@NOSPAMmac.com>
wrote:

> tools like Lisp and PROLOG. The very threat that the Japanese might beat us
> to critical applications of these technologies caused the government to fund
> very expensive research centers like MCC in Austin, Texas (full disclaimer:

What does MCC stand for?


Paolo
-- 
Paolo Amoroso <·······@mclink.it>
From: Joe Marshall
Subject: Re: history of AI Winter?
Date: 
Message-ID: <y92aw44q.fsf@ccs.neu.edu>
Paolo Amoroso <·······@mclink.it> writes:

> On Tue, 15 Apr 2003 09:08:34 -0700, Gorbag <············@NOSPAMmac.com>
> wrote:
> 
> > tools like Lisp and PROLOG. The very threat that the Japanese might beat us
> > to critical applications of these technologies caused the government to fund
> > very expensive research centers like MCC in Austin, Texas (full disclaimer:
> 
> What does MCC stand for?

Micro Computer Consortium (if I recall correctly)
From: Michael D. Kersey
Subject: Re: history of AI Winter?
Date: 
Message-ID: <3E9E14D8.8D130D7E@hal-pc.org>
Paolo Amoroso wrote:
> 
> On Tue, 15 Apr 2003 09:08:34 -0700, Gorbag <············@NOSPAMmac.com>
> wrote:
> 
> > tools like Lisp and PROLOG. The very threat that the Japanese might beat us
> > to critical applications of these technologies caused the government to fund
> > very expensive research centers like MCC in Austin, Texas (full disclaimer:
> 
> What does MCC stand for?

Microelectronics and Computer Technology Corporation.
http://www.mcc.com/ is the URL of what remains.
From: Gorbag
Subject: Re: history of AI Winter?
Date: 
Message-ID: <BAC47B20.3310%gorbagNOSPAM@NOSPAMmac.com>
On 4/16/03 9:07 AM, in article ····························@4ax.com, "Paolo
Amoroso" <·······@mclink.it> wrote:

> On Tue, 15 Apr 2003 09:08:34 -0700, Gorbag <············@NOSPAMmac.com>
> wrote:
> 
>> tools like Lisp and PROLOG. The very threat that the Japanese might beat us
>> to critical applications of these technologies caused the government to fund
>> very expensive research centers like MCC in Austin, Texas (full disclaimer:
> 
> What does MCC stand for?

Well, it's a lawyer and a lightbulb now, but you can still see for yourself:

http://www.mcc.com

For those of you too lazy to click through,

Microelectronics and Computer Technology Corporation.

For a short while, I think it was called MCTC, but they shortened it to MCC
since they liked it better.
> 
> 
> Paolo
From: Eray Ozkural  exa
Subject: Re: history of AI Winter?
Date: 
Message-ID: <fa69ae35.0304181311.11af9e3f@posting.google.com>
Gorbag <············@NOSPAMmac.com> wrote in message news:<··························@NOSPAMmac.com>...
> I think AI winter had very little to do with Lisp per se, and more to do
> with government funding, certain personalities overblowing expectations, and
> making folks believe that all this wonderful stuff was intimately tied with
> the power of Lisp, PROLOG, and other "AI languages". But believe you me,
> Lisp DID "fit on the slides." (DARPA would not have taken such an interest
> in Common Lisp if they didn't think they were going to get a substantial
> benefit in integrating the output from all the university AI efforts.)

In my opinion, the relatively slow rate of development in AI research
should not be attributed to any particular programming system. As a
matter of fact, when I look at programming languages today, I think
LISP was the first real programming language and it was C-like
languages which made it harder to write complex software! Today, IMHO,
languages such as OCaml are bound to close the gap between the
theoretical mindset a scientist must accumulate and the practical
creativity that is the mark of a good programmer. (Though, it should
be said that there is no such thing as a "perfect programming
language".)

After all, without the proper algorithm no language can help you
realize it. However, an increased effectiveness in putting theory into
functional systems will speed up development cycles. My opinion is
that this cycle right now is so long that no single research group can
see their real goals through. It might take about 3-4 years to realize a
simple idea, which is the average duration a graduate student will
stay at a university. We are probably being prevented by the finite
bounds of human patience and ambition :) A physicist friend told me
that very few physicists can be in full command of both general
relativity theory and quantum theory at the same time. This might be
what happens when you push to the limits.

As you have guessed, I will say that the real problem is a theoretical
crisis. Many young researchers seem to think that when we give
something a snappy name like genetic algorithms or neural networks, it
ought to be the ultimate theoretical treatment in AI or that when we
can program robots to do simple tasks, we are actually doing science.
That is, however, ultimately wrong. And it is exactly this kind of
"Starvation of Ideas" that led to the AI winter. There was simply not
enough innovation. We lacked, I think, the pursuit of higher order
intelligence problems. We were stuck with easy problems and a few
inefficient and low quality solutions that have no chance of scaling
up.

In my opinion, however, the rebirth of machine learning research is a
good direction. A lot of people are now trying to find hard problems
and objectively evaluating a range of methods. I think we have finally
gotten rid of the "connectionist" and "symbolic" camps; decades later this
will be seen as one of the most senseless distinctions ever drawn in a
scientific community. The mind is computational or not. That simple. And if
it is computational, there are computations that _must_ be
characterized; of course, that characterization is independent of any
particular architecture.

I actually have a "slingshot argument" about this issue, but it's
going to take some time before I can put it into words.

Thanks,

__
Eray Ozkural
From: Michael Schuerig
Subject: Re: history of AI Winter?
Date: 
Message-ID: <b7pu15$u11$06$1@news.t-online.com>
Eray Ozkural  exa wrote:

> And it is exactly this kind of
> "Starvation of Ideas" that led to the AI winter. There was simply not
> enough innovation. We lacked, I think, the pursuit of higher order
> intelligence problems. We were stuck with easy problems and a few
> inefficient and low quality solutions that have no chance of scaling
> up.

I'm a complete outsider to the recent history of AI, just sitting on the
fence and reading. The impression I've got is that AI as a research
program has been creating results at a steady pace. In my opinion, most
of it has nothing to do with intelligence/cognition, but that's another
matter entirely.

The real problem, from my limited point of view, was that there were a
few people who made completely overblown promises -- "there are now
machines that think...", "in 10 years..." -- and another few who were
all too willing to believe the hype. It's astonishing it took reality so
long to bite back.

Incidentally, I seem to remember reading (in this group?) that AI
logistics software saved more money during Desert Storm than DARPA had
ever spent on AI research. Can anyone confirm or disprove this claim?
How's the balance for civilian uses of AI?

Michael

-- 
Michael Schuerig                All good people read good books
···············@acm.org         Now your conscience is clear
http://www.schuerig.de/michael/ --Tanita Tikaram, "Twist In My Sobriety"
From: M H
Subject: Re: history of AI Winter?
Date: 
Message-ID: <b7r1st$8im$02$1@news.t-online.com>
Michael Schuerig wrote:
> The real problem, from my limited point of view, was that there were a
> few people who made completely overblown promises -- "there are now
> machines that think...", "in 10 years..." -- and another few who were
> all to willing to believe the hype. It's astonishing it took reality so
> long until biting back.

The real problem was that 20 years ago people did not know how difficult 
AI problems really are.  In AI simple approaches often work for simple 
examples.  But when it comes to real-world settings with noisy, large 
datasets these approaches break down.  It took some time to discover 
that and to come up with an alternative.  In many AI fields this 
alternative was the probabilistic treatment of AI problems.  If you are 
interested you can clearly see this paradigm shift when comparing 
Norvig's PAIP to his "AI: A Modern Approach".

Interestingly, this shift in concepts also induced a shift in the tools 
researchers use for programming.  I would _guess_ (I don't have any hard 
data here) that Matlab and C++ are now the most frequently used languages 
for research in vision, speech, and machine learning.

Matthias
From: Michael Schuerig
Subject: Re: history of AI Winter?
Date: 
Message-ID: <b7r2kq$kik$00$1@news.t-online.com>
M H wrote:

> The real problem was that 20 years ago people did not know how
> difficult
>   AI problems really are.  In AI simple approaches often work for
>   simple
> examples.  But when it comes to real-world settings with noisy, large
> datasets these approaches break down.  It took some time to discover
> that and to come up with an alternative.  In many AI fields this
> alternative was the probabilistic treatment of AI problems.  If you
> are interested you can clearly see this paradigm shift when comparing
> Norvigs PAIP to his "AI - A Modern Approach".

Indeed, PAIP is a great study of historically significant AI programs,
reduced to their core. Still, even in the heyday of Logic Theorist and
General Problem Solver, even with a most optimistic outlook,
extravagant claims about intelligent computers were utterly unfounded.
I don't believe at all, that only with hindsight one can see that.

Michael

-- 
Michael Schuerig                       They tell you that the darkness
···············@acm.org                      is a blessing in disguise.
http://www.schuerig.de/michael/           --Janis Ian, "From Me To You"
From: M H
Subject: Re: history of AI Winter?
Date: 
Message-ID: <b7rdc0$dc8$07$1@news.t-online.com>
Michael Schuerig wrote:
> Indeed, PAIP is a great study of historically significant AI programs,
> reduced to their core. Still, even in the heyday of Logic Theorist and
> General Problem Solver, even with a most optimistic outlook,
> extravagant claims about intelligent computers were utterly unfounded.
> I don't believe at all, that only with hindsight one can see that.

In the fields I cited (esp. vision and speech recognition) it is 
regularly difficult to explain to non-experts why computers should have 
difficulties solving tasks which are so simple that even a three-year-old can 
do them effortlessly!

And how could one have known in advance that not logic and rule 
induction but probability theory and graphs would become the tools to 
build the most successful expert systems with?

If the researchers in AI had had the right concepts in the early 80s, 
where would we be today, taking into account all that funding AI has 
received?  Can you tell?  I can't.

Matthias
From: Bulent Murtezaoglu
Subject: Re: history of AI Winter?
Date: 
Message-ID: <87u1cuacro.fsf@acm.org>
>>>>> "MH" == M H <M> writes:
[...]
    MH> And how could one have known in advance that not logic and
    MH> rule induction but probability theory and graphs would become
    MH> the tools to build the most successful expert systems with?
[...]

I think we knew that all along?  It isn't immediately obvious to me
that if the funding and efforts were steered away from purely symbolic
systems to hybrid probabilistic ones, say, right after Mycin, a
remarkably different sequence of events/accomplishments would have
unfolded.  Direct funding would have been no substitute for the
explosion in inexpensive computing power partially brought about by
completely different set of market forces on top of Moore's law.

The market will eventually shake off the anti-AI hype, and instead try to
pick and choose between good and bad prospects w/o knee-jerk
rejection.  There is good technology out there and competent people to
apply it.  Whether it is called AI or not is immaterial as far as the
development is concerned.  While it is pretty clear that the party has
been over for a while, we are not left with just a hangover.  That is, 
the endeavour hasn't been pointless and a total loss.  

cheers,

BM


  
From: Paul Wallich
Subject: Re: history of AI Winter?
Date: 
Message-ID: <pw-AA0354.09035319042003@reader1.panix.com>
In article <··············@acm.org>, Bulent Murtezaoglu <··@acm.org> 
wrote:

> >>>>> "MH" == M H <M> writes:
> [...]
>     MH> And how could one have known in advance that not logic and
>     MH> rule induction but probability theory and graphs would become
>     MH> the tools to build the most successful expert systems with?
> [...]
> 
> I think we knew that all along?  It isn't immediately obvious to me
> that if the funding and efforts were steered away from purely symbolic
> systems to hybrid probabilistic ones, say, right after Mycin, a
> remarkably different sequence of events/accomplishments would have
> unfolded.  Direct funding would have been no substitute for the
> explosion in inexpensive computing power partially brought about by
> completely different set of market forces on top of Moore's law.
> 
> The market will eventually shake the anti-AI hype, and instead try to
> pick and choose between good and bad prospects w/o knee-jerk
> rejection.  There is good technology out there and competent people to
> apply it.  Whether it is called AI or not is immaterial as far as the
> development is concerned.  While it pretty clear that the party has
> been over for a while, we are not left with just a hangover.  That is, 
> the endevour hasn't been pointless and a total loss.  

What is remarkable to me is how well some of the early research ideas 
have held up, if only because there haven't been a lot of new ones to 
replace them. Even in cases where Moore's Law has made an enormous 
difference to the kinds of tradeoffs programmers must make, the general 
algorithms were explored back in the mid-80s. (In the past few years 
there has been some new movement perhaps.)

One thing that concerns me, though, is the apparent increase in the 
opacity of much of the work being done now. Statistical recognizers and 
memory-based systems can give awfully good performance on test sets (and 
sometimes real-world data) but by leaving the "knowledge extraction" to 
systems where the meaning of extracted features is unclear, you run all 
kinds of application risks. This argument has been made and lost 
before, of course, but that's part of the general trend for the 
most-computable solution to win.

paul
From: Frank A. Adrian
Subject: Re: history of AI Winter?
Date: 
Message-ID: <WMfoa.15$Qa5.34265@news.uswest.net>
Paul Wallich wrote:

> One thing that concerns me, though, is the apparent increase in the
> opacity of much of the work being done now. Statistical recognizers and
> memory-based systems can give awfully good performance on test sets (and
> sometimes real-world data) but by leaving the "knowledge extraction" to
> systems where the meaning of extracted features is unclear, you run all
> kinds of applications risks. This argument has been made and lost
> before, of course, but that's part of the general trend for the
> most-computable solution to win.

Even human intelligence has application risks.  The real issue for AI is not 
whether you can eliminate application risk, but whether you can expand the 
range of correct operation to human limits with similar (or lesser) risk 
attributes over the domain of the given application and expand the 
operation to a wide enough set of domains.

Also, the less opacity a system has, the more automaton-like the system 
looks.  At some point it no longer looks like AI, it's just programming.  
An interesting question is whether or not you can bound the risk while 
still providing the "surprising behaviors" within those bounds that seem to 
characterize intelligence.  Or is there an uncertainty principle at work 
that says that unexpected behavior - and thus intelligence - is tied to 
risk in an inherent way such that, once the risk is bounded, we no longer 
see intelligence and, conversely, if we want intelligence, we need to put 
up with the risk that the intelligence will not always behave optimally or 
even correctly?

It is a conundrum...

faa
From: Paul Wallich
Subject: Re: history of AI Winter?
Date: 
Message-ID: <pw-EEB233.20345319042003@reader1.panix.com>
In article <··················@news.uswest.net>,
 "Frank A. Adrian" <·······@ancar.org> wrote:

> Paul Wallich wrote:
> 
> > One thing that concerns me, though, is the apparent increase in the
> > opacity of much of the work being done now. Statistical recognizers and
> > memory-based systems can give awfully good performance on test sets (and
> > sometimes real-world data) but by leaving the "knowledge extraction" to
> > systems where the meaning of extracted features is unclear, you run all
> > kinds of applications risks. This argument has been made and lost
> > before, of course, but that's part of the general trend for the
> > most-computable solution to win.
> 
> Even human intelligence has application risks.  The real issue for AI is not 
> whether you can eliminate application risk, but whether you can expand the 
> range of correct operation to human limits with similar (or lesser) risk 
> attributes over the domain of the given application and expand the 
> operation to a wide enough set of domains.

In a lot of cases, you don't even have to do that -- it may be 
sufficient in an overall system context to have an "AI" with much worse 
than human performance, but at much lower cost or with much higher 
throughput, as long as there's a human supervisor. But by application 
risks, I'm talking more about the overall system than just about the AI 
part (more below)
 
> Also, the less opacity a system has, the more automaton-like the system 
> looks.  At some point it no longer looks like AI, it's just programming.  
> An interesting question is whether or not you can bound the risk while 
> still providing the "surprising behaviors" within those bounds that seem to 
> characterize intelligence.  Or is there an uncertainty principal at work 
> that says that unexpected behavior - and thus intelligence - is tied to 
> risk in an inherent way such that, once the risk is bounded, we no longer 
> see intelligence and, conversely, if we want intelligence, we need to put 
> up with the risk that the intelligence will not always behave optimally or 
> even correctly.

At its simplest, that's Minsky's speech from '84 or so about how as soon 
as something starts working reliably, it ceases to be admitted as AI. I 
think in general, you're right, but with an important caveat about what 
we mean by opacity. The hallmark of many operations of human 
intelligence is that they can't necessarily be predicted, but they can 
be followed.

I don't think the current generation of statistically-based (for lack of 
a better term) AI systems is terribly well-designed in terms of letting 
people explore the semantic implications of the features that get 
extracted. And that opacity in (what passes for) 
knowledge-representation and conclusion-drawing is where the excess risk 
lies. Unless your training data is really good and really representative 
of the universe of interest, you're going to get results that make Parry 
vs Eliza look robust, but you're not going to know it until fairly late 
in the game, and you're going to have a heck of a time figuring out what 
went wrong.

Of course, most of what one hears from the outside are the horror 
stories, so I'm probably biased.

paul
From: Eray Ozkural  exa
Subject: Re: history of AI Winter?
Date: 
Message-ID: <fa69ae35.0304221216.24258e59@posting.google.com>
Paul Wallich <··@panix.com> wrote in message news:<························@reader1.panix.com>...
> I don't think the current generation of statistically-based (for lack of 
> a better term) AI systems is terribly well-designed in terms of letting 
> people explore the semantic implications of the features that get 
> extracted. And that opacity in (what passes for) 
> knowledge-representation and conclusion-drawing is where the excess risk 
> lies. Unless your training data is really good and really representative 
> of the universe of interest, you're going to get results that make Parry 
> vs Eliza look robust, but you're not going to know it until fairly late 
> in the game, and you're going to have a heck of a time figuring out what 
> went wrong.

This is the argument for the "symbolic camp". I want to point out that
this distinction is too vague to be useful. There is essentially no
difference between
   1) C4.5
   2) A naive bayesian learning algorithm
   3) An ANN learning algorithm
   4) Another LISP code that has the same goal!!!

I have an argument that shows, in theory, there is no useful
distinction to be made about the "symbolic-ity" of the models employed.
As a matter of fact, a decision tree (DT) is just as "opaque" as an ANN if you think
about it carefully. What happens when you increase the number of input
dimensions to 1000 and training instances to 1M? What happens when you
design by hand an XOR circuit with ANN?
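
To see what I mean, write the XOR "network" out by hand as ordinary
code -- threshold units, weights picked by a human (a toy, obviously):

;; A 2-2-1 threshold network computing XOR with hand-chosen weights.
;; Spelled out like this it is just another small Lisp function, no
;; more and no less "symbolic" than a decision tree.
(defun step-unit (threshold &rest weighted-inputs)
  (if (>= (reduce #'+ weighted-inputs) threshold) 1 0))

(defun xor-net (x1 x2)
  (let ((h1 (step-unit 0.5 (* 1.0 x1) (* 1.0 x2)))   ; fires on "at least one"
        (h2 (step-unit 1.5 (* 1.0 x1) (* 1.0 x2))))  ; fires on "both"
    (step-unit 0.5 (* 1.0 h1) (* -1.0 h2))))         ; one but not both

;; (xor-net 0 0) => 0   (xor-net 0 1) => 1
;; (xor-net 1 0) => 1   (xor-net 1 1) => 0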

__
Eray Ozkural
Bilkent Univ. CS Dept. Miserable PhD student
From: sv0f
Subject: Re: history of AI Winter?
Date: 
Message-ID: <none-A90933.15542922042003@news.vanderbilt.edu>
In article <····························@posting.google.com>,
 ·····@bilkent.edu.tr (Eray Ozkural  exa) wrote:

>There is essentially no
>difference between
>   1) C4.5
>   2) A naive bayesian learning algorithm
>   3) An ANN learning algorithm
>   4) Another LISP code that has the same goal!!!
>
>I have an argument that shows, in theory, there is no useful
>distinction to be made of the "symbolic-ity" of the models employed.

Are you aware of papers that establish the approximate
equivalence of (1), (2), and (3)?  I know there were some
empirical comparisons of ID3 and ANNs in the late 1980s
but have lost track of the literature.  Any recent references
would be much appreciated.

>As a matter of fact a DT is just as "opaque" as an ANN if you think
>about it carefully. What happens when you increase the number of input
>dimensions to 1000 and training instances to 1M? What happens when you
>design by hand an XOR circuit with ANN?

Maybe I'm not thinking "carefully" enough.  Isn't this just
the claim that applying any learning algorithm to a really
really really complex domain will result in a nearly
uninterpretable "solution"?
From: Paul Wallich
Subject: Re: history of AI Winter?
Date: 
Message-ID: <pw-45A375.10410623042003@reader1.panix.com>
In article <····························@posting.google.com>,
 ·····@bilkent.edu.tr (Eray Ozkural  exa) wrote:

> Paul Wallich <··@panix.com> wrote in message 
> news:<························@reader1.panix.com>...
> > I don't think the current generation of statistically-based (for lack of 
> > a better term) AI systems is terribly well-designed in terms of letting 
> > people explore the semantic implications of the features that get 
> > extracted. And that opacity in (what passes for) 
> > knowledge-representation and conclusion-drawing is where the excess risk 
> > lies. Unless your training data is really good and really representative 
> > of the universe of interest, you're going to get results that make Parry 
> > vs Eliza look robust, but you're not going to know it until fairly late 
> > in the game, and you're going to have a heck of a time figuring out what 
> > went wrong.
> 
> This is the argument for the "symbolic camp". I want to point out that
> this distinction is too vague to be useful. There is essentially no
> difference between
>    1) C4.5
>    2) A naive bayesian learning algorithm
>    3) An ANN learning algorithm
>    4) Another LISP code that has the same goal!!!
> 
> I have an argument that shows, in theory, there is no useful
> distinction to be made of the "symbolic-ity" of the models employed.
> As a matter of fact a DT is just as "opaque" as an ANN if you think
> about it carefully. What happens when you increase the number of input
> dimensions to 1000 and training instances to 1M? What happens when you
> design by hand an XOR circuit with ANN?

Thanks for rediscovering Turing-equivalence ;-)

In practice, however, people who use different kinds of programming 
paradigms tend to expect different things from them. So although 
"symbolic" representations can be at least as opaque as artificial 
neural nets (I've seen some beautiful explications of why certain 
systems have high-weight connections in particular places) there's a 
strong tendency toward more black-boxness in the more traditionally 
numeric-intensive computational methods. Combine that with hidden 
regularities in your training sets, and you can get some really 
spectacular failures -- especially in cases where your statistical 
algorithms appear to be doing best in finding simple classifications.
(Historical examples include the tank-recognizer that zeroed in on 
picture quality and sun angle, or the mortgage assistant that found race 
to be the crucial classifier for its training set.) The dog's breakfast 
that is off-axis face recognition is probably a good current example.

The whole thing reminds me of the brief period when straight simulated 
annealing was the hottest thing in chip layout. An acquaintance 
remarked, "It's really a great technique if you know nothing at all 
about the problem you're trying to solve."

paul
From: M H
Subject: Re: history of AI Winter?
Date: 
Message-ID: <b86ecj$cfg$06$1@news.t-online.com>
Paul Wallich wrote:
> In practice, however, people who use different kinds of programming 
> paradigms tend to expect different things from them. So although 
> "symbolic" representations can be at least as opaque as artificial 
> neural nets (I've seen some beautiful explications of why certain 
> systems have high-weight connections in particular places) there's a 
> strong tendency toward more black-boxness in the more traditionally 
> numeric-intensive computational methods. Combine that with hidden 
> regularities in your training sets, and you can get some really 
> spectacular failures -- especially in cases where your statistical 
> algorithms appear to be doing best in finding simple classifications.

For some statistical classifiers there are upper bounds on the 
generalization error you can derive.  If you can't derive any useful 
bounds you can estimate the generalization error using, e.g., some sort 
of crossvalidation.  Assuming that your training data samples the true 
distribution generating your data you will be able to give accurate 
bounds on the errors you are to expect when your system is employed. 
(If you don't have properly sampled training data you shouldn't rely on 
machine learning in the first place.)
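
By crossvalidation I mean nothing more exotic than the following kind of
sketch (real experiments would add shuffling, stratification, and
repeated runs; TRAIN-FN and ERROR-FN stand for whatever learner and
error measure you are evaluating):

;; Toy k-fold cross-validation: average held-out error over K folds.
;; TRAIN-FN : list of examples -> model
;; ERROR-FN : model, list of examples -> error rate
(defun k-fold-error (train-fn error-fn data k)
  (let* ((vec (coerce data 'vector))
         (n (length vec)))
    (/ (loop for i below k
             for held-out = (loop for j from i below n by k
                                  collect (aref vec j))
             for training = (loop for j below n
                                  unless (= (mod j k) i)
                                  collect (aref vec j))
             sum (funcall error-fn (funcall train-fn training) held-out))
       k)))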

Note that interpretability and classification performance are two 
different goals and often you only need one.  The human visual system is 
an example of a classifier which shows excellent performance but really 
bad interpretability.  Of course, this may improve as science proceeds.

Similarly, many recent spam mail filters are based on a grossly 
simplified statistical model of language.  Although their classification 
decisions are not easy to understand, these systems perform better than 
their older keyword-regexp-based counterparts.  Most people will just be 
interested in a good recognition rate.  They won't care that regexps are 
so much easier to read.
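
For the spam case the "grossly simplified model" is little more than
per-word spam frequencies combined into a score.  A rough sketch (the
count tables and the combining rule are stand-ins, not anybody's actual
filter):

;; Laplace-smoothed estimate of how "spammy" a word is, from two
;; hash tables of word counts built from labelled mail.
(defun word-spam-probability (word spam-counts ham-counts)
  (let ((s (gethash word spam-counts 0))
        (h (gethash word ham-counts 0)))
    (/ (+ s 1) (+ s h 2))))

;; Naive-Bayes-ish log-odds that a tokenized message is spam;
;; classify as spam when the score is positive.
(defun spam-score (words spam-counts ham-counts)
  (loop for w in words
        for p = (word-spam-probability w spam-counts ham-counts)
        sum (log (/ p (- 1 p)))))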

Matthias
From: Paul Wallich
Subject: Re: history of AI Winter?
Date: 
Message-ID: <pw-167B21.13202623042003@reader1.panix.com>
In article <···············@news.t-online.com>, M H <··@nospam.org> 
wrote:

> For some statistical classifiers there are upper bounds on the 
> generalization error you can derive.  If you can't derive any useful 
> bounds you can estimate the generalization error using, e.g., some sort 
> of crossvalidation.  Assuming that your training data samples the true 
> distribution generating your data you will be able to give accurate 
> bounds on the errors you are to expect when your system is employed. 

and the kicker:

> (If you don't have properly sampled training data you shouldn't rely on 
> machine learning in the first place.)

Perhaps you would be good enough to go and chisel this line into the 
foreheads of research directors and program managers around the world?

Especially in some of the intelligence-related areas where machine 
learning is currently being touted, the quality of some of the training 
data would set fire to your hair.

Of course, this issue is really just another version of the "If you know 
how to do it, it's not AI" problem -- the real challenge is developing 
techniques that work (or that fail sensibly) for really lousy, 
incomplete and nonrepresentative training sets.

paul
From: Bulent Murtezaoglu
Subject: Re: history of AI Winter?
Date: 
Message-ID: <87lly0a0s8.fsf@acm.org>
>>>>> "PW" == Paul Wallich <··@panix.com> writes:
[...]
    PW> Of course, this issue is really just another version of the
    PW> "If you know how to do it, it's not AI" problem -- the real
    PW> challenge is developing techniques that work (or that fail
    PW> sensibly) for really lousy, incomplete and nonrepresentative
    PW> training sets.

[I have been away from the field a long time so grains of salt are
indicated but] 'Lousy, incomplete and nonrepresentative' training sets
and a general algorithm will necessarily give you something that
either does the wrong thing or is indecisive most of the time.  If you
want to infer structures that are obscured or non-existent in the
training set you'll just have to bias the system in some other way
outside of your learning algorithm.  You can't have it both ways.
Think of the extreme case where 'lousy, incomplete and
nonrepresentative' means random data.  If you feed the program random
data as the training set and it still does the right thing for your
purposes, something outside of the learning algorithm must be at work.

IMHO but not too H, I'd be surprised and maybe excited if this were
not so.

cheers,

BM
From: Eray Ozkural  exa
Subject: Re: history of AI Winter?
Date: 
Message-ID: <fa69ae35.0305011534.73775c65@posting.google.com>
Paul Wallich <··@panix.com> wrote in message news:<························@reader1.panix.com>...
> 
> Thanks for rediscovering Turing-equivalence ;-)
> 

I'm taking that as a humorous remark. :>

However, it doesn't detract from my argument. I argue not for the
computational equivalence of the four methods. (I leave it as an exercise
to the reader whether the models they produce are computationally
equivalent [seriously]. Hint: BP learning and a LISP program can produce
computationally equivalent models.) I argue that there is NO useful
qualitative distinction to be made among these wildly different kinds
of learning methods. The instances of the argument I gave clearly
show the validity of the argument schema in both directions.

> In practice, however, people who use different kinds of programming 
> paradigms tend to expect different things from them. So although 
> "symbolic" representations can be at least as opaque as artificial 
> neural nets (I've seen some beautiful explications of why certain 
> systems have high-weight connections in particular places) there's a 
> strong tendency toward more black-boxness in the more traditionally 
> numeric-intensive computational methods. Combine that with hidden 
> regularities in your training sets, and you can get some really 
> spectacular failures -- especially in cases where your statistical 
> algorithms appear to be doing best in finding simple classifications.
> (Historical examples include the tank-recognizer that zeroed in on 
> picture quality and sun angle, or the mortgage assistant that found race 
> to be the crucial classifier for its training set.) The dog's breakfast 
> that is off-axis face recognition is probably a good current example.
> 

As you may have recognized, I argue against this argument, which I find
useless. Essentially, the "comprehensibility" or "communicability" of
the results is another problem, which should not be taken as a general
machine learning problem. However, in the literature, that is seen as part
of the KDD process, of which a data mining or machine learning algorithm
comprises another step. For instance, the overall system must be able
to present the intermediate results in a graphical and concise form to
the user, say a scientist who's trying to understand what's inside a
petabyte dataset.

Can this influence the choice of algorithm?

The answer is YES, but that cannot be used to disprove my argument. By
nature, the algorithm can be either a "connectionist" OR "symbolic"
one. _Depends on the application_. Maybe I should say that twice or
thrice.

Usually one will have to apply quite sophisticated
visualization/summarization/fuzzification algorithms to make sense of
the output of the data mining step, whatever kind of model it is. The
key here is thinking BIG. Number of training instances > 10^7. Number
of dimensions > 10^3. One should not assume that a human can make
sense of a functional model derived from such data.

> The whole thing reminds me of the brief period when straight simulated 
> annealing was the hottest thing in chip layout. An acquaintance 
> remarked, "It's really a great technique if you know nothing at all 
> about the problem you're trying to solve."

Now I see a true programmer :) In practice only carefully crafted
algorithms can find near-optimal solutions in VLSI layout, which has
also given rise to state-of-the-art graph and hypergraph partitioning
algorithms. Similar remarks apply nearly to all "really hard"
problems.

SA was investigated but I think quickly abandoned by the VLSI
community. Actually I'm having to explain why SA should not be seen as
a generally applicable method to a not-too-clueful student right now
:/

Happy hacking,

__
Eray Ozkural
From: Tom Osborn
Subject: Re: history of AI Winter?
Date: 
Message-ID: <3eb20001_1@news.iprimus.com.au>
My view (having been there at the time) is that several things went wrong.

[Generalisation warning - all comments below have exceptions].

Firstly, academic AI, which didn't SCALE well, was offered as the direction to
head, rather than tackling the problem of scale. Many MS students "trained"
to build limited-domain, limited-robustness "Expert Systems" and were then 
employed as ES experts. 

Secondly, the methodologies didn't transfer from narrow, limited domains
very well at all. CBR was an attempt to address that, but it was a version
of amateur tinkering for a fair while. Statistics and ML had more to offer.
AI supported by decision theory and cognitive modelling also.

Thirdly, the marketplace (IT managers/buyers/users) were skeptical, defensive
and ignorant. They often still are. It's an "evolutionary" thing, like a Peter
Principle about "ideas that can work, and [therefore] we are prepared to
adopt them". Hype from AI did not help. SQL _could_ be written to
support ANY kinds of management decisions and insights discovery
(etc), but that's a fucking lot of non-transparent SQL. 

Unfortunately "Prolog as a database query language" was killed by the
LISP/Prolog divide (or US/European divide). LISP fostered "algorithm
thinking" (bounded search, side effects, etc), while Prolog fostered
"data semantics and requirements thinking". I think Prolog was a better
bet for the long term...  [This paragraph may cause flames. It is my opinion
having been involved in many "camps" over the years. Feel free to defend
alternative opinions, rather than attack me...].

Lastly, the symbolic vs NN/statistical/maths/decision theory WAR was
very dumb indeed.

Tom.
From: David Longley
Subject: Re: history of AI Winter?
Date: 
Message-ID: <AFDizQA4Pjs+Ewwp@longley.demon.co.uk>
In article <··········@news.iprimus.com.au>, Tom Osborn <·······@DELETE_
CAPS.nuix.com.au> writes
>
>
>My view (having been there at the time) is that three things went wrong.
>
>[Generalisation warning - all comments below have exceptions].
>
>Firstly academic AI which didn't SCALE well was offered as the direction to
>head, rather than tackling the problem of scale. Many MS students "trained"
>to build limited domain, limited robustness "Expert Systems" and then 
>were employed as ES experts. 
>
>Secondly, the methodologies didn't transfer from narrow, limited domains
>very well at all. CBR was an attempt to address that, but it was a version
>of amateur tinkering for a fair while. Statistics and ML had more to offer.
>AI supported by decision theory and cognitive modelling also.
>
>Thirdly, the marketplace (IT managers/buyers/users) were skeptical, defensive
>and ignorant. They often still are. It's an "evolutionary" thing, like a Peter
>Principle about "ideas that can work, and [therefore] we are prepared to
>adopt them". Hype from AI did not help. SQL _could_ be written to
>support ANY kinds of management decisions and insights discovery
>(etc), but that's a fucking lot of non-transparent SQL. 
>
>Unfortunately "Prolog as a database query language" was killed by the
>LISP/Prolog divide (or US/European divide). LISP fostered "algorithm
>thinking" (bounded search, side effects, etc), while Prolog fostered
>"data semantics and requirements thinking". I think Prolog was a better
>bet for the long term...  [This paragraph may cause flames. It is my opinion
>having been involved in many "camps" over the years. Feel free to defend
>alternative opinions, rather than attack me...].
>
>Lastly, the symbolic vs NN/statistical/maths/decision theory WAR was
>very dumb indeed.
>
>Tom.


There's another possibility - namely that those pursuing "AI" had
misconceived the nature of human "intelligence" and/or skills. More
fundamentally, they had misconceived the nature of the psychological.

-- 
David Longley
From: Eray Ozkural  exa
Subject: Re: history of AI Winter?
Date: 
Message-ID: <fa69ae35.0305021155.677a578e@posting.google.com>
Greetings,

David Longley <·····@longley.demon.co.uk> wrote in message news:<················@longley.demon.co.uk>...
> In article <··········@news.iprimus.com.au>, Tom Osborn <·······@DELETE_
> >of amateur tinkering for a fair while. Statistics and ML had more to offer.
> >AI supported by decision theory and cognitive modelling also.
> 
> There's another possibility - namely that those pursuing "AI" had
> misconceived the nature of human "intelligence" and/or skills. More
> fundamentally, they had misconceived the nature of the psychological.

David's eloquence brings hope that civilized discussion might exist.

I would like to point out that Tom essentially addresses your well
articulated concern with the phrase "AI supported by decision theory
and cognitive modelling".

Indeed, it is unimaginable to me how an AI that has no mathematical
basis of any sort, nor a correct understanding of the requirements of an
advanced psychology such as that found in human beings, could be successful.

The nature of the psychological is so oft-overlooked that I am most of
the time staring awestruck at the arbitrary omission of prominent
psychological factors in AI research.

It gives me a feeling of loneliness and hopelessness that I can't
begin to describe. It's, I think, the same feeling you get when you ask a
cosmologist "Is the universe infinite?" and find out that he has never
bothered to think about it!!! What blasphemy!!

Assume you have a learning algorithm that can recognize faces. Also
assume that you have a learning algorithm that can recognize speech.
Not my classical example, but what good are these algorithms when they
cannot be bound together by an autonomous system that will use such abilities
as a basis for subsequent cognitive tasks?

I am not expressing my deep concerns very well. I know how to recognize a
phone. I know how to talk. But I also know how to pick up the handset
and dial a number. I can combine all of these abilities in a conscious
way.  The truth is, I have learnt all of these; none of them could be
inscribed in genes. And only a company of clowns and mockers among the
naive evolutionary and behaviorist theorists would be sufficiently
uninitiated to dismiss these concerns.

I appreciate David's comments very much.

Regards,

__
Eray Ozkural <erayo at cs.bilkent.edu.tr>
CS Dept. , Bilkent University, Ankara
From: Gorbag
Subject: Re: history of AI Winter?
Date: 
Message-ID: <BAD84C66.3EE6%gorbagNOSPAM@NOSPAMmac.com>
On 5/2/03 12:55 PM, in article
····························@posting.google.com, "Eray Ozkural  exa"
<·····@bilkent.edu.tr> wrote:

> Greetings,
> 
> David Longley <·····@longley.demon.co.uk> wrote in message
> news:<················@longley.demon.co.uk>...
>> In article <··········@news.iprimus.com.au>, Tom Osborn <·······@DELETE_
>>> of amateur tinkering for a fair while. Statistics and ML had more to offer.
>>> AI supported by decision theory and cognitive modelling also.
>> 
>> There's another possibility - namely that those pursuing "AI" had
>> misconceived the nature of human "intelligence" and/or skills. More
>> fundamentally, they had misconceived the nature of the psychological.
> 
> David's eloquence brings hope that civilized discussion might exist.
> 
> I would like to point out that Tom essentially addresses your well
> articulated concern with the phrase "AI supported by decision theory
> and cognitive modelling".
> 
> Indeed, it is unimaginable to me how an AI that has no mathematical
> basis of any sort or a correct understanding of the requirements of an
> advanced psychology such as found in human beings could be successful.
> 
> The nature of the psychological is so oft-overlooked that I am most of
> the time staring awestruck at the arbitrary omission of prominent
> psychological factors in AI research.

I'm not sure where this is coming from; AI has had a tremendous influence on
psychology (cognitively plausible models, e.g. SOAR and ACT-R), and vice
versa (Piaget, behaviorists, etc.). The literature is replete with
references in either direction.

Arguing that there is no formal mathematical model of cognition is of course
true; that is in some sense the goal, not the starting point.

You also need to define which part of AI you are involved in. Some AI folks
do AI to have a better idea of what goes on inside of humans, but many
others are interested either in artifacts that can do things we'd call
intelligent if an animal (or human) did them, and don't care HOW they do
them, or they are interested in some philosophical approach that is not
always possible to be reduced through empirical experiment. (Arguing, for
instance, that human intelligence is just a happenstance instance, and not
the only possible implementation of intelligence; further many things humans
do are not really intelligent either, so why would you want to emulate
them?) Stumbling onto intelligence through simulacrum (learning) or
happenstance (evolution) doesn't really tell you anything about
*intelligence*, only something that seems to be intelligent when you observe
it, but you cannot be sure. (And this is the foundation of the "war" between
the NN and symbolic crowd, a thread that continues to exist today because it
has not been resolved - you will never be able to tell WHY a NN does what it
does, or make predictions about what it will (not) do next. This is
particularly a problem with non-supervised systems like Reinforcement
Learning). Arguing you have the same problems with humans is beside the
point, most AI folks want to improve on humans, not recreate them.

These folks are interested in intelligence in the abstract, and psychology
doesn't enter into it. So you have to be careful which school of thought you
are doing your reading in before you start to criticize. That humans can
talk and answer a phone is interesting, but intelligent systems may not need
to have phones to talk to each other. The design of the phone itself is far
less than optimal (consider the phone number for instance). So it is not
really surprising that there are major branches of AI research that don't
care about the skills needed to answer phones.
From: Eray Ozkural  exa
Subject: Re: history of AI Winter?
Date: 
Message-ID: <fa69ae35.0305030609.59a14169@posting.google.com>
Gorbag <············@NOSPAMmac.com> wrote in message news:<··························@NOSPAMmac.com>...
> On 5/2/03 12:55 PM, in article
> ····························@posting.google.com, "Eray Ozkural  exa"
> <·····@bilkent.edu.tr> wrote:
> 
> I'm not sure where this is coming from; AI has had a tremendous influence on
> psychology (cognitively plausible models, e.g. SOAR and ACT-R), and vice
> versa (Piaget, behaviorists, etc.). The literature is replete with
> references in either direction.
> 
> Arguing that there is no formal mathematical model of cognition is of course
> true; that is in some sense the goal, not the starting point.
> 

Evidently David argues for a much stronger position than I would
admit. I think the mind is essentially computational, as can be
seen, e.g., in my recent "protocol stack theory of mind" argument. What I do
argue for is that there are psychological factors that we must attend
to.

> You also need to define which part of AI you are involved in. Some AI folks
> do AI to have a better idea of what goes on inside of humans, but many
> others are interested either in artifacts that can do things we'd call
> intelligent if an animal (or human) did them, and don't care HOW they do
> them, or they are interested in some philosophical approach that is not
> always possible to be reduced through empirical experiment. 

I am interested in all of these. :) I think I've done everything from
studying linguistics and philosophy of mind/language to implementing
classification/clustering algorithms, to data mining... My feeling is
that without philosophy one is wandering in the dark, and without
computational theories and empirical study one is lost in the
proliferation of the impossible.


> (Arguing, for
> instance, that human intelligence is just a happenstance instance, and not
> the only possible implementation of intelligence; further many things humans
> do are not really intelligent either, so why would you want to emulate
> them?)

To such a thing I would say "yes", since the spectrum of life on our
planet shows remarkably how diverse intelligence can be.

> Stumbling onto intelligence through simulacrum (learning) or
> happenstance (evolution) doesn't really tell you anything about
> *intelligence*, only something that seems to be intelligent when you observe
> it, but you cannot be sure.

That I would kindly object to. The evolution argument I agree with, as
stated in my other posts. However, simulacra and learning are not
synonymous. Let me talk layer-wise. Interestingly, my layer argument
helps me answer almost any philosophical question! When one simulates
the physical layer, all he gets is a simulation of physics like CFD.
That is not in itself sufficient to arrive at a theory of mind.
However, if one attains a deliberate simulation of a key layer -
computation - by writing algorithms that in turn simulate the functional
layer, he will effectively have obtained a transformation that
produces an entire mind that is _understood_ both in the mental
and in the physical....

Since simulating learning roughly corresponds to "writing algorithms
to replace the computational layer" I think you're making a mistake
here. If you had said "simulation of the physical operation of the
brain only" I would agree 100%.

>  (And this is the foundation of the "war" between
> the NN and symbolic crowd, a thread that continues to exist today because it
> has not been resolved - you will never be able to tell WHY a NN does what it
> does, or make predictions about what it will (not) do next. This is
> particularly a problem with non-supervised systems like Reinforcement
> Learning). Arguing you have the same problems with humans is beside the
> point, most AI folks want to improve on humans, not recreate them.
> 

I think that foundation is obsolete and at best an unrewarding
discussion. Basically, I think the NN crowd have proved themselves to be
unscientific by asserting allegiance to a certain biological
structure, and the symbolic crowd have done likewise by asserting
allegiance to certain mathematical formalisms. I don't think that is
unlike Chomsky's volumes full of bigotry.

> These folks are interested in intelligence in the abstract, and psychology
> doesn't enter into it. So you have to be careful which school of thought you
> are doing your reading in before you start to criticize. That humans can
> talk and answer a phone is interesting, but intelligent systems may not need
> to have phones to talk to each other. The design of the phone itself is far
> less than optimal (consider the phone number for instance). So it is not
> really surprising that there are major branches of AI research that don't
> care about the skills needed to answer phones.

I don't think you have precisely understood what I meant. I formulated
a hard learning problem that nobody has ever formulated. I am saying
that the learning problems in real life do not fit the basic problem
descriptions in machine learning. I don't know, yet, how to resolve
this apparent discrepancy but it is there.

Another example. I stare at a scene and I see objects. Things I may
not have seen before. I'm looking at a table and seeing things:
unsupervised learning. I can tell one object apart from another. The
problem is: what do I do with this information? I feel that I
have some other abilities, learnt traits, that help me decide what to
do with this information. For instance, there is a paper, there is
some drawing on it, partially obscured by another paper. I notice a
drawing on the paper. Next, I take the surface of the paper as its own
domain. Since the paper is partially obscured, I lift the obscuring
paper. Geometric drawings and symbols are revealed. I recognize three
distinct drawings on the paper. The drawings are geometric figures. On
the right are crude approximations of circles and line segments. On the
left is a similar figure, this time with some points marked with
symbols such as a_i. Just on top of it, I see a triangle and another
inside it, three rays meeting at a point, separating the vertices of
the triangles. Apparently, it's a Voronoi diagram left over from
yesterday's homework. The geometric drawings were part of a proof I was
trying to form.

In its most basic form my argument is that without an architecture
that puts it to proper use, a learning algorithm is "ungrounded".
But you need to have the correct architecture. Simply stacking data
and algorithms won't help!!! That's where our CS mindset gets us into
a mighty trap. Look at the above example. There are several things
of utmost importance here that no system can accomplish:
1) I can choose among data
2) I can choose among algorithms
3) I can choose output
4) I can combine algorithms to arrive at compositional solutions
5) I can learn methods
6) I know what a learnt method is and when to apply it
7) I know what I recognize
...

Among countless other things that we ignore completely in the
framework of learning. Have I said how surprised I was when I
understood how lacking our theories were? I would have to write a book
to tell them all.

Cheers,

__
Eray Ozkural
From: David Longley
Subject: Re: history of AI Winter?
Date: 
Message-ID: <6fgMFQA015s+EwXO@longley.demon.co.uk>
In article <··························@NOSPAMmac.com>, Gorbag
<············@NOSPAMmac.com> writes
>On 5/2/03 12:55 PM, in article
>····························@posting.google.com, "Eray Ozkural  exa"
><·····@bilkent.edu.tr> wrote:
>
>> Greetings,
>> 
>> David Longley <·····@longley.demon.co.uk> wrote in message
>> news:<················@longley.demon.co.uk>...
>>> In article <··········@news.iprimus.com.au>, Tom Osborn <·······@DELETE_
>>>> of amateur tinkering for a fair while. Statistics and ML had more to offer.
>>>> AI supported by decision theory and cognitive modelling also.
>>> 
>>> There's another possibility - namely that those pursuing "AI" had
>>> misconceived the nature of human "intelligence" and/or skills. More
>>> fundamentally, they had misconceived the nature of the psychological.
>> 
>> David's eloquence brings hope that civilized discussion might exist.
>> 
>> I would like to point out that Tom essentially addresses your well
>> articulated concern with the phrase "AI supported by decision theory
>> and cognitive modelling".
>> 
>> Indeed, it is unimaginable to me how an AI that has no mathematical
>> basis of any sort or a correct understanding of the requirements of an
>> advanced psychology such as found in human beings could be successful.
>> 
>> The nature of the psychological is so oft-overlooked that I am most of
>> the time staring awestruck at the arbitrary omission of prominent
>> psychological factors in AI research.
>
>I'm not sure where this is coming from; AI has had a tremendous influence on
>psychology (cognitively plausible models, e.g. SOAR and ACT-R), and vice
>versa (Piaget, behaviorists, etc.). The literature is replete with
>references in either direction.
>
>Arguing that there is no formal mathematical model of cognition is of course
>true; that is in some sense the goal, not the starting point.
>
>You also need to define which part of AI you are involved in. Some AI folks
>do AI to have a better idea of what goes on inside of humans, but many
>others are interested either in artifacts that can do things we'd call
>intelligent if an animal (or human) did them, and don't care HOW they do
>them, or they are interested in some philosophical approach that is not
>always possible to be reduced through empirical experiment. (Arguing, for
>instance, that human intelligence is just a happenstance instance, and not
>the only possible implementation of intelligence; further many things humans
>do are not really intelligent either, so why would you want to emulate
>them?) Stumbling onto intelligence through simulacrum (learning) or
>happenstance (evolution) doesn't really tell you anything about
>*intelligence*, only something that seems to be intelligent when you observe
>it, but you cannot be sure. (And this is the foundation of the "war" between
>the NN and symbolic crowd, a thread that continues to exist today because it
>has not been resolved - you will never be able to tell WHY a NN does what it
>does, or make predictions about what it will (not) do next. This is
>particularly a problem with non-supervised systems like Reinforcement
>Learning). Arguing you have the same problems with humans is beside the
>point, most AI folks want to improve on humans, not recreate them.
>
>These folks are interested in intelligence in the abstract, and psychology
>doesn't enter into it. So you have to be careful which school of thought you
>are doing your reading in before you start to criticize. That humans can
>talk and answer a phone is interesting, but intelligent systems may not need
>to have phones to talk to each other. The design of the phone itself is far
>less than optimal (consider the phone number for instance). So it is not
>really surprising that there are major branches of AI research that don't
>care about the skills needed to answer phones.
>

There are *profoundly* difficult issues here - and to appreciate just
how profound they are I suggest those interested look at how McCarthy
has tried to come to grips with the fundamentals in some of his very
recent work. Personally I don't find what he has done helpful, but I do
respect that he (like some others) has taken the problem seriously. 

In my view, this has a lot to do with the nature and composition of
testable statements and how these are a function of reliable logical
quantification (something I explicated here at length some years ago in
the context of science and the intensional/extensional dynamic).

Psychological terms are not just indeterminate from this perspective,
they are elements of a modus vivendi which is progressively being
replaced by an expanding web of scientific 'belief'. There can be little
doubt that science as a means of prediction and control does far better
than the corpus of folk psychological and folk physical lore which most
of us resort to in the absence of formal education. What can be
productively doubted is whether "AI" is really anything other than
engineering - anything more than an attempt to make some areas of
engineering appear more "sexy" or meritorious through a misguided
association with the "psychological".

This is hype.
-- 
David Longley
From: Acme Debugging
Subject: Re: history of AI Winter?
Date: 
Message-ID: <35fae540.0305030639.722b2836@posting.google.com>
David Longley <·····@longley.demon.co.uk> wrote in message news:<················@longley.demon.co.uk>...
> In article <··························@NOSPAMmac.com>, Gorbag
> <············@NOSPAMmac.com> writes
> >On 5/2/03 12:55 PM, in article
> >····························@posting.google.com, "Eray Ozkural  exa"
> ><·····@bilkent.edu.tr> wrote:
 
> >> The nature of the psychological is so oft-overlooked that I am most of
> >> the time staring awestruck at the arbitrary omission of prominent
> >> psychological factors in AI research.
> >
> >You also need to define which part of AI you are involved in. Some AI folks
> >do AI to have a better idea of what goes on inside of humans, but many
> >others are interested either in artifacts that can do things we'd call
> >intelligent if an animal (or human) did them, and don't care HOW they do
> >them, or they are interested in some philosophical approach that is not
> >always possible to be reduced through empirical experiment. (Arguing, for

> > <snip>

> >These folks are interested in intelligence in the abstract, and psychology
> >doesn't enter into it. So you have to be careful which school of thought you
> >are doing your reading in before you start to criticize. <snip>
 
> There are *profoundly* difficult issues here

What is profound? That some people place greater value on AI
simulating humans and thus seeking descriptions of the mind, while
others place greater value on AI as intelligence in the abstract
requiring little or no psychology? I see that as profoundly simple, a
question of values, of personal choice. Not to mention application
(One wishes for the best chess program possible, not one that plays
like the average human). One can try to project one's values on
others, in fact people do this incessantly, however a newsgroup is
about the last place on Earth one should expect to succeed. Read some
political groups. I rest my case.
 
> In my view, this has a lot to do with the nature and composition of
> testable statements and how these are a function of reliable logical
> quantification (something I explicated here at length some years ago in
> the context of science and the intensional/extensional dynamic).

Your statement is indeed profound, I subscribe to it 100%, I am
"awestruck" that many do not. But I don't see how it applies here. One
would be arguing, "Your values should be the same as mine." While
possibly true in an objective sense, chances are almost non-existent
that it could be objective in any argument between two people in most
places. Exceptions are things like parent-child, but that certainly doesn't
apply in newsgroups or any other intellectual setting I can think of,
save faith-based groups.
 
> Psychological terms are not just indeterminate from this perspective,
> they are elements of a modus vivendi which is progressively being
> replaced by an expanding web of scientific 'belief'.

Though faith in empirical "tests" as above is certainly a "belief," I
think this belief is assumed in this newsgroup. At least, I do not
find arguments intentionally based on faith. If so, I think one should
make that declaration at the top of any message in all-caps. Of course
we all find arguments unintentionally based on faith everywhere:
unobjectivity, the subconscious logic/fact snatcher, the "blind-spot"
of the mind, the paradox that "people agree they are not perfect reasoners
in general, then act as if they were in particular cases," experience
obtaining undue value, "in real life all the marbles are not the same"
(statistics), etc., etc. forever and
ever. This is the main reason I seek AI reasoning totally independent
of the human mind, and I don't care that it might not be technically
as proficient or if "semantics" might need to be "solved in
documentation." Who cares how proficient a brain reasons when the
results are so unreliable due to unobjectivity? How easy it would be
for an objective kludge to do better in many important applications!
I see no end to uses broadly beneficial to humans, perhaps as
beneficial as an AI robot that cries with you on bad hair days, etc.

> There can be little
> doubt that science as a means of prediction and control does far better
> than the corpus of folk psychological and folk physical lore which most
> of us resort to in the absence of formal education.

Psychology (popularly) has such a bad name, at least where I come from
(U.S.). It has been, and still is, misused politically and in many
other ways, has at times been shown to have little or no predictive
value above simple wisdom, and this is very legitimately part of the
resistance to new psychological ideas IMHO. A psychologist today must
inevitably overcome these mistakes and continuing misuses, though
possibly not responsible for them in any way, and hope not to add to
them. Of course regarding particular cases in this newsgroup, one can
simply choose to engage and hope to prevail. Many are usually ready
and waiting at a given time. Of course each has their own "rules of
engagement" which are usually reasonable, but granting priveleged
positions is hardly ever among them. That's the #1 source of newsgroup
frustration in my experience (and probably most places).

Larry
From: David Longley
Subject: Re: history of AI Winter?
Date: 
Message-ID: <A1CivIA5JDt+Ew7D@longley.demon.co.uk>
In article <····························@posting.google.com>, Acme
Debugging <······@lycos.co.uk> writes
>David Longley <·····@longley.demon.co.uk> wrote in message news:<6fgMFQA015s+EwX
>·@longley.demon.co.uk>...
>> In article <··························@NOSPAMmac.com>, Gorbag
>> <············@NOSPAMmac.com> writes
>> >On 5/2/03 12:55 PM, in article
>> >····························@posting.google.com, "Eray Ozkural  exa"
>> ><·····@bilkent.edu.tr> wrote:
> 
>> >> The nature of the psychological is so oft-overlooked that I am most of
>> >> the time staring awestruck at the arbitrary omission of prominent
>> >> psychological factors in AI research.
>> >
>> >You also need to define which part of AI you are involved in. Some AI folks
>> >do AI to have a better idea of what goes on inside of humans, but many
>> >others are interested either in artifacts that can do things we'd call
>> >intelligent if an animal (or human) did them, and don't care HOW they do
>> >them, or they are interested in some philosophical approach that is not
>> >always possible to be reduced through empirical experiment. (Arguing, for
>
>> > <snip>
>
>> >These folks are interested in intelligence in the abstract, and psychology
>> >doesn't enter into it. So you have to be careful which school of thought you
>> >are doing your reading in before you start to criticize. <snip>
> 
>> There are *profoundly* difficult issues here
>
>What is profound? That some people place greater value on AI
>simulating humans and thus seeking descriptions of the mind, while
>others place greater value on AI as intelligence in the abstract
>requiring little or no psychology? 

No. 

What is profoundly problematic is that outside of established research
programmes (cf. "intelligence" - IQ, Inspection Time etc.) most
"psychological terms" have questionable reference. 

You'll find folk using one indeterminate term referencing another,
ostensibly for support, but in the end, the whole 'pack of cards'
constructs a very unstable structure - so unstable that it could be said
that folk who traffic in such terms ultimately don't know what they are
talking about. What they write and say may make for an interesting
(entertaining) flight of ideas, but that is perhaps all.

What many folk interested in AI tend to overlook is that the
"intelligent" behaviour they are so keen to model has itself been
acquired as algorithmic (extensional) skills. Where it is cost effective
to replace these skills with engineered alternatives this is generally
done. Recent years have seen quite a panic in middle management because
of the fall in cost of ICT. There's no need for the concept of AI, maybe
some compassion for those who feel 'de-skilled' - and no doubt such folk
really do lament that what they once considered worthy of admiration has
been reduced to a relatively "dumb", "mindless" but very reliable and
efficient set of operations executed by a system of algorithms. This
will continue - luddites notwithstanding.

People make the mistake of thinking that because there are artificial
'hearts', 'lungs', 'limbs' etc - that it makes sense to hanker after
artificial 'intelligence'. 

David Longley
From: Acme Debugging
Subject: Re: history of AI Winter?
Date: 
Message-ID: <35fae540.0305032220.3e2dbd80@posting.google.com>
David Longley <·····@longley.demon.co.uk> wrote in message news:<················@longley.demon.co.uk>...
> In article <····························@posting.google.com>, Acme
> Debugging <······@lycos.co.uk> writes

> >What is profound? That some people place greater value on AI
> >simulating humans and thus seeking descriptions of the mind, while
> >others place greater value on AI as intelligence in the abstract
> >requiring little or no psychology? 
> 
> No. 
> 
> What is profoundly problematic is that outside of established research
> programmes (cf. "intelligence" - IQ, Inspection Time etc.) most
> "psychological terms" have questionable reference. 

I am able to agree with respect to IQ, otherwise unqualified to
remark. But we now seem to be in agreement on all else.

Thanks,

Larry
 
> You'll find folk using one indeterminate term referencing another,
> ostensibly for support, but in the end, the whole 'pack of cards'
> constructs a very unstable structure - so unstable that it could be said
> that folk who traffic in such terms ultimately don't know what they are
> talking about. What they write and say may make for an interesting
> (entertaining) flight of ideas, but that is perhaps all.
> 
> What many folk interested in AI tend to overlook is that the
> "intelligent" behaviour they are so keen to model has itself been
> acquired as algorithmic (extensional) skills. Where it is cost effective
> to replace these skills with engineered alternatives this is generally
> done. Recent years have seen quite a panic in middle management because
> of the fall in cost of ICT. There's no need for the concept of AI, maybe
> some compassion for those who feel 'de-skilled' - and no doubt such folk
> really do lament that what they once considered worthy of admiration has
> been reduced to a relatively "dumb", "mindless" but very reliable and
> efficient set of operations executed by a system of algorithms. This
> will continue - luddites notwithstanding.
> 
> People make the mistake of thinking that because there are artificial
> 'hearts', 'lungs', 'limbs' etc - that it makes sense to hanker after
> artificial 'intelligence'. 
> 
> David Longley
From: Eric Smith
Subject: Re: history of AI Winter?
Date: 
Message-ID: <ceb68bd9.0305040230.2aab4d36@posting.google.com>
David Longley <·····@longley.demon.co.uk> wrote in message news:<················@longley.demon.co.uk>...

> People make the mistake of thinking that because there are artificial
> 'hearts', 'lungs', 'limbs' etc - that it makes sense to hanker after
> artificial 'intelligence'. 

To prove that artificial intelligence alone is not a worthwhile goal,
consider a machine so intelligent that it finds us too dull and
refuses to communicate with us or work for us.  We might be reduced to
wondering whether it's really intelligent at all.

But if we focus on the synergy of human and machine working together,
without worrying about whether the machine alone is actually
intelligent, the overall development of machine intelligence might
proceed much faster.

People who use Lisp are interested in software development, so a
natural area for us to focus on is improving software development. 
Software development clearly benefits from intelligence and
machine/human synergy.  If we just continue on the path of improving
that synergy, we have an excellent chance of reaching the ultimate
goals of AI sooner than anyone else, even if partly by accident.
From: Eray Ozkural  exa
Subject: Re: history of AI Winter?
Date: 
Message-ID: <fa69ae35.0305040928.3b8a4ef@posting.google.com>
David Longley <·····@longley.demon.co.uk> wrote in message news:<················@longley.demon.co.uk>...
> What many folk interested in AI tend to overlook is that the
> "intelligent" behaviour they are so keen to model has itself been
> acquired as algorithmic (extensional) skills. Where it is cost effective
> to replace these skills with engineered alternatives this is generally
> done. Recent years have seen quite a panic in middle management because
> of the fall in cost of ICT. There's no need for the concept of AI, maybe
> some compassion for those who feel 'de-skilled' - and no doubt such folk
> really do lament that what they once considered worthy of admiration has
> been reduced to a relatively "dumb", "mindless" but very reliable and
> efficient set of operations executed by a system of algorithms. This
> will continue - luddites notwithstanding.
> 
> People make the mistake of thinking that because there are artificial
> 'hearts', 'lungs', 'limbs' etc - that it makes sense to hanker after
> artificial 'intelligence'. 

These two paragraphs sound very confused and misplaced to me.

Especially the opening sentence:

> What many folk interested in AI tend to overlook is that the
> "intelligent" behaviour they are so keen to model has itself been
> acquired as algorithmic (extensional) skills. Where it is cost effective

This is wrong. Twice. AI people (a great many of them) think the mind is
computational and that it therefore has a host of algorithmic elements. First
mistake. Second, you are presupposing that algorithmic is the same thing as
being *extensional*? Sorry, but you are making a very inappropriate
leap of thought that takes you into a contradictory place. I would
like to think you know what "extensional" and "intensional" mean.
Maybe you should be as careful about mathematical and philosophical terms
as you are with psychological terms. When I commended your
appreciation of a correct understanding of psychology I didn't mean
that mathematics and philosophy have no use in building AI or understanding
the mind. On the contrary! I cannot think how it would be possible
otherwise. I also didn't refer to wishy-washy holistic theories of any
sort; not all psychological theories should be taken too seriously
either....

The rest of the quotes seem more like an "unstable set of facts",
especially the one about "artificial hearts" and AI. That sentence is
horrible nonsense. Sorry.

__
Eray Ozkural
From: David Longley
Subject: Re: history of AI Winter?
Date: 
Message-ID: <Lb$A0HANBtt+EwZc@longley.demon.co.uk>
In article <···························@posting.google.com>, Eray
Ozkural  exa <·····@bilkent.edu.tr> writes
>David Longley <·····@longley.demon.co.uk> wrote in message news:<A1CivIA5JDt+Ew7
>·@longley.demon.co.uk>...
>> What many folk interested in AI tend to overlook is that the
>> "intelligent" behaviour they are so keen to model has itself been
>> acquired as algorithmic (extensional) skills. Where it is cost effective
>> to replace these skills with engineered alternatives this is generally
>> done. Recent years have seen quite a panic in middle management because
>> of the fall in cost of ICT. There's no need for the concept of AI, maybe
>> some compassion for those who feel 'de-skilled' - and no doubt such folk
>> really do lament that what they once considered worthy of admiration has
>> been reduced to a relatively "dumb", "mindless" but very reliable and
>> efficient set of operations executed by a system of algorithms. This
>> will continue - luddites notwithstanding.
>> 
>> People make the mistake of thinking that because there are artificial
>> 'hearts', 'lungs', 'limbs' etc - that it makes sense to hanker after
>> artificial 'intelligence'. 
>
>These two paragraphs sound very confused and misplaced to me.

Perhaps you've misunderstood them?

>
>Especially the opening sentence:
>
>> What many folk interested in AI tend to overlook is that the
>> "intelligent" behaviour they are so keen to model has itself been
>> acquired as algorithmic (extensional) skills. Where it is cost effective
>

The key is to appreciate that there is no clearly defined referent for
"intelligent" behaviour (as opposed to any other type of behaviour".

>This is wrong. Twice. AI people (great many of them) think mind is
>computational therefore it has a host of algoritmic elements. First
>mistake. 

"Mind"? But if anything characterises mental terms it is their non-
extensionality - their resistance to existential quantification and
substitutivity of identicals. These, in my view are critical to
computation.

>Second, you are presupposing algorithmic is the same thing as
>being *extensional*? Sorry but you are making a very inappropriate
>leap of thought that takes you into a contradictory place. I would
>like to think you know what "extensional" and "intensional" mean.

And how have you gone about ascertaining the truth or falsehood of the
above? 

>Maybe you should be careful about mathematical and philosophical terms
>as you are careful with psychological terms. 

Oh dear...and the reference document is entitled "Fragments of
Behaviour: The Extensional Stance"?

>When I commended on your
>appreciation of correct understanding of psychology I didn't mean
>mathematics and philosophy has no use in building AI or understanding
>the mind. On the contrary! I cannot think how it would be possible
>otherwise. I also didn't refer to wishy-washy holistic theories of any
>sort, not all psychological theories should be taken too seriously
>either....
>

The purpose of my post was to encourage folk to appreciate that one can
harmlessly dispense with the concept of AI. That its origin lies in a
misconception. 

>The rest of the quotes seem more like an "unstable set of facts",
>especially the one about "artificial hearts" and AI. That sentence is
>horrible nonsense. Sorry.
>
>__
>Eray Ozkural


Hopefully, the ideas may grow on you <g>...
-- 
David Longley
From: Eray Ozkural  exa
Subject: Re: history of AI Winter?
Date: 
Message-ID: <fa69ae35.0305060638.2d274258@posting.google.com>
David Longley <·····@longley.demon.co.uk> wrote in message news:<················@longley.demon.co.uk>...
> The purpose of my post was to encourage folk to appreciate that one can
> harmlessly dispense with the concept of AI. That its origin lies in a
> misconception. 

I don't think we can reduce in *that* direction :)

Perhaps there is hope that we can analyze your above statement
further.

What is the philosophical position behind the argument? That is, how
could somebody claim that when I build a mind that is worthy of the
noun in all its glory, it will not be a mind? Ignoring the theoretical
contributions of such an empirical effort, let us say I was trying to
write a program but I made a typo. It started a chain reaction of
accidents that gave rise to a mind on the supercomputer I was running
the code on. Impossible? Not in my thought experiment. It is perfectly
possible. Let's name that "a discovery". I don't know how the code
really works but somehow it took some data and then bootstrapped
itself from it and became a mind. Who knows, maybe it arose from
something as simple as matrix multiplication. (We can't really know
how because we don't have the exact theory yet!!!!) In my view, that
creation can be called "AI". If it's man made and if we can
demonstrate its intelligence!!

It is true that such a demonstration has not been made yet! However,
that does not falsify the thought experiments, or the endless
possibilities in which such a moment might be realized.

Or do you dispense with the notion of "mind" altogether? I am only
trying to understand your position.

Regards,

__
Eray Ozkural
From: David Longley
Subject: Re: history of AI Winter?
Date: 
Message-ID: <Lb9AUDAEA9t+Ew5B@longley.demon.co.uk>
In article <····························@posting.google.com>, Eray
Ozkural  exa <·····@bilkent.edu.tr> writes
>David Longley <·····@longley.demon.co.uk> wrote in message news:<Lb$A0HANBtt+EwZ
>·@longley.demon.co.uk>...
>> The purpose of my post was to encourage folk to appreciate that one can
>> harmlessly dispense with the concept of AI. That its origin lies in a
>> misconception. 
>
>I don't think we can reduce in *that* direction :)
>
>Perhaps there is hope that we can analyze your above statement
>further.
>
>What is the philosophical position behind the argument?

Quinean - specifically, enlightened empiricism (logical positivism
without the two dogmas).


> That is, how
>could somebody claim that when I build a mind that is worthy of the
>noun in all its glory, it will not be a mind? Ignoring the theoretical
>contributions of such an empirical effort, let us say I was trying to
>write a program but I made a typo. It started a chain reaction of
>accidents that gave rise to a mind on the supercomputer I was running
>the code on. Impossible? Not in my thought experiment. 

The problem with these sorts of "thought experiments" is that they rest
on subjunctive conditionals which are intensional, i.e. non-truth-
functional. One can say/imagine/think just about anything one likes
under such conditions - sadly, their non-truth-functionality also means
that they are not hypotheticals in any useful sense.

>It is perfectly
>possible. Let's name that "a discovery". I don't know how the code
>really works but somehow it took some data and then bootstrapped
>itself from it and became a mind. Who knows, maybe it arose from
>something as simple as matrix multiplication. (We can't really know
>how because we don't have the exact theory yet!!!!) In my view, that
>creation can be called "AI". If it's man made and if we can
>demonstrate its intelligence!!

From my (austere) stance, unless one's "statements" are testable they
just don't have any scientific meaning/value. They may have value in an
artistic sense - as in science fiction, literature etc. The latter have
their place - but not in science/technology.

>
>It is true that such a demonstration has not been made yet! However,
>that does not falsify the thought experiments, or the endless
>possibilities in which such a moment might be realized.

Or maybe not (cf. my second paragraph).

>
>Or do you dispense with the notion of "mind" altogether? I am only
>trying to understand your position.

Well, rather like Skinner, not so much "altogether" (but pretty close) -
it's just that for science and technology there are identifiable
limitations of method. Good behavioural science eschews mental terms for
reasons I hope I've explicated elsewhere. They are perhaps best
conceived as elements of an (older) alternative theory - widely called
'folk psychology' (which is quite sufficient for most everyday purposes
- most of the time.....). 

-- 
David Longley
From: Eray Ozkural  exa
Subject: Re: history of AI Winter?
Date: 
Message-ID: <fa69ae35.0305070858.7c32f6dd@posting.google.com>
David Longley <·····@longley.demon.co.uk> wrote in message news:<················@longley.demon.co.uk>...
> In article <····························@posting.google.com>, Eray
> Ozkural  exa <·····@bilkent.edu.tr> writes
> >
> >What is the philosophical position behind the argument?
> 
> Quinean - specifically, enlightened empiricism (logical positivism
> without the two dogmas).
> 

That's very interesting. I never thought you could "quine" an entire
branch of computer science that has solid material to fill a
graduate-level textbook or two full of mathematical theories and such
non-trivial methods.

I think with your method I could "quine" entire fields of science and
philosophy, too. No?

With your permission I will now try to "quine" database research. I
will start with "data warehousing term is an oxymoron.....".Then it
will be trivial to show that all of "computer science" is an oxymoron,
surely some name with "science" in it can't be science, no? Hmmm but
what about those that have "x science" in latin?

If I succeed in that, I would like to move on to Logic, from which I
can hopefully eliminate all of mathematics, and then, with no ground
left for theories, the whole of physics. Then we will have no science to
worry about.

Finally relieved,

__
Eray Ozkural
Miserable PhD student
From: David Longley
Subject: Re: history of AI Winter?
Date: 
Message-ID: <Di6SqeAVPUu+EwTZ@longley.demon.co.uk>
In article <····························@posting.google.com>, Eray
Ozkural  exa <·····@bilkent.edu.tr> writes
>David Longley <·····@longley.demon.co.uk> wrote in message news:<Lb9AUDAEA9t+Ew5
>·@longley.demon.co.uk>...
>> In article <····························@posting.google.com>, Eray
>> Ozkural  exa <·····@bilkent.edu.tr> writes
>> >
>> >What is the philosophical position behind the argument?
>> 
>> Quinean - specifically, enlightened empiricism (logical positivism
>> without the two dogmas).
>> 
>
>That's very interesting. I never thought you could "quine" an entire
>branch of computer science that has solid material to fill a
>graduate-level textbook or two full of mathematical theories and such
>non-trivial methods.

You asked for the philosophical position. Have you read "Two Dogmas of
Empiricism"?

You may find a lot of "Cognitive Science" less attractive if
you read the above and its developments. (You'd get the gist if you read
"Fragments" - it has a lot in common at root with Minsky's "SoM").
>
>I think with your method I could "quine" entire fields of science and
>philosophy, too. No?

Doesn't make a lot of sense, no.

>
>With your permission I will now try to "quine" database research. I
>will start with "data warehousing term is an oxymoron.....".Then it
>will be trivial to show that all of "computer science" is an oxymoron,
>surely some name with "science" in it can't be science, no? Hmmm but
>what about those that have "x science" in latin?
>
Data warehousing - hmmmm... In case of fire and theft? Seems pretty
practical. I believe some of our banks over here rent entire floors of
buildings to warehouse their data, or run mirrored backups. Is there now
a science of this?

Computer Science may well be oxymoronic - it all depends on what you
subsume under it.

I just hope Cisco don't run any more of their courses for too long -
getting "qualified" seems such an ephemeral achievement.

>If I succeed in that, I would like to move on to Logic from which I
>can hopefully eliminate all of mathematics and then with no ground
>left for theories entire physics. Then we will have no science to
>worry about.
>
>Finally relieved,
>
>__
>Eray Ozkural
>Miserable PhD student

You seem to have gone into a loop..

-- 
David Longley
From: Eray Ozkural  exa
Subject: Re: history of AI Winter?
Date: 
Message-ID: <fa69ae35.0305080522.5ff9e3c3@posting.google.com>
David Longley <·····@longley.demon.co.uk> wrote in message news:<················@longley.demon.co.uk>...
> 
> You asked for the philosophical position. Have you read "Two Dogmas of
> Empiricism"?

It looks like a nice paper that I should be reading as soon as I have
some measurable amount of time.

Cheers,

__
Eray
From: Gorbag
Subject: Re: history of AI Winter?
Date: 
Message-ID: <BAD95330.3F49%gorbagNOSPAM@NOSPAMmac.com>
On 5/3/03 3:44 AM, in article ················@longley.demon.co.uk, "David
Longley" <·····@longley.demon.co.uk> wrote:

> What can be
> productively doubted is whether "AI" is really anything other than
> engineering - anything more than an attempt to make some areas of
> engineering appear more "sexy" or meritorious through a misguided
> association with the "psychological".

I don't understand the negative connotation of your comment; if science is
the study of the observed universe and coming to grips with some descriptive
models that can predict what we will see, and engineering is starting with
some goal behavior and creating a system which produces that behavior, then
indeed, most of AI is about engineering not science. One can say the same
about most of "computer science" or indeed almost anything that can get you
an actual job. There is no need to be pejorative about it. But I don't think
most of AI is unnecessarily more associated with psychology than many other
things. Any technology that has some human interfaces (including, e.g., the
design of programming languages or the design of roadways) by rights ought
to have some concern for the actual users of the technology and design
accordingly. I don't think AI is, or holds itself to be, special in this
regard. (Perhaps special only in that some of the work tries to make the
model explicit and have the program reason about the model itself, rather
than implicit in terms of rules of thumb for design by a human designer).
But it is such reasoning about the models that does make AI somewhat a
different branch of engineering than, say, programming language design.

And it is this association with the "psychological" through models that
gives certain techniques their raison d'etre. If a program solves problems
in a way similar to the way a human thinks about how they solve problems; if a
program can then address language usage (at the pragmatics level) in a way
similar to the way a human uses language (e.g., in terms of argumentation
structure, not just word senses), then we will presumably have the tools to
build systems and services that are easier for humans to use. It is not
necessary that the mechanisms be the same as the wetware we employ; what is
key is only that a lay human's introspection of their own problem-solving
methodology is modeled. (This is still an open problem AFAIK).

I don't think such interfaces will fall out of "standard engineering
methodologies" e.g., capability maturity models pushed by SEI at CMU, or
pair programming, etc.
From: Christopher C. Stacy
Subject: Re: history of AI Winter?
Date: 
Message-ID: <u65or21rv.fsf@dtpq.com>
>>>>> On Sat, 03 May 2003 11:06:24 -0700, Gorbag  ("Gorbag") writes:

 Gorbag> On 5/3/03 3:44 AM, in article ················@longley.demon.co.uk, "David
 Gorbag> Longley" <·····@longley.demon.co.uk> wrote:

 >> What can be
 >> productively doubted is whether "AI" is really anything other than
 >> engineering - anything more than an attempt to make some areas of
 >> engineering appear more "sexy" or meritorious through a misguided
 >> association with the "psychological".

 Gorbag> I don't understand the negative connotation of your comment; if science is
 Gorbag> the study of the observed universe and coming to grips with some descriptive
 Gorbag> models that can predict what we will see, and engineering is starting with
 Gorbag> some goal behavior and creating a system which produces that behavior, then
 Gorbag> indeed, most of AI is about engineering not science. One can say the same
 Gorbag> about most of "computer science" or indeed almost anything that can get you

Any field with the word "science" in it, isn't science.
From: sv0f
Subject: Re: history of AI Winter?
Date: 
Message-ID: <none-763BE9.16211903052003@news.vanderbilt.edu>
In article <·············@dtpq.com>,
 ······@dtpq.com (Christopher C. Stacy) wrote:

>Any field with the word "science" in it, isn't science.

This sentence, it isn't true.

The Christian Science Monitor

Library Science

Science Magazine (http://www.sciencemag.org/)

Scientology

Creation Science
From: BK
Subject: Re: history of AI Winter?
Date: 
Message-ID: <39d9c156.0305032039.545ceec@posting.google.com>
Gorbag <············@NOSPAMmac.com> wrote ...

> ... if science is the study of the observed universe and coming to grips
> with some descriptive models that can predict what we will see,

I would also include "if not predict then at least *explain* what we
see".

> and engineering is starting with some goal behavior and creating a system
> which produces that behavior, then indeed, most of AI is about engineering
> not science.

Who cares whether AI is science or "just engineering" while the
remainder of the software field doesn't even qualify to carry the name
"engineering" anymore?!

> One can say the same about most of "computer science"

Really?

Isn't it rather that most of "computer science" is not
even engineering, at least in most software-related areas?!


"Descriptive models" are very hardly used at all, at least not in a
way that they would have any effect, and the average software engineer
is less skilled at "predicting what we will see" than the average
psychic with a cristal ball.

This becomes painfully apparent when something goes wrong. In any
*real* engineering field the average engineer skilled in the art will
be able to analyse the average problem, figure out what went wrong and
devise a plan what to do in order to fix the problem.

Not so in the field of "software engineering". Average problems are
widely accepted as being "too expensive to fix" because "software is
so flexible, so difficult to predict" and "practically impossible to
fix".

The average "expert advice" is widely known to be "go back to square
one and start over" which can be in various forms from "restart and
try again" to "reinstall and try again".

To add insult to injury, the overwhelming majority of those "software
engineers" don't even seem to care. Instead they hurl insults at
anybody who has the courage to ask the painful question what happened
to "engineering". No pride.

Software *engineering* ??? The term itself has become a paradox.

Perhaps AI needs to advertise as "true engineering" in order to
distance itself from the unscientific and un-engineering culture that
has crept up in the remainder of most of "computer science". Science
or not, as long as it remains true and good engineering it will set
itself apart already.

rgds
bk
From: Tom Osborn
Subject: Re: history of AI Winter?
Date: 
Message-ID: <3eb76f4b$1_1@news.iprimus.com.au>
"Gorbag" <············@NOSPAMmac.com> wrote in message ·······························@NOSPAMmac.com...
>... Stumbling onto intelligence through simulacrum (learning) or
> happenstance (evolution) doesn't really tell you anything about
> *intelligence*, only something that seems to be intelligent when you observe
> it, but you cannot be sure. (And this is the foundation of the "war" between
> the NN and symbolic crowd, a thread that continues to exist today because it
> has not been resolved - you will never be able to tell WHY a NN does what it
> does, or make predictions about what it will (not) do next. 

? It's still a silly war.

Explanations in a rule-based system or other symbolic/declarative system are
THOSE explanations because other explanations cannot be produced
by the system (being unsupported by "facts", "rules", etc). That's explanation qua
explanation. My great^n grandfather knew that, and he knew less than
any of you guys. His limited knowledge didn't stop him drawing
conclusions, and sometimes he was right, too.

A NN or Bayesian net makes its prediction as some kind of quantitative
compression of the data it has been exposed to. One can "go back" from
the prediction to the data instances which led to it being learned (or
at least which are consistent with it now).

Maybe time to talk to a real psychologist about how humans explain 
their cognitive processes. They tell stories...

Tom.
From: Tom Osborn
Subject: Re: history of AI Winter?
Date: 
Message-ID: <3eb76bc1$1_1@news.iprimus.com.au>
"David Longley" <·····@longley.demon.co.uk> wrote in message ·····················@longley.demon.co.uk...
> In article <··········@news.iprimus.com.au>, Tom Osborn <·······@DELETE_
> CAPS.nuix.com.au> writes
> >
> >My view (having been there at the time) is that three things went wrong.
> > ...
> There's another possibility - namely that those pursuing "AI" had
> misconceived the nature of human "intelligence" and/or skills. More
> fundamentally, they had misconceived the nature of the psychological.

Misconceptions, false starts, and delusions about psychological processes 
(the whole HIP era if you like) were (and are) a large part of the game.
There ain't no progress without it. What works in one decade is rubbish
in the next (often). Everyone used to be excited by computer chess and
game trees...  Eliza was a hot chick!

Tom.
From: David Longley
Subject: Re: history of AI Winter?
Date: 
Message-ID: <sROpmDAIn3t+Ew88@longley.demon.co.uk>
In article <············@news.iprimus.com.au>, Tom Osborn <·······@DELET
E_CAPS.nuix.com.au> writes
>
>
>"David Longley" <·····@longley.demon.co.uk> wrote in message news:AFDizQA4Pjs+Ew
>··@longley.demon.co.uk...
>> In article <··········@news.iprimus.com.au>, Tom Osborn <·······@DELETE_
>> CAPS.nuix.com.au> writes
>> >
>> >My view (having been there at the time) is that three things went wrong.
>> > ...
>> There's another possibility - namely that those pursuing "AI" had
>> misconceived the nature of human "intelligence" and/or skills. More
>> fundamentally, they had misconceived the nature of the psychological.
>
>Misconceptions, false starts, and delusions about psychological processes 
>(the whole HIP era if you like) were (and are) a large part of the game.
>There ain't no progress without it. What works in one decade is rubbish
>in the next (often). Everyone used to be excited by computer chess and
>game trees...  Eliza was a hot chick!
>
>Tom.

Don't get me wrong - I'm not knocking the *work* done by those who have
endeavoured to engineer under the auspices of "AI" - there's no reason
why some of the products should not prove to be useful or inspirational
(I'm sure ELIZA proved, in many cases, as effective as Rogerian
Counselling <g>). "Systems Engineering" or whatever, I have no trouble
with. But so many of the 'interesting' developments in AI have been
computational versions of the heuristics which folk believe they use,
when a little application of science from elsewhere shows such
'heuristics' to be (upon reflection, unsurprisingly) deserving of their
name (*heuristics*).

What we should have learned by now is: "Don't send a man to do a
machine's job" - and what this really means.

It doesn't mean that there is mileage in AI - just science and
engineering.
 
-- 
David Longley
From: Eray Ozkural  exa
Subject: Re: history of AI Winter?
Date: 
Message-ID: <fa69ae35.0305060645.3499673d@posting.google.com>
David Longley <·····@longley.demon.co.uk> wrote in message news:<················@longley.demon.co.uk>...
> It doesn't mean that there is mileage in AI - just science and
> engineering.


Yes; however, with psychology also claiming to be a branch of science,
there are no necessary borders between AI and psychology.

Just like there are *no* borders between science and philosophy. What
we really are doing is making new philosophy. We will see that if we
get stuck in the mud, we can never learn to fly.

(Engineering, OTOH, is more concerned with products, marketing,
end-users, etc. than pure theoretical curiosity such as that of an AI
researcher trying to build strong AI.)

I would also like to assert confidently that most AI should be seen as
"computational psychology". With the advent of better mathematical
education, the connection will be trivially seen by many folk. So
maybe the Russians already see this as evident. :)

Cheers,

__
Eray Ozkural
From: David Longley
Subject: Re: history of AI Winter?
Date: 
Message-ID: <U7OHsHAKe9t+Ew5A@longley.demon.co.uk>
In article <····························@posting.google.com>, Eray
Ozkural  exa <·····@bilkent.edu.tr> writes
>David Longley <·····@longley.demon.co.uk> wrote in message news:<sROpmDAIn3t+Ew8
>·@longley.demon.co.uk>...
>> It doesn't mean that there is mileage in AI - just science and
>> engineering.
>
>
>Yes; however, with psychology also claiming to be a branch of science,
>there are no necessary borders between AI and psychology.
>

There's no disputing that some areas of psychology are scientific. The
technologies produced are reliable and well established. Much of this is
derived from those working in the experimental analysis of behavior.

There is psychology as "Cognitive Science" and psychology as
"Behavioural Science" - and there's a very important difference between
the two (I recommend Skinner's diatribes here - see his last books in
particular). 

>Just like there are *no* borders between science and philosophy. What
>we really are doing is making new philosophy. We will see that if we
>get stuck in the mud, we can never learn to fly.

I agree that there's no hard and fast border between some aspects of
philosophy and science - but .... that all depends on what precisely one
does here. Much of what one does as a theoretician in science could be
said to be indistinguishable from how one behaves in certain areas of
philosophy (e.g. mathematical logic, logic per se, or even theoretical
physics). Where that distinction doesn't matter is where the predicates
are *predicates*. On the other hand, where one is likely to drift off
dangerously is where one mistakes mental terms for predicates.


>
>(Engineering, OTOH, is more concerned with products, marketing,
>end-users, etc. than pure theoretical curiosity such as that of an AI
>researcher trying to build strong AI.)

I suspect we are giving "engineering" a different scope.

>
>I would also like to assert confidently that most AI should be seen as
>"computational psychology". With the advent of better mathematical
>education, the connection will be trivially seen by many folk. So
>maybe the Russians already see this as evident. :)
>
>Cheers,
>
>__
>Eray Ozkural

That could of course be so - as computational psychology may be just as
chimerical. My point being that 'computational psychology' is an
oxymoron. 'Computational behavioural science' on the other hand is just
something of a tautology.
-- 
David Longley
From: Eray Ozkural  exa
Subject: Re: history of AI Winter?
Date: 
Message-ID: <fa69ae35.0305061334.78cd96d@posting.google.com>
David Longley <·····@longley.demon.co.uk> wrote in message news:<················@longley.demon.co.uk>...
> 
> I suspect we are giving "engineering" a different scope.


To be more accurate: engineering is mostly "applied science", but AI
research does have a large quotient in the "pure science" of CS, which
is more concerned with theoretical entities such as time/space bounds,
algorithmic complexity, proofs, etc.

Regards,

__
Eray Ozkural
les miserable phd stu.
From: Eray Ozkural  exa
Subject: Re: history of AI Winter?
Date: 
Message-ID: <fa69ae35.0305061336.28e7899a@posting.google.com>
David Longley <·····@longley.demon.co.uk> wrote in message news:<················@longley.demon.co.uk>...
> 
> That could of course be so - as computational psychology may be just as
> chimerical. My point being that 'computational psychology' is an
> oxymoron. 'Computational behavioural science' on the other hand is just
> something of a tautology.

Excuse me but from this point of view, I still cannot deduce what your
answer would be to the question of mind, namely "What is a mind?"

Regards,

__
Eray Ozkural
From: David Longley
Subject: Re: history of AI Winter?
Date: 
Message-ID: <RtXm5JAgYLu+EwAE@longley.demon.co.uk>
In article <····························@posting.google.com>, Eray
Ozkural  exa <·····@bilkent.edu.tr> writes
>David Longley <·····@longley.demon.co.uk> wrote in message 
>news:<U7OHsHAKe9t+Ew5
>·@longley.demon.co.uk>...
>> 
>> That could of course be so - as computational psychology may be just as
>> chimerical. My point being that 'computational psychology' is an
>> oxymoron. 'Computational behavioural science' on the other hand is just
>> something of a tautology.
>
>Excuse me but from this point of view, I still cannot deduce what your
>answer would be to the question of mind, namely "What is a mind?"
>

A largely mistaken way of trying to make sense of behaviour. A widely
shared modus vivendi, rooted in ordinary language. 

"Fragments of Behaviour: The Extensional Stance" 
(http://longley.demon.co.uk/Frag.htm) is an attempt to outline why the
above is so and how to go about practising an alternative in an applied
field. It's the theoretical part of a 12 volume system specification
called "PROBE: A System Specification for PROfiling BEhaviour".


-- 
David Longley
From: sv0f
Subject: Re: history of AI Winter?
Date: 
Message-ID: <none-00217F.09341507052003@news.vanderbilt.edu>
In article <················@longley.demon.co.uk>,
 David Longley <·····@longley.demon.co.uk> wrote:

>In article <····························@posting.google.com>, Eray
>Ozkural  exa <·····@bilkent.edu.tr> writes
>>
>>Excuse me but from this point of view, I still cannot deduce what your
>>answer would be to the question of mind, namely "What is a mind?"
>>
>
>A largely mistaken way of trying to make sense of behaviour. A widely
>shared modus vivendi, rooted in ordinary language. 

I thought Chomsky knocked the stuffing out of behaviorism back
in 1959?!

>"Fragments of Behaviour: The Extensional Stance" 
>(http://longley.demon.co.uk/Frag.htm) is an attempt to outline why the
>above is so and how to go about practising an alternative in an applied
>field. It's the theoretical part of a 12 volume system specification
>called "PROBE: A System Specification for PROfiling BEhaviour".

Ah, I see.  I'll keep my eyes open for the Time Life
pleather edition and display it prominently on my
bookshelf between Will Durant's History of Philosophy
and the collected works of James Michener.

On a serious note: Who is considered the most famous
British behaviorist?  The only names that come to my
mind are American, but I know very little of the
history of this school of psychology.
From: David Longley
Subject: Re: history of AI Winter?
Date: 
Message-ID: <i1gqPUA8sTu+EwGg@longley.demon.co.uk>
In article <··························@news.vanderbilt.edu>, sv0f
<····@vanderbilt.edu> writes
>In article <················@longley.demon.co.uk>,
> David Longley <·····@longley.demon.co.uk> wrote:
>
>>In article <····························@posting.google.com>, Eray
>>Ozkural  exa <·····@bilkent.edu.tr> writes
>>>
>>>Excuse me but from this point of view, I still cannot deduce what your
>>>answer would be to the question of mind, namely "What is a mind?"
>>>
>>
>>A largely mistaken way of trying to make sense of behaviour. A widely
>>shared modus vivendi, rooted in ordinary language. 
>
>I thought Chomsky knocked the stuffing out of behaviorism back
>in 1959?!


An urban myth. Chomsky didn't understand what he was criticizing and
those in the experimental analysis of behaviour tradition didn't think
his critique was really worthy of rebuttal. It does take some time and
study to learn what radical behaviourism is all about (try doing
neuroscience research without it, for example - along with classical
conditioning paradigms (also hopelessly misunderstood by most non-
specialists), this work really is where serious research is still done).
There are one or two responses to Chomsky - have a look at
MacCorquodale's (1970) 'On Chomsky's review of Skinner's VERBAL
BEHAVIOR' journal of the Experimental Analysis of Behavior, 13, pp
83-99. Or better still, read some of Skinner's work first hand (few do).
One has to be prepared to relinquish some of the hold which folk
psychology has on one to appreciate the merits of the work - although a
casual reading of Skinner's "Beyond Freedom and Dignity" has enlightened
many. 

To dismiss 'behaviourism' on the basis of what one has heard Chomsky
said, is to dismiss over a dozen scientific journals which regularly
publish research work, several since the turn of the century - you'll
also find the technology at work in articles in journals within
neuroscience, pharmacology, zoology and so on.

The main point to appreciate is that those working in behavioural
science are generally not very interested in responding to the sorts of
popularist things that Chomsky etc have had to say (this tends to be
true of many researchers in many areas from my experience). 

The so-called "cognitive revolution" of the late 50s and early 60s is
also widely misunderstood - even within psychology. As I've endeavoured
to explain in "Fragments", this was not a return of the mind/mental -
but a shift in what was studied - 'heuristics and their biases' to use a
70s title.

A full understanding of the nature of the 'cognitive revolution' would
also help in understanding why I assert that naive "AI" is misguided. 


>
>>"Fragments of Behaviour: The Extensional Stance" 
>>(http://longley.demon.co.uk/Frag.htm) is an attempt to outline why the
>>above is so and how to go about practising an alternative in an applied
>>field. It's the theoretical part of a 12 volume system specification
>>called "PROBE: A System Specification for PROfiling BEhaviour".
>
>Ah, I see.  I'll keep my eyes open for the Time Life
>pleather edition and disply it prominently on my
>bookshelf between Will Durant's History of Philosophy
>and the collected works of James Michener.

Well, "Fragments.." itself is only 100 odd pages, and is available on
the web as the file referenced above - the rest is basically a networked
DBMS. 

>
>On a serious note: Who is considered the most famous
>British behaviorist?  The only names that come to my
>mind are American, but I know very little of the
>history of this school of psychology.

Famous to who? Within psychology lots of names will be recognised - but
it isn't something which lends itself to popularization too readily. You
may have heard of Eysenck, Gray, Broadbent, Mackintosh. Behaviourism
really cashes out as empiricism - with some care as to which terms one
is prepared to consider predicates.

-- 
David Longley
From: sv0f
Subject: Re: history of AI Winter?
Date: 
Message-ID: <none-3BC95E.15400307052003@news.vanderbilt.edu>
In article <················@longley.demon.co.uk>,
 David Longley <·····@longley.demon.co.uk> wrote:

>There are one or two responses to Chomsky - have a look at
>MacCorquodale's (1970) 'On Chomsky's review of Skinner's VERBAL
>BEHAVIOR' journal of the Experimental Analysis of Behavior, 13, pp
>83-99.

I've heard this reference named before.  I'll put it on my
list of papers to dig up and read.

>Or better still, read some of Skinner's work first hand (few do).
>One has to be prepared to relinquish some of the hold which folk
>psychology has on one to appreciate the merits of the work - although a
>casual reading of Skinner's "Beyond Freedom and Dignity" has enlightened
>many.

Thanks, but I'll pass.  I've read some Skinner as well as
some interviews with the man.  It wasn't my cup of tea.
There are a lot of other schools of psychology whose
classics I expect to read before I get around to Skinner,
Watson, and the rest of the behaviorists.  Time is limited,
Boring's (1950) history of the field is encyclopedic, and one
must make choices...

>To dismiss 'behaviourism' on the basis of what one has heard Chomsky
>said, is to dismiss over a dozen scientific journals which regularly
>publish research work, several since the turn of the century

I'm curious.  You dismiss cognitive psychology and its academic
associates as patently false.  Yet there exist many scientific
journals that publish the fruits of their research.  Doesn't
this make them legitimate?

>you'll
>also find the technology at work in articles in journals within
>neuroscience, pharmacology, zoology and so on.

From what I understand, this is where behaviorism shines --
in certain kinds of applied settings.  Even fans of cognitive
psychology, the mind, and the like must grant this.

>The main point to appreciate is that those working in behavioural
>science are generally not very interested in responding to the sorts of
>popularist things that Chomsky etc have had to say (this tends to be
>true of many researchers in many areas from my experience).

Chomsky and others have written scholarly critiques of
behaviorism as well, whatever you think of his review
of "Verbal Behavior".  For example, does anyone think
behaviorist principles can explain complex syntactic
phenomena such as long-distance dependencies?

>The so-called "cognitive revolution" of the late 50s and early 60s is
>also widely misunderstood - even within psychology. As I've endeavoured
>to explain in "Fragments", this was not a return of the mind/mental -
>but a shift in what was studied - 'heuristics and their biases' to use a
>70s title.

The cognitive revolution is more than just Simon, Tversky,
and Kahneman.

I think you're using these terms -- "heuristics", "biases"
-- because they sound informal, and rhetorically this serves your
point that "cognition", "mind", and the like are soft-headed
folk terms not appropriate for scientific discussions.

(And I note, for those who value prestige prizes such as
the Nobel, that Kahneman just won the Nobel in economics
for his work on "heuristics and their biases", and of
course Simon won long ago for equally cognitive work.)

>Famous to who? Within psychology lots of names will be recognised - but
>it isn't something which lends itself to popularization too readily. You
>may have heard of Eysenck, Gray, Broadbent, Mackintosh.

Sure, I've heard of Eysenck (clever Hans?) and even read
Broadbent.  His information-theory-inspired 1958 book is
typically cited as a foundational publication of the
cognitive revolution.  I confess this is his only work I
know.  Didn't know he was considered a behaviorist in
other circles.

>Behaviourism
>really cashes out as empiricism - with some care as to which terms one
>is prepared to consider predicates.

Behaviorism is credited with introducing experimental rigor
to psychology.  It did its job in this regard.  But
psychology has been rigorous in this sense for over half
a century now.  Most psychologists, especially the
experimentalists of each school (including cognition), view
themselves as empiricists.  They're the children of analytic
philosophy, of the logical positivists and their revision
by Popper.

Behaviorism, to an outsider, cashes out as the prohibition
of mental terms from psychological explanations, regardless
of experimental method.
From: David Longley
Subject: Re: history of AI Winter?
Date: 
Message-ID: <b2xUfAA7Qgu+Ewxl@longley.demon.co.uk>
In article <··························@news.vanderbilt.edu>, sv0f
<····@vanderbilt.edu> writes
>In article <················@longley.demon.co.uk>,
> David Longley <·····@longley.demon.co.uk> wrote:
>
>>There are one or two responses to Chomsky - have a look at
>>MacCorquodale's (1970) 'On Chomsky's review of Skinner's VERBAL
>>BEHAVIOR' journal of the Experimental Analysis of Behavior, 13, pp
>>83-99.
>
>I've heard this reference named before.  I'll put it on my
>list of papers to dig up and read.
>
>>Or better still, read some of Skinner's work first hand (few do).
>>One has to be prepared to relinquish some of the hold which folk
>>psychology has on one to appreciate the merits of the work - although a
>>casual reading of Skinner's "Beyond Freedom and Dignity" has enlightened
>>many.
>
>Thanks, but I'll pass.  I've read some Skinner as well as
>some interviews with the man.  It wasn't my cup of tea.
>There are a lot of other schools of psychology whose
>classics I expect to read before I get around to Skinner,
>Watson, and the rest of the behaviorists.  Time is limited,
>Boring's (1950) history of the field is encyclopedic, and one
>must make choices...
>

What many folk don't appreciate is that Skinner's work really is a
technology - and it works. One perhaps only comes to fully see its
merits when one has worked with lots of rats in Skinner Boxes. I managed
to do my undergraduate degree with very little actual exposure to this
technology. My post graduate work on the other hand required me to work
with large groups of animals in lots of Skinner boxes to get the data I
needed in conjunction with monoamine and neuropeptide research into the
substrates of learning and motivation. It was only then that I really
came to appreciate the force of Skinner's work. Having subsequently
worked in an applied area of psychology I've come to realise just how
hard behavioural science is. Folk tend to love "cognitive" talk - it
promises to explain so much but in fact doesn't - it just makes
everything *sound* like it's all under control - when in fact it's just
the verbal behaviour that flows.

It's a mistake to look for interesting reads (as a Peanuts cartoon once
depicted wonderfully) - for that it's better to look to literature
proper.

>>To dismiss 'behaviourism' on the basis of what one has heard Chomsky
>>said, is to dismiss over a dozen scientific journals which regularly
>>publish research work, several since the turn of the century
>
>I'm curious.  You dismiss cognitive psychology and its academic
>associates as patently false.  Yet there exist many scientific
>journals that publish the fruits of their research.  Doesn't
>this make them legitimate?


It is difficult to get control of variables when investigating
"cognitive" processes. The research tends to rely on null hypothesis
testing which means that one attributes causal or correlative relations
when one believes one has experimental control. Inferences are made
about hypothesised mediating processes and if one is critical it all
begins to look as shaky as psychoanalysis. It reads well, it lends
itself to deceptively appealing linkages but... rival theories are
difficult to test as they don't easily generate point predictions - it's
a mess.

What I was referring to above was the extent to which one finds solid
technology reported in JEAB and other journals in that area. There's a
section of "Fragments.." which covers some of the methodological
problems which riddle much of (cognitive) psychological research. 

>
>>you'll
>>also find the technology at work in articles in journals within
>>neuroscience, pharmacology, zoology and so on.
>
>From what I understand, this is where behaviorism shines --
>in certain kinds of applied settings.  Even fans of cognitive
>psychology, the mind, and the like must grant this.

They don't alas. They rarely appreciate that in science per se, it tends
to be only behaviourism.


>
>>The main point to appreciate is that those working in behavioural
>>science are generally not very interested in responding to the sorts of
>>popularist things that Chomsky etc have had to say (this tends to be
>>true of many researchers in many areas from my experience).
>
>Chomsky and others have written scholarly critiques of
>behaviorism as well, whatever you think of his review
>of "Verbal Behavior".  For example, does anyone think
>behaviorist principles can explain complex syntactic
>phenomena such as long-distance dependencies?

There is nothing "scholarly" about Chomsky's critique. To see a
behaviourist take Chomsky on at his own game (language), see Quine's
treatment of Chomsky....

"behaviorist principles" ? ..... to Quote Quine .."in language one has
to be a behaviourist".. 
>
>>The so-called "cognitive revolution" of the late 50s and early 60s is
>>also widely misunderstood - even within psychology. As I've endeavoured
>>to explain in "Fragments", this was not a return of the mind/mental -
>>but a shift in what was studied - 'heuristics and their biases' to use a
>>70s title.
>
>The cognitive revolution is more than just Simon, Tversky,
>and Kahneman.
>

Yes, it was folk like Bruner too. The point I was making was that the
revolution was not one which abandoned behaviour - it dethroned
cognition if anything.


>I think you're using these terms -- "heuristics", "biases"
>-- because they sound informal, and rhetorically this serves your
>point that "cognition", "mind", and the like are soft-headed
>folk terms not appropriate for scientific discussions.

I'm using them as references to the work of Tversky, Kahneman etc.
But also Meehl, Rescorla and Wagner, Kelley (Harold) - I could go on
and list research projects from Social Psychology, Personality, Learning
Theory, Cognition and Memory, Perception....

>
>(And I note, for those who value prestige prizes such as
>the Nobel, that Kahneman just won the Nobel in economics
>for his work on "heuristics and their biases", and of
>course Simon won long ago for equally cognitive work.)

The important thing is that there are good extensional alternatives to
these intensional strategies. If one wants to build a useful system one
would not base it on these heuristics. If one wants to model the dodgy
processes which comprise human cognition -  fine - but one should be
clear that these biases are distortions. We know about biases from
empirical work in statistics - and we have routines which do better
(e.g. regression) through sampling more representatively. Surely nobody
would recommend we build decision-making systems based on folk
reasoning....oh they have? ....what are they called?....Neural Networks?
...and they're as good as traders on the Dow Jones eh??..... (no wonder
the markets are going nowhere eh?<g>).

There comes a time when one has read what's in the journals (up to a
point), read what folk in different areas say about other research
programmes, read what they have said about their earlier work, and seen
and respected how young zealots come to mellow. 

In the end, having been through quite a lot over the years, I'd strongly
advise anyone in a position to ignore popular urban myths and read some
of the primary sources. Find out how a CER schedule is set up, learn
about Kamin's "blocking effect", "latent inhibition", why the Rescorla-
Wagner model was important, what "contrast effects are" etc etc.

Having said that - it is true that from the 70s onwards, work in
Learning Theory came to use a lot of "cognitive" sounding notions - or
so it seems - we've known, since Frege's days, that deductive inference
is an effective procedure. What I try to show in "Fragments" through
appeal to the empirical literature in human and animal behaviour, is
that there is good evidence for the limitations of the intensional - be
it a function of limited STM or other factors (a lack of neurones
connecting parts of the CNS), we do in fact operate a lot of the time as
if one hand doesn't know what the other is doing (I prefer the Oedipus -
Jocasta story - Oedipus lusted for Jocasta, but not his mother). 

>
>>Famous to who? Within psychology lots of names will be recognised - but
>>it isn't something which lends itself to popularization too readily. You
>>may have heard of Eysenck, Gray, Broadbent, Mackintosh.
>
>Sure, I've heard of Eysenck (clever Hans?) and even read
>Broadbent.  His information-theory-inspired 1958 book is
>typically cited as a foundational publication of the
>cognitive revolution.  I confess this is his only work I
>know.  Didn't know he was considered a behaviorist in
>other circles.
>

Folk often cite Neisser too.

>>Behaviourism
>>really cashes out as empiricism - with some care as to which terms one
>>is prepared to consider predicates.
>
>Behaviorism is credited with introducing experimental rigor
>to psychology.  It did its job in this regard.  But
>psychology has been rigorous in this sense for over half
>a century now.  Most psychologists, especially the
>experimentalists of each school (including cognition), view
>themselves as empiricists.  They're the children of analytic
>philosophy, of the logical positivists and their revision
>by Popper.

I'm not sure that Popper really had such an influence, it was Fisher
(and his use of Null Hypothesis tests). In my view, this was bad for
psychology (see the section of "Fragments" on the place of Null
Hypothesis testing).

>
>Behaviorism, to an outsider, cashes out as the prohibition
>of mental terms from psychological explanations, regardless
>of experimental method.

It isn't far from the truth when put that way, but few understand why
Skinner and others eschew mental terms. It's a little like discouraging
PhD students from taking up bad lab practices (ie speculating rather
than measuring and data logging). Skinner's point is that mental terms
or private terms are not under the control of the verbal community to
the extent that public terms are - folk are never sure of the referents.
For Quine, such terms comprise another closed world of explanation, with
the additional problem that they are resistant to the sine qua non of
the scientific - they are resistant to logical quantification and
substitutivity of identity. The outcome is the same - folk are never
sure what they are talking about. 

The alternative is painfully difficult - but then again, good science
always is.

-- 
David Longley
From: sv0f
Subject: Re: history of AI Winter?
Date: 
Message-ID: <none-AF0270.15530309052003@news.vanderbilt.edu>
David,

I enjoyed your response.  I think some of your/behaviorism's
criticisms of the cognitive and the mental are valid, but
view them as problems to be solved rather than fatal defects.
And others I don't buy but will perhaps revisit in the future.

(And I should add I read "Fragments" several years ago when
I used to lurk on comp.ai, so I'm not totally ducking your
more detailed exposition of your arguments.)

So let me wrap up my participation in this thread with some
apropos, or at least funny, quotes I ran into just yesterday:

"The differences between a conjuror and a psychologist is
one that pulls rabbits out of a hat while the other pulls
habits out of a rat." - Anonymous

"Enmity be between you! Too soon it is for alliance. Search
along separate paths, for that is how truth comes to light."
- Friedrich von Schiller

"What is matter? - Never mind.
What is mind? - It doesn't matter."
-- Anonymous

"I hate quotations. Tell me what you know." - Ralph Waldo
Emerson
From: David Longley
Subject: Re: history of AI Winter?
Date: 
Message-ID: <S0o2PaAZfCv+Ew4g@longley.demon.co.uk>
In article <··························@news.vanderbilt.edu>, sv0f
<····@vanderbilt.edu> writes
>David,
>
>I enjoyed your response.  I think some of your/behaviorism's
>criticisms of the cognitive and the mental are valid, but
>view them as problems to be solved rather than fatal defects.
>And others I don't buy but will perhaps revisit in the future.
>
>(And I should add I read "Fragments" several years ago when
>I used to lurk on comp.ai, so I'm not totally ducking your
>more detailed exposition of your arguments.)
>
>So let me wrap up my participation in this thread with some
>apropos, or at least funny, quotes I ran into just yesterday:
>
>"The differences between a conjuror and a psychologist is
>one that pulls rabbits out of a hat while the other pulls
>habits out of a rat." - Anonymous
>
>"Enmity be between you! Too soon it is for alliance. Search
>along separate paths, for that is how truth comes to light."
>- Friedrich von Schiller
>
>"What is matter? - Never mind.
>What is mind? - It doesn't matter."
>-- Anonymous
>
>"I hate quotations. Tell me what you know." - Ralph Waldo
>Emerson

Hi,

I doubt I'll be here for long - seem to have given myself RSI with all
this typing <g>.

Here's another 'been there, done that, don't bother...waste of time'.

The fatal weakness of the mental lies in our verbal behaviour - ie in
how we use language. It lies in how we behave with these linguistic
entities. As I see it, we *use* some terms (including those we call
mental or cognitive) as intensional idioms - and therein lies the
problem. There is an empirically demonstrable behavioural difference in
how we use mental terms relative to non-intensional terms. We don't use
them as we do extensional contexts (and I've pointed out how elsewhere).

The defining characteristic of the mental is that we don't use these
idioms the way we do extensional terms - different rules seem to apply -
we seem to have accepted that they either can't or shouldn't be so
constrained.

My guess is that as things are, we aren't scientifically or politically
sophisticated enough to live our lives otherwise - we just don't know
how to behave very well in these areas (Skinner was probably right when
he said that these behaviours are under much weaker control of the
reinforcing community and I suspect they may account for our feelings of
freedom - and dignity of course). Without our intensional idioms life
would just be too unforgiving as we haven't got the required alternative
language (ways of behaving). I reckon that will change slightly and
gradually in all sorts of ways as we learn more through science.

In the meantime, mental life (to some extent or another) will be with us
- and psychologists will study how it operates (as a modus vivendi) - in
my view - always through reference to behaviour. 

-- 
David Longley
From: Tom Osborn
Subject: Re: history of AI Winter?
Date: 
Message-ID: <3eb87526_1@news.iprimus.com.au>
"Eray Ozkural exa" <·····@bilkent.edu.tr> wrote in message ·································@posting.google.com...
> David Longley <·····@longley.demon.co.uk> wrote in message news:<················@longley.demon.co.uk>...
> > It doesn't mean that there is mileage in AI - just science and engineering.
> ...
> (Engineering, OTOH, is more concerned with products, marketing,
> end-users, etc. than pure theoretical curiosity such as that of an AI
> researcher trying to build strong AI.)

This is looking more interesting now than when I thought about it before.
The notion of what is _engineering_, or what is _good_ engineering 
needs more thought (at least from me). 

I'm thinking that the difference between scientific/philosophical truths and
engineering "truths" is the difference between logical/statistical validation
for the former, and pragmatism. Extrinsically, pragmatism is simply about
how satisfactory an artefact is at meeting a "spec" or need. Intrinsically
it is far more interesting (it was the intrusion of the word _marketing_ that
triggered this thought).

Intrinsically, a well engineered artefact embodies the whole situation of the
"spec" or need. Need is something humans have or come to develop by
all manner of cognitive, physiological, social and psychological processes.
Something _well_ engineered should at least be mindful of that. 

In my younger days, I was naively antagonistic to pragmatism, but later
came to see that it embodied much more than superficial appearances 
suggest. The explanation of what a product is for is about the
needs of the user (even beyond what the user may comprehend).
Why it works and why it is reliable comes down to things like
laws of physics and materials, managed production, and usability,
all of which can be mechanically handled (or done with some intuition).

The explanations sought by purists can be teased out of the most surprising
things.

Can an engineering factory, or a LAN, or an AI system have things like
"stance", "intention", NLP/NLG, sense of humour, etc, to the level of
a social being, or (say) attentive waitress? Answer? Well, better than
some...   [OK, tongue in cheek. If the social being is serious and
doesn't work for FoxNews, there is a level of presence I have not seen
in any AI system].

Tom.
From: Eray Ozkural  exa
Subject: Re: history of AI Winter?
Date: 
Message-ID: <fa69ae35.0305070808.6371a01@posting.google.com>
"Tom Osborn" <·······@DELETE CAPS.nuix.com.au> wrote in message news:<··········@news.iprimus.com.au>...
> Can an engineering factory, or a LAN, or an AI system have things like
> "stance", "intention", NLP/NLG, sense of humour, etc, to the level of
> a social being, or (say) attentive waitress? Answer? Well, better than
> some...   [OK, tongue in cheek. If the social being is serious and
> doesn't work for FoxNews, there is a level of presence I have not seen
> in any AI system].


There is no question that the "actual" AI will be engineered. There is
also no question that the mental is not unique to biology. Such a claim
would be highly bio-centric, a fascistic discrimination that could
only fit the Middle Ages. We are in the 21st century.

I think you can say it very loud!

However, with the pressure of demands and production and feasibility,
the holy grail of AI is not to be found in corporate labs. It is
highly unlikely that non-trivial research that could yield its
fruits 200 years later will be supported by any corporation that
doesn't have a huge amount of money to waste. I know, Microsoft is
like that, but even they AFAIK aren't trying to breed an AI in a
super-secret lab into which they pour billions of their dollars.

That economic observation should sound more intelligible.

BTW, there is something very interesting: there is a Dr. Chandra who
is really a supercomputing specialist. :) [*] I noticed that when I
was doing some research. Who knows, maybe somebody's already got a
HAL9000, you never know. (^_-)

Regards,

__
Eray Ozkural <erayo at cs.bilkent.edu.tr>

[*] Of course that's quite likely!!!!! An entertaining anecdote,
anyway.
From: Fred Gilham
Subject: Re: history of AI Winter?
Date: 
Message-ID: <u7u1c6g05z.fsf@snapdragon.csl.sri.com>
·····@bilkent.edu.tr (Eray Ozkural  exa) writes:
> There is no question that the "actual" AI will be engineered. There is
> also no question that the mental is not unique to biology. Such a
> claim would be highly bio-centric, a fascistic discrimination that
> could only fit the Middle Ages. We are in the 21st century.

The center for the enforcement of political correctness has determined
that the above paragraph is temporo-centric.  Please consider yourself
detained by the masses for questioning.

There's a certain irony to be noted in the fact that it's also
anachronistic, but that's more an issue of, shall we say, level of
mentality, than actual political unreliability.

-- 
Fred Gilham                                         ······@csl.sri.com
In Nashville there ain't no money above the third fret. -- Jay Carlson
From: C McKew
Subject: Re: history of AI Winter?
Date: 
Message-ID: <0riua.30790$1s1.453597@newsfeeds.bigpond.com>
"Eray Ozkural exa" <·····@bilkent.edu.tr> wrote in message ································@posting.google.com...
> "Tom Osborn" <·······@DELETE CAPS.nuix.com.au> wrote in message news:<··········@news.iprimus.com.au>...
> > Can an engineering factory, or a LAN, or an AI system have things like
> > "stance", "intention", NLP/NLG, sense of humour, etc, to the level of
> > a social being, or (say) attentive waitress? Answer? Well, better than
> > some...   [OK, tongue in cheek. If the social being is serious and
> > doesn't work for FoxNews, there is a level of presence I have not seen
> > in any AI system].
>
>...
> However, with the pressure of demands and production and feasibility,
> the holy grail of AI is not to be found in corporate labs. It is
> highly unlikely that non-trivial research that could yield its
> fruits 200 years later will be supported by any corporation that
> doesn't have a huge amount of money to waste. I know, Microsoft is
> like that, but even they AFAIK aren't trying to breed an AI in a
> super-secret lab into which they pour billions of their dollars.
> 
> That economic observation should sound more intelligible.

Maybe so. The path to getting things funded is based on stages and
proven returns (and the odd failure). The path to academic research
is getting people like Eray and David into one room with some creature
comforts and encouraging discussion and challenges.

I've had to sell AI solutions for the past four years or so, and it's a matter
of plugging in existing and "probably will work" AI into somebody's
business need. All the "probably will work" parts need fallback components
that WILL work, but not as well as the novel proposals. The NICE 
thing about this is that there are MANY fallback components out there...

Tom (using Cate's email account).
From: David Longley
Subject: Re: history of AI Winter?
Date: 
Message-ID: <lmXb7FAivgu+EwWn@longley.demon.co.uk>
In article <······················@newsfeeds.bigpond.com>, C McKew
<··········@yahoo.com> writes
>
>"Eray Ozkural exa" <·····@bilkent.edu.tr> wrote in message news:fa69ae35.0305070
>···········@posting.google.com...
>> "Tom Osborn" <·······@DELETE CAPS.nuix.com.au> wrote in message news:<3eb87526
>··@news.iprimus.com.au>...
>> > Can an engineering factory, or a LAN, or an AI system have things like
>> > "stance", "intention", NLP/NLG, sense of humour, etc, to the level of
>> > a social being, or (say) attentive waitress? Answer? Well, better than
>> > some...   [OK, tongue in cheek. If the social being is serious and
>> > doesn't work for FoxNews, there is a level of presence I have not seen
>> > in any AI system].
>>
>>...
>> However, with the pressure of demands and production and feasibility,
>> the holy grail of AI is not to be found in corporate labs. It is
>> highly unlikely that non-trivial research that could yield its
>> fruits 200 years later will be supported by any corporation that
>> doesn't have a huge amount of money to waste. I know, Microsoft is
>> like that, but even they AFAIK aren't trying to breed an AI in a
>> super-secret lab into which they pour billions of their dollars.
>> 
>> That economic observation should sound more intelligible.
>
>Maybe so. The path to getting things funded is based on stages and
>proven returns (and the odd failure). The path to academic research
>is getting people like Eray and David into one room with some creature
>comforts and encouraging discussion and challenges.
>

The creature comforts would have to include sufficient remuneration <g>. 

It's something often overlooked, but at University etc., lecturers,
researchers etc. are normally paid to hear the "challenges" of
undergrads and postgrads.....

>I've had to sell AI solutions for the past four years or so, and it's a matter
>of plugging in existing and "probably will work" AI into somebody's
>business need. All the "probably will work" parts need fallback components
>that WILL work, but not as well as the novel proposals. The NICE 
>thing about this is that there are MANY fallback components out there...
>
>Tom (using Cate's email account).

Similarly, I had to "sell" ICT solutions for identifying and monitoring
difficult inmates for a decade or so (I was a government psychologist).
Rather than encourage folk to write reports based on how and what they
'thought', the idea was to collect enough data on inmates (such as
number of Governors reports, the intervals between them, time of day,
type, location etc, compared to norms, and relate these to other
measures - sentence type, offence, age and so on, as well as movements
(disciplinary, visits etc.). The idea being that very often, reports are
based on subjective versions of this very same data. I would argue
strongly that profiling positive attainment in conjunction with these
less frequent negative measures, with appropriate reference classes, is
a very good way of arriving at fair and representative reports.

It was only after we had generated automated reports drawing on a DBMS
that those in daily contact with inmates began to appreciate the extent
to which their own appraisals were somewhat biased and relatively ill-
informed.

All of that came after a piece of research which looked at how experts
selected candidates for special units, and how inconsistent they were
when not making their selections based on sound data.

This, incidentally, is why the applied context of what I referenced at:
http://www.longley.demon.co.uk/Frag.htm is worth wading through even if
one thinks one has little interest in it.

-- 
David Longley
From: C McKew
Subject: Re: history of AI Winter?
Date: 
Message-ID: <vuCua.31308$1s1.459459@newsfeeds.bigpond.com>
"David Longley" <·····@longley.demon.co.uk> wrote in message ·····················@longley.demon.co.uk...
> In article <······················@newsfeeds.bigpond.com>, C McKew
> <··········@yahoo.com> writes

> >Maybe so. The path to getting things funded is based on stages and
> >proven returns (and the odd failure). The path to academic research
> >is getting people like Eray and David into one room with some creature
> >comforts and encouraging discussion and challenges.
> >
> 
> The creature comforts would have to include sufficient remuneration <g>. 
> 
> It's something often overlooked, but at University etc., lecturers,
> researchers etc. are normally paid to hear the "challenges" of
> undergrads and postgrads.....

That's the ideal which some academics live up to. Doesn't always work
that way. Worst case is "closing ranks, being an authority, being lazy and
lacking curiosity". Whatever governments do, they tend to encourage the
worst case. Luckily, pride or innate curiosity or virtuous ethics or naive
commitment to science (etc) saves many academics from the fall. If they
fell, they may end up in politics (departmental or 'actual')...

> This, incidentally, is why the applied context of what I referenced at:
> http://www.longley.demon.co.uk/Frag.htm is worth wading through even if
> one thinks one has little interest in it.

Will check it out.

Tom (not Cate).
From: Jeff Caldwell
Subject: Re: history of AI Winter?
Date: 
Message-ID: <Xstsa.10446$Jf.4990716@news1.news.adelphia.net>
Tom,

I take it you exclude a Prolog compiler embedded in Lisp, as in Chapter 
12 of PAIP. Pure Prolog because it is more efficient than a Prolog 
written and embedded in Lisp, as in PAIP, or because of a deeper 
semantic difference?

Jeff

Tom Osborn wrote:
>...I think Prolog was a better
> bet for the long term...  
From: ozan s yigit
Subject: Re: history of AI Winter?
Date: 
Message-ID: <vi4r87hv0rv.fsf@blue.cs.yorku.ca>
Jeff Caldwell <·····@yahoo.com> writes [to tom osborn]:

> I take it you exclude a Prolog compiler embedded in Lisp, as in
> Chapter 12 of PAIP.

actually there have been more interesting couplings of the two, eg. dorai's
schelog, and another scheme+prolog implementation at indiana. i suspect that
this ends up being an academic exercise with negligible value in production
use. the two can play together, but it always feels forced, substandard.
quintus is not easy to substitute in allegro, and vice versa.

oz

> 
> Tom Osborn wrote:
> >...I think Prolog was a better
> > bet for the long term...
> 

---
practically no other tree in the forest looked so tree-like as this tree.
	-- terry pratchett
From: Christopher Browne
Subject: Re: history of AI Winter?
Date: 
Message-ID: <b8ttao$dk1jk$1@ID-125932.news.dfncis.de>
Quoth Jeff Caldwell <·····@yahoo.com>:
> Tom Osborn wrote:
>>...I think Prolog was a better
>> bet for the long term...
>
> I take it you exclude a Prolog compiler embedded in Lisp, as in
> Chapter 12 of PAIP. Pure Prolog because it is more efficient than a
> Prolog written and embedded in Lisp, as in PAIP, or because of a
> deeper semantic difference?

The point would be that implementing systems using Prolog involves
mostly writing code that declares intent rather than declaring how to
compute things.
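
To make "declares intent" concrete, here's a minimal sketch in the
PAIP-style <- / ?- notation mentioned upthread (the macro names are
assumed from that book; any Prolog reads much the same):

  ;; One relation, stated as intent only: appending ?xs and ?ys gives ?zs.
  (<- (append () ?ys ?ys))
  (<- (append (?x . ?xs) ?ys (?x . ?zs)) (append ?xs ?ys ?zs))

  ;; The same two clauses answer different questions; *how* they get
  ;; answered is the engine's business, not the caller's.
  (?- (append (1 2) (3 4) ?result))       ; run it "forwards"
  (?- (append ?front ?back (1 2 3 4)))    ; or ask how a list splits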

In the long run (of course, Keynes observes "we're all dead"...), the
hope would be for Prolog system environments to get increasingly
efficient at picking different computational techniques, in much the
way that SQL DBMSes have gradually gotten better at tuning queries.

It may run into limits as to how much fancier the back end can get at
"tuning," and fruitful counterarguments would also fall out of claims
amounting to "Prolog isn't expressive enough to describe what we want
to describe."

Those all seem to be somewhat tenable positions...
-- 
If this was helpful, <http://svcs.affero.net/rm.php?r=cbbrowne> rate me
http://cbbrowne.com/info/prolog.html
"What is piracy? Piracy is the act of stealing an artist's work without
any intention of paying for it. I'm not talking about Napster-type
software.  I'm talking about major label recording contracts." 
-- Courtney Love, Salon.com, June 14, 2000
From: Gorbag
Subject: Re: history of AI Winter?
Date: 
Message-ID: <BAD80C67.3E4F%gorbagNOSPAM@NOSPAMmac.com>
On 5/2/03 6:54 AM, in article ··············@ID-125932.news.dfncis.de,
"Christopher Browne" <········@acm.org> wrote:

> In the long run (of course, Keynes observes "we're all dead"...), the
> hope would be for Prolog system environments to get increasingly
> efficient at picking different computational techniques, in much the
> way that SQL DBMSes have gradually gotten better at tuning queries.

You may be able to make this argument for requirements specified in formal
logic, but not in PROLOG. PROLOG uses SLD resolution, which makes it more a
programming language with a veneer of logic. There are no choices as to
resolution techniques. (A programmer can depend on the clauses being
resolved in a certain order, permitting use of CUT for instance.)
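
A tiny illustration of that order dependence, again in PAIP-ish
Prolog-in-Lisp notation (just a sketch, names assumed):

  ;; Under SLD resolution clauses are tried top to bottom, goals left to
  ;; right, so with finite, acyclic parent facts this ordering
  ;; terminates on (?- (anc a ?who)):
  (<- (anc ?x ?y) (parent ?x ?y))
  (<- (anc ?x ?y) (parent ?x ?z) (anc ?z ?y))

  ;; whereas the logically equivalent left-recursive clause
  ;;   (<- (anc ?x ?y) (anc ?x ?z) (parent ?z ?y))
  ;; placed first recurses forever before ever consulting parent/2 -
  ;; which is exactly why programmers can, and do, rely on the order.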

Of course, there were a lot of systems competing with PROLOG that had
different (or even formally unspecified) resolution techniques so a
programmer could NOT count on clause ordering. I am just arguing that these
systems were not PROLOG.
From: Tom Osborn
Subject: Re: history of AI Winter?
Date: 
Message-ID: <3eb76af3$1_1@news.iprimus.com.au>
"Jeff Caldwell" <·····@yahoo.com> wrote in message ···························@news1.news.adelphia.net...
> Tom,
> 
> I take it you exclude a Prolog compiler embedded in Lisp, as in Chapter 
> 12 of PAIP. Pure Prolog because it is more efficient than a Prolog 
> written and embedded in Lisp, as in PAIP, or because of a deeper 
> semantic difference?
> Jeff

Because of the way LISP programmers think differently from Prolog
programmers... Not quite a semantic difference, but a conceptual
utility difference [and not just losing all the parens]. 

Tom.
 
> Tom Osborn wrote:
> >...I think Prolog was a better
> > bet for the long term...  
> 
From: Acme Debugging
Subject: Re: history of AI Winter?
Date: 
Message-ID: <35fae540.0305081142.6118628e@posting.google.com>
"Tom Osborn" <·······@DELETE CAPS.nuix.com.au> wrote in message news:<············@news.iprimus.com.au>...
> 
> Because of the way LISP programmers think differently from Prolog
> programmers... Not quite a semantic difference, but a conceptual
> utility difference [and not just losing all the parens]. 

Hi Tom. Are you saying that an AI programming language tends to
classify one's thinking in some way? "Conceptual difference" would
qualify, but I don't understand "utility."
 
I would nominate you for a co-moderator. <g>
Know you are too busy.

Larry
From: Tom Osborn
Subject: Re: history of AI Winter?
Date: 
Message-ID: <3ebf3462_1@news.iprimus.com.au>
"Acme Debugging" <······@lycos.co.uk> wrote in message
·································@posting.google.com...
> "Tom Osborn" <·······@DELETE CAPS.nuix.com.au> wrote in message
news:<············@news.iprimus.com.au>...
> >
> > Because of the way LISP programmers think differently from Prolog
> > programmers... Not quite a semantic difference, but a conceptual
> > utility difference [and not just losing all the parens].
>
> Hi Tom. Are you saying that an AI programming language tends to
> classify one's thinking in some way? "Conceptual difference" would
> qualify, but don't understand "utility."

I meant that the tools you use affect the way you approach things, even
conceptually. I've taught people who were Prologists and who were
LISPers. I've taught Miranda (a Haskell-esque thing) and even use
awk myself sometimes. BIG designs in LISP or Prolog are often
not much different, but there seems to be a hacker syndrome in
much of AI making it work.

If I talk roughly, I'd say that LISP has more to do with manipulating
general containers (fighting storage), while Prolog is more about
exploiting the logical relationships (which, of course, is a storage
issue as well)...
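
Roughly, and purely as a toy contrast (both halves invented here for
illustration; the second is Prolog-in-Lisp in the PAIP style):

  ;; The Lisp habit: hold the data in a concrete container and say how
  ;; to walk it.  PARENTS is assumed to be an alist of (child . parent).
  (defun grandparent-of (person parents)
    (let ((p (cdr (assoc person parents))))
      (and p (cdr (assoc p parents)))))

  ;; The Prolog habit: state the relationship and let the engine exploit
  ;; it in whichever direction the query runs.
  ;; (<- (grandparent ?g ?c) (parent ?g ?p) (parent ?p ?c))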

> I would nominate you for a co-moderator. <g>

But I'm not really a moderate person... :-)

> Know you are too busy.

Even moreso today...

> Larry
From: Eray Ozkural  exa
Subject: Re: history of AI Winter?
Date: 
Message-ID: <fa69ae35.0305021151.6e06906f@posting.google.com>
"Tom Osborn" <·······@DELETE CAPS.nuix.com.au> wrote in message news:<··········@news.iprimus.com.au>...
> Lastly, the symbolic vs NN/statistical/maths/decision theory WAR was
> very dumb indeed.


Not having the time to respond to your other comments, I must say I
wholeheartedly agree with this statement.

Regards,

__
Eray Ozkural (exa) <erayo at cs.bilkent.edu.tr>
From: Eray Ozkural  exa
Subject: Re: history of AI Winter?
Date: 
Message-ID: <fa69ae35.0304221214.5203fd24@posting.google.com>
M H <··@nospam.org> wrote in message news:<···············@news.t-online.com>...
> In the fields I cited (esp. vision and speech recognition) it is 
> regularly difficult to explain to non-experts why computers should have 
> difficulties solving tasks which are so simple even a three-year-old can 
> do them effortlessly!

Also note how vision and speech recognition are not called AI nowadays
although speech recognition was something of a grand challenge in AI
when Space Odyssey was shot :)

That's because the focus of the research in those domains shifted from
the cognitive to (electrical!!!) engineering problems. IMHO real
speech recognition is AI-complete of course. This can easily be shown
via the dependency of semantic analysis, and thereon syntactic
analysis and thereon morphology and thereon phonology on pragmatics.
:)

Also some of the things, after they get "trivial" to understand, are
thought not to be part of AI research any more. I don't think I
perceive an alpha-beta pruning game playing algorithm as AI research,
because there is really very little left to explore using that model.
But still, a search algorithm is more AI than a lot of things in
computer science; it's the first part of AIMA. :)

Thanks,

__
Eray Ozkural
From: Michael Sullivan
Subject: Re: history of AI Winter?
Date: 
Message-ID: <1fua2g2.1mwj7uu1mq4uwkN%michael@bcect.com>
Michael Schuerig <········@acm.org> wrote:

> Indeed, PAIP is a great study of historically significant AI programs,
> reduced to their core. Still, even in the heyday of Logic Theorist and
> General Problem Solver, even with a most optimistic outlook,
> extravagant claims about intelligent computers were utterly unfounded.
> I don't believe at all, that only with hindsight one can see that.

I don't believe it at all either.  I've never been an AI researcher, but
I've been interested since I can remember.  I was in college a bit fewer
than 20 years ago, and was reading and talking to people who did AI
research at the time.  I never thought those problems were easy, and the
people I was reading or talking to at the time didn't think so either.

It seems to me that the only people who thought we'd have thinking
computers by 2000, that any kind of AI-complete problem was easy, were
people who didn't really understand those problems -- lay-folk without a
real appreciation for the field.   

I think you had actual researchers who were overeager figuring they
could deliver *something useful and worth the investment* in 5-10 years
and talking in very optimistic terms, and outsiders reading that as
"robots that think in 20-30 years", not realizing that said researcher's
"something worth the investment" was about a millionth of the way down
the road to actual AI, if that.

In 1968 Kubrick had HAL showing up in 2001, but by 1984, if you'd
asked the people I was talking to, they'd have considered 5001 a lot
more likely.


Michael
From: Gorbag
Subject: Re: history of AI Winter?
Date: 
Message-ID: <BAD6A170.3C83%gorbagNOSPAM@NOSPAMmac.com>
On 5/1/03 9:27 AM, in article ·······························@bcect.com,
"Michael Sullivan" <·······@bcect.com> wrote:

> Michael Schuerig <········@acm.org> wrote:
> 
>> Indeed, PAIP is a great study of historically significant AI programs,
>> reduced to their core. Still, even in the heyday of Logic Theorist and
>> General Problem Solver, even with a most optimistic outlook,
>> extravagant claims about intelligent computers were utterly unfounded.
>> I don't believe at all, that only with hindsight one can see that.

> It seems to me that the only people who thought we'd have thinking
> computers by 2000, that any kind of AI-complete problem was easy, were
> people who didn't really understand those problems -- lay-folk without a
> real appreciation for the field.

Demonstrably false. There are several AAAI fellows who were making wild-eyed
predictions ten or more years ago about the state of the art by the year
2000. The rest of the AAAI fellows know who these guys were, so it shouldn't
be too hard to find some names if you care to look. (I'm not putting
anyone's name into a newsgroup; I don't know where my paycheck is going to
be coming from when some email of mine is dredged up on Google).
From: sv0f
Subject: Re: history of AI Winter?
Date: 
Message-ID: <none-5599CB.12210001052003@news.vanderbilt.edu>
In article <··························@NOSPAMmac.com>,
 Gorbag <············@NOSPAMmac.com> wrote:

>> Michael Schuerig <········@acm.org> wrote:
>> 
>>> Indeed, PAIP is a great study of historically significant AI programs,
>>> reduced to their core. Still, even in the heyday of Logic Theorist and
>>> General Problem Solver, even with a most optimistic outlook,
>>> extravagant claims about intelligent computers were utterly unfounded.
>>> I don't believe at all, that only with hindsight one can see that.
[...]
>There are several AAAI fellows who were making wild-eyed
>predictions ten or more years ago about the state of the art by the year
>2000. The rest of the AAAI fellows know who these guys were, so it shouldn't
>be too hard to find some names if you care to look.

I've lost track of why you guys are upset.

That some AI pioneers made predictions that turned out to be
optimistic?

That the early AI technologies developed by these pioneers were
obviously unsuitable, and should have been recognized as such
back in the day?

That the early AI technologies developed by these pioneers turned
out to be unsuitable, but current technologies are much better?

That AI was and will always be pseudoscience?

Something else?!
From: Arthur T. Murray
Subject: Re: history of AI Winter?
Date: 
Message-ID: <3eb16813@news.victoria.tc.ca>
sv0f <····@vanderbilt.edu> wrote on Thu, 01 May 2003:
> [...] I've lost track of why you guys are upset.
>
> That some AI pioneers made predictions that turned out
> to be optimistic?
>
> That the early AI technologies developed by these pioneers
> were obviously unsuitable, and should have been recognized
> as such back in the day?
>
> That the early AI technologies developed by these pioneers turned
> out to be unsuitable, but current technologies are much better?

http://www.scn.org/~mentifex/jsaimind.html is a JavaScript AI.
>
> That AI was and will always be pseudoscience?
>
> Something else?!

http://www.scn.org/~mentifex/ai4udex.html is a hyperlink index
(in two directions: AI/CS/Philosophy background and AI4U pages)
to the textbook "AI4U: Mind-1.1 Programmer's Manual" (q.v.).
From: Eugene Miya
Subject: Re: history of AI Winter?
Date: 
Message-ID: <3eba9ca8$1@news.ucsc.edu>
In article <········@news.victoria.tc.ca>,
Arthur T. Murray <·····@victoria.tc.ca> wrote:

Learn to edit down headers Arthur.

>sv0f <····@vanderbilt.edu> wrote on Thu, 01 May 2003:
>> [...] I've lost track of why you guys are upset.
>> That some AI pioneers made predictions that turned out
>> to be optimistic?
>> That the early AI technologies developed by these pioneers
>> were obviously unsuitable, and should have been recognized
>> as such back in the day?
>> That the early AI technologies developed by these pioneers turned
>> out to be unsuitable, but current technologies are much better?
>> That AI was and will always be pseudoscience?
>> Something else?!

>http://www.scn.org/~mentifex/ai4udex.html is a hyperlink index

A quote from Ted said it best:

But like so many beginning computerists,
I mistook a clear view for a short distance.
        --Ted Nelson
From: Duane Rettig
Subject: Re: history of AI Winter?
Date: 
Message-ID: <4he8e5xd0.fsf@beta.franz.com>
·······@bcect.com (Michael Sullivan) writes:

> In the late 1960s Kubrick had HAL showing up in 2001, but by 1984, if you'd
> asked the people I was talking to, they'd have considered 5001 a lot
> more likely.

2001 was the time setting for the movie, but not when HAL was
"born".  I saved an old message from a colleage of mine, sent on
Jan 8, 1992, saying:

    Just read on hackers_guild that according to "2001, A Space
    Odyssey", HAL is born on 12-Jan-92, this Sunday.

So HAL was almost 9 years old when the movie was to take place...

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Tim Bradshaw
Subject: Re: history of AI Winter?
Date: 
Message-ID: <ey3llxqebu1.fsf@cley.com>
* Michael Sullivan wrote:

> It seems to me that the only people who thought we'd have thinking
> computers by 2000, that any kind of AI-complete problem was easy,
> were people who didn't really understand those problems -- lay-folk
> without a real appreciation for the field.

No.  Major people in the field made these stupid, bogus claims. 

--tim
From: Mario S. Mommer
Subject: Re: history of AI Winter?
Date: 
Message-ID: <fzr87hwzho.fsf@cupid.igpm.rwth-aachen.de>
Tim Bradshaw <···@cley.com> writes:
> * Michael Sullivan wrote:
> 
> > It seems to me that the only people who thought we'd have thinking
> > computers by 2000, that any kind of AI-complete problem was easy,
> > were people who didn't really understand those problems -- lay-folk
> > without a real appreciation for the field.
> 
> No.  Major people in the field made these stupid, bogus claims. 

True.

If someone needs a reference, you can find some quotes and names in
the chapter on the General Problem Solver in PAIP.

Mario.
From: sv0f
Subject: Re: history of AI Winter?
Date: 
Message-ID: <none-E89AF5.10220502052003@news.vanderbilt.edu>
In article <···············@cley.com>, Tim Bradshaw <···@cley.com> 
wrote:

>Major people in the field made these stupid, bogus claims. 

IMO, major people in AI made optimistic claims, many of
which turned out to be too optimistic (hey, it was the
birth of a new field!) and some of which led to productive
research.

FWIW, the historical addendum to Newell and Simon's (1972)
"Human Problem Solving" succinctly captures the zeitgeist
in which AI came into being.  Reading it, I can imagine
being a young scholar, drunk on the sudden convergence of
a half dozen fields, and making bold predictions like
Simon's infamous 1950s utterance that a computer would
be world chess champion within ten years.  (He missed by
30 years, of course.)
From: Erann Gat
Subject: Re: history of AI Winter?
Date: 
Message-ID: <gat-0205031027020001@k-137-79-50-101.jpl.nasa.gov>
In article <··························@news.vanderbilt.edu>, sv0f
<····@vanderbilt.edu> wrote:

> Simon's infamous 1950s utterance that a computer would
> be world chess champion within ten years.  (He missed by
> 30 years, of course.)

That still makes him closer to being right than the naysayers who claimed
that a computer playing grandmaster-level chess was fundamentally
impossible.

E.
From: sv0f
Subject: Re: history of AI Winter?
Date: 
Message-ID: <none-3F1699.14231202052003@news.vanderbilt.edu>
In article <····················@k-137-79-50-101.jpl.nasa.gov>,
 ···@jpl.nasa.gov (Erann Gat) wrote:

>In article <··························@news.vanderbilt.edu>, sv0f
><····@vanderbilt.edu> wrote:
>
>> Simon's infamous 1950s utterance that a computer would
>> be world chess champion within ten years.  (He missed by
>> 30 years, of course.)
>
>That still makes him closer to being right than the naysayers who claimed
>that a computer playing grandmaster-level chess was fundamentally
>impossible.

Hey, I'm on your side in this one!

I'm just trying to understand the arguments of those who 
appear to (but *may* not) be claiming that AI was bunk
from the start; that this should have been obvious to
anyone with an undergraduate degree in math; that no
progress has been made; that no progress is possible;
that N incorrect claims delegitimize M correct claims
(when it is not the case that N>>M); etc.

On a related note, I'm 20 pages from the end of Feng-hsiung
Hsu's "Behind Deep Blue: Building the Computer that Defeated
the World Chess Champion".  It's been an engrossing read,
one I recommend.

It is striking that, in Hsu's words, Deep Thought and Deep
Blue essentially implement Shannon's and Newell/Simon's
initial insights into machine chess, but in hardware.  (He
makes this comment in debunking the efforts of others to
inject more AI than is necessary into computer chess
players.)  So it appears the early AI pioneers were correct
in predicting that machines would eclipse humans in chess
and correct in the method by which computers would do so.
What/all they got wrong was *when*.
From: Erann Gat
Subject: Re: history of AI Winter?
Date: 
Message-ID: <gat-0205031441260001@k-137-79-50-101.jpl.nasa.gov>
In article <··························@news.vanderbilt.edu>, sv0f
<····@vanderbilt.edu> wrote:

> In article <····················@k-137-79-50-101.jpl.nasa.gov>,
>  ···@jpl.nasa.gov (Erann Gat) wrote:
> 
> >In article <··························@news.vanderbilt.edu>, sv0f
> ><····@vanderbilt.edu> wrote:
> >
> >> Simon's infamous 1950s utterance that a computer would
> >> be world chess champion within ten years.  (He missed by
> >> 30 years, of course.)
> >
> >That still makes him closer to being right than the naysayers who claimed
> >that a computer playing grandmaster-level chess was fundamentally
> >impossible.
> 
> Hey, I'm on your side in this one!

Don't be so sure.

> I'm just trying to understand the arguments of those who 
> appear to (but *may* not) be claiming that AI was bunk
> from the start; that this should have been obvious to
> anyone with an undergraduate degree in math; that no
> progress has been made; that no progress is possible;
> that N incorrect claims delegitimize M correct claims
> (when it is not the case that N>>M); etc.

The problem with the claim "AI was bunk from the start" is that the term
"AI" is not well defined.  It can be taken to be a general term covering
all efforts to understand how to make an artifact that behaves
intelligently (whatever that means), or it can be taken in a narrower
sense to mean the particular efforts made by a particular group of people
at a particular time in history, typically starting with McCarthy and
Minsky and the Dartmouth Conference, peaking in the late 80's, and
on-going today under the auspices of the AAAI and other professional
societies.

On that second definition I think that there was (and still is) an awful
lot of bunk, and that certainly a lot of what's being done today under the
rubric of AI (neural nets, fuzzy logic, swarm intelligence, etc. etc.) is
easily recognizable as bunk.  How much of it was recognizable back when is
not so clear.  I think that the original AI folks had some basically sound
ideas, but they were so constrained by their available hardware that they
ended up going down some very unfruitful paths -- at least with respect to
the goals of AI.  A lot of useful discoveries having nothing to do with AI
per se were made along the way, so it wasn't a complete waste of time. 
But IMHO we're not much closer to understanding intelligence now than we
were in 1950.  And what progress we have made is largely in spite of, not
because of, those who style themselves "AI researchers".  (And I call
myself an AI researcher.)

Personally, I think that the two most significant results in AI to date
are Eliza and Google.  Eliza is usually cited as an example of how careful
you have to be when assessing whether or not something is intelligent
because Eliza seemed to be intelligent when it "obviously" wasn't.  IMO it
is far from clear that Eliza had zero intelligence.  Google is the closest
thing so far to a machine that really "understands" something (in Google's
case it "understands" the concept of "importance" or "relevance" or
"authority" or something like that) and it's really nothing more than
Eliza with a huge collaboratively-built database.  It is not at all clear
to me that "real" intelligence is not simply a matter of taking something
like that, adding a relatively small number of clever hacks, and scaling
up.  Certainly the ability to pass the Turing Test on Usenet seems to be
within reach following this sort of strategy.  (Remember Ilias?)
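
(To be concrete about how little machinery I have in mind when I say
"Eliza": something with the flavor of the toy keyword responder below.
The rules are invented for illustration and the matching is a crude
substring search -- nowhere near Weizenbaum's actual script, let alone
PAIP's version.)

    ;;; Toy Eliza-flavored responder.
    (defparameter *rules*
      '(("mother"   "Tell me more about your family.")
        ("computer" "Do machines worry you?")
        ("always"   "Can you think of a specific example?")
        ("because"  "Is that the real reason?")))

    (defun respond (input)
      "Return a canned reply keyed off the first keyword found in INPUT."
      (let* ((text (string-downcase input))
             (rule (find-if (lambda (rule) (search (first rule) text))
                            *rules*)))
        (if rule (second rule) "Please go on.")))

    ;; (respond "My mother made me use a computer.")
    ;;   => "Tell me more about your family."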

> So it appears the early AI pioneers were correct
> in predicting that machines would eclipse humans in chess
> and correct in the method by which computers would do so.
> What/all they got wrong was *when*.

No, I think they got something else wrong too: they all thought that
getting a machine to win at Chess would be worthwhile because it would
give you some insight into how to get a machine to do other clever things,
like carry on a conversation.  They were wrong about that.

E.
From: sv0f
Subject: Re: history of AI Winter?
Date: 
Message-ID: <none-99839F.14431203052003@news.vanderbilt.edu>
In article <····················@k-137-79-50-101.jpl.nasa.gov>,
 ···@jpl.nasa.gov (Erann Gat) wrote:

>On that second definition I think that there was (and still is) an awful
>lot of bunk, and that certainly a lot of what's being done today under the
>rubric of AI (neural nets, fuzzy logic, swarm intelligence, etc. etc.) is
>easily recognizable as bunk.

I agree with you here, but I wouldn't call these ideas (and others
such as elaborate logics, statistical reasoning, and decision
theory) "bunk".  Rather, I think that they're mere engineering,
mere technology, and their pursuit has turned AI into a form
of applied mathematics.  Probably many AI researchers are happy
about this, as the definition-theorem-example rhythm of current
AI papers fits well in the academic genre.

However, this shift has given up the "human" or "cognitive"
dimension of "Intelligence".  It's also no longer scientific
(IMO).

>But IMHO we're not much closer to understanding intelligence now than we
>were in 1950.

AI produced insights for 20 years, but then stopped.

>And what progress we have made is largely in spite of, not
>because of, those who style themselves "AI researchers".  (And I call
>myself an AI researcher.)

Note that the larger enterprise of Cognitive Science continues
to untangle the mechanisms of intelligence, albeit at the slow
pace of conventional science.

>Personally, I think that the two most significant results in AI to date
>are Eliza and Google.  Eliza is usually cited as an example of how careful
>you have to be when assessing whether or not something is intelligent
>because Eliza seemed to be intelligent when it "obviously" wasn't.  IMO it
>is far from clear that Eliza had zero intelligence.  Google is the closest
>thing so far to a machine that really "understands" something (in Google's
>case it "understands" the concept of "importance" or "relevance" or
>"authority" or something like that) and it's really nothing more than
>Eliza with a huge collaboratively-built database.  It is not at all clear
>to me that "real" intelligence is not simply a matter of taking something
>like that, adding a relatively small number of clever hacks, and scaling
>up.  Certainly the ability to pass the Turing Test on Usenet seems to be
>within reach following this sort of strategy.  (Remember Ilias?)

Interesting.  I too believe ELIZA has been monumentally
underestimated.  It's interesting that you picked a nearly
knowledge-free system and one that's essentially pure
knowledge.

>> So it appears the early AI pioneers were correct
>> in predicting that machines would eclipse humans in chess
>> and correct in the method by which computers would do so.
>> What/all they got wrong was *when*.
>
>No, I think they got something else wrong too: they all thought that
>getting a machine to win at Chess would be worthwhile because it would
>give you some insight into how to get a machine to do other clever things,
>like carry on a conversation.  They were wrong about that.

Well, my quoted statement was purely about the chess domain.

But I do agree that the lack of transfer between domains of
intelligence has been quite surprising.  In the worst case, it
hints that intelligence is just a bunch of special cases or hacks.
Some neo-darwinians think this conclusion obvious.  I find it
pessimistic, although as you say, it is consistent with a half
century of history.
From: Michael Sullivan
Subject: Re: history of AI Winter?
Date: 
Message-ID: <1fucdsk.177ymz41s5ogg2N%michael@bcect.com>
sv0f <····@vanderbilt.edu> wrote:

> In article <····················@k-137-79-50-101.jpl.nasa.gov>,
>  ···@jpl.nasa.gov (Erann Gat) wrote:
> 
> >In article <··························@news.vanderbilt.edu>, sv0f
> ><····@vanderbilt.edu> wrote:
> >
> >> Simon's infamous 1950s utterance that a computer would
> >> be world chess champion within ten years.  (He missed by
> >> 30 years, of course.)
> >
> >That still makes him closer to being right than the naysayers who claimed
> >that a computer playing grandmaster-level chess was fundamentally
> >impossible.
> 
> Hey, I'm on your side in this one!

> I'm just trying to understand the arguments of those who 
> appear to (but *may* not) be claiming that AI was bunk
> from the start; that this should have been obvious to
> anyone with an undergraduate degree in math; that no
> progress has been made; that no progress is possible;
> that N incorrect claims delegitimize M correct claims
> (when it is not the case that N>>M); etc.

Whoa.  I've got to be one of the people you're referring to here, and I
don't think that at all.  

I think a lot of very interesting progress has been made, and I'd like
to see a lot more AI research than there is now.  But claims that strong
AI-complete problems would be solved in the near term were (and still
are) bogus.

Grandmaster level Chess doesn't appear to be a strong AI-complete
problem.  I don't think pro 9d level Go is either, and that's a *long*
way from being solved.

I'm certainly not trying to say that AI research is pointless and that
the AI winter was entirely justified.  Only that any claims of strong-AI
being solved in my lifetime were, and still are (short of a major
unforeseen watershed), somewhere between ridiculously optimistic and
completely insane, and that it didn't take an AI researcher to see that
20 years ago.


Michael
From: sv0f
Subject: Re: history of AI Winter?
Date: 
Message-ID: <none-47A62A.14281803052003@news.vanderbilt.edu>
In article <·······························@bcect.com>,
 ·······@bcect.com (Michael Sullivan) wrote:

>sv0f <····@vanderbilt.edu> wrote:
>> I'm just trying to understand the arguments of those who 
>> appear to (but *may* not) be claiming that AI was bunk
>> from the start; that this should have been obvious to
>> anyone with an undergraduate degree in math; that no
>> progress has been made; that no progress is possible;
>> that N incorrect claims delegitimize M correct claims
>> (when it is not the case that N>>M); etc.
>
>Whoa.  I've got to be one of the people you're referring to here, and I
>don't think that at all.  

I don't think anyone believes all these claims -- I was
just running them all together for rhetorical purposes.

>I think a lot of very interesting progress has been made, and I'd like
>to see a lot more AI research than there is now.  But claims that strong
>AI-complete problems would be solved in the near term were (and still
>are) bogus.

What do you mean by "bogus"?  Pure snake oil -- knowingly
offered exaggerations -- or scientific predictions that
turned out to be false?

>But claims that strong
>AI-complete problems would be solved in the near term were (and still
>are) bogus.
>
>Grandmaster level Chess doesn't appear to be a strong AI-complete
>problem.  I don't think pro 9d level Go is either, and that's a *long*
>way from being solved.
>
>I'm certainly not trying to say that AI research is pointless and that
>the AI winter was entirely justified.  Only that any claims of strong-AI
>being solved in my lifetime were, and still are (short of a major
>unforeseen watershed), somewhere between ridiculously optimistic and
>completely insane, and that it didn't take an AI researcher to see that
>20 years ago.

I don't understand some of your terms (which I recall hearing
before in philosophical discussions, so I know you're not making
them up).  Could you give me a sense of what "AI-complete",
"strong AI-complete problem", and "strong AI" mean?

Also, I think I have a different time frame than you.  The
claim that a machine would soon best man in chess was made
45 years ago, at the dawn of AI, when its promise and pitfalls
were unknown quantities.  It is this claim and others of its
era that I was addressing.

I agree that any overly-optimistic proclamations made 20 years
ago, when the field had enough experience under its belt to
know the difficulty of problems such as machine vision and
discourse comprehension, were "completely insane", to use
your phrase, or scientifically irresponsible, to pick up the
critique(s) offered by other poster(s).
From: Christopher Browne
Subject: Re: history of AI Winter?
Date: 
Message-ID: <b8uo9t$drber$3@ID-125932.news.dfncis.de>
In the last exciting episode, ···@jpl.nasa.gov (Erann Gat) wrote:
> In article <··························@news.vanderbilt.edu>, sv0f
> <····@vanderbilt.edu> wrote:

>> Simon's infamous 1950s utterance that a computer would be world
>> chess champion within ten years.  (He missed by 30 years, of
>> course.)

> That still makes him closer to being right than the naysayers who
> claimed that a computer playing grandmaster-level chess was
> fundamentally impossible.

.. But the nature of how they expected this to occur has changed.  It
was anticipated that the programs would be "very clever."  

They aren't; they use the notion of searching absolutely enormous
state spaces.  And the approaches wind up being quite specialized, not
of much use in solving other problems.

The expectation was that a system that could play a good game of chess
would be good for other things; such systems turn out to have
vanishingly tiny application.
-- 
(concatenate 'string "cbbrowne" ·@cbbrowne.com")
http://www.ntlug.org/~cbbrowne/
Avoid unnecessary branches.
From: Paul Wallich
Subject: Re: history of AI Winter?
Date: 
Message-ID: <pw-6E8DC4.18222702052003@reader1.panix.com>
In article <··············@ID-125932.news.dfncis.de>,
 Christopher Browne <········@acm.org> wrote:

> In the last exciting episode, ···@jpl.nasa.gov (Erann Gat) wrote:
> > In article <··························@news.vanderbilt.edu>, sv0f
> > <····@vanderbilt.edu> wrote:
> 
> >> Simon's infamous 1950s utterance that a computer would be world
> >> chess champion within ten years.  (He missed by 30 years, of
> >> course.)
> 
> > That still makes him closer to being right than the naysayers who
> > claimed that a computer playing grandmaster-level chess was
> > fundamentally impossible.
> 
> .. But the nature of how they expected this to occur has changed.  It
> was anticipated that the programs would be "very clever."  
> 
> They aren't; they use the notion of searching absolutely enormous
> state spaces.  And the approaches wind up being quite specialized, not
> of much use in solving other problems.

This isn't exactly true. Although Deep Blue et al relied on brute-force 
searches and ridiculous amounts of processing power, Deep Junior and its 
brethren run on PC-scale machines and do in fact rely on sophisticated 
evaluation. 

Obviously the precise code doesn't apply immediately to other problems, 
but that would seem to be asking a bit much -- you wouldn't want Boris 
Spassky reading your xrays either. 

I think that one of the things that went wrong with AI was an implicit 
underestimation of the amount of time and supervision that humans 
require to achieve human-level performance in a given field, much less 
in multiple fields. Perhaps only a tiny handful of systems have been 
given that kind of attention.

paul
From: sv0f
Subject: Re: history of AI Winter?
Date: 
Message-ID: <none-B07ADF.14165703052003@news.vanderbilt.edu>
In article <························@reader1.panix.com>,
 Paul Wallich <··@panix.com> wrote:

>I think that one of the things that went wrong with AI was an implicit 
>underestimation of the amount of time and supervision that humans 
>require to achieve human-level performance in a given field, much less 
>in multiple fields. Perhaps only a tiny handful of systems have been 
>given that kind of attention.

It's funny, this is where the diffuse nature of AI becomes
problematic.  In some circles it was recognized by the early
1970s that knowledge is power, and that endowing a system
with sufficient knowledge was time and resource intensive.

Herb Simon and his collaborators (such as William Chase and
Anders Ericsson) conducted numerous studies on the number
of "chunks" of knowledge one must acquire to achieve world
class performance in a domain.  Their results showed, in
domains ranging from chess to pole-vaulting, that essentially
ten years of practicing eight hours per day were required.
Simon cites Bobby Fischer as an exception to this estimate --
he achieved a Grandmaster ranking a bit more than nine years
after taking up chess.  (Simon also deals with obvious
exceptions, child prodigies such as Mozart, for the
interested.)

My sense is that AI researchers who were true cognitive
scientists, and therefore students of linguistics,
psychology, anthropology, neuroscience, and other
related disciplines, understood the importance of
knowledge, learning, and development.  It was the
exclusively mathematical/engineering types who were
wrong.  Unfortunately, it's their work that fills the
pages of the field's flagship journal, "Artificial
Intelligence".
From: Kenny Tilton
Subject: Re: history of AI Winter?
Date: 
Message-ID: <3EB3251B.7080605@nyc.rr.com>
Christopher Browne wrote:
> In the last exciting episode, ···@jpl.nasa.gov (Erann Gat) wrote:
> 
>>In article <··························@news.vanderbilt.edu>, sv0f
>><····@vanderbilt.edu> wrote:
> 
> 
>>>Simon's infamous 1950s utterance that a computer would be world
>>>chess champion within ten years.  (He missed by 30 years, of
>>>course.)
>>
> 
>>That still makes him closer to being right than the naysayers who
>>claimed that a computer playing grandmaster-level chess was
>>fundamentally impossible.
> 
> 
> .. But the nature of how they expected this to occur has changed.  It
> was anticipated that the programs would be "very clever."  
> 
> They aren't; they use the notion of searching absolutely enormous
> state spaces.  

Yep, and even that is not enough, they have to bring in grandmasters to 
help with the hard-coded position evaluator. Deep Blue, by playing even 
with Kasparov, simply provides a measure of Kasparov's superiority:

    positions-examined-by-deep
    --------------------------
    positions-examined-by-gary

uh-oh. :)


-- 

  kenny tilton
  clinisys, inc
  http://www.tilton-technology.com/
  ---------------------------------------------------------------
"Everything is a cell." -- Alan Kay
From: sv0f
Subject: Re: history of AI Winter?
Date: 
Message-ID: <none-DF8F99.14082003052003@news.vanderbilt.edu>
In article <··············@ID-125932.news.dfncis.de>,
 Christopher Browne <········@acm.org> wrote:

>> That still makes him closer to being right than the naysayers who
>> claimed that a computer playing grandmaster-level chess was
>> fundamentally impossible.
>
>.. But the nature of how they expected this to occur has changed.  It
>was anticipated that the programs would be "very clever."  
>
>They aren't; they use the notion of searching absolutely enormous
>state spaces.

I disagree.  I think from the beginning AI researchers realized
it would take plenty of knowledge (cleverness?) and plenty of search.
They simply underestimated how much of each.

>And the approaches wind up being quite specialized, not
>of much use in solving other problems.
>
>The expectation was that a system that could play a good game of chess
>would be good for other things; such systems turn out to have
>vanishingly tiny application.

I agree.  This was especially true of expert systems.  The
effort to build one was high and the result was so domain-specific
that less than was hoped (which is to say almost nothing) transferred
to other domains.
From: Michael Sullivan
Subject: Re: history of AI Winter?
Date: 
Message-ID: <1fucc3y.6zbe8e1629916N%michael@bcect.com>
sv0f <····@vanderbilt.edu> wrote:

> FWIW, the historical addendum to Newell and Simon's (1972)
> "Human Problem Solving" succinctly captures the zeitgeist
> in which AI came into being.  Reading it, I can imagine
> being a young scholar, drunk on the sudden convergence of
> a half dozen fields, and making bold predictions like
> Simon's infamous 1950s utterance that a computer would
> be world chess champion within ten years.  (He missed by
> 30 years, of course.)

I don't think we know how many years he missed by.  A computer is not
world chess champion yet.  There have been a couple of well-publicized
matches of a computer against the strongest human player of our time,
but the circumstances have favored the computer (Kasparov was not
allowed to study many of Deep Blue's previous games, but Deep Blue was
tuned by a team of chess experts with access to all of Kasparov's
games).

It's also well known in computer go circles that programs rate quite a
bit stronger against people who have never played them before than
against people who have, because they do not learn in the same way
humans do.  This is true of chess programs as well. There are a number
of grandmasters who have studied strong computer chess programs and
found ways to make them look a lot weaker, though none of them has been
granted a match with Deep Blue AFAIK.

So Deep Blue appears to be the strongest player in the world right now,
but we've only seen matches against *one* player, and not very many of
those.  The researchers who are responsible for it, must know that if
you made all its games public and had it play lots of different
grandmaster opponents, that it would probably look much less strong.  So
far they simply haven't agreed to let this happen.  Personally, I think
they won't until they are quite sure it will stand up to the full
scrutiny, and the reason it's not happening now is that they are not
sure.

If you entered Deep Blue into a world championship competition with the
same rules that any human would face (for instance, it could be adjusted
by its programmers only under the same conditions in which a human would
be allowed to consult other people, its matches would be made public
record to other contestants, etc.) -- it's by no means certain that it
would win.
Until this happens and it does win, I refuse to recognize DB as "world
champion" no matter how well it does against Kasparov.

Grandmaster level play has clearly been achieved, but "world champion"
has not.


Michael
From: Michael Sullivan
Subject: Re: history of AI Winter?
Date: 
Message-ID: <1fubvzv.1akb4451csz5giN%michael@bcect.com>
Tim Bradshaw <···@cley.com> wrote:

> * Michael Sullivan wrote:

> > It seems to me that the only people who thought we'd have thinking
> > computers by 2000, that any kind of AI-complete problem was easy,
> > were people who didn't really understand those problems -- lay-folk
> > without a real appreciation for the field.

> No.  Major people in the field made these stupid, bogus claims. 

Alright, I can believe that -- I certainly wasn't reading everything and
I wouldn't say I was current in the field then or ever.

I also phrased that more sweepingly than I should have.  

What I should have said is that I don't remember taking any such claims
seriously.  Basically, I agree with Michael Schuerig that it doesn't
take hindsight to call such claims "stupid and bogus".  It didn't even
take PhD level knowledge at the time.  As an undergraduate math major
and regular basher of computer keyboards with only a lay interest in
serious AI, it was clear to me.  It's hard for me to believe that major
researchers who made such claims actually believed them, and weren't
just priming the money pump with bullshit.


Michael
From: Paolo Amoroso
Subject: Re: history of AI Winter?
Date: 
Message-ID: <CzuhPsKPQB7=CWW6rKfcgA892DH=@4ax.com>
On Sat, 19 Apr 2003 00:25:41 +0200, Michael Schuerig <········@acm.org>
wrote:

> Incidentally, I seem to remember reading (in this group?) that AI
> logistics software saved more money during Desert Storm than DARPA had
> ever spent on AI research. Can anyone confirm or disprove this claim?

I seem to have read that that software ran on Symbolics Lisp Machines.


Paolo
-- 
Paolo Amoroso <·······@mclink.it>
From: BK
Subject: Re: history of AI Winter?
Date: 
Message-ID: <39d9c156.0305020632.18d6d7d1@posting.google.com>
Paolo Amoroso <·······@mclink.it> wrote ...

> Michael Schuerig <········@acm.org> wrote:
> 
> > Incidentally, I seem to remember reading (in this group?) that AI
> > logistics software saved more money during Desert Storm than DARPA had
> > ever spent on AI research. Can anyone confirm or disprove this claim?
> 
> I seem to have read that that software ran on Symbolics Lisp Machines.


Are you talking about Ascent? Apparently they use Allegro:

http://www.franz.com/success/customer_apps/scheduling/ascent.lhtml

rgds
bk
From: Fred Gilham
Subject: Re: history of AI Winter?
Date: 
Message-ID: <u78ytjhjlk.fsf@snapdragon.csl.sri.com>
Some time ago in this thread, someone commented that extravagant
claims about AI had been made.  Some said it was by people outside the
field, others said it was by researchers themselves.  I remembered
being annoyed by something I read back in the early '90s where they
claimed to have a truly intelligent system.  But I wasn't sure and
couldn't find the reference.

Finally I found it.  It is "CASSIE".  There's an article in LECTURE
NOTES IN ARTIFICIAL INTELLIGENCE #390 about it, and they describe it
as follows:

     ...To focus our thinking and our discussions, we have invented
     CASSIE, the Cognitive Agent of the SNePS system --- an
     Intelligent Entity.  (Pg. 363)

This was published in 1989.

What I remembered reading (in another article about CASSIE that I
couldn't find) was somewhat stronger than this, to the effect that now
that we have CASSIE we have machine intelligence and basically all
that's left to do is to increase the horsepower of the hardware.

Anyway what is interesting is that these guys are still in business
doing interesting work.  The SNePS system is available online.  It's
done in Common Lisp, and they also use Garnet for some of the GUI.  So
anyone who wants to evaluate the claims can: just download SNePS or
read some of the papers they've published recently.

I still don't think they've got true "machine intelligence" but I do
think they've got an interesting program.

-- 
-Fred Gilham                                      ······@csl.sri.com
"In America, we have a two-party system.  There is the stupid
party. And there is the evil party. I am proud to be a member of the
stupid party.  Periodically, the two parties get together and do
something that is both stupid and evil. This is called --
bipartisanship."   --Republican congressional staffer
From: Gareth McCaughan
Subject: Re: history of AI Winter?
Date: 
Message-ID: <slrnbbgc0a.21k2.Gareth.McCaughan@g.local>
Fred Gilham wrote:

> Finally I found it.  It is "CASSIE".  There's an article in LECTURE
> NOTES IN ARTIFICIAL INTELLIGENCE #390 about it, and they describe it
> as follows:
> 
>      ...To focus our thinking and our discussions, we have invented
>      CASSIE, the Cognitive Agent of the SNePS system --- an
>      Intelligent Entity.  (Pg. 363)
...
> Anyway what is interesting is that these guys are still in business
> doing interesting work.  The SNePS system is available on line.  It's
> done in Common Lisp and they also use Garnet to do some GUI stuff.  So
> if anyone wants to evaluate the claims you can, just download SNePS or
> read some of the papers they've done recently.

I don't think you can download CASSIE, so you can't
check the claims they make about that.

-- 
Gareth McCaughan  ················@pobox.com
.sig under construc
From: Tim Bradshaw
Subject: Re: hostory of AI Winter?
Date: 
Message-ID: <ey3aderliov.fsf@cley.com>
I think that if you want to know the history of the AI winter (rather
than what, if anything, Lisp had to do with it), then you just need to
watch XML and particularly the whole `semantic web' rubbish - history
is being rewritten as we watch.

--tim
From: Bob Bane
Subject: Re: hostory of AI Winter?
Date: 
Message-ID: <3E9C6FE0.3090201@removeme.gst.com>
Tim Bradshaw wrote:

 > I think that if you want to know the history of the AI winter (rather 
 > than what, if anything, Lisp had to do with it), then you just need to
 > watch XML and particularly the whole `semantic web' rubbish - history
 > is being rewritten as we watch.
 >

If we're lucky, we'll be talking about 'XML Winter' in a few years.  If 
we're REALLY lucky, Java will be blamed for XML Winter.

	- Bob Bane
From: Fred Gilham
Subject: Re: hostory of AI Winter?
Date: 
Message-ID: <u7k7dvo1br.fsf@snapdragon.csl.sri.com>
> If we're lucky, we'll be talking about 'XML Winter' in a few years.
> If we're REALLY lucky, Java will be blamed for XML Winter.

What scares me is that the above may happen --- and we'll wind up with
C-octothorp.  I already had the frisson-producing experience of seeing
someone ask for a way to call ACL from C-octothorp today.

BTW, some time ago I wrote an explanation of how Lisp got where it was
for a certain individual who shall remain nameless for fear that using
his name in a posting would somehow become invocation....

Here it is:

----------------------------------------
Well, I really think this has gone far enough, and the best thing to
do is to let i---- in on the REAL SECRET of Lisp.

He wrote:
> i try to assimilate the best of LISP.
> 
> and to throw away the garbage of LISP.

The REAL SECRET of Lisp is that Lisp is ALL garbage.  Yes, it's true.
It all started like this.

Back in the early '60s a bunch of mathematicians were thinking,
"Mathematicians are like farmers: they never make any money.  How can
we cash in on the computer revolution?"

So they decided to invent a computer language that they could use to
impress the government and get lots of grants.  They found this
language that had some claims to a mathematical foundation and decided
that with a little massaging they could get it to be virtually
incomprehensible except to those who were in the know.  They came up
with things like a natural language parser that could identify the
parts of speech, and promised that in a few years they'd be able to
understand Russian.  And so on.  They made a few mistakes, of course.
One famous failure was a robot-arm that was supposed to catch a tennis
ball. (The assumption was that more advanced versions might be able to
THROW a tennis ball, and perhaps even throw things like hand-grenades
and so on.  The military applications were "obvious".)  Anyway, when
they got the generals in for a demo, they threw the ball at the arm.
As it reached up to catch the ball, the Lisp control program paused to
"reclaim memory" (heh, heh, yes, I know what it was really doing),
causing the arm to miss the ball.  The generals were not impressed,
coming perilously close to penetrating the deception when the panicked
researchers actually mentioned `garbage collection'.

Nevertheless, as plots go it had a pretty nice run.  It went for about
thirty years before people in the government started to catch on.
Many of the early illuminati were able to parlay their association
with Lisp into reputations that allowed them to move on to other, more
respectable endeavors.  Some, such as John McCarthy, even won prizes.

More recently, creative members of the illuminati have sometimes taken
advantage of Lisp's impenetrability to profit from the dot-com craze.
One such person, in a clever application of `recursion theory' (heh,
heh, yes, I know), went around describing Lisp as a "programmable
programming language" and was able to make quite a nice pile of cash.

The people who post in this newsgroup consist of two kinds of people:
1) Ex-illuminati who are nostalgic for the good old days, and
2) Want-to-be illuminati who are hoping to revive and cash in on the
   plot.

That's why newcomers are treated with such disdain (we don't want
people horning in on our action), and why all attempts to make Lisp
more comprehensible to the mainstream are rejected.

At first I thought that perhaps i---- would be a worthy member of the
illuminati.  After all, few even of the early illuminati have a
writing style that gives such a tantalizing appearance of content,
while involving the reader in such mazes of bewilderment when he
attempts to actually discover that content.  (Guy Steele, for example,
actually verges on comprehensibility from time to time.)

Unfortunately, for some inexplicable reason i---- insisted upon trying
to make Lisp understandable by attempting to write reader macros that
would massage its syntax into something the average person might be
comfortable with.  Of course that would be fatal, making it clear to
everyone that Lisp was, as I said, completely without redeeming social
value.  He thus showed that he was not, in fact, worthy of being a
part of the illuminati.  Sorry, but a line has to be drawn somewhere.
Sell all the snake oil you want, but don't queer the pitch for the
rest of us.

So, i----, you are wasting your time and should probably go back to
C++ or Java, where you can get some real things accomplished.  Or
something.
----------------------------------------

-- 
Fred Gilham                                        ······@csl.sri.com
America does not know the difference between sex and money. It treats
sex like money because it treats sex as a medium of exchange, and it
treats money like sex because it expects its money to get pregnant and
reproduce.                                   --- Peter Kreeft
From: Tim Bradshaw
Subject: Re: hostory of AI Winter?
Date: 
Message-ID: <ey3u1czbaak.fsf@cley.com>
* Bob Bane wrote:

> If we're lucky, we'll be talking about 'XML Winter' in a few years.

I hope so

> If we're REALLY lucky, Java will be blamed for XML Winter.

But not this.  Java is a step up from C/C++, and not controlled by a
monopolist like C#.

--tim
From: Gabe Garza
Subject: Re: hostory of AI Winter?
Date: 
Message-ID: <87znmrdl37.fsf@ix.netcom.com>
Tim Bradshaw <···@cley.com> writes:

> * Bob Bane wrote:
> > If we're REALLY lucky, Java will be blamed for XML Winter.
> 
> But not this.  Java is a step up from C/C++, and not controlled by a
> monopolist like C#.

Sun is less evil than Microsoft like Pol Pot is less evil than
Hitler[1].

Gabe Garza

[1] There's no way a religious conversation like Sun v. Microsoft is
    going to be rationally[2] discussed.  I hereby breach rationality 
    and in doing so prematurely invoke Godwin's law--may the soul 
    of this thread rest in peace.

    [2] Traps have been set for the assassins.
From: Paul F. Dietz
Subject: Re: hostory of AI Winter?
Date: 
Message-ID: <Io6cnZC6-tOGpgCjXTWcoA@dls.net>
Gabe Garza wrote:
>  I hereby breach rationality 
>     and in doing so prematurely invoke Godwin's law--may the soul 
>     of this thread rest in peace.

Sorry -- Godwin's law doesn't take effect if you try to invoke
it deliberately.

	Paul
From: Tim Bradshaw
Subject: Re: hostory of AI Winter?
Date: 
Message-ID: <ey34r4yn169.fsf@cley.com>
* Gabe Garza wrote:

> Sun is less evil than Microsoft like Pol Pot is less evil than
> Hitler[1].

Neither is evil.  Only free software cultists talk about good and evil
in these terms, because it means they don't have to think.  They are
both just companies, trying to do what companies do, which is make a
lot of money.  Microsoft have become a monopolist as Sun would no
doubt like to do.  Monopolies are often a bad thing, but this doesn't
mean that companies should not try to become monopolies: they should,
and the legal & regulatory system should prevent them doing so -
that's what it's *for*.  There is *nothing wrong* with MS or Sun; what
is wrong is the legal / regulatory framework, which has failed the
people of the US, and so far the rest of the world as well.

Or to put it another way, if you must think in terms of good and evil:
MS's *monopoly* is evil, but MS are not.  Sun's monopoly would be evil
too, if they had one (but they don't).  A company attempting to
acquire a monopoly is not evil (unless they break the rules, which MS
may have done, of course).  Dammit, *Cley* wants a monopoly!  Am I
evil (oh, yes, I guess that's what the horns and hooves are, I'd
always wondered...).

--tim (on the internet, no one knows you're the Devil)
From: Paolo Amoroso
Subject: Re: hostory of AI Winter?
Date: 
Message-ID: <IXudPswqd2e2ZDwuvlht5XEb1ezA@4ax.com>
On 16 Apr 2003 12:15:26 +0100, Tim Bradshaw <···@cley.com> wrote:

> may have done, of course).  Dammit, *Cley* wants a monopoly!  Am I
> evil (oh, yes, I guess that's what the horns and hooves are, I'd

Since you use Lisp, you are an eval.


Paolo
-- 
Paolo Amoroso <·······@mclink.it>
From: Daniel Barlow
Subject: Re: hostory of AI Winter?
Date: 
Message-ID: <877k9ube61.fsf@noetbook.telent.net>
Paolo Amoroso <·······@mclink.it> writes:

> On 16 Apr 2003 12:15:26 +0100, Tim Bradshaw <···@cley.com> wrote:
>
>> may have done, of course).  Dammit, *Cley* wants a monopoly!  Am I
>> evil (oh, yes, I guess that's what the horns and hooves are, I'd
>
> Since you use Lisp, you are an eval.

I aspire to that state.  I guess I'll just have to apply myself.


-dan

-- 

   http://www.cliki.net/ - Link farm for free CL-on-Unix resources 
From: Joe Marshall
Subject: Re: hostory of AI Winter?
Date: 
Message-ID: <4r4yxir6.fsf@ccs.neu.edu>
Gabe Garza <·······@ix.netcom.com> writes:

> Tim Bradshaw <···@cley.com> writes:
> 
> > * Bob Bane wrote:
> > > If we're REALLY lucky, Java will be blamed for XML Winter.
> > 
> > But not this.  Java is a step up from C/C++, and not controlled by a
> > monopolist like C#.
> 
> Sun is less evil than Microsoft like Pol Pot is less evil than
> Hitler[1].

It didn't take long for this thread to mention Hitler.
From: Kenny Tilton
Subject: Re: hostory of AI Winter?
Date: 
Message-ID: <3E9CFFA0.8090403@nyc.rr.com>
Tim Bradshaw wrote:
> * Bob Bane wrote:
> 
> 
>>If we're lucky, we'll be talking about 'XML Winter' in a few years.
> 
> 
> I hope so
> 
> 
>>If we're REALLY lucky, Java will be blamed for XML Winter.
> 
> 
> But not this.  Java is a step up from C/C++, and not controlled by a
> monopolist like C#.

Oh, Christ. We need a ten-step program for Lisp Gods who are losing the 
faith, succumbing to Stockholm Syndrome like Eran, Tim and Peter, Paul 
and ****.

C'mon, when Java collapses, smothering beneath it the lingering embers 
of (its legacy) C++, it's a free-for-all between CL and ...what? Perl? 
Python? Ruby? Eiffel? Is that a fight we fear?

These ringing (not!) defenses of Java -- "A Step Up from C/C++" -- you 
/do/ know the Mark Twain line about "damning with faint praise", don't you?

-- 

  kenny tilton
  clinisys, inc
  http://www.tilton-technology.com/
  ---------------------------------------------------------------
"Everything is a cell." -- Alan Kay
From: Erann Gat
Subject: Re: hostory of AI Winter?
Date: 
Message-ID: <gat-1604030856420001@192.168.1.51>
In article <················@nyc.rr.com>, Kenny Tilton
<·······@nyc.rr.com> wrote:

> Oh, Christ. We need a ten-step program for Lisp Gods who are losing the 
> faith, succumbing to Stockholm Syndrome like Eran,

This is getting surreal in so many different ways.

I'm not sure what is more bizarre: being promoted to "Lisp God", or having
people care enough about what I think that they want me to "recover" so I
can think the right things.

E.
From: Tim Bradshaw
Subject: Re: hostory of AI Winter?
Date: 
Message-ID: <ey3znmqlm8j.fsf@cley.com>
* Kenny Tilton wrote:
> Oh, Christ. We need a ten-step program for Lisp Gods who are losing
> the faith, succumbing to Stockholm Syndrome like Eran, Tim and Peter,
> Paul and ****.

Where did you get that I'm `losing the faith'?  I've been trying to
make it clear that:

   Faith has no place in these arguments (what do you think I'm trying
   to say when I talk about `free software cultists'?);

   There is a place for more than one decent language in the world,
   and CL, Java and C are among the decent languages (though I think
that C++ is not, and C#, while it may be decent (I don't know), is
too tainted by its origin).

I have not lost my faith in CL: I never *had* any faith in CL because
I reserve my faith for other parts of my life.  I think it's a way
cool language, and a really good solution for many problems.  But I
think there are other good languages, and I definitely do not want a
CL monoculture any more than I want a C, Java, Windows or Unix
monoculture.  Sorry about that.

--tim
From: Kenny Tilton
Subject: Re: hostory of AI Winter?
Date: 
Message-ID: <3E9D85E0.70309@nyc.rr.com>
Tim Bradshaw wrote:
> * Kenny Tilton wrote:
> 
>>Oh, Christ. We need a ten-step program for Lisp Gods who are losing
>>the faith, succumbing to Stockholm Syndrome like Eran, Tim and Peter,
>>Paul and ****.
> 
> 
> Where did you get that I'm `losing the faith'?

A little extrapolation was required for that half-serious rant:

I figure if a Lispnik is praising Java for managing to become popular, 
then they must be cracking under the pressure of being one of the few 
(albeit happy few) to dig Lisp. Likewise for, in this case, praising a
language for being a little better than C/C++.

I sense a discouraged advocate, hence the "losing faith" poke.

-- 

  kenny tilton
  clinisys, inc
  http://www.tilton-technology.com/
  ---------------------------------------------------------------
"Everything is a cell." -- Alan Kay
From: Marco Antoniotti
Subject: Re: hostory of AI Winter?
Date: 
Message-ID: <Itlna.5$ot1.904@typhoon.nyu.edu>
Bob Bane wrote:

> Tim Bradshaw wrote:
>
>  > I think that if you want to know the history of the AI winter (rather
>  > than what, if anything, Lisp had to do with it), then you just need to
>  > watch XML and particularly the whole `semantic web' rubbish - history
>  > is being rewritten as we watch.
>  >
>
> If we're lucky, we'll be talking about 'XML Winter' in a few years.  If
> we're REALLY lucky, Java will be blamed for XML Winter.


If we are REALLY REALLY lucky, Perl and Python will get the blame.  If 
we are REALLY REALLY REALLY lucky, VB and C# will get the blame.

However,  given the state of the world these days, the most probable 
thing that will happen is that Lisp will get the blame :)

Cheers

--
Marco
From: Henrik Motakef
Subject: Re: hostory of AI Winter?
Date: 
Message-ID: <87wuhu57xf.fsf@interim.henrik-motakef.de>
Marco Antoniotti <·······@cs.nyu.edu> writes:

>> If we're lucky, we'll be talking about 'XML Winter' in a few years.  If
>> we're REALLY lucky, Java will be blamed for XML Winter.
>
> If we are REALLY REALLY lucky, Perl and Python will get the blame.  If
> we are REALLY REALLY REALLY lucky, VB and C# will get the blame.
>
> However,  given the state of the world these days, the most probable
> thing that will happen is that Lisp will get the blame :)

After all, XML is just clumsy sexprs, no?

Regards
Henrik ;-)
From: Fernando Mato Mira
Subject: Re: hostory of AI Winter?
Date: 
Message-ID: <dc2f1d1b.0304170443.d1d7609@posting.google.com>
Henrik Motakef <··············@web.de> wrote in message news:<··············@interim.henrik-motakef.de>...
> Marco Antoniotti <·······@cs.nyu.edu> writes:
> 
> >> If we're lucky, we'll be talking about 'XML Winter' in a few years.  If
> >> we're REALLY lucky, Java will be blamed for XML Winter.
> >
> > If we are REALLY REALLY lucky, Perl and Python will get the blame.  If
> > we are REALLY REALLY REALLY lucky, VB and C# will get the blame.
> >
> > However,  given the state of the world these days, the most probable
> > thing that will happen is that Lisp will get the blame :)
> 
> After all, XML is just clumsy sexprs, no?

Check this out for more proof:

http://www.datapower.com/products/xa35.html
From: Fred Gilham
Subject: Re: hostory of AI Winter?
Date: 
Message-ID: <u7adepxe1j.fsf@snapdragon.csl.sri.com>
········@acm.org (Fernando Mato Mira) wrote:
> Check this out for more proof:
> 
> http://www.datapower.com/products/xa35.html

I found the following interesting.

     Q: Which XML and XSLT processors does the XA35 use? Doesn't the
        XA35 use open-source or third-party software?

        Absolutely not! The XA35 is purpose-built to process XML and
        XSLT using our own advanced compiler technologies. DataPower
        owns all of its patent-pending intellectual property and is
        not restricted by the development schedules of third parties.

So not being open-source seems to be a selling point???  Well, I guess
maybe they're just claiming that they don't use other people's stuff,
so they're not subject to other people's problems.

I also notice that they want to patent compilers:

     Q: What is XML Generation Three(tm) technology?
 
         XML Generation Three(tm) or XG3(tm) is a patent pending
         technology invented by DataPower to address the unique
         demands of XML and XSLT processing. It is the core technology
         within the XA35 XML Accelerator and all of DataPower's
         XML-Aware products.
 
     Q: Why is the XA35 so fast? How fast is it?

        XG3(tm) technology compiles the operations described in a
        stylesheet directly to *machine code*, the actual instructions
        executed by the target CPU. This results in order-of-magnitude
        performance advantage over java-based or other interpreter
        systems. The dynamic nature of XSL is not lost, from the
        user's perspective the output is the same --- just accelerated
        by 10X or more in most cases. It is important to note that the
        10x performance improvement the XA35 delivers is for both
        latency and throughput, the two crucial measurements of speed.

Does this give anyone else besides me a kind of feeling of impending
doom?

-- 
Fred Gilham                                        ······@csl.sri.com
A common sense interpretation of the facts suggests that a
superintellect has monkeyed with physics, as well as with chemistry
and biology, and that there are no blind forces worth speaking about
in nature. --- Fred Hoyle
From: Marc Battyani
Subject: Re: hostory of AI Winter?
Date: 
Message-ID: <CD92676068F6D45C.5EB04960BD5BCF4E.041A3C16E6726EF1@lp.airnews.net>
"Fred Gilham" <······@snapdragon.csl.sri.com> wrote
> I also notice that they want to patent compilers:
>
>      Q: What is XML Generation Three(tm) technology?
>
>          XML Generation Three(tm) or XG3(tm) is a patent pending
>          technology invented by DataPower to address the unique
>          demands of XML and XSLT processing. It is the core technology
>          within the XA35 XML Accelerator and all of DataPower's
>          XML-Aware products.
>
>      Q: Why is the XA35 so fast? How fast is it?
>
>         XG3(tm) technology compiles the operations described in a
>         stylesheet directly to *machine code*, the actual instructions
>         executed by the target CPU. This results in order-of-magnitude
>         performance advantage over java-based or other interpreter
>         systems. The dynamic nature of XSL is not lost, from the
>         user's perspective the output is the same --- just accelerated
>         by 10X or more in most cases. It is important to note that the
>         10x performance improvement the XA35 delivers is for both
>         latency and throughput, the two crucial measurements of speed.
>
> Does this give anyone else besides me a kind of feeling of impending
> doom?

The sad point is that they will surely be granted some patents for this by
the USPTO.
BTW, Europeans should look at http://www.eurolinux.org/ to see the
current status of software patents. The news is rather alarming.

Marc
From: Frank A. Adrian
Subject: Re: hostory of AI Winter?
Date: 
Message-ID: <TQfoa.17$Qa5.34664@news.uswest.net>
Fred Gilham wrote:

> Does this give anyone else besides me a kind of feeling of impending
> doom?

It gives me a feeling of impending horselaugh...

faa
From: Marco Antoniotti
Subject: Re: hostory of AI Winter?
Date: 
Message-ID: <88Ana.1$L4.96@typhoon.nyu.edu>
Fernando Mato Mira wrote:

> Henrik Motakef  wrote in message 
> news:<··············@interim.henrik-motakef.de>...
>
> >Marco Antoniotti  writes:
> >
> >
> >>>If we're lucky, we'll be talking about 'XML Winter' in a few years.  If
> >>>we're REALLY lucky, Java will be blamed for XML Winter.
> >>
> >>If we are REALLY REALLY lucky, Perl and Python will get the blame.  If
> >>we are REALLY REALLY REALLY lucky, VB and C# will get the blame.
> >>
> >>However,  given the state of the world these days, the most probable
> >>thing that will happen is that Lisp will get the blame :)
> >
> >After all, XML is just clumsy sexprs, no?
>
>
> Check this out for more proof:
>
> http://www.datapower.com/products/xa35.html


How come I have a sense of "deja vu"? :)

Cheers

--
Marco Antoniotti
From: Mario S. Mommer
Subject: Re: hostory of AI Winter?
Date: 
Message-ID: <fzistd9kvz.fsf@cupid.igpm.rwth-aachen.de>
Marco Antoniotti <·······@cs.nyu.edu> writes:
> > http://www.datapower.com/products/xa35.html
> 
> How come I have a sense of "deja vu"? :)

The first time it is tragedy, the second time, comedy.
From: Joe Marshall
Subject: Re: hostory of AI Winter?
Date: 
Message-ID: <4r4x9jkw.fsf@ccs.neu.edu>
Mario S. Mommer <········@yahoo.com> writes:

> Marco Antoniotti <·······@cs.nyu.edu> writes:
> > > http://www.datapower.com/products/xa35.html
> > 
> > How come I have a sense of "deja vu"? :)
> 
> The first time it is tragedy, the second time, comedy.

Yeah?  How about the forty-third time?
From: Bob Bane
Subject: Re: hostory of AI Winter?
Date: 
Message-ID: <3E9ED53F.6010006@removeme.gst.com>
Henrik Motakef wrote:

> Marco Antoniotti <·······@cs.nyu.edu> writes:
> 
> After all, XML is just clumsy sexprs, no?
> 



My Slashdot .sig has always been:


	To a Lisp hacker, XML is S-expressions in drag.
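
Just to make the "in drag" part concrete, a toy sketch (the function
and the sample data are invented for illustration; attributes,
namespaces, and character escaping are all ignored):

    ;;; Print a sexpr as the corresponding XML element.  Tags are
    ;;; symbols, leaves are atoms.
    (defun sexpr->xml (form &optional (stream *standard-output*))
      (if (atom form)
          (princ form stream)
          (destructuring-bind (tag &rest children) form
            (format stream "<~(~a~)>" tag)
            (dolist (child children)
              (sexpr->xml child stream))
            (format stream "</~(~a~)>" tag))))

    ;; (sexpr->xml '(person (name "Alonzo") (age 33)))
    ;; prints:  <person><name>Alonzo</name><age>33</age></person>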
From: Paolo Amoroso
Subject: Re: hostory of AI Winter?
Date: 
Message-ID: <wdmbPqrIN9yS37mN3G4+nX+1+l8l@4ax.com>
On 14 Apr 2003 18:22:13 -0700, ········@medialab.com (Chris Perkins) wrote:

> I was re-reading PAIP this weekend, admiring Prolog, and looking over
> the TI Explorer manuals posted on LemonOdor, when I got to wondering 
> "What happened?"  "How did a language like Lisp, with such abstractive
> power and productivity fall into disfavor, or passed by?"

You may check the book "The Brain Makers". There are also a few papers
about Lisp and the AI winter at a Lisp Machine online museum site.


Paolo

P.S.
No references handy, I'm in a hurry, sorry.
-- 
Paolo Amoroso <·······@mclink.it>