From: Arthur T. Murray
Subject: Re: design a city oriented to
Date: 
Message-ID: <38fd1a8e@news.victoria.tc.ca>
*John McCarthy, ···@Steam.Stanford.EDU, wrote on 18 Apr 2000:

  [...]
> No-one advocating public transport would think of it smelling bad.
> Likewise, no-one in 1900 advocating automobiles thought of traffic
> jams.  It is hard to anticipate the bugs in a system one is
> proposing.

Well, Dr. McCarthy, you pioneered artificial intelligence -- you
had better take some responsibility for AI and its "bugs."

Following your lead, we AI lemmings have been writing AI programs:
http://www.geocities.com/mentifex/mind4th.html : Mind.Forth;
http://www.scn.org/~mentifex/mindrexx.html : Amiga Mind.Rexx;
http://www.virtualentity.com/mind/vb/ : Mind.VB in Visual Basic.

Soon maybe someone will code the AI in the language you created:
http://www.geocities.com/Athens/Agora/7256/lisp.html : LISP.

Like Bill Joy, would you advise us to call the whole thing off
before AI runs away and becomes the Technological Singularity of
http://www.ugcs.caltech.edu/~phoenix/vinge/vinge-sing.html Vinge?

Where do you stand, Dr. McCarthy?  Can you sing like Edith Piaf,
"Je ne regrette rien," or does your career entail our extinction?

> --
> John McCarthy, Computer Science Department, Stanford, CA 94305
> http://www-formal.stanford.edu/jmc/progress/
> He who refuses to do arithmetic is doomed to talk nonsense.

--
Why do so many of the Greats of AI have four-letter names?
Alan, Andy, Bart, Bill, Dave, Doug, Drew, Hans, Harv, Jeff,
*John, Jorn, Kurt, Mark, Marv, Matt, Matz, Ment, Mike, Neil,
Nick, Nils, Noam, Paul, Pete, Phil, Push, Ross, Seth, Vern.

From: Seth Russell
Subject: Re: design a city oriented to
Date: 
Message-ID: <38FD6AD7.5D020A45@robustai.net>
"Arthur T. Murray" wrote:

> Well, Dr. McCarthy, you pioneered artificial intelligence -- you
> had better take some responsibility for AI and its "bugs."
>
> Following your lead, we AI lemmings have been writing AI programs:
> http://www.geocities.com/mentifex/mind4th.html : Mind.Forth;
> http://www.scn.org/~mentifex/mindrexx.html : Amiga Mind.Rexx;
> http://www.virtualentity.com/mind/vb/ : Mind.VB in Visual Basic.

Perhaps ... but what I can't figure out is why you don't give us a running
Web Demo.  There is a directory waiting for it at
http://robustai.net/mentifex/index.htm

Seth Russell
Http://RobustAi.net/Ai/Conjecture.htm
From: David Hanley
Subject: Re: design a city oriented to
Date: 
Message-ID: <38FDC68E.FEF2C681@ncgr.org>
"Arthur T. Murray" wrote:

>
> Soon maybe someone will code the AI in the language you created:
> http://www.geocities.com/Athens/Agora/7256/lisp.html : LISP.

Is porting this "mind" program to lisp a project which is in need
of doing?  Because, offhand, this looks interesting, not too hard,
and I wouldn't mind doing it myself.

dave
From: Seth Russell
Subject: Re: design a city oriented to
Date: 
Message-ID: <38FDF9ED.6FAED2C2@robustai.net>
David Hanley wrote:

> "Arthur T. Murray" wrote:
>
> > Soon maybe someone will code the AI in the language you created:
> > http://www.geocities.com/Athens/Agora/7256/lisp.html : LISP.
>
> Is porting this "mind" program to lisp a project which is in need
> of doing?  Because, offhand, this looks interesting, not too hard,
> and I wouldn't mind doing it myself.

Hmmm ... I wonder what it would take to get LISP going at
RobustAI.Net, which is an NT box with a low budget.

Seth Russell
Http://RobustAi.net/Ai/SymKnow.htm
Http://RobustAi.net/Ai/Conjecture.htm
From: Patrik Bagge
Subject: Re: design a city oriented to
Date: 
Message-ID: <q%kL4.8031$F3.202658304@news.telia.no>
>Is porting this "mind" program to lisp a project which is in need
>of doing?  Because, offhand, this looks interesting, not too hard,
>and I wouldn't mind doing it myself.


In the process maybe you could enlighten us others
regarding the innovative (intellectual) highlights of
this mindmaker thingie.
Mr. wrong-number-of-letters Arthur doesn't seem inclined to...

/pb
From: Arthur T. Murray
Subject: Mind.Lisp (Was: Re: design a city oriented to public transport)
Date: 
Message-ID: <38fde55e@news.victoria.tc.ca>
David Hanley, ···@ncgr.org, wrote on  Wed, 19 Apr 2000:

> "Arthur T. Murray" wrote:

>> Soon maybe someone will code the AI in the language you created:
>> http://www.geocities.com/Athens/Agora/7256/lisp.html : LISP.

DJH:  
> Is porting this "mind" program to lisp a project which is in need
> of doing?  Because, offhand, this looks interesting, not too hard,
> and I wouldn't mind doing it myself.

Yes; please go for it: http://www.geocities.com/mentifex/mind4th.html
and please put your PD AI LISP code up on the Web quite early,
so that others may inspect the rudiments of Mind.Lisp and so
that people like me may link to your code from various jump-off
points such as the list of candidate programming languages at the
http://www.geocities.com/mentifex/webcyc.html#proglangs URL.

Anybody who can Webify their PD AI "Mind" code should maybe submit
it to Seth Russell for installation in his "Web Demo" site at
http://robustai.net/mentifex/index.htm -- or wherever else Seth
makes some room for robust GOFAI.  (Thank you to Seth from Arthur.)
     
> dave     

Dear Dossier:  Maybe the PD AI is really going to proliferate now!

Dear Patrik Bagge:  I am trying to err on the side of too much
information, not too little.  I can't do the Java -- any volunteer(s)?

--
Come one, come all AI hackers and mindmakers on 5-10 Aug 2001 to
http://www.geocities.com/mentifex/ijcpdai.html : IJCPDAI-01 the
International Joint Conference on PD Artificial Intelligence to be
held sub rosa in the coffee houses and 'Net cafes of Seattle WA USA.
From: Patrik Bagge
Subject: Re: Mind.Lisp (Was: Re: design a city oriented to public transport)
Date: 
Message-ID: <olmL4.8040$F3.202657792@news.telia.no>
>Dear Patrik Bagge:  I am trying to err on the side of too much
>information, not too little.  I can't do the Java -- any volunteer(s)?


For the nth time, what are the functional highlights of your work?

I'll do the Java forya, if there is any substance in your project
(number of versions or translations doesn't count as substantial)

/pb
From: Arthur T. Murray
Subject: Re: Mind.Lisp
Date: 
Message-ID: <38fe563e@news.victoria.tc.ca>
[ This "branching logic" Usenet post is almost a program itself. ]

Patrik Bagge, ···@neramd.no, wrote on Wed, 19 Apr 2000:

ATM:
>> Dear Patrik Bagge:  I am trying to err on the side of too much
>> information, not too little.  I can't do the Java -- any volunteer(s)?

PAB:
> For the nth time, what are the functional highlights of your work?

ATM:
Logic Branch No. One is to go and evaluate the PD AI mind theory at
http://www.geocities.com/mentifex/theory5.html -- "Know Thyself!" --
because no one can code the AI without first grokking its goals, and
because anyone who understands the theory has the power to alter it.

Logic Branch No. Two is to download and run (no Forth know-how needed)
http://www.geocities.com/mentifex/mind4th.html -- 32-bit Mind.Forth --
to observe the PD AI Mind program in all its particulars: how it runs;
how it changes when you "break" things; and how you could improve it.

Logic Branch No. Three is to study and adopt or reject the steps of
http://www.geocities.com/mentifex/acm.html -- The Art of Computer
Mindmaking -- so that you can get a feel for writing the main "Alife"
program loop of the PD AI and for hanging modules down from the loop.

Logic Branch No. Four is to put even the most risibly basic code
up on the Web for others to inspect and react against -- because
they might not get their own ideas until they can see YOUR ideas.

PAB:
> I'll do the Java forya, if there is any substance in your project

ATM:
Logic Branch No. Five, 6, ..., n, is to add AI features to the
functional highlights both present in the original code and
mapped out in the "Know Thyself!" theory (see link URL above):

Functional highlight no. 1:  A sub-Alife input/output loop.
No. 2:  An alife-long record of the "stream of consciousness".
No. 3:  Division of memories into auditory + conceptual +/- motor.
No. 4:  Syntax superstructure coordinates concepts into thought.
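
[ For whoever makes the first start in Lisp, a minimal, purely
illustrative Common Lisp sketch of those four highlights.  Every name
in it (MAIN-ALIFE-LOOP, *STREAM-OF-CONSCIOUSNESS* and the rest) is
hypothetical -- nothing below is taken from Mind.Forth itself. ]

;;; Hypothetical sketch only -- names and structure are NOT Mind.Forth's.
(defvar *stream-of-consciousness* '()
  "Highlight no. 2: an alife-long record of percepts and thoughts.")

(defstruct engram   ; highlight no. 3: memory divided into ...
  auditory          ; ... the raw word as heard,
  conceptual        ; ... the concept it activates,
  motor)            ; ... and an optional motor association.

(defun split-words (line)
  "Split LINE on spaces; good enough for a demo."
  (loop for start = 0 then (1+ end)
        for end = (position #\Space line :start start)
        collect (subseq line start end)
        while end))

(defun comprehend (words)
  "Highlight no. 4: a toy 'syntax superstructure' coordinating
word-concepts into a thought (here, just a list of engrams)."
  (mapcar (lambda (w)
            (make-engram :auditory w
                         :conceptual (intern (string-upcase w))))
          words))

(defun main-alife-loop ()
  "Highlight no. 1: the sub-Alife input/output loop."
  (loop for line = (read-line *standard-input* nil)   ; sensorium
        while line
        do (let ((thought (comprehend (split-words line))))
             (push thought *stream-of-consciousness*) ; highlight no. 2
             ;; motorium: re-speak the thought as concept names
             (format t "~&~{~A~^ ~}~%"
                     (mapcar #'engram-conceptual thought)))))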

> (number of versions or translations doesn't count as substantial)

ATM:
The functional highlights are substantial enough to choose any
http://www.geocities.com/mentifex/webcyc.html#proglangs language
and at least make a start on implementing the PD AI algorithms.
(I am still correcting problems with the "spreading activation.")

Even as it now stands, Mind.Forth contains a pre-existing
architecture that *already* addresses serious AI questions
such as how the AI shall properly interpret "I" versus "you"
and properly keep track of the antecedents of pronouns, and
how to incorporate multiple languages for machine translation.
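
[ By way of flavor only, a tiny Common Lisp fragment of the "I" versus
"you" question -- a toy, and not Mind.Forth's actual mechanism. ]

;; Toy only: when the human says "I", the machine must store and
;; re-speak "you" (and vice versa), or else first and second person
;; get hopelessly confused.
(defun swap-person (word)
  (cond ((string-equal word "I")    "you")
        ((string-equal word "you")  "I")
        ((string-equal word "my")   "your")
        ((string-equal word "your") "my")
        (t word)))

(defun internalize (words)
  "Re-point the person deixis of a heard sentence."
  (mapcar #'swap-person words))

;; (internalize '("I" "like" "your" "robot"))  =>  ("you" "like" "my" "robot")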

The work is modular enough (Forth demands structured programming)
that people may pick and choose their favorite module to work on:
http://www.geocities.com/mentifex/emotion.html
http://www.geocities.com/mentifex/motorium.html
http://www.geocities.com/mentifex/speech.html
http://www.geocities.com/mentifex/vision.html
and so forth.

> /pb

/atm -- and now a word from William Shakespeare:

--
There is a tide in the affairs of men,
Which, taken at the flood, leads on to fortune;
Omitted, all the voyage of their life
Is bound in shallows and in miseries.
From: Arthur T. Murray
Subject: Mind.Java (Was: Re: Mind.Lisp (Was: Re: design a city...))
Date: 
Message-ID: <390604bb@news.victoria.tc.ca>
Patrik Bagge, ···@neramd.no, wrote on Tue, 25 Apr 2000:

ATM:
>> Forth is the lingua franca of AI, embedded NC and robotics.

PAB:
> don't get me started...                  
> what some kooks down in the research labs do,
> doesn't really 'count'                
ATM:
http://www.geocities.com/mentifex/mind4th.html Mind.Forth
needs a lot more work in the "spreading activation" area,
and so in the near future I am going to try to take some
time off from my bourgeois job in order to code in Forth.

Your proposal to re-do the AI from either VB or Forth
into Java is really quite exciting and tantalizing,
because a Javamind could naturally inhabit the 'Net.
Please do put your code up as a demo at Seth Russell's
http://robustai.net/mentifex/index.htm website and
wherever Java programmers may inspect your source.

PAB:
> anyway, it doesn't really matter, as long as the code
> performs something interesting, that's the main issue.
     
> i'm going to dissect your code, are you prepared ? ;-)
ATM:
Yes, I am prepared, but please be advised as follows.

You can be really creative with the code only if you
take the time and effort to understand at least the
http://www.geocities.com/mentifex/theory5.html text
which is about sixteen pages long when printed out.

Please do not e-mail me with questions, but put them
right out here on Usenet for all the world to see and
answer.  This request is for several reasons.  Coding
AI, we are messing with everybody's future, so they
have a right to see what we are doing.  But we won't
let individuals stop us, we will only let democracy
decide collectively to stop all AI efforts everywhere.

http://www.ugcs.caltech.edu/~phoenix/vinge/vinge-sing.html
says that nobody can stop or prevent the rush to AI anyway,
so let's speed it up and witness AI while we yet live.

PAB:     
> i hope that there aren't any license issues, if i should
> port the VB version onto RobustAI.net for test purps...
ATM:
You may need to confirm public-domain status with the author of
http://www.virtualentity.com/mind/vb/ -- Mind.VB in Visual Basic.

> /pb     
--
Come one, come all AI hackers and mindmakers on 5-10 Aug 2001 to
http://www.geocities.com/mentifex/ijcpdai.html : IJCPDAI-01 the
International Joint Conference on PD Artificial Intelligence to be
held sub rosa in the coffee houses and 'Net cafes of Seattle WA USA.
From: Seth Russell
Subject: Re: Mind.Java (Was: Re: Mind.Lisp (Was: Re: design a city...))
Date: 
Message-ID: <39061359.84C3681F@robustai.net>
"Arthur T. Murray" wrote:

> Patrik Bagge, ···@neramd.no, wrote on Tue, 25 Apr 2000:
>
> ATM:
> >> Forth is the lingua franca of AI, embedded NC and robotics.

Interesting: when I selected this post, all my Kenjin came up with
was 4 references to 'lingua franca'.

So, does that mean that Arthur needs to do more PR ;-)

Seth Russell
Interesting sites of the day:
http://www.SemanticWeb.org/
http://www.kenjin.com/
http://www.thebrain.com/
From: Patrik Bagge
Subject: Re: Mind.Java (Was: Re: Mind.Lisp (Was: Re: design a city...))
Date: 
Message-ID: <orwN4.8923$F3.203782144@news.telia.no>
>http://www.geocities.com/mentifex/mind4th.html Mind.Forth
>needs a lot more work in the "spreading activation" area,
>and so in the near future I am going to try to take some
>time off from my bourgeois job in order to code in Forth.
>
>Your proposal to re-do the AI from either VB or Forth
>into Java is really quite exciting and tantalizing,
>because a Javamind could naturally inhabit the 'Net.
>Please do put your code up as a demo at Seth Russell's
>http://robustai.net/mentifex/index.htm website and
>wherever Java programmers may inspect your source.


Alright, if doable it will be in JavaScript, which is not Java.
Give me a couple of weeks...

>Please do not e-mail me with questions, but put them
>right out here on Usenet for all the world to see and
>answer.  This request is for several reasons.  Coding
>AI, we are messing with everybody's future, so they
>have a right to see what we are doing.  But we won't
>let individuals stop us; we will only let democracy
>decide collectively to stop all AI efforts everywhere.


Well said.
I believe this is the same spirit that can be found at RobustAI.Net.

>http://www.ugcs.caltech.edu/~phoenix/vinge/vinge-sing.html
>says that nobody can stop or prevent the rush to AI anyway,
>so let's speed it up and witness AI while we yet live.


Yup, no stopping 'progress'; never was, never will be.
Jump on the train or get run over...

>Come one, come all AI hackers and mindmakers on 5-10 Aug 2001 to
>http://www.geocities.com/mentifex/ijcpdai.html : IJCPDAI-01 the
>International Joint Conference on PD Artificial Intelligence to be
>held sub rosa in the coffee houses and 'Net cafes of Seattle WA USA.

I'm on the 'wrong' continent ;-(

/pb
From: John McCarthy
Subject: Re: design a city oriented to
Date: 
Message-ID: <x4hn1mn423t.fsf@Steam.Stanford.EDU>
1. AI isn't close to human level (HL) yet.  I don't think we can really
know what HL will be like till we get a lot closer.

2. You can't get people to seriously discuss policy until HL is
closer.  The present discussants, e.g. Bill Joy, are just chattering.

3. People are not distinguishing HL AI from programs with human-like
motivational structures.  It would take a special effort, apart from
the effort to reach HL intelligence, to make AI systems want to rule
the world or get angry with people or see
themselves as oppressed.  We shouldn't do that.

4. Je ne regrette rien.

5. To get so many four-letter names you had to use some nicknames.

-- 
John McCarthy, Computer Science Department, Stanford, CA 94305
http://www-formal.stanford.edu/jmc/progress/
He who refuses to do arithmetic is doomed to talk nonsense.
From: Arthur T. Murray
Subject: Re: design a city oriented to
Date: 
Message-ID: <3900eeee@news.victoria.tc.ca>
Thank you, Dr. McCarthy, for letting me be this footnote to AI history.
-Arthur T. Murray, ········@scn.org, d.o.b. 13 July 1946 Dallas TX USA.

*John McCarthy, ···@Steam.Stanford.EDU, wrote on Good Friday 21 Apr 2000:

> 1. AI isn't close to human level (HL) yet.  I don't think we
> can really know what HL will be like till we get a lot closer.

> 2. You can't get people to seriously discuss policy until HL is    
> closer.  The present discussants, e.g. Bill Joy, are just chattering.

http://www.ugcs.caltech.edu/~phoenix/vinge/vinge-sing.html Vinge's
Technological Singularity scares me and remains my favorite AI text.

> 3. People are not distinguishing HL AI from programs with human-like
> motivational structures.  It would take a special effort, apart from
> the effort to reach HL intelligence, to make AI systems want to rule
> the world or get angry with people or see
> themselves as oppressed.  We shouldn't do that.

http://www.geocities.com/Athens/Agora/7256/mind4th.html Mind.Forth
is not the "special effort" that you would advise against.  Rather
it is the sincere effort of this B.A. in Latin and Greek to answer
religious questions on the nature of the brain-mind-soul by trying
to see how far we can go in AI before reaching a "Do Not Trespass" sign.

> 4. Je ne regrette rien.

I thought I would regret my post if it backfired, but you are gracious.

> 5. To get so many four letter names you had to use some nicknames.

Yes, but everybody immediately assumed correctly that *John was you.

> --
> John McCarthy, Computer Science Department, Stanford, CA 94305
> http://www-formal.stanford.edu/jmc/progress/
> He who refuses to do arithmetic is doomed to talk nonsense.

--
Why do so many of the Greats of AI have four-letter names?
Alan, Andy, Bart, Bill, Dave, Doug, Drew, Hans, Harv, Jeff,
*John, Jorn, Kurt, Mark, Marv, Matt, Matz, Ment, Mike, Neil,
Nick, Nils, Noam, Paul, Pete, Phil, Push, Ross, Seth, Vern.
From: John McCarthy
Subject: Re: design a city oriented to
Date: 
Message-ID: <x4hln27rlna.fsf@Steam.Stanford.EDU>
Vernor Vinge seems to be one of those who supposes that sufficient
computer power will guarantee human level AI.  I don't agree.  New
ideas are needed.
-- 
John McCarthy, Computer Science Department, Stanford, CA 94305
http://www-formal.stanford.edu/jmc/progress/
He who refuses to do arithmetic is doomed to talk nonsense.
From: Courageous
Subject: Re: design a city oriented to
Date: 
Message-ID: <3900FD0E.188F4F1B@san.rr.com>
John McCarthy wrote:
> 
> Vernor Vinge seems to be one of those who supposes that sufficient
> computer power will guarantee human level AI.  I don't agree.  New
> ideas are needed.

A basic understanding of human intelligence would be in order.
Our understanding, while vastly greater than it was a mere
two decades ago, is pretty much still in the dark ages. The
renaissance looms...



C/
From: Christopher Browne
Subject: Re: design a city oriented to
Date: 
Message-ID: <slrn8g20dr.poo.cbbrowne@knuth.brownes.org>
Centuries ago, Nostradamus foresaw a time when John McCarthy would say:
>Vernor Vinge seems to be one of those who supposes that sufficient
>computer power will guarantee human level AI.  I don't agree.  New
>ideas are needed.

Douglas Hofstadter's work on AI, particularly "Fluid Concepts and
Creative Analogies," suggests that it may be more difficult still.

Particularly in the area of analogy, the things that humans do seem
remarkably difficult to turn into algorithms.

Hofstadter may not be the be-all and end-all of AI research, but when
he can come up with such intractable problems, it should at least be a
bit suggestive...

Computers are pretty good at doing search; _useful_ comparison is more
than a little thorny.

I would _not_ accuse Vinge of the supposition that you suggest;
"sufficient cycles" are _not_ sufficient unless there are suitable 
algorithms to go along with them.

The nearest that we get to that is in the area of neural nets, and
while they offer scalability, all they do, at this point, is pattern
recognition.  Symbol processing, the _usual_ strength of computers, and
the way that humans communicate abstraction, seems far off in that arena,
and not particularly compatible with neural-like constructs.
-- 
"Many companies that have  made themselves dependent on [the equipment
of a certain major manufacturer] (and in doing so have sold their soul
to the devil)  will collapse under the sheer  weight of the unmastered
complexity of their data processing systems."
-- Edsger W. Dijkstra, SIGPLAN Notices, Volume 17, Number 5
········@hex.net - - <http://www.ntlug.org/~cbbrowne/lsf.html>
From: David Thornley
Subject: Re: design a city oriented to
Date: 
Message-ID: <_6KN4.1683$wJ1.40161@ptah.visi.com>
In article <·······················@knuth.brownes.org>,
Christopher Browne <········@hex.net> wrote:
>Centuries ago, Nostradamus foresaw a time when John McCarthy would say:
>>Vernor Vinge seems to be one of those who supposes that sufficient
>>computer power will guarantee human level AI.  I don't agree.  New
>>ideas are needed.
>
How many new ideas is another matter.  I don't expect the first
human-level AI to be carefully constructed and designed; I expect it
to happen more or less unexpectedly.  I don't think we humans can
effectively design comparable intelligences.

>Particularly in the area of analogy, the things that humans do seem
>remarkably difficult to turn into algorithms.
>
[snip to establish proximity]
>
>The nearest that we get to that is in the area of neural nets, and
>while they offer scalability, all they do, at this point, is pattern
>recognition.

Analogy is, to a very large extent, pattern recognition and expression.

>Symbol processing, the _usual_ strength of computers, and
>the way that humans communicate abstraction, seems far off in that arena,
>and not particularly compatible with neural-like constructs.

Symbols are useful partly because they can refer to things, and one of
the real pains of AI is to establish references between symbols and a
messy world.  Some sort of linkage of symbol manipulation and pattern
recognition is going to happen, just like it happens in human minds.
Neural nets are lousy at symbol manipulation, and symbolic systems
are not very good at pattern recognition (as evidence, I'll cite the
development of computer vision).



--
David H. Thornley                        | If you want my opinion, ask.
·····@thornley.net                       | If you don't, flee.
http://www.thornley.net/~thornley/david/ | O-
From: Seth Russell
Subject: Re: design a city oriented to
Date: 
Message-ID: <3901E375.30E3C391@robustai.net>
John McCarthy wrote:

> Vernor Vinge seems to be one of those who supposes that sufficient
> computer power will guarantee human level AI.  I don't agree.  New
> ideas are needed.
> --
> John McCarthy, Computer Science Department, Stanford, CA 94305
> http://www-formal.stanford.edu/jmc/progress/
> He who refuses to do arithmetic is doomed to talk nonsense.

Perhaps these new ideas are at hand:

1) Interaction of agents in a networked environment
see: "Why Interaction Is More Powerful Than Algorithms"
http://www.cs.brown.edu/people/pw/home.html

2) Rickert's paradigm
Do a search for ··········@cs.niu.edu
at http://www.deja.com/usenet/
(sorry, no crisp URL available)

Progress (even in AI) is measured not so much by what
we do as by what we do together.  Why?  Well, simply
because that is what persists in our environment,
that is what sticks, survives, perhaps to evolve.

--
Seth Russell
Http://RobustAi.net/Ai/Conjecture.htm
From: Scott Nudds
Subject: Re: design a city oriented to
Date: 
Message-ID: <8dvupl$r5d$9@mohawk.hwcn.org>
John McCarthy (···@Steam.Stanford.EDU) wrote:
: Vernor Vinge seems to be one of those who supposes that sufficient
: computer power will guarantee human level AI.  I don't agree.  New
: ideas are needed.

  Simple ideas will suffice in my view.  AI has been a long-time failure
because the "solutions" attempted so far have been constrained by a need
to develop solutions that can be implemented on existing hardware, and
existing hardware has been vastly too slow to produce anything meaningful.

  In addition, humans think by sensory modeling augmented by linear logic
and common sense derived from language.  Other higher animals think in
the same way but obviously don't have the advantage of a logical construct
from a language.

  I don't expect to see AI until machines are made that can experience the
world in a manner similar to the experiences the animals have.



  
From: Kirk Is
Subject: Re: design a city oriented to
Date: 
Message-ID: <gI_R4.130$m26.3140@news.tufts.edu>
Scott Nudds (·····@freenet.hamilton.on.ca) wrote:
> John McCarthy (···@Steam.Stanford.EDU) wrote:
> : Vernor Vinge seems to be one of those who supposes that sufficient
> : computer power will guarantee human level AI.  I don't agree.  New
> : ideas are needed.

>   Simple ideas will suffice in my view.  AI has been a long-time failure
> because the "solutions" attempted so far have been constrained by a need
> to develop solutions that can be implemented on existing hardware, and
> existing hardware has been vastly too slow to produce anything meaningful.

Well, it's not for lack of trying on the hardware part -- in "The Age of
Spiritual Machines" Kurzweil points out that while Moore's law (the
exponential growth of computations per second per thousand dollars, which
extends back to when machines like Babbage's were an engineering
possibility, in the early 1900s) has held for hardware, it's MUCH less
clear that software has had anything near that degree of advancement.
Yes, GUIs and the like have done a very good job of riding on the
hardware's coattails (and slurping up every order of magnitude of CPU
power made available to them), but my unstudied instinct (danger!)
suggests that we really haven't made brilliant progress on the software
part.

So another way of putting Scott Nudds's previous paragraph would be "Well,
we suspect that our brute force techniques in software might produce
something resembling intelligence if we had blindingly fast computers to
run the damn stuff on -- orders of magnitude faster than what we have now
(even though what we have now is orders of magnitude faster than what we
had a few years ago, and so on for many generations)"

>   In addition, humans think by sensory modeling augmented by linear logic
> and common sense derived from language.  Other higher animals think in
> the same way but obviously don't have the advantage of a logical construct
> from a language.

>   I don't expect to see AI until machines are made that can experience the
> world in a manner similar to the experiences the animals have.

Is it that simple? What do you use for your measure?  Our computers are 
much faster than our brains in some ways, much slower in others.  
Could there be a point where you say "Ok, we probably have the hardware
now but we're *still* not getting intelligence -- maybe we're going
about it wrong?" How can we know that we have the right structural ideas
that will make real, dynamic intelligence, without seeing the result?
-- 
Kirk Israel   [spamblock in effect, use ····@alienbill.com]
Indeed, the Russians' predisposition for quiet reflection followed by 
sudden preventive action explains why they led the field for many 
years in both chess and ax murders.  --Marshall Brickman, Playboy 4/73
From: David Lloyd-Jones
Subject: Exponential Progress. Was: Re: design a city oriented to
Date: 
Message-ID: <7y1S4.38377$Xk2.139024@tor-nn1.netcom.ca>
"Kirk Is" <·······@andante.cs.tufts.edu> writes

<a lot of sensible stuff, but I pick up on only one point>

> Well, it's not for lack of trying on the hardware part -- in "The Age of
> Spiritual Machines" Kurzweil points out that while Moore's law (the
> exponential growth of computations per second per thousand dollars, which
> extends back to when machines like Babbage's were an engineering
> possibility, in the early 1900s)...

A lot of goofusses go around carping about how the supposed "acceleration of
information" is going on at 18% a year, or some number they've pulled out of
the aether over their muesli this morning.

My family are only 150 years away from ratshit, in the coal mines on one
side, picking stones out of the field on the other.  My partner's family is
only about 100 years away from greeting the English with spears on their
shoulders.

(They did this with great dignity: my partner's grandfather was called out
to meet Stanley's crew when they struggled into southern Sudan sometime
around 1900, I forget, and said "Your soldiers' piss is sour, and makes my
land smell bad. Go away."  The English insisted on passing through, so my
partner's clan seniors hired bearers from a neighbouring polity to carry the
palefaces' stupid boxes for them. Damn, what a bunch of incompetents! Why
can't they cut down their baggage enough so they can carry their own silly
boxes?)

I saw a recent video (information explosion or whutt!) from one of our
weddings in Khartoum, but a lot of our relatives looked uncomfortable in
their $30 suits. It's like really difficult to do the jump dance wearing
Bata shoes. They'd rather have a ground-length shirt in cold weather, less
in hot -- and maybe an AK-47 stashed somewhere to keep the Arabs honest. As
far as clothing is concerned, most of the younger folks wear jeans and
T-shirts from Swedish charities. Feh!

(All the younger girls on that side of the family see photographs of their
great- or great-great-grandmothers sixty years ago -- 15~20 years to a
generation -- dancing bare-breasted, wearing only a string of beads. My
Ajok, even though she is totally sophisticated about the world -- Washington,
CIA, World Bank, Edinburgh where she cooked for an earlier husband as he
got his Ph.D., living with the Italian Royal family when she first escaped
from the shitstorm of the second Sudanese civil war in 1980 or 83, and so
on -- still sometimes wears nothing but the ankle and waist beads passed
down to her by one of her female ancestors. Only around the house, you
understand. :-) )

In my grandfather's house we had a record player. It was made out of cherry
wood, weighed about half a ton, and had a cunning series of arms which
picked up records, put them on the spindle, then three minutes later picked
them off, threw them on a pile, and picked up the next one.  I first learned
Mozart -- and music hall -- in three minute slices.

In my father's house we had a microscope, gold-and-silk curtains made on
Jacquard looms (whose Lovelacian theoretical meaning was explained to me
before I was seven), a magisterial grandfather clock of the sort which had
only recently replaced the town hall clock and the factory whistle --
personal control of information -- and a lot of other good stuff.

I think information was accelerating at maybe 18% a year -- pick your own
number -- 100 years ago.

If you look at the clever and difficult invention it took to make the
earliest jewelry we know about yet, roughly 60,000 years ago in the
Aurignacian period, I think anybody would also be convinced that information
was accelerating at a magnificent rate all the time -- maybe, aww the hell
with it, pick a number, but certainly a whole lot faster than human
population growth.

                                                      -dlj.
From: ·········@arcbs.redcross.org.au
Subject: Re: Exponential Progress. Was: Re: design a city oriented to
Date: 
Message-ID: <8fj0d0$1td$1@nnrp1.deja.com>
In article <······················@tor-nn1.netcom.ca>,
  "David Lloyd-Jones" <······@netcom.ca> wrote:
.
.
.
> My family are only 150 years away from ratshit, in the coal mines on one
> side, picking stones out of the field on the other.  My partner's family is
> only about 100 years ago from greeting the English with spears on their
> shoulders.
>
> (They did this with great dignity: my partner's grandfather was called out
> to meet Stanley's crew when they struggled into southern Sudan sometime
> around 1900,

Misattribution - Stanley had nothing to do with the English, or
indeed Britain more widely. Stanley himself was a Welshman who
wound up in North America, which happened to some of them. His
early explorations were on behalf of US newspapers and his later
ones on behalf of the King of the Belgians.

On the other hand this may be a genuine account of some other
expedition in those parts. The British, in conjunction with the
Egyptians, made thoroughgoing military expeditions, which was largely
why they were able to bounce out the more lightly outfitted French
at Fashoda.

> I forget, and said "Your soldiers' piss is sour, and makes my
> land smell bad. Go away."  The English insisted on passing through, so my
> partner's clan seniors hired bearers from a neighbouring polity to carry the
> palefaces' stupid boxes for them.

This does NOT sound like the British. They stayed, in that era,
rather than moving on.

> Damn, what a bunch of incompetents! Why
> can't they cut down their baggage enough so they can carry their own silly
> boxes?)

Earlier expeditions took baggage corresponding to anticipated research
needs, allowing for blurry contingencies, and trade goods etc. that
had to be selected without certain knowledge of what would be
appreciated. It was in fact the best allocation of resources, given
uncertain knowledge (if it had been certain, there would have been no
point going!).
.
.
.
> My
> Ajok, even though she is totally sophisticated about the world -- Washington,
> CIA, World Bank, Edinburgh where she cooked for an earlier husband as he
> got his Ph.D., living with the Italian Royal family when she first escaped
> from the shitstorm of the second Sudanese civil war in 1980 or 83, and so
> on

THESE are supposed to impress people with her cosmopolitan qualities?
I am sure you have made an excellent assessment of her as a person,
but that lot of references is a bit like being "world famous back home
in Ohio". The Italian Royal Family, for instance, took refuge with
Farouk in Egypt (after a sojourn in a cheap hotel in Portugal, while
their funds were frozen). PML.


From: Kirk Is
Subject: Re: Exponential Progress. Was: Re: design a city oriented to
Date: 
Message-ID: <qkST4.165$m26.3665@news.tufts.edu>
David Lloyd-Jones (······@netcom.ca) wrote:

> "Kirk Is" <·······@andante.cs.tufts.edu> writes

> <a lot of sensible stuff, but I pick up on only one point>

> > Well, it's not for lack of trying on the hardware part -- in "The Age of
> > Spiritual Machines" Kurzweil points out that while Moore's law (the
> > exponential growth of computations per second per thousand dollars, which
> > extends back to when machines like Babbage's were an engineering
> > possibility, in the early 1900s)...

> A lot of goofusses go around carping about how the supposed "acceleration of
> information" is going on at 18% a year, or some number they've pulled out of
> the aether over their muesli this morning.
[snip a lot of interesting, but somewhat offtopic? rambling]
> Aurignacian period, I think anybody would also be convinced that information
> was accelerating at a magnificent rate all the time -- maybe, aww the hell
> with it, pick a number, but certainly a whole lot faster than human
> population growth.

Well, I wasn't talking about a general "amount of human knowledge" or any
of that... Kurzweil makes a fairly detailed case over the last 100 years by
counting calculations per second per thousand dollars you spend; sheer
computational speed on a variety of hardware, from the purely mechanical
to today's fastest computers.

-- 
Kirk Israel   [spamblock in effect, use ····@alienbill.com]
"I tell you, we are here on Earth to fart around, 
 and don't let anybody tell you any different." --Kurt Vonnegut
From: David Lloyd-Jones
Subject: Re: Exponential Progress. Was: Re: design a city oriented to
Date: 
Message-ID: <EjZT4.40839$Xk2.146240@tor-nn1.netcom.ca>
"Kirk Is" <·······@andante.cs.tufts.edu> wrote
>
> Well, I wasn't talking about a general "amount of human knowledge" or any
> > of that... Kurzweil makes a fairly detailed case over the last 100 years by
> counting calculations per second per thousand dollars you spend; sheer
> computational speed on a variety of hardware, from the purely mechanical
> to today's fastest computers.
>

Look, I love Ray Kurzweil dearly, but it would be difficult to find an
easier-to-futz measure than "calculations per second."  Hell, you can get an
order of magnitude difference in a machine's performance depending on
whether you manufacture it or compete with it.

My point was simply that this seems to me to be equally true of the couple
of centuries before that, and of J. Random Millennium any other time.

There are times when knowledge goes all to hell, like the plot of some
sci-fi novel (but of course the sci-fi novels are generally based on stuff
in ordinary history books).  Thus the breakdown of the dams of the former
Seleucid empire in the 7th Century C.E. happened at a time when people had lost
either the technology or the "social technology" for rebuilding them. As a
result there was a Jayhansonian dieback, with the survivors taking to
camelback as to lifeboats.

These, however, are the glitches. My claim is that the accretion of
information, knowledge, and information processing skills has been going on
at an awesome rate for maybe 60,000 years now. I don't make any more general
claim, because we don't have very good evidence before the Aurignacian
jewellery/tradegoods/tools finds.

There are, of course, astonishing finds in Africa -- e.g., of cooperative
flint-knapping pits: would "factories" be too strong? -- going back much
earlier. But it's hard to think about rates of change when developments are
episodic. You can't differentiate a series of intermediate events, even if
they happen on a Poisson function. :-)

In the dying years of the Soviet Union there were a couple or three very
competent Russian archaeological groups working in Arabia. One may surmise
that the Soviets thought to improve their influence in this vital area by
digging up stuff to make the Arabs feel good. The last thing I saw of this,
maybe ten or twelve years ago, was their greatly extending the evidence of
iron age technology in the Meroitic Empire, i.e. today's Sudan in the 8th to
6th centuries B.C.E. (This was when the ancestors of us white folks were
doing OK at stone, and the northerners among us had only bronze.)

Anyway, with the collapse of the USSR, I suspect that both funding and
political will dried up; certainly I haven't heard anything about these
expeditions lately. Still, I expect to see and hear more of this interesting
age and area in the next fifty years or so.

                                                   -dlj.
From: Kirk Is
Subject: Re: Exponential Progress. Was: Re: design a city oriented to
Date: 
Message-ID: <s71U4.171$m26.3845@news.tufts.edu>
David Lloyd-Jones (······@netcom.ca) wrote:

> "Kirk Is" <·······@andante.cs.tufts.edu> wrote
> >
> > Well, I wasn't talking about a general "amount of human knowledge" or any
> > of that... Kurzweil makes a fairly detailed case over the last 100 years by
> > counting calculations per second per thousand dollars you spend; sheer
> > computational speed on a variety of hardware, from the purely mechanical
> > to today's fastest computers.

> Look, I love Ray Kurzweil dearly, but it would be difficult to find an
> easier-to-futz measure than "calculations per second."  Hell, you can get an
> order of magnitude difference in a machine's performance depending on
> whether you manufacture it or compete with it.

> My point was simply that this seems to me to be equally true of the couple
> of centuries before that, and of J. Random Millennium any other time.

Oh. I'm not sure what our disagreement is then; I believe Kurzweil might
suspect that the trend he charted has been in place for a long, long time,
though the farther back you get the more abstract the measure has to
become.





-- 
Kirk Israel   [spamblock in effect, use ····@alienbill.com]
"If you feel it, but it isn't right, don't do it and don't 
 believe it. We can be better than natural -- we're human."--Penn Jillette
From: Oliver Sparrow
Subject: Re: Exponential Progress. Was: Re: design a city oriented to
Date: 
Message-ID: <6uh2issp12l790eu8u49pcl1bi75klf77n@4ax.com>
"David Lloyd-Jones" <······@netcom.ca> wrote:

>These, however, are the glitches. My claim is that the accretion of
>information, knowledge, and information processing skills has been going on
>at an awesome rate for maybe 60,000 years now.

Quite a nice model takes a three-dimensional view. On one scale, human
capabilities cumulate: knowledge and interpretations, interpersonal skills,
patterns of daily life. On a second, the machinery of value creation and
consumption becomes separated out from individual lives and becomes
impersonal structures and imperatives. On a third, non-economic
institutions (law, governance, regulation, dispute resolution) develop.

Societies form a cloudy locus between these three axes. Evidence suggests
that there is a tight link between adequacy on each: you need social,
commercial and institutional balances at any level of complexity that are
broadly in line with each other. Low complexity societies have relatively
simple patterns of social ordering, as measured by e.g. division of labour,
life stage segmentation and so forth. The UK has shifted from needing 4
dimensions to classify social opinions by type to 2SDs (1945), to over 120
in 1980, before the idea of 'type' broke down and people became unboxed and
roved between types. Similar things can be said about economic institutions
(subsistence or hunter gatherer to dotconning the venture capitalists) and
about institutions (village council to pre-sherpa agenda shaping for G7
talks-about-talks w.r.t. new institutional development).

Over-reliance on one arm creates e.g. the Asian collapse of 1998: duff
institutions for scale. The World Bank shows that Africa and Asia were a few
hundred dollars per capita apart in 1950: SS Africa is now poorer than
it was then, Asia several thousand dollars per cap richer in real terms.
Difference from econometric fit: institutional adequacy. 

It may well be that Kuhn's hard-to-spot-in-the-record paradigm shifts are
in fact the consequence of tensions being relieved in one of these arms, to
be taken up in another. We get commerce 'right' in the 1980s and then have
to cope with the institutional and social impacts that follow. I suspect
that learning spontaneously how to organise complexity has been the limiting
factor - see (whoops- author?) Corruption and the Decline of Rome for an
e.g. of stasis in a society that did not need to reinvent itself. 

What must be true if this model is true is that combinatorial complexity
increases greatly when advances are made. Guesses are that we are trying
out 100 combinatorial experiments in commerce today for every one that we
tried in 1970. One can probably add a zero to that for 2010 and another for
2020. Finding our way through this is already hard for the less skilled. It
may become a major issue for all, with the prospects of e.g. paralysed
institutions. Widgets - knowledge engineering, AI, soap operas with covert
guidance - may help us through this. The new scarce factor of production is
neither land, labour nor capital, but the ability to surface, evaluate and
exercise adaptive options. That is why capital falls like a misguided
pancake on any-thing or -one who seems to have a clue what they are doing. 
_______________________________

Oliver Sparrow
From: Scott Nudds
Subject: Re: design a city oriented to
Date: 
Message-ID: <8fb2pn$b8f$1@mohawk.hwcn.org>
: Scott Nudds (·····@freenet.hamilton.on.ca) wrote:
: >   Simple ideas will suffice in my view.  AI has been a long-time failure
: > because the "solutions" attempted so far have been constrained by a need
: > to develop solutions that can be implemented on existing hardware, and
: > existing hardware has been vastly too slow to produce anything meaningful.

Kirk Is (·······@andante.cs.tufts.edu) wrote:
: Well, it's not for lack of trying on the hardware part -- in "The Age of
: Spiritual Machines" Kurzweil points out that while Moore's law (the
: exponential growth of computations per second per thousand dollars, which
: extends back to when machines like Babbage's were an engineering
: possibility, in the early 1900s) has held for hardware, it's MUCH less
: clear that software has had anything near that degree of advancement.
: Yes, GUIs and the like have done a very good job of riding on the
: hardware's coattails (and slurping up every order of magnitude of CPU
: power made available to them), but my unstudied instinct (danger!)
: suggests that we really haven't made brilliant progress on the software part.

  To a large extent software is still captive of the hardware.  There is
no point in implementing "solutions" that cannot run effectively on
existing hardware.  As a result there is little to no work being done on
languages for dramatically different hardware which does not exist, and
for which there are no development plans.

  What you see is a seemingly perpetual tweaking of older languages to add
some trivial feature here or some trivial feature there.

  Without dramatic changes you aren't going to see dramatic results.
Without dramatic changes in hardware you aren't going to see dramatic
changes in software.

  It's just that simple.



: So another way of putting Scott Nudds's previous paragraph would be "Well,
: we suspect that our brute force techniques in software might produce
: something resembling intelligence if we had blindingly fast computers to
: run the damn stuff on -- orders of magnitude faster than what we have now
: (even though what we have now is orders of magnitude faster than what we
: had a few years ago, and so on for many generations)"

  The machine I am currently running is about 20,000 times faster than the
first computer I owned.  You could increase the speed by a million and
still not have enough power to produce a reasonable AI entity with the
current types of hardware - large linear RAM store connected to a handful
of accumulators communicating over a single bus - perhaps two.

  One of the essential requirements of a working AI is the ability to do
fuzzy searches on a very large database to come up with a handful of best
fit cases.  Streaming a few gigabytes of data through one CPU over a
single bus isn't going to work at any reasonable speed.  The machine is
going to have to be massively parallel, and designed for performing these
searches rather than designed as a general purpose computing engine.
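
[ A toy, strictly sequential Common Lisp illustration of the kind of
best-fit lookup meant here -- names are made up, and note that this is
exactly the single-CPU full scan being called too slow above. ]

;; Score every case in the database against a probe and keep the
;; handful of best fits.  On one CPU this is a full sequential scan --
;; precisely the single-bus bottleneck described above.
(defun overlap-score (probe case)
  "Crude similarity: how many elements the two feature lists share."
  (length (intersection probe case :test #'equal)))

(defun best-fits (probe database &key (n 5))
  "Return the N cases with the highest overlap with PROBE."
  (subseq (sort (copy-list database) #'>
                :key (lambda (case) (overlap-score probe case)))
          0 (min n (length database))))

;; (best-fits '(red round fruit)
;;            '((red round fruit apple) (green sour fruit) (red square box))
;;            :n 1)
;; => ((RED ROUND FRUIT APPLE))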


: >   In addition, humans think by sensory modeling augmented by linear logic
: > and common sense derived from language.  Other higher animals think in
: > the same way but obviously don't have the advantage of a logical construct
: > from a language.

: >   I don't expect to see AI until machines are made that can experience the
: > world in a manner similar to the experiences the animals have.


: Is it that simple?

  I don't find the task simple at all.  Successful AI is going to have to
be able to very rapidly sample the real world: extract patterns which can
be identified, identify individual objects within the data, and then
abstract them.

  Successful AI is going to need to be able to do that with several
streams of 1D data and at least one stream of 2D data.  In addition it is
going to have to have a sufficient "concept" of 3D space so that it can
model the world in 1D, 2D, and 3D.  


: What do you use for your measure? 

  Of what?


: Our computers are 
: much faster than our brains in some ways, much slower in others.  
: Could there be a point where you say "Ok, we probably have the hardware
: now but we're *still* not getting intelligence -- maybe we're going
: about it wrong?" How can we know that we have the right structural ideas
: that will make real, dynamic intelligence, without seeing the result?

  You won't, not knowing what intelligence is.  Having a definition of "I
know it when I see it" means that there is no well-defined target; as a
result it's going to be trial and error, or "organically" grown through
simulation.
From: Oliver Sparrow
Subject: Re: design a city oriented to
Date: 
Message-ID: <r60jhskfgr1r18fg1bqpjn1ptpiobe4ukl@4ax.com>
·····@freenet.hamilton.on.ca (Scott Nudds) wrote:

>  Successful AI is going to need to be able to do that with several
>streams of 1D data and at least one stream of 2D data.  In addition it is
>going to have to have a sufficient "concept" of 3D space so that it can
>model the world in 1D, 2D, and 3D.  

That is a very interesting idea: searching N space by M<=N orthogonal
bus-processor systems. You may just have invented a new server technology.
I think ANDing would do, so easy h/ware.
_______________________________

Oliver Sparrow
From: David Lloyd-Jones
Subject: Re: design a city oriented to
Date: 
Message-ID: <pEqS4.38961$Xk2.140716@tor-nn1.netcom.ca>
"Oliver Sparrow" <····@chatham.demon.co.uk> wrote
>  (Scott Nudds) wrote:

>> Blah-blah, yurk-yurk..

Oliver, Kirk, Gary,

You guys are obviously bright and thoughtful. Still, would you take this
stuff the hell out of sci.econ.

We try to do serious work over in this ng.

I don't want to hurt your feelings, because, as I say, you are clearly
bright and are doing good stuff. You're just doing it in the wrong place.

                                    Best,

                                           -dlj.
From: Kirk Is
Subject: Re: design a city oriented to
Date: 
Message-ID: <7ugS4.133$m26.3282@news.tufts.edu>
Scott Nudds (·····@freenet.hamilton.on.ca) wrote:
> : Scott Nudds (·····@freenet.hamilton.on.ca) wrote:
> : >   Simple ideas will suffice in my view.  AI has been a long-time failure
> : > because the "solutions" attempted so far have been constrained by a need
> : > to develop solutions that can be implemented on existing hardware, and
> : > existing hardware has been vastly too slow to produce anything meaningful.
[snip]
>   To a large extent software is still captive of the hardware.  There is
> no point in implementing "solutions" that cannot run effectively on
> existing hardware.  As a result there is little to no work being done on
> languages for dramatically different hardware which does not exist, and
> for which there are no development plans.
[snip]
>   Without dramatic changes you aren't going to see dramatic results.
> Without dramatic changes in hardware you aren't going to see dramatic
> changes in software.
[snip]
>   The machine I am currently running is about 20,000 times faster than the
> first computer I owned.  You could increase the speed by a million and
> still not have enough power to produce a reasonable AI entity with the
> current types of hardware - large linear RAM store connected to a handful
> of accumulators communicating over a single bus - perhaps two.

>   One of the essential requirements of a working AI is the ability to do
> fuzzy searches on a very large database to come up with a handful of best
> fit cases.  Streaming a few gigabytes of data through one CPU over a
> single bus isn't going to work at any reasonable speed.  The machine is
> going to have to be massively parallel, and designed for performing these
> searches rather than designed as a general purpose computing engine.

Ok- you're claiming that the software can be simple ideas *if* the
hardware implements some very complex ideas; you're moving the complexity
from soft to hard.

But are you encouraging AI researchers to start commissioning and designing
hardware, or sit back and hope hardware advances in ways that are useful
to developing these systems?  Because those advances you're looking for
might not be a "natural progression" for hardware designers; you have a
classic chicken and egg problem, actually: software writers aren't good at
writing parallel code (or maybe many problems don't lend themselves to
being parallelized); so hardware makers have no incentive to make
groundbreaking hardware.  There's not much great parallel hardware, so the
software writers have nothing to learn on.

But you still say it's not a matter of speed; that no matter how many 
generations of (normal) processor speed doubling we get, we won't be
able to "fake" the parallelization in software; it'll never be fast 
enough, we *need* it to be hardwired?  

Hmm.

-- 
Kirk Israel   [spamblock in effect, use ····@alienbill.com]
"SANTA HAS A TUMOR IN HIS HEAD THE SIZE OF AN OLIVE. MAYBE IT WILL GO 
 AWAY TOMORROW BUT I DON'T THINK SO."
--sign language by Crumpet the Macy's SantaLand Elf (David Sedaris)
From: Gary Forbis
Subject: Re: design a city oriented to
Date: 
Message-ID: <#McsF7ru$GA.232@cpmsnbbsa03>
Kirk Is <·······@andante.cs.tufts.edu> wrote in message
·······················@news.tufts.edu...
> But are you encouraging AI researchers to start commissioning and designing
> hardware, or sit back and hope hardware advances in ways that are useful
> to developing these systems?  Because those advances you're looking for
> might not be a "natural progression" for hardware designers; you have a
> classic chicken and egg problem, actually: software writers aren't good at
> writing parallel code (or maybe many problems don't lend themselves to
> being parallelized); so hardware makers have no incentive to make
> groundbreaking hardware.  There's not much great parallel hardware, so the
> software writers have nothing to learn on.

Parallel processing is going on behind the scenes.  For instance, most
(if not all) commercial microprocessors being built today are superscalar.
By using speculative execution a branch address and subsequent instruction
can process at the same time.  If the branch is taken the effects of the
subsequent instruction can be thrown away.  This is accomplished by
caching the internal registers as if they were onboard memory.

Parallelization seems like a compiler problem to me.  Algorithms should
know as little as possible about the hardware on which they run.
(The C compiler should generate the same code for I++ and I=I+1)
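
[ In Lisp terms the same point holds: INCF is a macro, so the shorthand
and the longhand reach the compiler as literally the same form.  A quick
check -- with the caveat that the exact expansion varies by
implementation. ]

;; For a simple variable, INCF typically expands straight into the
;; SETQ form, so there is nothing special for the compiler to optimize:
(macroexpand-1 '(incf i))   ; => (SETQ I (+ I 1)) or similar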
From: Kirk Is
Subject: Re: design a city oriented to
Date: 
Message-ID: <v8lS4.139$m26.3507@news.tufts.edu>
Gary Forbis (··········@email.msn.com) wrote:

> Parallel processing is going on behind the scenes.  For instance, most
> (if not all) commercial microprocessors being built today are superscalar.
> By using speculative execution a branch address and subsequent instruction
> can process at the same time.  If the branch is taken the effects of the
> subsequent instruction can be thrown away.  This is accomplished by
> caching the internal registers as if they were onboard memory.

Yes, that is a good point; we are getting away from the traditional von
Neumann architecture in ways we might not be aware of.  Still, if anything
we're using a modestly parallel architecture to emulate a traditional
sequential one.  And then some AI writers are using *that* in turn to
emulate a parallel architecture! And thus, the circle is complete. 


> Parallelization seems like a compiler problem to me.  Algorithms should
> know as little as possible about the hardware on which they run.
> (The C compiler should generate the same code for I++ and I=I+1)

But seriously, I think while compilers and processors can do some
tweaking, making a compiler that could do *that* level of optimization is
a *seriously* difficult AI problem.  I almost wonder if you'd get into
the territory of the Halting Problem.

-- 
Kirk Israel   [spamblock in effect, use ····@alienbill.com]
Chanting against Nazism is like drinking for sobriety.
   --http://www.subatomichumor.com
From: Frank A. Adrian
Subject: Re: design a city oriented to
Date: 
Message-ID: <KPLS4.1220$cZ3.51794@news.uswest.net>
Scott Nudds <·····@freenet.hamilton.on.ca> wrote in message
·················@mohawk.hwcn.org...
>   To a large extent software is still captive of the hardware.  There is
> no point in implementing "solutions" that can not run effectively on
> existing hardware.  As a result there is little to no work being done on
> languages for dramatically different hardware which does not exist, and
> for which there are no plans to develop.

Why of course there is!  It would be to get hardware to develop in a
different direction.  You assume that the hardware dog wags the software
tail.  The amount spent on software now far surpasses that spent on hardware
and yet people still see hardware as more real or more important.  Repeat
after me - hardware without software is just a box of plastic and molten
sand; software without hardware is just conceptual crap.  They need each
other to work together, and they both should be designed together.  However,
the view that the box is more important than the bits still persists!  And
even among software designers!  It seems an extremely lopsided relationship.
And as long as the software dog allows itself to be wagged by its hardware
tail, it will continue.  Especially when software folk are convinced that
they are "captives of the hardware".

faa
From: Kirk Is
Subject: Re: design a city oriented to
Date: 
Message-ID: <VsUS4.145$m26.3655@news.tufts.edu>
Frank A. Adrian (·······@uswest.net) wrote:
> Why of course there is!  It would be to get hardware to develop in a
> different direction.  You assume that the hardware dog wags the software
> tail.  The amount spent on software now far surpasses that spent on hardware
> and yet people still see hardware as more real or more important.  Repeat
> after me - hardware without software is just a box of plastic and molten
> sand; software without hardware is just conceptual crap.  They need each
> other to work together, and they both should be designed together.  However,
> the view that the box is more important than the bits still persists!  And
> even among software designers!  It seems an extremely lopsided relationship.
> And as long as the software dog allows itself to be wagged by its hardware
> tail, it will continue.  Especially when software folk are convinced that
> they are "captives of the hardware".

Thinking about "real world" hardware in the 1980s and 1990s; it was
probably the rampant piracy that helped PC clones become cheap and
plentiful; people could spend money on hardware when the software itself
was going to be "liberated" from work or a friend.  And yet, it's the
software company that became super rich and powerful...



-- 
Kirk Israel   [spamblock in effect, use ····@alienbill.com]
"Here I am; I'm here-- in my mind, and yours, it seems.
 Please don't hold me too dear. Some dreams are unrealized."
From: Scott Nudds
Subject: Re: design a city oriented to
Date: 
Message-ID: <8fl5dp$buo$1@mohawk.hwcn.org>
Frank A. Adrian (·······@uswest.net) wrote:
: And as long as the software dog allows itself to be wagged by its hardware
: tail, it will continue.  Especially when software folk are convinced that
: they are "captives of the hardware".

  You are of course free to simulate hardware consisting of 1E6 processors
running in parallel.

  Your simulation will run 1E8 times slower than the real machine provided
you are using a single processor.

  One second on the "real thing" will take you 3.17 years.
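
  (Arithmetic check: 1e8 seconds / ~3.15e7 seconds per year ~ 3.17
  years; the 1e8 slowdown presumably bundles the 1e6 processor count
  with a ~100x per-instruction cost of simulating each one.)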

  Comprehend?


  
From: Kirk Is
Subject: Re: design a city oriented to
Date: 
Message-ID: <OAxT4.158$m26.3790@news.tufts.edu>
Scott Nudds (·····@freenet.hamilton.on.ca) wrote:
> Frank A. Adrian (·······@uswest.net) wrote:
> : And as long as the software dog allows itself to be wagged by its hardware
> : tail, it will continue.  Especially when software folk are convinced that
> : they are "captives of the hardware".

>   You are of course free to simulate hardware consisting of 1E6 processors
> running in parallel.

>   Your simulation will run 1E8 times slower than the real machine provided
> you are using a single processor.

>   One second on the "real thing" will take you 3.17 years.

>   Comprehend?

Or wait for 27 generations of Moore's Law Doubling to occur (what is that,
40-50 years?) and your serial emulation will run as fast as the old
parallel one. Hmmm, not very appealing.
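
(For the record: 2^27 ~ 1.3e8, which matches the 1e8 slowdown above,
and 27 doublings at the usual 18-24 months each is roughly 40-54
years -- hence the 40-50 guess.)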

Still, I'm not sure if there are sufficient motivations for mainstream
computer makers to push truly parallel hardware (as opposed to the
speculative lookahead being done now, which is more of a clever hack
than true parallelism); we're only good at putting a certain class of
problems in parallel terms.

-- 
Kirk Israel   [spamblock in effect, use ····@alienbill.com]
"Let 'em all go to hell except cave 76"
	--The 2000 year old man singing an early national anthem
From: Rick Craik
Subject: Re: design a city oriented to
Date: 
Message-ID: <%nBT4.22$fe2.1249@198.235.216.4>
Kirk Is wrote in message ...
>Scott Nudds (·····@freenet.hamilton.on.ca) wrote:
>> Frank A. Adrian (·······@uswest.net) wrote:
>> : And as long as the software dog allows itself to be wagged by its
>> : hardware tail, it will continue.  Especially when software folk are
>> : convinced that they are "captives of the hardware".
>
>>   You are of course free to simulate hardware consisting of 1E6
>> processors running in parallel.
>
>>   Your simulation will run 1E8 times slower than the real machine
>> provided you are using a single processor.
>
>>   One second on the "real thing" will take you 3.17 years.
>
>>   Comprehend?
>
>Or wait for 27 generations of Moore's Law Doubling to occur (what is that,
>40-50 years?) and your serial emulation will run as fast as the old
>parallel one. Hmmm, not very appealing.
>
>Still, I'm not sure if there are sufficient motivations for mainstream
>computer makers to push truly parallel hardware (as opposed to the
>speculative lookahead being done now, which is more of a clever hack
>than true parallelism); we're only good at putting a certain class of
>problems in parallel terms.

    I remember a thread in comp.ai.games about a year ago that touched
on this subject. An AI add-on card was discussed, where it would be plugged
in to a PC much like a video, or sound card. (The only way I could see
this ever working was to bundle the add-on card with the game software.)

    I could see a parallel processing card as doing just simple matrix
calculations, for some pattern recognition, etc. For example, in cellular
automata, Conway's Game Of Life could be written as a parallel processing
application (as summations and thresholds). The architecture of the
parallel processors need not be the same as a modern central processing
unit (CPU).
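
    To make "summations and thresholds" concrete, here is a minimal
sketch in C (my own toy code: an 8x8 wraparound grid, one synchronous
Life step).  Each new cell is just its 8-neighbour sum pushed through a
threshold rule, and no cell depends on another cell's *new* value, so a
parallel card could update every cell simultaneously.

#include <stdio.h>

#define W 8
#define H 8

/* One synchronous step of Conway's Life as a "summation and threshold"
 * operation: each new cell is a function of its 8-neighbour sum only,
 * so every cell could be updated by a separate processor at once. */
static void life_step(int in[H][W], int out[H][W])
{
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            int sum = 0;
            for (int dy = -1; dy <= 1; dy++)      /* sum the 8 neighbours, */
                for (int dx = -1; dx <= 1; dx++)  /* wrapping at the edges */
                    if (dx || dy)
                        sum += in[(y + dy + H) % H][(x + dx + W) % W];
            /* threshold: a cell is born on 3, survives on 2 or 3 */
            out[y][x] = (sum == 3) || (in[y][x] && sum == 2);
        }
}

int main(void)
{
    int a[H][W] = {{0}}, b[H][W];
    a[1][2] = a[2][3] = a[3][1] = a[3][2] = a[3][3] = 1;   /* a glider */
    life_step(a, b);
    for (int y = 0; y < H; y++) {               /* print the next step */
        for (int x = 0; x < W; x++)
            putchar(b[y][x] ? '#' : '.');
        putchar('\n');
    }
    return 0;
}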


Rick
From: Frank A. Adrian
Subject: Re: design a city oriented to
Date: 
Message-ID: <m9sU4.481$mi6.17548@news.uswest.net>
Scott Nudds <·····@freenet.hamilton.on.ca> wrote in message
·················@mohawk.hwcn.org...
> Frank A. Adrian (·······@uswest.net) wrote:
> : And as long as the software dog allows itself to be wagged by its
> : hardware tail, it will continue.  Especially when software folk are
> : convinced that they are "captives of the hardware".
>
>   You are of course free to simulate hardware consisting of 1E6 processors
> running in parallel.
>
>   Your simulation will run 1E8 times slower than the real machine provided
> you are using a single processor.
>
>   One second on the "real thing" will take you 3.17 years.
>
>   Comprehend?

Of course I comprehend.  It's still a question of cost in the end.  You
choose to write your software to run on hardware that makes programming
difficult.  You (or whoever hires you) pay the price in substantially
higher development costs.  As long as your managers are clueless about the
money that the hardware folk suck away from them, you can take that
position.  You and most of the world still think that hardware is
expensive.  Wake up.  Hardware's cheap and becoming ubiquitous.  Software is
the tough part.  As long as you think that one should subjugate software
construction to what current hardware runs well, rather than looking at what
makes programming easier, you're part of the problem, not part of the
solution.  Wake up and smell the bits.

faa
From: David Lloyd-Jones
Subject: Re: design a city oriented to
Date: 
Message-ID: <vngM4.30055$Xk2.112247@tor-nn1.netcom.ca>
"Arthur T. Murray" <·····@victoria.tc.ca> wrote >
> http://www.ugcs.caltech.edu/~phoenix/vinge/vinge-sing.html Vinge's
> Technological Singularity scares me and remains my favorite AI text.

I was at a seminar on robot intelligence at the Center for the Study of
Democratic Institutions in 1968 where one of the moderators opened up with a
jovial "We thought we ought to get to work on this stuff before the robots
are here at the table voting with us." That was a generation ago now.

Vinge's paper above is more of the same kind of stupidity. I think the
generic name for this bumf is Californication.

There is no Singularity in the future. There are a hundred small
singularities in the past. It's now about 250 years since a thread-cutting
machine could make a better screw than a master machinist at his lathe.
Machines have been better at arithmetic for much of that same period, at
bookkeeping for perhaps 110 years, and at chess for three years now, is it?

So what?

There have certainly been major drafts through the hallways of our thought
about what it is to be human, but they have not had much to do with the
power of machines. The Somme and the Holocaust have had major impacts on our
view of ourselves, essentially destroying the 19th century religious view of
"Man" after Darwin, and perhaps Marx and Freud, had nibbled away at the
foundations. The machine-gun and the railway were technologies of this
change, but humanity has suffered through mass effects before, Tamerlane or
the Plagues being examples.

I do not doubt Vinge's assertion that machines will have greater intellectual
power than humans in the very near future. It just strikes me as a rather
uninteresting observation. They aren't riding horses or carrying composite
bows and short swords.

                                                              -dlj