From: Scott Nudds
Subject: Re: design a city oriented to
Date: 
Message-ID: <8foh7h$4n7@james.hwcn.org>
> >   To a large extent software is still captive of the hardware.  There is
> > no point in implementing "solutions" that can not run effectively on
> > existing hardware.  As a result there is little to no work being done on
> > languages for dramatically different hardware which does not exist, and
> > for which there are no plans to develop.
> [snip]
> >   Without dramatic changes you aren't going to see dramatic results.
> > Without dramatic changes in hardware you aren't going to see dramatic
> > changes in software.
> [snip]
> >   The machine I am currently running is about 20,000 times faster than the
> > first computer I owned.  You could increase the speed by a million and
> > still not have enough power to produce a reasonable AI entity with the
> > current types of hardware - large linear RAM store connected to a handful
> > of accumulators communicating over a single bus - perhaps two.

> >   One of the essential requirements of a working AI is the ability to do
> > fuzzy searches on a very large database to come up with a handful of best
> > fit cases.  Streaming a few gigabytes of data through one CPU over a
> > single bus isn't going to work at any reasonable speed.  The machine is
> > going to have to be massively parallel, and designed for performing these
> > searches rather than designed as a general purpose computing engine.
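
  (To make that kind of search concrete, here is a minimal sketch of a
parallel best-fit scan.  It is illustrative only - the toy similarity
score, the pool size, and the Python itself are stand-ins for what
would really be done in hardware.)

    # Illustrative only: a fuzzy best-fit scan over a large case
    # database, split across worker processes instead of streamed
    # through one CPU over a single bus.
    from multiprocessing import Pool

    def score(pair):
        query, case = pair
        # toy similarity: fraction of matching features, standing in
        # for a real fuzzy metric
        return sum(1 for q, c in zip(query, case) if q == c) / len(query)

    def best_fits(query, cases, n=5, workers=8):
        with Pool(workers) as pool:
            scores = pool.map(score, [(query, c) for c in cases])
        ranked = sorted(zip(scores, range(len(cases))), reverse=True)
        return ranked[:n]  # handful of (score, index) best-fit cases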

(Kirk Is) wrote:
> Ok- you're claiming that the software can be simple ideas *if* the
> hardware implements some very complex ideas; you're moving the complexity
> from soft to hard.

  Neurons are not computationally complex.


(Kirk Is) wrote:
> But are you encouraging AI researchers to start commissioning and designing
> hardware, or sit back and hope hardware advances in ways that are useful
> to developing these systems?

  I recommend building specialized hardware.  Specialized hardware has
the potential of running millions of times faster than the general
purpose processors that are currently being used.  If the hardware
isn't there, you don't have the ability to produce a practical
solution to the problem, and you aren't going to explore solutions of
that type.

(Kirk Is) wrote:
> Because those advances you're looking for
> might not be a "natural progression" for hardware designers; you have a
> classic chicken and egg problem, actually: software writers aren't good at
> writing parallel code (or maybe many problems don't lend themselves to
> being parallelized); so hardware makers have no incentive to make
> groundbreaking hardware.  There's not much great parallel hardware, so the
> software writers have nothing to learn on.

  Yes.  And the simple kinds of parallelism that currently exist are
just extensions of single-processor methods.  There is a little
difference around the edges, but the paradigm is still primarily a
linear one - even on parallel systems.

  Our mind, however, is not linear.  Our verbal thoughts are.  They
embody the mind's formal system of logic.  However, our other methods
of thinking are generally not linear.  We think in large part by
modeling the world as we would experience it with our senses.  The
method of recall isn't linear either; it's massively parallel.

(Kirk Is) wrote:
> But you still say it's not a matter of speed; that no matter how many 
> generations of (normal) processor speed doubling we get, we won't be 
> able to "fake" the parallelization in software; it'll never be fast 
> enough, we *need* it to be hardwired?  

  We can approximate.  There are 1E11 neurons in the brain, with 1E2
connections per neuron and each neuron firing 1E3 times a second, so
1E16 events need to be simulated per second.  With a conventional CPU
this may take 100 instructions per event - say 50 cycles.  This means
that the aggregate computational frequency of an artificial mind
composed of general purpose CPUs would be 5E17 Hz, or 500 million GHz.

  CPU speeds are essentially already at their limit with silicon.
There is perhaps another order of magnitude left.  With GaAs you can
get perhaps another two orders of magnitude.  Getting there will take
another 30 years.  At that time you will only need a parallel system
of 500,000 CPUs to simulate the human mind.
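
  (A sanity check on the arithmetic, in Python.  The figures are the
order-of-magnitude estimates above, not measurements:)

    # Back-of-envelope check of the conventional-CPU estimate.
    neurons = 1e11        # neurons in the brain
    connections = 1e2     # connections per neuron
    firing_rate = 1e3     # firings per neuron per second

    events_per_sec = neurons * connections * firing_rate  # 1e16
    cycles_per_event = 50                                 # ~100 instructions
    total_hz = events_per_sec * cycles_per_event          # 5e17 Hz

    # ~1 GHz today, 10x left in silicon, 100x more from GaAs
    cpu_hz = 1e9 * 10 * 100
    print(f"{total_hz:.0e} Hz -> {total_hz / cpu_hz:,.0f} CPUs")
    # 5e+17 Hz -> 500,000 CPUs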

  On the other hand, using a specialized CPU you could probably
simulate a neuron with 1,000 transistors, so 100,000 neurons can be
simulated per chip with current die and transistor sizes.  The thing
is, you can do so 1 million times faster than real neurons, so that
works out to perhaps 1E11 neuron-equivalents per chip - more
realistically 1E10.  With the same improvements in capacity mentioned
above, it would require 5 special purpose CPUs, not 500,000 general
purpose ones.
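
  (The same check for the specialized hardware.  The 1E8 transistors
per die is my assumption for a current chip, and the factor-of-10
derating is the "more realistically 1E10" above:)

    # Back-of-envelope check of the special-purpose estimate.
    transistors_per_die = 1e8      # assumed figure for a current die
    transistors_per_neuron = 1e3
    neurons_per_chip = transistors_per_die / transistors_per_neuron  # 1e5

    speedup = 1e6                  # silicon vs. real neurons (~1 kHz)
    effective = neurons_per_chip * speedup / 10  # 1e11, derated to 1e10

    brain_neurons = 1e11
    print(f"special purpose chips: {brain_neurons / effective:.0f}")
    # special purpose chips: 10 - a handful, versus 500,000
    # general purpose CPUs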


-- 
<---->
From: Kirk Is
Subject: Re: design a city oriented to
Date: 
Message-ID: <HbST4.163$m26.3665@news.tufts.edu>
Watch your followups- sci.econ has kvetched about this subthread being
there. Newsgroups trimmed.

Scott Nudds (·····@james.hwcn.org) wrote:
> (Kirk Is) wrote:
> > Ok- you're claiming that the software can be simple ideas *if* the
> > hardware implements some very complex ideas; you're moving the complexity
> > from soft to hard.

>   Neurons are not computationally complex.

No, but two things.  First, they *are* more complex than we first
assumed: there may be important side effects (having to do with their
biochemical nature) that we don't understand well enough to emulate,
because we assumed they weren't important.  Second, you're assuming
that we know enough to get the "honkin' big bag of neurons" doing
useful things.  The brain isn't a homogeneous batch of neurons; there
are differentiated areas, some present from birth, others formed
during development, that are probably crucial to thought as we
understand it.  Also, you need to be clever about the environment you
use to build up your neural network.

So we suspect that "simple methods will work", but the evidence isn't
completely there yet.


>   Our mind, however, is not linear.  Our verbal thoughts are.  They
> embody the mind's formal system of logic.  However, our other methods
> of thinking are generally not linear.  We think in large part by
> modeling the world as we would experience it with our senses.  The
> method of recall isn't linear either; it's massively parallel.

You're right; that's one of the things I just recently got out of
Dennett - the view of rational thought as a kind of virtual linear
machine running on parallel hardware.  And Turing and von Neumann
modeled their machines on that linear model.  Rational, step-by-step
thinking - what these linear architectures model - is very good at
certain types of tasks, but doesn't do so well with incomplete
information, and is too slow and expensive for 'everyday' (well,
'everymoment') use.

[snip interesting 'back of envelope' calculations for traditional v.
parallel architecture potential speeds and requirements.]

-- 
Kirk Israel   [spamblock in effect, use ····@alienbill.com]
"I tell you, we are here on Earth to fart around, 
 and don't let anybody tell you any different." --Kurt Vonnegut