"Gary Forbis" wrote:
> Parallel processing is going on behind the scenes. For instance, most
> (if not all) commercial microprocessors being built today are superscalar.
> By using speculative execution a branch address and subsequent instruction
> can be processed at the same time.
But that only gets you so far. You can effectively reduce the average
time per instruction to perhaps a third of a cycle. You can't just keep
adding execution units to increase speed: dependencies between
instructions limit the available parallelism, and you pretty much bottom
out at about four units.
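The diminishing returns from wider issue can be illustrated with a toy scheduling model (the dependency pattern and the greedy scheduler here are my own invention for illustration, not a model of any real pipeline): instructions in dependency chains can't issue together, so past a point extra execution units sit idle.

```python
# Toy model: list-schedule a stream of instructions with data
# dependencies onto k execution units and see how speedup plateaus.
# The instruction stream and dependency pattern are invented for
# illustration; real superscalar pipelines are far more complex.

def schedule(deps, k):
    """Greedy schedule: each cycle, issue up to k ready instructions.
    deps[i] is the set of instructions i depends on.
    Returns the number of cycles used."""
    n = len(deps)
    done = set()
    cycles = 0
    while len(done) < n:
        # An instruction is ready once all its dependencies finished.
        ready = [i for i in range(n) if i not in done and deps[i] <= done]
        for i in ready[:k]:
            done.add(i)
        cycles += 1
    return cycles

# A stream of short dependency chains: every third instruction starts
# a fresh chain; the other two each depend on their predecessor.
n = 30
deps = [set() if i % 3 == 0 else {i - 1} for i in range(n)]
base = schedule(deps, 1)
for k in (2, 4, 8):
    print(f"{k} units: speedup {base / schedule(deps, k):.1f}x")
```

Running this shows the speedup climbing quickly from one to a few units and then flattening, since the chains themselves serialize part of the work.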
"Gary Forbis" wrote:
> Parallelization seems like a compiler problem to me. Algorithms should
> know as little as possible about the hardware on which they run.
> (The C compiler should generate the same code for I++ and I=I+1)
This is the kind of limited thinking that illustrates my point.
Forbis is thinking in terms of increment, branch, test - all traditional
components of programming.
Neurons don't branch, increment, test, multiply, and so on.
You can simulate neurons with branches, increments, tests, multiplies,
and so on, but you pay a performance penalty, in this instance several
orders of magnitude.
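To make the overhead concrete, here is a minimal sketch of what one synchronous update of a fully connected layer of model neurons costs when done with ordinary serial instructions (the threshold model, weights, and sizes are invented for illustration; real neural simulation is far richer):

```python
# One update of n threshold neurons with n inputs each takes roughly
# n*n multiplies and adds plus n compares on a serial machine, while
# the biological system does all of it concurrently.
import random

def step(weights, inputs, threshold=0.5):
    """Each neuron fires (1) if its weighted input sum crosses the
    threshold, else stays silent (0)."""
    out = []
    for row in weights:                               # one pass per neuron
        s = sum(w * x for w, x in zip(row, inputs))   # n multiplies, n adds
        out.append(1 if s > threshold else 0)         # one compare/branch
    return out

random.seed(0)
n = 100
weights = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
inputs = [random.random() for _ in range(n)]
print(sum(step(weights, inputs)), "of", n, "neurons fired")
```

Even this tiny layer burns ten thousand multiplies per tick; scale n toward biological numbers and the orders-of-magnitude penalty of serializing inherently parallel work becomes obvious.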
If your only tool is a hammer, then every problem appears to be a
nail.