From: Frank Goenninger DG1SBG
Subject: Software Designed Silicon - build your own Lisp Processor
Date: 
Message-ID: <lzejdylt2t.fsf@pcsde001.de.goenninger.net>
Just found

http://www.xmos.com/sds-tech.pdf

True parallel threads and programmable in C. Wondering how some Lisp
implementation would do on one of these chips...

Frank

-- 

  Frank Goenninger

  frgo(at)mac(dot)com

  "Don't ask me! I haven't been reading comp.lang.lisp long enough to 
  really know ..."

From: Tim Bradshaw
Subject: Re: Software Designed Silicon - build your own Lisp Processor
Date: 
Message-ID: <e09ce5a0-ed81-48b5-8f6b-c2a4e59e4610@n20g2000hsh.googlegroups.com>
On Dec 7, 10:03 am, Frank Goenninger DG1SBG
>
> True parallel threads and programmable in C. Wondering how some Lisp
> implementation would do on one of these chips...
>

Why would they do any better than they do on commodity hardware?  Do
Lisp implementors really still think that special-purpose hardware is
the answer?
From: Daniel Weinreb
Subject: Re: Software Designed Silicon - build your own Lisp Processor
Date: 
Message-ID: <Hhc6j.13961$xB.204@trndny06>
Tim Bradshaw wrote:
> On Dec 7, 10:03 am, Frank Goenninger DG1SBG
>> True parallel threads and programmable in C. Wondering how some Lisp
>> implementation would do on one of these chips...
>>
> 
> Why would they do any better than they do on commodity hardware?  Do
> Lisp implementors really still think that special-purpose hardware is
> the answer?

It depends what the question is.

But just to run Lisp for the usual reasons that one runs Lisp?
Nope, not any more.  For various cool new ideas in system
architecture, like transactional memory and capability
architectures, though, which might possibly be part of
some future Lisp-like language (I know that's not what
you asked)...
From: Tim Bradshaw
Subject: Re: Software Designed Silicon - build your own Lisp Processor
Date: 
Message-ID: <a773edb8-3bb4-4afe-8255-267cadf1572f@l1g2000hsa.googlegroups.com>
On Dec 7, 1:55 pm, Daniel Weinreb <····@alum.mit.edu> wrote:
>
> But just to run Lisp for the usual reasons that one runs Lisp?
> Nope, not any more.  For various cool new ideas in system
> architecture, like transactional memory and capability
> architectures, though, which might possibly be part of
> some future Lisp-like language (I know that's not what
> you asked)...

I think my point was that those things will become commodity (or,
well, mainstream anyway), if they are useful, and if they are useful
for other languages they will be useful for Lisp too.  (For instance
Sun's Rock processor will have transactional memory, and Niagara
already has 8 cores with 4 HW threads per core, with (I am sure)
significantly more to come.  I happen to know about SPARC-family stuff
but I am sure other people are doing the same).

So I guess what I'm trying to say is: do people think that Lisp's HW
requirements are sufficiently *different*?

--tim
From: Frode Vatvedt Fjeld
Subject: Re: Software Designed Silicon - build your own Lisp Processor
Date: 
Message-ID: <2hbq927cbw.fsf@vserver.cs.uit.no>
Tim Bradshaw <··········@tfeb.org> writes:

> So I guess what I'm trying to say is: do people think that Lisp's HW
> requirements are sufficiently *different*?

One thing that is somewhat different (from C) is the indirect function
call that is (at least conceptually) so predominant in Lisp. Perhaps
branch-prediction could be better tuned to Lisp's needs? Since branch
prediction is a non-functional aspect of x86, perhaps we could have
"optimized for dynamic languages"-variants of these CPUs? :-)

Anyways, it'd be interesting to hear what others perceive to be
Lisp-specific performance impediments in current CPUs.

-- 
Frode Vatvedt Fjeld
From: Rayiner Hashem
Subject: Re: Software Designed Silicon - build your own Lisp Processor
Date: 
Message-ID: <740d9d15-0807-4e85-8682-fd87beb76eb3@s19g2000prg.googlegroups.com>
> One thing that is somewhat different (from C) is the indirect function
> call that is (at least conceptually) so predominant in Lisp. Perhaps
> branch-prediction could be better tuned to Lisp's needs? Since branch
> prediction is a non-functional aspect of x86, perhaps we could have
> "optimized for dynamic languages"-variants of these CPUs? :-)

What do you mean by "non-functional"?

Indirect branches/calls are predicted on most modern x86 CPUs, using
the branch-target-buffer (BTB). If the branch predictor decides that a
branch will be taken, it looks up the branch instruction in the BTB,
finds the predicted target address, and starts fetching from there.
The predicted target address is the address which the instruction
jumped to last time. This works for both direct and indirect branches.

The Core 2* works slightly differently. It has a special, very large,
BTB dedicated to indirect branches. Branches that regularly jump to
different targets (eg: a polymorphic call-site) are allocated multiple
entries in the BTB, and the CPU will use the recent branch history of
the program (the taken/not-taken status of the last N branches) to
attempt to recognize patterns in the indirect jump targets and predict
them correctly. Obviously, this pattern-recognition does you no good
if the jump is completely data-dependent.

*) I believe Barcelona works similarly, though apparently the indirect
BTB is much smaller (512 entries versus 8192 entries).
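
To illustrate the data-dependent case (a Common Lisp sketch, purely
illustrative and not tied to any particular CPU): a generic-function
call site is an indirect branch, and whether history-based prediction
helps depends on how the argument values arrive.

  ;; SPEAK dispatches on its argument, so the call sites below are
  ;; indirect branches whose targets vary.
  (defgeneric speak (animal))
  (defmethod speak ((a (eql :dog))) "woof")
  (defmethod speak ((a (eql :cat))) "meow")

  (defun patterned (n)
    ;; Alternates :dog/:cat -- a history-based indirect predictor can
    ;; learn this repeating pattern of targets.
    (loop for x = :dog then (if (eq x :dog) :cat :dog)
          repeat n
          do (speak x)))

  (defun data-dependent (animals)
    ;; Targets depend entirely on the input sequence -- if it is
    ;; effectively random, prediction buys nothing.
    (map nil #'speak animals))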
From: Frode Vatvedt Fjeld
Subject: Re: Software Designed Silicon - build your own Lisp Processor
Date: 
Message-ID: <2h3aue6zd9.fsf@vserver.cs.uit.no>
I wrote:

> > One thing that is somewhat different (from C) is the indirect function
> > call that is (at least conceptually) so predominant in Lisp. Perhaps
> > branch-prediction could be better tuned to Lisp's needs? Since branch
> > prediction is a non-functional aspect of x86, perhaps we could have
> > "optimized for dynamic languages"-variants of these CPUs? :-)

Rayiner Hashem <·······@gmail.com> writes:

> What do you mean by "non-functional"?

That the operation of branch-prediction will not have any impact on
the correctness of any program (if we disregard real-time aspects of
"correctness"), just execution speed.

> Indirect branches/calls are predicted on most modern x86 CPUs, using
> the branch-target-buffer (BTB). If the branch predictor decides that
> a branch will be taken, it looks up the branch instruction in the
> BTB, finds the predicted target address, and starts fetching from
> there. The predicted target address is the address which the
> instruction jumped to last time. This works for both direct and
> indirect branches.

Yes indeed, this sounds like a good fit for Lisp. I just got the
impression from various optimization guides (that may be outdated)
that one should avoid indirect calls at almost any cost.

> The Core 2* works slightly differently. It has a special, very
> large, BTB dedicated to indirect branches. Branches that regularly
> jump to different targets (eg: a polymorphic call-site) are
> allocated multiple entries in the BTB, and the CPU will use the
> recent branch history of the program (the taken/not-taken status of
> the last N branches) to attempt to recognize patterns in the
> indirect jump targets and predict them correctly. Obviously, this
> pattern-recognition does you no good if the jump is completely
> data-dependent.

Thank you for these explanations.

> *) I believe Barcelona works similarly, though apparently the
> indirect BTB is much smaller (512 entries versus 8192 entries).

Perhaps this could account for some of the mediocre performance that
Espen reported? I suppose Lisp would tend to have a bigger "working
set" of indirect call-sites than static software.

-- 
Frode Vatvedt Fjeld
From: Rayiner Hashem
Subject: Re: Software Designed Silicon - build your own Lisp Processor
Date: 
Message-ID: <fecc35db-c336-4cc8-81d9-f3dafa3e10af@s19g2000prg.googlegroups.com>
> Perhaps this could account for some of the mediocre performance that
> Espen reported? I suppose Lisp would tend to have a bigger "working
> set" of indirect call-sites than static software.

If he's seeing similar performance from K8, then the indirect branch
prediction is unlikely to be the issue. The delta is probably due to
the fact that Core 2 is just plain a better architecture than K8. For
example, it issues four instructions per cycle instead of K8's three.
It has a greater capability to reorder instructions to hide memory
latency. It does optimistic reordering of memory accesses, relying on
a fixup unit that catches the rare cases when the reordering turns
out to be invalid. K8/K10 is a very nice design, though; it's just
getting long in the tooth.
From: Espen Vestre
Subject: Re: Software Designed Silicon - build your own Lisp Processor
Date: 
Message-ID: <m1tzmuecb1.fsf@gazonk.netfonds.no>
Frode Vatvedt Fjeld <······@cs.uit.no> writes:

> Anyways, it'd be interesting to hear what others perceive to be
> Lisp-specific performance impediments in current CPUs.

I don't have that, but I have some real-life data points which I
thought might be interesting to a few of you: It's a very good idea to
test different x86 variants before buying. We continuously test new
cpus (we have a few hardware junkies here ;)) with our lisp server
software (built with LW for linux), and the cpu design, especially
pipeline length, does indeed heavily influence performance. Our
current winner is the Core 2 platform, especially when running 64 bit
linux, and the next best is AMD Opteron. We've already tested AMDs new
quad-core Barcelona cpus, and they were quite disappointing, but the
absolute worst is any P4-descendant, even the half-year old Xeons of
that breed (that would be the 5000 series if I remember their
nomenclature right). If you run your current lisp server sw on P4-type
Xeons, you may as well consider moving them to a 5-year old Pentium M
laptop; it might actually be faster :-)
-- 
  (espen)
From: David Golden
Subject: Re: Software Designed Silicon - build your own Lisp Processor
Date: 
Message-ID: <Fsf6j.23631$j7.445151@news.indigo.ie>
Espen Vestre wrote:

> We've already tested AMDs new 
> quad-core Barcelona cpus, and they were quite disappointing,

Hmm.  If you have done it already, presumably it was from the first run
of {performance embarrassingly crippled by hardware bug or at least the
workaround for said bug} Barcelonas, though?
http://www.tech-report.net/discussions.x/13721
http://techreport.com/discussions.x/13742

Maybe Lisp could be an extra-bad load for systems with that bug
workaround enabled, too, if indeed you had the bug workaround enabled.

I did want to get a Barcelona rather than an Intel system -
AMD still, in theory, have the better architecture.  If they could only
implement it without screwing up.   Looks like I should wait
until well into next year (probably sensible anyway...) before
upgrading...
From: Rainer Joswig
Subject: Re: Software Designed Silicon - build your own Lisp Processor
Date: 
Message-ID: <joswig-1DB1E6.18560307122007@news-europe.giganews.com>
In article <·····················@news.indigo.ie>,
 David Golden <············@oceanfree.net> wrote:

> Espen Vestre wrote:
> 
> > We've already tested AMDs new 
> > quad-core Barcelona cpus, and they were quite disappointing,
> 
> Hmm.  If you have done it already, presumably it was from the first run
> of {performance embarrassingly crippled by hardware bug or at least the
> workaround for said bug} Barcelonas, though?
> http://www.tech-report.net/discussions.x/13721
> http://techreport.com/discussions.x/13742
> 
> Maybe Lisp could be an extra-bad load for systems with that bug
> workaround enabled, too, if indeed you had the bug workaround enabled.
> 
> I did want to get a barcelona rather than an intel system -
> amd still, in theory, have the better architecture.  If they could only
> implement it without screwing up.   Looks like I should wait
> until well into next year (probably sensible anyway...) before
> upgrading...

Another interesting question for application servers
is how many threads they can run without massive
performance degradation. The individual thread may
not be the fastest, but running several might be
as interesting.

-- 
http://lispm.dyndns.org/
From: Espen Vestre
Subject: Re: Software Designed Silicon - build your own Lisp Processor
Date: 
Message-ID: <m1prxie1bz.fsf@gazonk.netfonds.no>
David Golden <············@oceanfree.net> writes:

> Hmm.  If you have done it already, presumably it was from the first run
> of {performance embarrassingly crippled by hardware bug or at least the
> workaround for said bug} Barcelonas, though?
> http://www.tech-report.net/discussions.x/13721
> http://techreport.com/discussions.x/13742

Ah, interesting, thanks for the links.
It didn't perform terribly, though; it was just not better per GHz (and
this was the 1.9GHz version) than 2-core Opterons, which are quite a
bit slower (per GHz) than Core 2 (at least on 64 bit linux with 64 bit
LW and with our own in-house, application-derived benchmarks).
-- 
  (espen)
From: Tim Bradshaw
Subject: Re: Software Designed Silicon - build your own Lisp Processor
Date: 
Message-ID: <bc5bac39-05f2-4c87-9b71-99bbb8563720@s12g2000prg.googlegroups.com>
On Dec 7, 3:29 pm, Frode Vatvedt Fjeld <······@cs.uit.no> wrote:
>
> One thing that is somewhat different (from C) is the indirect function
> call that is (at least conceptually) so predominant in Lisp.

I assume that, say, Java, has the same issues here.
From: Rayiner Hashem
Subject: Re: Software Designed Silicon - build your own Lisp Processor
Date: 
Message-ID: <082c7345-bbfb-459a-aa77-c67bcaecbd0b@i12g2000prf.googlegroups.com>
> > One thing that is somewhat different (from C) is the indirect function
> > call that is (at least conceptually) so predominant in Lisp.
>
> I assume that, say, Java, has the same issues here.

Conceptually, yes, Java does have a similar issue, because all method
calls are by default virtual. However, the (Sun) JVM does
devirtualization at runtime, which eliminates a lot of that cost. The
(Microsoft) CLR, on the other hand, does no such optimization, and
thus does end up performing a lot of indirect calls.
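
For comparison, Common Lisp at least lets the programmer opt individual
functions out of that indirection explicitly; a small sketch (names are
illustrative, and whether the compiler actually open-codes the call is
implementation-dependent):

  ;; With an INLINE proclamation in effect at compile time, a compiler
  ;; is permitted (not required) to open-code SQUARE at its call sites
  ;; instead of going through the normal late-bound call.
  (declaim (inline square))
  (defun square (x) (* x x))

  (defun sum-of-squares (xs)
    (declare (optimize speed))
    (let ((sum 0))
      (dolist (x xs sum)
        (incf sum (square x)))))

  ;; NOTINLINE restores the ordinary (redefinable) call at a given site:
  (defun sum-of-squares/redefinable (xs)
    (declare (notinline square))
    (let ((sum 0))
      (dolist (x xs sum)
        (incf sum (square x)))))

(That only covers ordinary functions, of course; generic-function
dispatch is a separate story.)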
From: Daniel Weinreb
Subject: Re: Software Designed Silicon - build your own Lisp Processor
Date: 
Message-ID: <Ytw6j.672$xd.200@trndny03>
Tim Bradshaw wrote:
> On Dec 7, 3:29 pm, Frode Vatvedt Fjeld <······@cs.uit.no> wrote:
>> One thing that is somewhat different (from C) is the indirect function
>> call that is (at least conceptually) so predominant in Lisp.
> 
> I assume that, say, Java, has the same issues here.

I think it would depend a lot on the JIT compiler. I
would guess that JIT-compiled Java does not have
this indirection.  But you'd have to look at the
generated code (somehow) to really be sure.
From: Tim Bradshaw
Subject: Re: Software Designed Silicon - build your own Lisp Processor
Date: 
Message-ID: <c3a84a12-42df-48a1-847a-a7e852569ca4@t1g2000pra.googlegroups.com>
On Dec 8, 12:54 pm, Daniel Weinreb <····@alum.mit.edu> wrote:

>
> I think it would depend a lot on the JIT compiler. I
> would guess that JIT-compiled Java does not have
> this indirection.  But you'd have to look at the
> generated code (somehow) to really be sure.

Well, to put it another way: if Java can elide this, then so can Lisp.
From: Daniel Weinreb
Subject: Re: Software Designed Silicon - build your own Lisp Processor
Date: 
Message-ID: <lnc6j.13963$xB.2421@trndny06>
Frank Goenninger DG1SBG wrote:
> Just found
> 
> http://www.xmos.com/sds-tech.pdf
> 
> True parallel threads and programmable in C. Wondering how some Lisp
> implementation would do on one of these chips...
> 
> Frank
> 

Their slides are not terribly clear about what the real
value of all this is.  Probably when you hear the
spoken lecture that goes with the slides, they must
say more about that.  I must say, I don't "get" it.
This may be my problem, not theirs.

I see that the CTO is David May, the Transputer guy.

Oh, more on their home page.  The idea is that if
you do need your own chips, they can turn it around
faster.

"XMOS has not yet publicly announced its product
plans.  However, more information is available under NDA."
So the product is not out yet.  They have their VC
financing, though.
From: Frank Goenninger DG1SBG
Subject: Re: Software Designed Silicon - build your own Lisp Processor
Date: 
Message-ID: <lzaboml7qh.fsf@pcsde001.de.goenninger.net>
Daniel Weinreb <···@alum.mit.edu> writes:

> Their slides are not terribly clear about what the real
> value of all this is.  

Indeed. Maybe I just jumped on this because I did a Transputer
project in 1988... Back then, the communication links were like the
XLink Channels in their new product: completely independent thread
communication not using CPU power. Even in 1988 I was able to scale
the Transputer-based architecture to 16 Transputers using full I/O
bandwidth without slowing down any CPU task execution.

> Probably when you hear the
> spoken lecture that goes with the slides, they must
> say more about that.  I must say, I don't "get" it.
> This may be my problem, not theirs.
>
> I see that the CTO is David May, the Transputer guy.

Hehe - see above.

>
> Oh, more on their home page.  The idea is that if
> you do need your own chips, they can turn it around
> faster.

Exactly. Get the right chip for the task - down to the core of the
chip. Reconfiguring the chip for another task: redefine the
architecture in an XML file and run the compiler again.

Ideas that come to mind are:

* Using a Lisp implementation's profiler, the implementation could
  actually tweak the chip's config file to optimize throughput.
  Optimization down to the HW level ...

* Event-driven programming: no poll()/select() or other software
  mechanism waiting for new data or events to arrive - a thread gets
  scheduled by hardware events

* Introducing two levels of programming: one that effectively
  manipulates the HW level, and above that the traditional application
  layer. Lisp seems especially suited IMO to connect these two layers
  by means of analyzing code as data and driving the HW layer based on
  application needs - semi- or fully automated

>
> "XMOS has not yet publicly announced its product
> plans.  However, more information is available under NDA."
> So the product is not out yet.  They have their VC
> financing, though.

The concept seems promising. Now we'll see how soon a Big One will buy
these guys either in order to get this under their belly or to prevent
them from going to market ...

Frank

-- 

  Frank Goenninger

  frgo(at)mac(dot)com

  "Don't ask me! I haven't been reading comp.lang.lisp long enough to 
  really know ..."
From: Tim Bradshaw
Subject: Re: Software Designed Silicon - build your own Lisp Processor
Date: 
Message-ID: <daee0e8b-f409-4063-b999-b602dc1e0e9c@d61g2000hsa.googlegroups.com>
On Dec 7, 5:44 pm, Frank Goenninger DG1SBG
> The concept seems promising. Now we'll see how soon a Big One will buy
> these guys either in order to get this under their belly or to prevent
> them from going to market ...
>

Looking at their notions of cost-per-part I imagine they're aiming at
the embedded market, not general purpose.  Such hardware might be very
interesting there.
From: Daniel Weinreb
Subject: Re: Software Designed Silicon - build your own Lisp Processor
Date: 
Message-ID: <WKw6j.1842$Bg7.1050@trndny07>
Frank Goenninger DG1SBG wrote:
> Daniel Weinreb <···@alum.mit.edu> writes:
> 
>> Their slides are not terribly clear about what the real
>> value of all this is.  
> 
> Indeed. Maybe I just jumped on this because I did a Transputer
> project in 1988... Back then, the communication links were like the
> XLink Channels in their new product: completely independent thread
> communication not using CPU power. Even in 1988 I was able to scale
> the Transputer-based architecture to 16 Transputers using full I/O
> bandwidth without slowing down any CPU task execution.
> 
>> Probably when you hear the
>> spoken lecture that goes with the slides, they must
>> say more about that.  I must say, I don't "get" it.
>> This may be my problem, not theirs.
>>
>> I see that the CTO is David May, the Transputer guy.
> 
> Hehe - see above.
> 
>> Oh, more on their home page.  The idea is that if
>> you do need your own chips, they can turn it around
>> faster.
> 
> Exactly. Get the right chip for the task - down to the core of the
> chip. Reconfiguring the chip for another task: redefine the
> architecture in an XML file and run the compiler again.
> 
> Ideas that come to mind are:
> 
> * Using a Lisp implementation's profiler, the implementation could
>   actually tweak the chip's config file to optimize throughput.
>   Optimization down to the HW level ...
> 
> * Event-driven programming: no poll()/select() or other software
>   mechanism waiting for new data or events to arrive - a thread gets
>   scheduled by hardware events
> 
> * Introducing two levels of programming: one that effectively
>   manipulates the HW level, and above that the traditional
>   application layer. Lisp seems especially suited IMO to connect
>   these two layers by means of analyzing code as data and driving
>   the HW layer based on application needs - semi- or fully automated
> 
>> "XMOS has not yet publicly announced its product
>> plans.  However, more information is available under NDA."
>> So the product is not out yet.  They have their VC
>> financing, though.
> 
> The concept seems promising. Now we'll see how soon a Big One will buy
> these guys either in order to get this under their belly or to prevent
> them from going to market ...
> 
> Frank
> 

The idea of having peripheral processors seems to be coming
back.  The Sony Playstation-3 has an architecture involving
one central CPU and eight peripheral CPUs that you can farm
out work to. Naturally, it's not entirely a piece of cake
to program, but you can easily see how video games would
have some hard compute work that's not too hard to do
in parallel (processor 0 renders the upper-left, processor
1 renders the upper right, etc.).
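
Just to make the farm-out idea concrete, a Common Lisp sketch (not
actual PS-3 code; it assumes the bordeaux-threads library, and
RENDER-REGION is a stand-in for the real work):

  ;; Split a frame into quadrants and hand each to a worker thread.
  ;; Assumes the bordeaux-threads library (package nickname BT).
  (defun render-region (x y w h)
    ;; Placeholder for the per-quadrant rendering work.
    (declare (ignore x y w h)))

  (defun render-frame (width height)
    (let* ((hw (floor width 2))
           (hh (floor height 2))
           (quadrants (list (list 0 0 hw hh)      ; upper left
                            (list hw 0 hw hh)     ; upper right
                            (list 0 hh hw hh)     ; lower left
                            (list hw hh hw hh)))  ; lower right
           (workers (mapcar (lambda (q)
                              (bt:make-thread
                               (lambda () (apply #'render-region q))))
                            quadrants)))
      (mapc #'bt:join-thread workers)))

(On the real hardware you would presumably hand the regions to the
peripheral processors rather than OS threads, but the shape of the
decomposition is the same.)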

Here's a paper about doing behavioral animation on the PS-3:

http://www.research.scea.com/pscrowd/

This is by Craig Reynolds, the father of behavioral animation,
who did his original work on this when he was at Symbolics,
in Lisp, naturally.  This kind of "large agent-based crowd
simulation" was most recently and famously used in the Lord
of the Rings movies, particularly those huge orc battles.

Meanwhile, check out the Java machines from Azul:

http://www.azulsystems.com/

Some of their key personnel have experience with graphics
processors; it's primarily considered a Sun spinoff.
It's described as a "computing appliance", used to
offload your big Java tasks; it might run an application
server such as WebLogic Server.  They have a claim of
"transparency" although I don't know what that means
technically.  It is based on their own CPU, with
hardware assisted realtime GC.  Remind anyone of
Symbolics Ivory plug-in cards, such as the MacIvory?

The smallest model has 96 processing cores
and 48GB of RAM; the big one has 768 cores and 768GB of
RAM.  Let's see, how many shopping days left
before Xmas?
From: Javier
Subject: Re: Software Designed Silicon - build your own Lisp Processor
Date: 
Message-ID: <f5ac6a03-67ee-4082-baa2-f64570f63be4@d4g2000prg.googlegroups.com>
On Dec 8, 14:12, Daniel Weinreb <····@alum.mit.edu> wrote:

> Meanwhile, check out the Java machines from Azul:
>
> http://www.azulsystems.com/

What the hell do they sell?

"Azul Systems is a global provider of enterprise server appliances
that deliver compute and memory resources as a shared network service
for transaction-intensive applications, such as those built on the
Java(tm) platform. Azul Compute Appliances enable transparent, massively
scalable infrastructure to support the business priorities of today's
most demanding enterprise environments and deliver increased
capabilities, capacity, and utilization at a fraction of the cost of
traditional computing models."

I had to read it 10 times just to get a diffuse idea of what they
offer!
From: Patrick May
Subject: Re: Software Designed Silicon - build your own Lisp Processor
Date: 
Message-ID: <m2myslc63e.fsf@spe.com>
Javier <·······@gmail.com> writes:

> On 8 dic, 14:12, Daniel Weinreb <····@alum.mit.edu> wrote:
>
>> Meanwhile, check out the Java machines from Azul:
>>
>> http://www.azulsystems.com/
>
> What the hell do they sell?

     You wanted the next level deeper:

          http://www.azulsystems.com/products/compute_appliance.htm

Basically it's an SMP box with up to 768 processors, optimized for
Java.

Disclaimer:  My day job is with a company that may partner with Azul
to run our software on their boxes.

Regards,

Patrick

------------------------------------------------------------------------
S P Engineering, Inc.  | Large scale, mission-critical, distributed OO
                       | systems design and implementation.
          ···@spe.com  | (C++, Java, Common Lisp, Jini, middleware, SOA)
From: Tim Bradshaw
Subject: Re: Software Designed Silicon - build your own Lisp Processor
Date: 
Message-ID: <8c6d2f71-1299-4429-9d32-340cfe100514@s12g2000prg.googlegroups.com>
On Dec 8, 7:57 pm, Patrick May <····@spe.com> wrote:

> Basically it's an SMP box with up to 768 processors, optimized for
> Java.
>

I had some discussion with someone who knows about these things about
whether they had any merit, in the sense of "being optimised for
Java" meaning anything.  His argument (which made sense to me) was
that it did mean something: I think they have a lot of support for
very, very large GC'd heaps, which I think have quite interesting
performance characteristics.  I think the idea of these things is that
they more-or-less have a single JVM on them, so the heap would
obviously just be vast - 100s of GB maybe.  My guess is that this is
something which other large systems don't generally hit, since they
tend to be either commercial systems which are partitioned (usually),
or running databases which do very different memory management, or
doing numerical stuff, where (a) they're probably not shared memory
anyway, and (b) it's all huge arrays.

These things will probably come to other systems over time - the next
generation or so of large commercial systems will have stupid numbers
of cores and memory.

--tim
From: Andreas Davour
Subject: Re: Software Designed Silicon - build your own Lisp Processor
Date: 
Message-ID: <cs9tzmtfaow.fsf@Psilocybe.Update.UU.SE>
Daniel Weinreb <···@alum.mit.edu> writes:


> The idea of having peripheral processors seems to be coming
> back.  The Sony Playstation-3 has an architecture involving
> one central CPU and eight peripheral CPUs that you can farm
> out work to. Naturally, it's not entirely a piece of cake
> to program, but you can easily see how video games would
> have some hard compute work that's not too hard to do
> in parallel (processor 0 renders the upper-left, processor
> 1 renders the upper right, etc.).

Sounds like the Amiga was once again far too futuristic.

> Meanwhile, check out the Java machines from Azul:
>
> http://www.azulsystems.com/

That website looks bogus.

/Andreas

-- 
A: Because it fouls the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?