From: Bulent Murtezaoglu
Subject: cmucl/acl Gabriel benchmarks under Linux /PPro
Message-ID: <87d8cj307m.fsf@isttest.bogus>
I thought I'd post my updated benchmark results using Thorsten
Schnier's version of the Gabriel benchmark suite from
http://www.arch.su.edu.au/~thorsten/lisp/results.html
All run under kernel 2.0.3x, libc.
Acl 5b is the Allegro beta off ftp.franz.com
Cmucl 2.4.4 is off the experimental directory from www.cons.org
Cmucl 013098 is an older version with that timestamp off the same place
212 is a hacked-up Pentium Pro 180 setup with an ~85MHz bus, 212MHz core, and
128MB EDO RAM.
256 is like the above but with a 256MHz core speed.
233 is an overclocked dual PPro with 160MB EDO RAM sitting on a 66MHz bus.
All timings taken under X over 10Base-T NFS with a fresh Lisp (single run).
Note that in general (especially with hash tables) main-memory speed
will be significant, so don't be misled by the apparent linear speedup with
core speed. FWIW, these are just benchmarks, of course.
              013098-cmucl            Cmucl 2.4.4   Acl 5b    Acl 4.3
              212     256     233     233           233       233

ctak          4.91    4.06    4.48    4.47          10.120     9.576
stak          4.5     4.49    4.15    4.08          11.366    10.789
tak           7.49    5.94    6.54    6.80           5.422     5.885
takl          8.34    6.46    7.64    7.61           7.521    13.554
takr          5.33    4.34    4.91    4.91           6.060     5.614
boyer         2.62    2.27    2.56    2.54           4.169     2.059
browse        5.78    4.98    5.56    5.52           2.893     2.143
dderiv        3.95    3.33    3.72    3.67           1.715     1.332
deriv         3.0     2.55    2.88    2.83           1.465     1.062
destru        5.49    4.66    5.23    5.14           2.810     2.206
div2          3.42    2.94    3.33    3.27           1.360     0.806
recursive     4.03    3.45    3.88    3.86           1.861     1.271
fft           4.25    3.71    4.13    4.08           8.473     6.218
fprint       11.37   10.08    9.57    9.13          19.630    13.002
fread         5.3     4.25    4.64    4.48          18.735     8.884
frpoly r 15   3.1     2.59    2.90    2.89           2.580     2.382
frpoly r2 15  2.73    2.39    2.69    2.74           3.079     3.111
frpoly r3 15  4.51    3.96    4.54    0.98           1.175     0.956
puzzle        4.05    2.58    2.62    2.62           4.183     5.723
tprint        7.46    6.18    6.71    6.47           9.245     7.074
traverse      3.43    2.69    3.04    3.04           3.901     2.839
traverse-run  4.64    3.59    4.04    3.94           2.252     2.647
triang       34.74   29.02   31.58   31.90          43.806    32.334
From: Jim Veitch
Subject: Re: cmucl/acl Gabriel benchmarks under Linux /PPro
Message-ID: <357DDCC7.7E6D@franz.com>
Bulent Murtezaoglu wrote:
>
> I thought I'd post my updated bechmark results using Thorsten
> Schnier's version of Gabriel bechmark suite from
>
> http://www.arch.su.edu.au/~thorsten/lisp/results.html
I have some comments on this post. We were puzzled by this post here at
Franz since the timings looked orders of magnitude too slow, and then
when we read over Thorsten's benchmarking comments we understood.
Thorsten removes some of the declarations. In a sense, Thorsten seems
to be testing how well each Lisp does generic, unoptimized operations,
but even here it isn't quite clear, since he runs at high speed
(different Lisps make different assumptions at speed 3; e.g., Allegro
CL ignores speed when the data types are not declared, opting to be
safe, whereas Lucid CL did not always do this).
Our view is that not all Lisps are the same, and you will need different
declarations in different Lisps to achieve maximum speed. One can argue
it is harder or easier to achieve maximum speed in different Lisps, but
with Allegro CL we document carefully how to do it and explicitly
document how we handle the speed/safety tradeoff in the presence of
declarations.
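To sketch what that tradeoff looks like in practice (the function names
here are illustrative, not taken from any vendor's documentation): with
no type information the compiler must keep fully generic arithmetic even
at speed 3, while declared fixnum types let it open-code the operation.

```lisp
;; Generic: without type declarations the compiler keeps the full
;; generic + (safe for bignums, floats, ...), even at speed 3.
(defun sum-generic (a b)
  (declare (optimize (speed 3)))
  (+ a b))

;; Declared: fixnum argument types plus a result assertion let the
;; compiler open-code a machine-word addition.
(defun sum-fixnum (a b)
  (declare (optimize (speed 3))
           (type fixnum a b))
  (the fixnum (+ a b)))
```

Comparing (disassemble 'sum-generic) with (disassemble 'sum-fixnum) in
any given implementation shows the difference directly.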
Rather than presenting raw timings and comparing with CMU, we will
instead present the results of the benchmarks for Allegro CL 5.0
compared with Allegro CL 4.3. Times are real time (including GC).
Benchmarks are run on Linux Intel machine Pentium Pro 200 MHz with 64 MB
Ram.
               Acl 5    Acl 4.3

ctak            .041     .042
stak            .067     .068
tak             .007     .006
takl            .045     .045
takr            .017     .018
boyer           .293     .298
browse          .238     .234
dderiv          .078     .082
deriv           .068     .067
destru          .016     .017
div2-iter       .028     .023
div2-recur      .041     .040
fft             .019     .019
fprint          .087     .088
fread           .052     .051
frpoly r 15     .051     .052
frpoly r2 15    .341     .327
frpoly r3 15    .129     .132
puzzle          .064     .064
tprint          .082     .083
traverse-init   .059     .058
traverse-run    .271     .269
triang         1.147    1.650
To highlight the difference: for ctak, Bulent reports 10.120 on a 233
MHz machine with Allegro CL 5.0 and we see .041 on a 200 MHz machine
with ACL 5.0 (a factor of about 247, not accounting for the clock speeds).
In general, since the Gabriel benchmarks pre-date CLOS, Allegro CL 5.0
is perhaps marginally faster on these benchmarks. In certain CLOS-related
benchmarks, Allegro CL 5.0 is much faster than 4.3, sometimes 30% or so.
Jim Veitch.
From: ···········@alcoa.com
Subject: Re: cmucl/acl Gabriel benchmarks under Linux /PPro
Message-ID: <6lmj1s$4oi$1@nnrp1.dejanews.com>
Are your source versions of the benchmarks available somewhere? I think they
would be useful examples of how to use declarations in ACL.
John Watton
ALCOA
Apologies if you see my response twice; the first doesn't seem to have
made it into the newsgroup (at least not on my feed). And I didn't archive
it either, so this is a bit different.
Jim Veitch <···@franz.com> writes:
> Bulent Murtezaoglu wrote:
> >
> > I thought I'd post my updated benchmark results using Thorsten
> > Schnier's version of the Gabriel benchmark suite from
> >
> > http://www.arch.su.edu.au/~thorsten/lisp/results.html
>
> I have some comments on this post. We were puzzled by this post here at
> Franz since the timings looked orders of magnitude too slow, and then
> when we read over Thorsten's benchmarking comments we understood.
> Thorsten removes some of the declarations. In a sense, Thorsten seems
> to be testing how well each Lisp does generic, unoptimized operations,
> but even here it
Yes and no. I did remove some of the declarations, because they seemed
excessive and targeted too specifically at a particular LISP (most
declarations remain in the code). If you see orders-of-magnitude
differences, that is because I've also upped the loop count for a
number of tests, because they were so fast that the timing would be too
susceptible to random influences. The loop counts are indicated on the
web page (and in the modified source).
I also changed the way the tests are run, since I had the impression
the compiler would optimize away the inner loop entirely in some
cases, as the result was not used (gcl's C compiler, for example).
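A sketch of what that change amounts to (the function name is
illustrative, not from the actual benchmark harness): make the loop's
accumulated value escape the function, so the compiler cannot delete
the loop as dead code.

```lisp
;; If the accumulated value is never used, an aggressive compiler may
;; remove the whole loop; returning the accumulator forces the work
;; to actually happen.
(defun run-inner-loop (n)
  (let ((acc 0))
    (dotimes (i n acc)          ; ACC is the result form of DOTIMES
      (setq acc (logxor acc i)))))
```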
> isn't quite clear since he runs at high speed (different Lisps make
> different assumptions at speed 3; e.g., Allegro CL ignores speed when
> the data types are not declared, opting to be safe, whereas Lucid CL
> did not always do this).
> Our view is that not all Lisps are the same and you will need different
> declarations in different Lisps to achieve maximum speed. One can argue
> it is harder or easier to achieve the maximum speed in different Lisps,
> but with Allegro CL we document carefully how to do it and explicitly
> document how we handle the speed/safety tradeoff in the presence of
> declarations.
>
I think that the ability to create fast code without excessive user
input is an important part of a good LISP compiler. There are some
things a compiler simply cannot know, and the user can help improve
the code there. But, for example, the amount of 'the fixnum' code in
puzzle.lisp is just excessive:
(defun puzzle-start ()
  (do ((m 0 (the fixnum (1+ (the fixnum m)))))
      ((> (the fixnum m) puzzle-size))
    (declare (type fixnum m))
    (setf (aref puzzle m) t))
  (do ((i 1 (the fixnum (1+ (the fixnum i)))))
      ((> (the fixnum i) 5))
    (declare (type fixnum i))
    (do ((j 1 (the fixnum (1+ (the fixnum j)))))
        ((> (the fixnum j) 5))
      (declare (type fixnum j))
      (do ((k 1 (the fixnum (1+ (the fixnum k)))))
          ((> (the fixnum k) 5))
        (declare (type fixnum k))
        (setf (aref puzzle
                    (the fixnum
                         (+ i (the fixnum
                                   (* puzzle-d
                                      (the fixnum
                                           (+ j (the fixnum
                                                     (* puzzle-d k)))))))))
              nil))))
...
If I have to write code like this, I prefer to do it in C! Seriously,
once you have to start thinking about implementation details like
boxing, use of registers, etc., it is much easier (because more
obvious) in C.
An excellent example of a compiler that can do a large amount of
reasoning about types and ranges is the numerical code in CMUcl.
If you come up with a version that you think is fair to all the LISPs
I tested, I will be happy to do another run (as far as I have licences).
I do warn on the page about the general nature of benchmarks, and that
they will never be entirely fair. However, they do make sense for
finding specific strengths and weaknesses; e.g., they show how much
CMU's performance in specific tests has improved with the new GC. The
numbers (and yours) also show that there isn't much to be expected in
speedups in terms of, e.g., integer performance for ACL 5.0 vs. 4.3. But
as you correctly point out, it only tests very specific areas, and a
30% speed increase in CLOS is certainly something worth looking
forward to.
From: Christopher J. Vogt
Subject: Re: cmucl/acl Gabriel benchmarks under Linux /PPro
Message-ID: <35801618.232873D0@computer.org>
Thorsten Schnier wrote:
> Jim Veitch <···@franz.com> writes:
> > Our view is that not all Lisps are the same and you will need different
> > declarations in different Lisps to achieve maximum speed. One can argue
> > it is harder or easier to achieve the maximum speed in different Lisps,
> > but with Allegro CL we document carefully how to do it and explicitly
> > document how we handle the speed/safety tradeoff in the presence of
> > declarations.
> >
>
> I think that the ability to create fast code without excessive user
> input is an important part of a good LISP compiler. There are some
> things a compiler simply cannot know, and the user can help improve
> the code. But for example the amount of 'the fixnum' code in
> puzzle.lisp is just excessive:
I generally agree with Thorsten's statements. I feel that if I declare
a variable to be of type fixnum, I shouldn't have to use "the" to
declare the return type when setting/binding that value. For example,
in the puzzle code, I feel that the code below should give me optimized
results, regardless of speed/safety settings.
Additionally, I would add that in my experience (benchmarks aside) 80%
of the code I write performs just fine without declarations; only
about 20% (or less) turns out to be speed-critical and can benefit
from some added declarations (with significant performance benefit),
so I don't find this issue *that* burdensome.
(defun puzzle-start ()
  (declare (type (simple-vector fixnum) puzzle)
           (type fixnum puzzle-d))
  (do ((m 0 (1+ m)))
      ((> m size))
    (declare (type fixnum m))
    (setf (aref puzzle m) true))
  (do ((i 1 (1+ i)))
      ((> i 5))
    (declare (type fixnum i))
    (do ((j 1 (1+ j)))
        ((> j 5))
      (declare (type fixnum j))
      (do ((k 1 (1+ k)))
          ((> k 5))
        (declare (type fixnum k))
        (setf (aref puzzle (+ i (* puzzle-d (+ j (* puzzle-d k))))) nil)))))
My only added comment concerns the code below. I don't know where this
code comes from, but it truly is abysmal. I don't know of any experienced
Lisp programmer who would write code like that. It is pretty rare
to see "the" in anything other than a macro (see the code following).
>
> (defun puzzle-start ()
>   (do ((m 0 (the fixnum (1+ (the fixnum m)))))
>       ((> (the fixnum m) puzzle-size))
>     (declare (type fixnum m))
>     (setf (aref puzzle m) t))
>   (do ((i 1 (the fixnum (1+ (the fixnum i)))))
>       ((> (the fixnum i) 5))
>     (declare (type fixnum i))
>     (do ((j 1 (the fixnum (1+ (the fixnum j)))))
>         ((> (the fixnum j) 5))
>       (declare (type fixnum j))
>       (do ((k 1 (the fixnum (1+ (the fixnum k)))))
>           ((> (the fixnum k) 5))
>         (declare (type fixnum k))
>         (setf (aref puzzle
>                     (the fixnum
>                          (+ i (the fixnum
>                                    (* puzzle-d
>                                       (the fixnum
>                                            (+ j (the fixnum
>                                                      (* puzzle-d k)))))))))
>               nil))))
>
> ...
>
> If I have to write code like this, I prefer to do it in C ! Seriously,
> once you have to start to think about implementation details like
> boxing, use of registers etc., it is much easier (because more
> obvious) in C.
>
(defmacro i+ (one two)
  `(the fixnum (+ (the fixnum ,one) (the fixnum ,two))))

(defmacro i1+ (one)
  `(i+ ,one 1))

(defmacro i* (one two)
  `(the fixnum (* (the fixnum ,one) (the fixnum ,two))))

(defmacro i> (one two)
  `(> (the fixnum ,one) (the fixnum ,two)))
(defun start ()
  (do ((m 0 (i1+ m)))
      ((i> m size))
    (declare (type fixnum m))
    (setf (aref puzzle m) true))
  (do ((i 1 (i1+ i)))
      ((i> i 5))
    (declare (type fixnum i))
    (do ((j 1 (i1+ j)))
        ((i> j 5))
      (declare (type fixnum j))
      (do ((k 1 (i1+ k)))
          ((i> k 5))
        (declare (type fixnum k))
        (setf (aref puzzle (i+ i (i* *d* (i+ j (i* *d* k))))) nil)))))
--
Christopher J. Vogt - Computer Consultant - Lisp, AI, Graphics, etc.
http://members.home.com/vogt/
From: Raymond Toy
Subject: Re: cmucl/acl Gabriel benchmarks under Linux /PPro
Message-ID: <4nk96nbvj5.fsf@rtp.ericsson.se>
"Christopher J. Vogt" <····@computer.org> writes:
[snip]
> I generally agree with Thorsten's statements. I feel that if I declare
> a variable to be of type fixnum, I shouldn't have to use "the" to
> declare the return type when setting/binding that value. For example,
> in the puzzle code, I feel that the code below should give me optimized
> results, regardless of speed/safety settings.
>
[snip]
>
> (defun puzzle-start ()
> (declare (type (simple-vector fixnum) puzzle)
There is no (simple-vector fixnum). Perhaps you meant just simple-vector.
> (type fixnum puzzle-d))
> (do ((m 0 (1+ m)))
> ((> m size))
> (declare (type fixnum m))
> (setf (aref puzzle m) true))
T instead of TRUE?
> (do ((i 1 (1+ i)))
> ((> i 5))
> (declare (type fixnum i))
> (do ((j 1 (1+ j)))
> ((> j 5))
> (declare (type fixnum j))
> (do ((k 1 (1+ k)))
> ((> k 5))
> (declare (type fixnum k))
> (setf (aref puzzle (+ i (* puzzle-d (+ j (* puzzle-d k))))) nil)))))
One of the problems is that if puzzle-d is a fixnum and k is a fixnum,
then (* puzzle-d k) might not be a fixnum, so you either need better
declarations or must use (the fixnum ...) to tell the compiler the
result really is a fixnum.
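The overflow is easy to demonstrate at the REPL:

```lisp
;; Two fixnums whose product exceeds the fixnum range yield a bignum,
;; so FIXNUM is not closed under multiplication:
(typep (* most-positive-fixnum most-positive-fixnum) 'fixnum)  ; => NIL
;; A bounded declaration such as (integer 0 100), by contrast, lets
;; the compiler prove the product fits in a fixnum.
```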
Ideally, the compiler should be able to determine that I, J, and K are
all integers in the range 1 to 6. I don't know of any that can do
this, though. Then you'd still have to declare puzzle-d to be small
enough that (+ i (* puzzle-d (+ j (* puzzle-d k)))) is a fixnum for
all I, J, and K.
If I do this:

(defun puzzle-start (puzzle size puzzle-d)
  (declare (type (simple-vector *) puzzle)
           (type (integer 0 100) puzzle-d)
           (type (integer 1 100) size)
           (optimize (speed 3)))
  (do ((m 0 (1+ m)))
      ((> m size))
    (setf (aref puzzle m) t))
  (do ((i 1 (1+ i)))
      ((> i 5))
    (declare (type (integer 1 6) i))
    (do ((j 1 (1+ j)))
        ((> j 5))
      (declare (type (integer 1 6) j))
      (do ((k 1 (1+ k)))
          ((> k 5))
        (declare (type (integer 1 6) k))
        (setf (aref puzzle (+ i (* puzzle-d (+ j (* puzzle-d k))))) nil)))))
I get no complaints from CMUCL, so presumably everything has been
proven to be a fixnum. Then, I really don't need any (the fixnum ...)
anywhere. (I'm guessing about the possible range of values of
puzzle-d and size.)
Ray
From: Jim Veitch
Subject: Re: cmucl/acl Gabriel benchmarks under Linux /PPro
Message-ID: <3581B907.79AB@franz.com>
Thorsten Schnier wrote:
> Jim Veitch <···@franz.com> writes:
>
> > Bulent Murtezaoglu wrote:
> > >
> > > I thought I'd post my updated benchmark results using Thorsten
> > > Schnier's version of the Gabriel benchmark suite from
> > >
> > > http://www.arch.su.edu.au/~thorsten/lisp/results.html
> >
> > I have some comments on this post. We were puzzled by this post here at
> > Franz since the timings looked orders of magnitude too slow, and then
> > when we read over Thorsten's benchmarking comments we understood.
> > Thorsten removes some of the declarations. In a sense, Thorsten seems
> > to be testing how well each Lisp does generic, unoptimized operations,
> > but even here it
>
> Yes and no. I did remove some of the declarations, because they seemed
> excessive and targeted too specifically at a particular LISP (most
> declarations remain in the code). If you see orders-of-magnitude
> differences, that is because I've also upped the loop count for a
> number of tests, because they were so fast that the timing would be too
> susceptible to random influences. The loop counts are indicated on the
> web page (and in the modified source).
That explains part of it; I missed the loop count issue.
> I also changed the way the tests are run, since I had the impression
> the compiler would optimize away the inner loop entirely in some
> cases, as the result was not used (gcl's C compiler, for example).
>
> I think that the ability to create fast code without excessive user
> input is an important part of a good LISP compiler.
In that case, should one permit the compiler to optimize away loops or
shouldn't one? Actually, I agree with you that compilers should be
smarter, and CMUCL does a lot of reasoning about types, which reduces
the burden on the programmer.
However, changing the Gabriel benchmarks doesn't seem to me to be the
way to measure this! Probably one should go through Lisp programs, pull
out common idioms, and build benchmarks around those.
Knuth estimated years ago that most of the non-I/O running time is in
3% of the code, so other tools, like good profilers and compiler hints
that tell you what to do to get extra speed, are also a key part of
producing optimized applications. I would guess Knuth's estimate isn't
too far off the mark today either.
The above comments apply to other languages such as C++ as well.
Jim Veitch.
Jim.
Jim Veitch <···@franz.com> writes:
>
> > I also changed the way the tests are run, since I had the impression
> > the compiler would optimize away the inner loop entirely in some
> > cases, as the result was not used (gcl's C compiler, for example).
> >
> > I think that the ability to create fast code without excessive user
> > input is an important part of a good LISP compiler.
>
> In that case should one permit the compiler to optimize away loops or
> shouldn't one?
Good point. I guess the question here is whether you want to measure how
fast it runs a certain program, or how fast it does a certain (set of)
operation(s). If it is the second, then it is not good if some of
those are optimized away. I remember a benchmark posting in this
newsgroup a while ago where someone had the same problem in a number of
different programming languages. If I remember correctly, in one
language the compiler itself 'solved' the problem already and produced
one-line code. The compiler ran for a long time, but the resulting code
didn't...
> Actually I agree with you that the compilers should
> be smarter and in CMUCL does a lot of reasoning about types which
> reduces the burden on the programmer.
>
Yes. Another problem is portability: while LISP itself ports very
well, optimizations don't. E.g., Allegro (I think) can represent
30-bit integers natively, CMU 32-bit. For a while, I was using four
different LISPs, depending on which machine was available and which
LISP ran the particular application fastest.
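For reference, the native integer width of any implementation can be
checked portably at the REPL (the example bit counts in the comments
are illustrative; the exact width varies by implementation, which is
exactly the portability problem):

```lisp
;; fixnum width is implementation-defined; these standard forms reveal it:
most-positive-fixnum                       ; largest immediate integer
(integer-length most-positive-fixnum)      ; bits available, e.g. 29 or 61
(typep (1+ most-positive-fixnum) 'fixnum)  ; NIL: one past the limit is a bignum
```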
Given that LISP is a common language for expert systems, it is ironic
that there isn't an expert system helping with the optimization. The
domain is nicely bounded and stable, so it should be a good application.
Maybe even an evolutionary system, though it would be difficult to test
whether the program is still correct (there's an interesting Ph.D.
thesis...).
> However changing the Gabriel benchmarks don't seem to me to be the way
> to
> measure this! Probably one should go through Lisp programs, pull out
> common idioms, and build benchmarks around that.
>
Another problem with the Gabriel tests is the way GCs are done. Some
LISPs do 'on the fly' GCs throughout the run and then sometimes do an
intensive (slow) full GC, while others do full GCs regularly. Even
with the increased loop counts, the tests usually don't run long
enough to force all of them to do a full GC, biasing the result in
favor of those doing it 'on the fly'.
I really don't think these benchmarks are very good (neither the
original nor my modified version), but it's all that is available. An
alternative to your suggestion would be to go the way the SPECmarks do
and use a set of existing applications. But it would still be very hard
to write such idioms in an 'even-handed' way or make such applications
comparable.
It would be good to have such a set, as there definitely are strengths
and weaknesses in all the different Lisps, and it can make a difference
if you pick the right one for the task. Demo versions are available for
all the LISPs, but again, to test them all even for an application at
hand, you'd first have to learn how to optimize the code properly for
each one of them. A set of predefined, optimized benchmarks could say
"this is what you get for fixed-number performance, this for
complex floats, and this for CLOS". A lot of experience would be
required to create such a set. I've been writing LISP code for about 8
years now, and I certainly don't think I could do it justice (apart
from the fact that I have to finish my thesis). Any takers?
> Knuth estimated years ago most of the non-I/O running time is in 3% of
> the code, so other tools like good profilers and compiler hints that
> tell you what to do to get extra speed are also a key part to producing
> optimized applications. I would guess Knuth's estimate isn't too far
> off the mark today either.
I entirely agree, and that is something where the possibility to run
demo versions really helps in choosing the right product.
>
> The above comments apply to other languages such as C++ as well.
>
> Jim Veitch.
>
> Jim.
thorsten
-----------------------------------------------------------------------
\ k
\ e Thorsten Schnier
\ y
/\ \ Key Centre of Design Computing
/ / c University of Sydney
\ / e NSW 2006
\ / n
\/ t ········@arch.usyd.edu.au
DESIGN r http://www.arch.usyd.edu.au/~thorsten/
COMPUTING e