Here is a benchmark comparison of Coral Allegro CL and Lucid CL that is
part of a review article I wrote for our ITS Newsletter. Sorry the formatting
is not great, but it's the best I could do on short notice.
I also tried to run the Gabriel suite on Lyric CL from Xerox, but the first
benchmark ran so long (it still had not finished after 5 minutes) that I gave
up on it. If someone has Gabriel suite results for Lyric CL, I'd like to see
them.
There is also a benchmark listing in a past issue of BYTE, though I can't
recall which one.
Benchmarks
In order to determine the relative efficiency of Allegro CL, the Gabriel suite
of benchmarks was run in Allegro CL and Lucid CL on a Vax 8860. The results
appear below. All times are in seconds; where an entry shows two numbers
separated by a slash, the second is (I believe) the garbage-collection time.
For a flavor of what the suite's programs look like, the TAK benchmark is
sketched after the table.
Benchmark                           Allegro        Optimized Allegro   Lucid Vax (C)   Lucid Vax (I)
Boyer                               36.000         23.050              7.81/1.01       418.12/26.66
Browse                              65.233         52.800              8.87/0.58       587.73/37.06
CTAK                                4.317          3.500               0.74/0.01       35.81/2.99
Dderiv                              31.933/4.267   31.017/4.283        3.83/0.2        48.02/2.68
Deriv                               32.250/4.183   31.350/4.183        3.06/0.18       54.84/2.03
Destructive                         9.017          7.383               0.99/0.09       113.85/1.44
Div-iter                            5.333          3.117               1.99/0.11       98.85/6.22
Div-rec                             5.000          2.667               2.43/0.08       58.96/4.17
FFT                                 67.083/4.233   66.550/4.217        41.59/2.13      433.94/28.56
Fprint                              9.650          9.567               0.6/0.06        0.63/0.03
Fread                               3.700          3.333               1.26/0.13       1.23/0.04
Frpoly Power=2 r=x+y+z+1            0.033          0.050               0.0             0.3/0.02
Frpoly Power=2 r2=1000r             0.017          0.017               0.01            0.17/0.01
Frpoly Power=2 r3=r in flonums      0.017          0.017               0.02            0.18/0.01
Frpoly Power=5 r=x+y+z+1            0.167          0.117               0.02            1.61/0.13
Frpoly Power=5 r2=1000r             0.233          0.183               0.06/0.02       1.72/0.11
Frpoly Power=5 r3=r in flonums      0.183          0.150               0.04            1.6/0.16
Frpoly Power=10 r=x+y+z+1           1.517          1.167               0.22/0.02       18.2/1.27
Frpoly Power=10 r2=1000r            2.633          2.283               0.73/0.02       17.69/0.74
Frpoly Power=10 r3=r in flonums     1.900          1.550               0.36/0.01       18.07/0.82
Frpoly Power=15 r=x+y+z+1           9.767          7.567               1.54/0.13       115.77/1.78
Frpoly Power=15 r2=1000r            20.550         18.200              7.25/0.5        124.55/3.61
Frpoly Power=15 r3=r in flonums     12.133         9.867               3.83/0.24       118.95/3.26
Puzzle                              69.283         68.267              20.29/0.87      823.23/46.0
STAK                                15.200         15.667              0.77/0.03       45.38/0.86
TAK                                 1.250          0.700               0.26            29.48/0.26
TAKL                                13.100         7.133               0.91            263.66/4.39
TAKR                                1.433          0.917               0.38/0.03       29.75/0.56
Tprint                              29.883         31.600              0.79/0.04       0.8/0.04
Traverse-init                       36.533         35.517              5.57/0.29       913.73/85.41
Traverse                            135.017        133.967             20.07/1.17      >2600
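For reference, here is TAK as it appears in Gabriel's "Performance and
Evaluation of Lisp Systems"; the timed call is (tak 18 12 6):

    ;; TAK: a small, heavily recursive function-call benchmark.
    (defun tak (x y z)
      (if (not (< y x))
          z
          (tak (tak (1- x) y z)
               (tak (1- y) z x)
               (tak (1- z) x y))))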
Note: The Optimized Allegro column refers to the benchmarks compiled with the
form (declare (optimize (speed 3) (size 0) (safety 0) ...)) added to them.
The difference between the Lucid CL benchmarks when compiled and when
interpreted is quite marked. The (C) column refers to files loaded with (load
"file.vbin") after (compile-file "file.cl") had completed; the (I) column
refers to files loaded with (load "file.cl").
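To make the procedure concrete, here is roughly what one run looked like,
again using TAK; the file name "tak.cl" and the use of time are my own
illustration, not the exact harness used for the table:

    ;; Optimized Allegro column: optimization declarations added to
    ;; the benchmark itself.  ((size 0) is as quoted above; the
    ;; standard CL quality for this is SPACE.)
    (defun tak (x y z)
      (declare (optimize (speed 3) (size 0) (safety 0)))
      (if (not (< y x))
          z
          (tak (tak (1- x) y z)
               (tak (1- y) z x)
               (tak (1- z) x y))))

    ;; Lucid (C) column: compile the source, load the binary, run it.
    (compile-file "tak.cl")        ; writes tak.vbin
    (load "tak.vbin")
    (time (tak 18 12 6))

    ;; Lucid (I) column: load the source directly and run interpreted.
    (load "tak.cl")
    (time (tak 18 12 6))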
--
..........................................................................
Jeffrey Sullivan | University of Pittsburgh
···@cadre.dsl.pittsburgh.edu | Intelligent Systems Studies Program
······@PittVMS.BITNET, ······@cisunx.UUCP | Graduate Student