From: John Watton
Subject: [long] Numerical test in C, lisp, tcl, perl, python, java, & scheme
Date:
Message-ID: <uohanhwhu.fsf@alcoa.com>
What follows are the results of a small study I did to compare various
computing languages, especially the interpreted ones that seem to get
so much interest for one reason or another. I would like to know how
they compare to each other syntax-wise and performance-wise. Since I
am new to many of them I am sure I didn't do them justice. If anyone
wants to offer any straightforward improvement to any of them,
especially the Perl, Tcl, Java, Python, and Scheme versions, please do
so. Please don't flame me if your favorite didn't fare well - this is
only a first pass and very much favors speed of numerical calculation.

The test is actually some very useful code in computer graphics: a
function to determine whether a 2D point is inside a closed 2D
polygon. The polygon need not be convex. The original C version came
from the comp.graphics.algorithms FAQ. I call the function on 12
different points, half in and half out (alternating), for 100000
iterations and sum up how many points are inside - which comes to
12*1/2*100000, or 600000, if the code is correct - just in case you
want to write a version for a language I didn't get to.
Happy computing!
MACHINE-TYPE: SPARC
SOFTWARE-VERSION: SunOS 5.4 Generic_101945-27 sun4m
native compiler          time (sec)   lang          html
---------------          ----------   ----          ----
gcc -O3 (2.7.2)               6.0     C             www.gnu.ai.mit.edu
gcc -O2 (2.7.2)               6.1     C
Franz Allegro Lisp 4.3        8.4     Common Lisp   www.franz.com
cc -O (SC3.0.1)               8.5     C
cc (SC3.0.1)                 15.4     C

interpreter
-----------
Sun's Java 1.0.2            170.0     Java          www.javasoft.com
CINT (C/C++ interpr.)       530.0     C             hpsalo.cern.ch/root/Cint
Python1.4                  1450.0     Python        www.python.org
Guile1.0                   2060.0     Scheme        www.gnu.ai.mit.edu
Tcl8.0a2                   2480.0     Tcl           sunscript.sun.com
Perl5.003                  2600.0     Perl          www.perl.com
Tcl7.4                    21350.0     Tcl
Preliminary observations:
1. Unsurprisingly, native compiled code is faster than interpreted.
2. Tcl 8.0 is much improved over Tcl 7.4.
3. Typed interpreted (Java & CINT) outperforms untyped interpreted.
4. Going into Tcl, Perl, Java, and Python almost cold - Python was
fastest to pick up on and debug; Tcl the slowest (but I used 7.4).
5. Java has no functions except methods on objects? Irritating.
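For anyone porting the test to another language, the expected per-pass
total can be checked without running the full 100000 iterations: the
twelve probe points should come out 6 inside each pass (hence 600000
overall). Here is a minimal sketch of that check, written in modern
Python for brevity; it is not one of the benchmark versions below, and
the helper name pnpoly is just reused from the thread:

```python
# Transcription of the comp.graphics.algorithms FAQ crossing test,
# used only to verify the expected count of inside points.
def pnpoly(xp, yp, x, y):
    c = False
    j = len(xp) - 1
    for i in range(len(xp)):
        # Toggle c each time a rightward ray from (x, y) crosses edge (j, i).
        # The short-circuit "and" guarantees yp[j] != yp[i] at the division.
        if (((yp[i] <= y < yp[j]) or (yp[j] <= y < yp[i])) and
                x < (xp[j] - xp[i]) * (y - yp[i]) / (yp[j] - yp[i]) + xp[i]):
            c = not c
        j = i
    return c

XP = [0.0, 1.0, 1.0, 0.0, 0.0, 1.0, -.5, -1.0, -1.0, -2.0,
      -2.5, -2.0, -1.5, -.5, 1.0, 1.0, 0.0, -.5, -1.0, -.5]
YP = [0.0, 0.0, 1.0, 1.0, 2.0, 3.0, 2.0, 3.0, 0.0, -.5,
      -1.0, -1.5, -2.0, -2.0, -1.5, -1.0, -.5, -1.0, -1.0, -.5]
POINTS = [(0.5, 0.5), (0.5, 1.5), (-.5, 1.5), (0.75, 2.25), (0.0, 2.01),
          (-.5, 2.5), (-1.0, -.5), (-1.5, .5), (-2.25, -1.0), (0.5, -.25),
          (0.5, -1.25), (-.5, -2.5)]

inside = sum(1 for (x, y) in POINTS if pnpoly(XP, YP, x, y))
```

A correct port should report `inside == 6` for one pass over the points.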
C-CINT-C-CINT-C-CINT-C-CINT-C-CINT-C-CINT-C-CINT-C-CINT-C-CINT-C-CINT-C-CINT
/*The code below is from Wm. Randolph Franklin <···@ecse.rpi.edu>
with some minor modifications for speed.
comp.graphics.algorithms FAQ
References:
[Gems IV] pp. 24-46
[O'Rourke] pp. 233-238
[Glassner:RayTracing] */
int pnpoly(int npol, double *xp, double *yp, double x, double y)
{
    int i, j, c = 0;
    for (i = 0, j = npol-1; i < npol; j = i++) {
        if ((((yp[i]<=y) && (y<yp[j])) ||
             ((yp[j]<=y) && (y<yp[i]))) &&
            (x < (xp[j] - xp[i]) * (y - yp[i]) / (yp[j] - yp[i]) + xp[i]))
            c = !c;
    }
    return c;
}
#include <stdio.h>

int main(void) {
    int npol=20, count=0;
    double xp[20]= {0.0,1.0,1.0,0.0,0.0,1.0,-.5,-1.0,-1.0,-2.0,
                    -2.5,-2.0,-1.5,-.5,1.0,1.0,0.0,-.5,-1.0,-.5};
    double yp[20]= {0.0,0.0,1.0,1.0,2.0,3.0,2.0,3.0,0.0,-.5,
                    -1.0,-1.5,-2.0,-2.0,-1.5,-1.0,-.5,-1.0,-1.0,-.5};
    int i=0;
    for(i=0;i<100000;i++) {
        if (pnpoly(npol,xp,yp,0.5,0.5)) count++;
        if (pnpoly(npol,xp,yp,0.5,1.5)) count++;
        if (pnpoly(npol,xp,yp,-.5,1.5)) count++;
        if (pnpoly(npol,xp,yp,0.75,2.25)) count++;
        if (pnpoly(npol,xp,yp,0.0,2.01)) count++;
        if (pnpoly(npol,xp,yp,-.5,2.5)) count++;
        if (pnpoly(npol,xp,yp,-1.0,-.5)) count++;
        if (pnpoly(npol,xp,yp,-1.5,.5)) count++;
        if (pnpoly(npol,xp,yp,-2.25,-1.0)) count++;
        if (pnpoly(npol,xp,yp,0.5,-.25)) count++;
        if (pnpoly(npol,xp,yp,0.5,-1.25)) count++;
        if (pnpoly(npol,xp,yp,-.5,-2.5)) count++;
    }
    printf("count %d \n", count);
    return 0;
}
LISP-LISP-LISP-LISP-LISP-LISP-LISP-LISP-LISP-LISP-LISP-LISP-LISP-LISP-LISP-LISP-LISP-LISP
(defun pnpoly (npol xp yp x y)
  (declare (optimize (speed 3) (safety 0))
           (fixnum npol)
           (double-float x y)
           (type (simple-array double-float (*)) xp yp))
  (let* ((c nil)
         (j (1- npol)))
    (declare (fixnum j))
    (dotimes (i npol c)
      (declare (fixnum i))
      (if (and (or (and (<= (aref yp i) y) (< y (aref yp j)))
                   (and (<= (aref yp j) y) (< y (aref yp i))))
               (< x (+ (aref xp i) (/ (* (- (aref xp j) (aref xp i)) (- y (aref yp i)))
                                      (- (aref yp j) (aref yp i))))))
          (setq c (not c)))
      (setq j i))))

(defun pt-poly-test ()
  (declare (optimize (speed 3) (safety 0)))
  (let ((npol 20)
        (count 0)
        (xp (make-array 20 :element-type 'double-float
                        :initial-contents '(0.0d0 1.0d0 1.0d0 0.0d0 0.0d0 1.0d0 -.5d0 -1.0d0
                                            -1.0d0 -2.0d0 -2.5d0 -2.0d0 -1.5d0 -.5d0 1.0d0
                                            1.0d0 0.0d0 -.5d0 -1.0d0 -.5d0)))
        (yp (make-array 20 :element-type 'double-float
                        :initial-contents '(0.0d0 0.0d0 1.0d0 1.0d0 2.0d0 3.0d0 2.0d0 3.0d0
                                            0.0d0 -.5d0 -1.0d0 -1.5d0 -2.0d0 -2.0d0 -1.5d0
                                            -1.0d0 -.5d0 -1.0d0 -1.0d0 -.5d0))))
    (declare (fixnum npol count)
             (type (simple-array double-float (20)) xp yp))
    (dotimes (i 100000 count)
      (if (pnpoly npol xp yp 0.5d0 0.5d0) (incf count))
      (if (pnpoly npol xp yp 0.5d0 1.5d0) (incf count))
      (if (pnpoly npol xp yp -.5d0 1.5d0) (incf count))
      (if (pnpoly npol xp yp .75d0 2.25d0) (incf count))
      (if (pnpoly npol xp yp 0.0d0 2.01d0) (incf count))
      (if (pnpoly npol xp yp -.5d0 2.5d0) (incf count))
      (if (pnpoly npol xp yp -1.0d0 -.5d0) (incf count))
      (if (pnpoly npol xp yp -1.5d0 .5d0) (incf count))
      (if (pnpoly npol xp yp -2.25d0 -1.0d0) (incf count))
      (if (pnpoly npol xp yp .5d0 -.25d0) (incf count))
      (if (pnpoly npol xp yp .5d0 -1.25d0) (incf count))
      (if (pnpoly npol xp yp -.5d0 -2.5d0) (incf count)))))
TCL-TCL-TCL-TCL-TCL-TCL-TCL-TCL-TCL-TCL-TCL-TCL-TCL-TCL-TCL-TCL-TCL-TCL-TCL-TCL-TCL-TCL-TCL
proc pnpoly {npol xp yp x y} {
    set c 0
    set j [expr $npol-1]
    for {set i 0} {$i < $npol} {incr i 1} {
        set xpi [lindex $xp $i]
        set xpj [lindex $xp $j]
        set ypi [lindex $yp $i]
        set ypj [lindex $yp $j]
        if {(((($ypi <= $y) && ($y < $ypj)) ||
              (($ypj <= $y) && ($y < $ypi)))
             && ($x < [expr (($xpj - $xpi) * ($y - $ypi) / ($ypj - $ypi) + $xpi)]))} {
            set c [expr !$c]
        }
        set j $i
    }
    return $c
}

proc pnpolytest {} {
    set count 0
    set npol 20
    set xp {0.0 1.0 1.0 0.0 0.0 1.0 -.5 -1.0 -1.0 -2.0 -2.5 -2.0 -1.5 -.5 1.0 1.0 0.0 -.5 -1.0 -.5}
    set yp {0.0 0.0 1.0 1.0 2.0 3.0 2.0 3.0 0.0 -.5 -1.0 -1.5 -2.0 -2.0 -1.5 -1.0 -.5 -1.0 -1.0 -.5}
    for {set i 0} {$i < 100000} {incr i} {
        if {[pnpoly $npol $xp $yp 0.5 0.5]} {incr count}
        if {[pnpoly $npol $xp $yp 0.5 1.5]} {incr count}
        if {[pnpoly $npol $xp $yp -.5 1.5]} {incr count}
        if {[pnpoly $npol $xp $yp 0.75 2.25]} {incr count}
        if {[pnpoly $npol $xp $yp 0.0 2.01]} {incr count}
        if {[pnpoly $npol $xp $yp -.5 2.5]} {incr count}
        if {[pnpoly $npol $xp $yp -1.0 -.5]} {incr count}
        if {[pnpoly $npol $xp $yp -1.5 0.5]} {incr count}
        if {[pnpoly $npol $xp $yp -2.25 -1.0]} {incr count}
        if {[pnpoly $npol $xp $yp 0.5 -.25]} {incr count}
        if {[pnpoly $npol $xp $yp 0.5 -1.25]} {incr count}
        if {[pnpoly $npol $xp $yp -0.5 -2.5]} {incr count}
    }
    return $count
}
PERL-PERL-PERL-PERL-PERL-PERL-PERL-PERL-PERL-PERL-PERL-PERL-PERL-PERL-PERL-PERL-PERL-PERL
sub pnpoly {
    my ($npol, $xxp, $yyp, $x, $y) = @_;
    my @xp = @$xxp;
    my @yp = @$yyp;
    my ($i, $j);
    my $c = 0;
    $j = $npol-1;
    for ($i = 0; $i < $npol; $i++) {
        if (((($yp[$i] <= $y) && ($y < $yp[$j])) ||
             (($yp[$j] <= $y) && ($y < $yp[$i]))) &&
            ($x < (($xp[$j] - $xp[$i]) * ($y - $yp[$i]) / ($yp[$j] - $yp[$i]) + $xp[$i]))) {
            $c = !$c;
        }
        $j = $i;
    }
    return $c;
}
sub polytest {
    my ($npol, $count, @xp, @yp);
    $npol = 20;
    $count = 0;
    @xp = (0.0,1.0,1.0,0.0,0.0,1.0,-.5,-1.0,-1.0,-2.0,
           -2.5,-2.0,-1.5,-.5,1.0,1.0,0.0,-.5,-1.0,-.5);
    @yp = (0.0,0.0,1.0,1.0,2.0,3.0,2.0,3.0,0.0,-.5,
           -1.0,-1.5,-2.0,-2.0,-1.5,-1.0,-.5,-1.0,-1.0,-.5);
    for (my $ii = 0; $ii < 100000; $ii++) {
        if (pnpoly($npol,\@xp,\@yp,0.5,0.5)) {$count++}
        if (pnpoly($npol,\@xp,\@yp,0.5,1.5)) {$count++}
        if (pnpoly($npol,\@xp,\@yp,-.5,1.5)) {$count++}
        if (pnpoly($npol,\@xp,\@yp,0.75,2.25)) {$count++}
        if (pnpoly($npol,\@xp,\@yp,0.0,2.01)) {$count++}
        if (pnpoly($npol,\@xp,\@yp,-.5,2.5)) {$count++}
        if (pnpoly($npol,\@xp,\@yp,-1.0,-.5)) {$count++}
        if (pnpoly($npol,\@xp,\@yp,-1.5,.5)) {$count++}
        if (pnpoly($npol,\@xp,\@yp,-2.25,-1.0)) {$count++}
        if (pnpoly($npol,\@xp,\@yp,0.5,-.25)) {$count++}
        if (pnpoly($npol,\@xp,\@yp,0.5,-1.25)) {$count++}
        if (pnpoly($npol,\@xp,\@yp,-.5,-2.5)) {$count++}
    }
    print "\n count ", $count, "\n";
}
JAVA-JAVA-JAVA-JAVA-JAVA-JAVA-JAVA-JAVA-JAVA-JAVA-JAVA-JAVA-JAVA-JAVA-JAVA-JAVA-JAVA
public class Pnpoly {
    public boolean pnpoly(int npol, double[] xp, double[] yp, double x, double y)
    {
        boolean c = false;
        for (int i = 0, j = npol-1; i < npol; j = i++) {
            if ((((yp[i]<=y) && (y<yp[j])) ||
                 ((yp[j]<=y) && (y<yp[i]))) &&
                (x < (xp[j] - xp[i]) * (y - yp[i]) / (yp[j] - yp[i]) + xp[i]))
                c = !c;
        }
        return c;
    }

    public static void main(String args[]) {
        Pnpoly pp = new Pnpoly();
        int npol=20, count=0;
        double[] xp = {0.0,1.0,1.0,0.0,0.0,1.0,-.5,-1.0,-1.0,-2.0,-2.5,-2.0,-1.5,-.5,1.0,1.0,0.0,-.5,-1.0,-.5};
        double[] yp = {0.0,0.0,1.0,1.0,2.0,3.0,2.0,3.0,0.0,-.5,-1.0,-1.5,-2.0,-2.0,-1.5,-1.0,-.5,-1.0,-1.0,-.5};
        for(int i=0;i<100000;i++) {
            if (pp.pnpoly(npol,xp,yp,0.5,0.5)) count++;
            if (pp.pnpoly(npol,xp,yp,0.5,1.5)) count++;
            if (pp.pnpoly(npol,xp,yp,-.5,1.5)) count++;
            if (pp.pnpoly(npol,xp,yp,0.75,2.25)) count++;
            if (pp.pnpoly(npol,xp,yp,0.0,2.01)) count++;
            if (pp.pnpoly(npol,xp,yp,-.5,2.5)) count++;
            if (pp.pnpoly(npol,xp,yp,-1.0,-.5)) count++;
            if (pp.pnpoly(npol,xp,yp,-1.5,.5)) count++;
            if (pp.pnpoly(npol,xp,yp,-2.25,-1.0)) count++;
            if (pp.pnpoly(npol,xp,yp,0.5,-.25)) count++;
            if (pp.pnpoly(npol,xp,yp,0.5,-1.25)) count++;
            if (pp.pnpoly(npol,xp,yp,-.5,-2.5)) count++;
        }
        System.out.println("count " + count);
    }
}
PYTHON-PYTHON-PYTHON-PYTHON-PYTHON-PYTHON-PYTHON-PYTHON-PYTHON-PYTHON-PYTHON-PYTHON
def pnpoly(npol, xp, yp, x, y):
    c = 0
    i = 0
    j = npol-1
    while i < npol:
        if ((((yp[i]<=y) and (y<yp[j])) or
             ((yp[j]<=y) and (y<yp[i]))) and
            (x < (xp[j] - xp[i]) * (y - yp[i]) / (yp[j] - yp[i]) + xp[i])):
            c = not(c)
        j = i
        i = i+1
    return c

def pnpolytest():
    npol=20
    count=0
    xp = [0.0,1.0,1.0,0.0,0.0,1.0,-.5,-1.0,-1.0,-2.0,-2.5,-2.0,-1.5,-.5,1.0,1.0,0.0,-.5,-1.0,-.5]
    yp = [0.0,0.0,1.0,1.0,2.0,3.0,2.0,3.0,0.0,-.5,-1.0,-1.5,-2.0,-2.0,-1.5,-1.0,-.5,-1.0,-1.0,-.5]
    i=0
    while i < 1000:
        if pnpoly(npol,xp,yp,0.5,0.5): count=count+1
        if pnpoly(npol,xp,yp,0.5,1.5): count=count+1
        if pnpoly(npol,xp,yp,-.5,1.5): count=count+1
        if pnpoly(npol,xp,yp,0.75,2.25): count=count+1
        if pnpoly(npol,xp,yp,0.0,2.01): count=count+1
        if pnpoly(npol,xp,yp,-.5,2.5): count=count+1
        if pnpoly(npol,xp,yp,-1.0,-.5): count=count+1
        if pnpoly(npol,xp,yp,-1.5,.5): count=count+1
        if pnpoly(npol,xp,yp,-2.25,-1.0): count=count+1
        if pnpoly(npol,xp,yp,0.5,-.25): count=count+1
        if pnpoly(npol,xp,yp,0.5,-1.25): count=count+1
        if pnpoly(npol,xp,yp,-.5,-2.5): count=count+1
        i=i+1
    print 'count '+ `count`
GUILE-SCHEME-GUILE-SCHEME-GUILE-SCHEME-GUILE-SCHEME-GUILE-SCHEME-GUILE-SCHEME-GUILE-SCHEME
(define pt-in-poly
  (lambda (npol xp yp x y)
    (let ((c #f)
          (j (- npol 1)))
      (do ((i 0 (+ i 1)))
          ((= i npol) c)
        (if (and (or (and (<= (vector-ref yp i) y) (< y (vector-ref yp j)))
                     (and (<= (vector-ref yp j) y) (< y (vector-ref yp i))))
                 (< x (+ (vector-ref xp i)
                         (/ (* (- (vector-ref xp j) (vector-ref xp i)) (- y (vector-ref yp i)))
                            (- (vector-ref yp j) (vector-ref yp i))))))
            (set! c (not c)))
        (set! j i)))))

(define pnpolytest
  (lambda ()
    (let ((npol 20)
          (count 0)
          (xp '#(0.0 1.0 1.0 0.0 0.0 1.0 -.5 -1.0 -1.0 -2.0 -2.5 -2.0 -1.5 -.5 1.0 1.0 0.0 -.5 -1.0 -.5))
          (yp '#(0.0 0.0 1.0 1.0 2.0 3.0 2.0 3.0 0.0 -.5 -1.0 -1.5 -2.0 -2.0 -1.5 -1.0 -.5 -1.0 -1.0 -.5)))
      (do ((i 0 (+ i 1)))
          ((= i 10000))
        (if (pt-in-poly npol xp yp 0.5 0.5) (set! count (+ count 1)))
        (if (pt-in-poly npol xp yp 0.5 1.5) (set! count (+ count 1)))
        (if (pt-in-poly npol xp yp -.5 1.5) (set! count (+ count 1)))
        (if (pt-in-poly npol xp yp .75 2.25) (set! count (+ count 1)))
        (if (pt-in-poly npol xp yp 0.0 2.01) (set! count (+ count 1)))
        (if (pt-in-poly npol xp yp -.5 2.5) (set! count (+ count 1)))
        (if (pt-in-poly npol xp yp -1.0 -.5) (set! count (+ count 1)))
        (if (pt-in-poly npol xp yp -1.5 .5) (set! count (+ count 1)))
        (if (pt-in-poly npol xp yp -2.25 -1.0) (set! count (+ count 1)))
        (if (pt-in-poly npol xp yp .5 -.25) (set! count (+ count 1)))
        (if (pt-in-poly npol xp yp .5 -1.25) (set! count (+ count 1)))
        (if (pt-in-poly npol xp yp -.5 -2.5) (set! count (+ count 1))))
      (display "count ") (write count) (newline)
      count)))
--
John D. Watton
Aluminum Company of America
Alcoa Technical Center
100 Technical Drive, Alcoa Center PA 15069
(412) 337-2165
From: Harvey J. Stein
Subject: Re: [long] Numerical test in C, lisp, tcl, perl, python, java, & scheme
Date:
Message-ID: <m2rafiutv9.fsf@blinky.bfr.co.il>
In article <·············@alcoa.com> John Watton <···········@alcoa.com> writes:
> What follows is the results of a small study I did to compare various
> computing languages, especially those interpreted types that seem to
> get so much interest for one reason or another. I would like to know
> how they compare to each other syntax wise and performance wise.
There are some good studies comparing language efficiency that are
available on the net. It's very difficult to do speed comparisons.
One reason is that each language is actually giving you very different
features, even on constructs which appear similar.
Here are a couple of papers that I've found interesting:
Performing Lisp - Analysis of the FANNKUCH Benchmark
Kenneth R. Anderson & Duane Rettig
(I downloaded a copy of this, but can't seem to find a URL for it
now.)
Benchmarking Implementations of Functional Languages with `Pseudoknot', a
Float-Intensive Benchmark
P. H. Hartel and M. Feeley and M. Alt and L. Augustsson and P. Baumann and M.
Beemster and E. Chailloux and C. H. Flood and W. Grieskamp and J. H. G. van
Groningen and K. Hammond and B. Hausman and M. Y. Ivory and P. Lee and X. Leroy
and S. Loosemore and N. Rojemo and M. Serrano and J.-P. Talpin and J. Thackray and
P. Weis and P. Wentworth
(Abstract at http://www.iro.umontreal.ca/~serrano/publi.html)
(Paper at http://www.iro.umontreal.ca/~serrano/diffusion/ps/jfp96.ps.gz)
In my own testing, I've basically found that the scheme compilers
bigloo, hobbit, and gambit-C can get pretty close to C speed when
given the appropriate compile time switches (such as telling them not
to do generic arithmetic). Also, stalin often outputs programs which
run *faster* than the analogous C code, and requires absolutely *no*
such hints - it deduces all type information from the Scheme code
itself.
--
Harvey J. Stein
Berger Financial Research
····@netvision.net.il
In article <·············@alcoa.com>, John Watton <···········@alcoa.com> wrote:
> native compiler          time (sec)   lang          html
> ---------------          ----------   ----          ----
> gcc -O3 (2.7.2)               6.0     C             www.gnu.ai.mit.edu
> gcc -O2 (2.7.2)               6.1     C
> Franz Allegro Lisp 4.3        8.4     Common Lisp   www.franz.com
> cc -O (SC3.0.1)               8.5     C
> cc (SC3.0.1)                 15.4     C
ACL does a pretty good job on type inferencing. Cool.
On my PowerBook (100Mhz 603e) a version with full type declarations (this
is needed, sigh) runs in Macintosh Common Lisp 4.x in about
15 seconds. Which is quite good - I think.
LispWorks on a PC seems to be much slower (just a short test). Sigh.
--
http://www.lavielle.com/~joswig/
From: Paul Meurer
Subject: Re: [long] Numerical test in C, lisp, tcl, perl, python, java, & scheme
Date:
Message-ID: <33724f6f.49028296@nntp.uib.no>
On Thu, 08 May 1997 13:41:35 +0200, ······@lavielle.com (Rainer
Joswig) wrote:
>> native compiler          time (sec)   lang          html
>> ---------------          ----------   ----          ----
>> gcc -O3 (2.7.2)               6.0     C             www.gnu.ai.mit.edu
>> gcc -O2 (2.7.2)               6.1     C
>> Franz Allegro Lisp 4.3        8.4     Common Lisp   www.franz.com
>> cc -O (SC3.0.1)               8.5     C
>> cc (SC3.0.1)                 15.4     C
>
>ACL does a pretty good job on type inferencing. Cool.
>
>On my PowerBook (100Mhz 603e) a version with full type declarations (this
>is needed, sigh) runs in Macintosh Common Lisp 4.x in about
>15 seconds. Which is quite good - I think.
>
>LispWorks on a PC seem to be much slower (just a short test). Sigh.
>
But bear in mind that you explicitly have to compile your functions in
LispWorks. My experience is that it is slightly faster than Allegro.
Here is a short comparison for the Gabriel benchmarks on a PentiumPro
200 with 64MB RAM, with no compiler optimizations. (By the way, MCL
4.0 on a PPC 604/120 compares favorably to those figures.)
LispWorks 4.0:
Boyer: 1.48s
Browse: 2.55s
Dderiv: 0.64s
Deriv: 0.56s
Destru: 0.12s
Div2: 0.50s
FFT: 1.45s
Frpoly: 1.83s
Puzzle: 0.95s
Ctak: 0.01s
Stak: 0.05s
Tak: 0.11s
Takl: 0.12s
Takr: 0.02s
Traverse: 4.72s
Triang: 6.27s
AllegroWin:
Boyer: 1.97s
Browse: 1.63s
Dderiv: 0.45s
Deriv: 0.41s
Destru: 0.17s
Div2: 0.48s
FFT: 1.86s
Frpoly: 1.99s
Puzzle: 1.19s
Ctak: 0.09s
Stak: 0.09s
Tak: 0.59s
Takl: 0.27s
Takr: 0.08s
Traverse: 3.70s
Triang: 9.92s
Paul
Paul Meurer
Norwegian Term Bank
Allegt. 27
N-5007 Bergen
Norway
In article <·················@nntp.uib.no>, ····@hd.uib.no (Paul Meurer) wrote:
> But bear in mind that you explicitly have to compile your functions in
> LispWorks. My experience is that it is slightly faster than Allegro.
"Compile Buffer" or "Compile Definition" using the Editor should
be sufficient. I hope. I'm not using a listener that often. ;-)
--
http://www.lavielle.com/~joswig/
-----BEGIN PGP SIGNED MESSAGE-----
John,
As I'm sure many people will point out to you, for languages like Python,
Perl and Tcl, if you already have a numeric function written in C, the best
way to run it is to wrap it, not rewrite it. I appended a trivial extension
module for Python written in C that calls the main program from your C
version of the test. (I renamed your "main" function to "pnpolytest".)
On my P100 box the C version ran in 9.65 seconds for a count of 600,000. The
Python version extrapolated to 1842 seconds from the reduced count of 5,000.
The precise numbers are pretty unimportant, but a speedup of around 200-to-1
is worth the effort it takes to wrap the code. These "slow" languages can
do pretty well when things are properly organized.
I wrapped this function manually because it was so trivial. If you have a
library of numeric functions you should look at SWIG, which can automatically
generate interfaces to C/C++ libraries for a number of different
languages. I can't remember the source - U of Colorado perhaps. I'm sure
you can find it with a search for SWIG at the Python Locator web page
(http://www.python.org/locator/).
People of the Perl and Tcl persuasion will likely also remind you about
compilers for those languages that take their code to C. (I'm not sure how
effective tcl2c would be here, but I presume the Perl-to-C compiler would do
a pretty fair job.)
The obvious advantage is that you can choose the most appropriate language
for each part of a larger system. There's no particular need to pick just
one.
Cheers,
- --
Skip Montanaro | Liposuction for your web site: http://www.webfast.com/
····@calendar.com | Musi-Cal: http://concerts.calendar.com/
(518)372-5583 | Python: http://www.python.org/
#include "Python.h"

static PyObject *
numtst_pnpolytest(PyObject *self, PyObject *args)   /* self is not used */
{
    pnpolytest();
    Py_INCREF(Py_None);
    return Py_None;
}

/* List of functions defined in the module */
static PyMethodDef xx_methods[] = {
    {"pnpolytest", numtst_pnpolytest, 1},
    {NULL, NULL}        /* sentinel */
};

void
initnumtst()
{
    PyObject *m, *d;

    /* Create the module and add the functions */
    m = Py_InitModule("numtst", xx_methods);

    /* Add some symbolic constants to the module */
    d = PyModule_GetDict(m);

    /* Check for errors */
    if (PyErr_Occurred())
        Py_FatalError("can't initialize module numtst");
}
-----BEGIN PGP SIGNATURE-----
Version: 2.6.2
Comment: Processed by Mailcrypt 3.4, an Emacs/PGP interface
iQCVAwUBM3EvMR+q0G630cGhAQGt2QP/fwOapRxGwwFS078Vl36qRuh+PQBWPqSR
Xrx/HGYEXIZSBJwtSh2OLt2qS4cD2FDuNTyqjCTrXdd/SxzNX1oKUdDhk3SuRanC
A3q/QakwEZ3p4sW5ZJj/UhvsTC7/WIQOGchL7Oa4hfCtrXkVMC7VwOMAGrELy8PQ
6HlilHkMzks=
=Si/K
-----END PGP SIGNATURE-----
From: Dik T. Winter
Subject: Re: [long] Numerical test in C, lisp, tcl, perl, python, java, & scheme
Date:
Message-ID: <E9w192.D4H@cwi.nl>
In article <·····················@dolphin.automatrix.com> ····@calendar.com (Skip Montanaro) writes:
> As I'm sure many people will point out to you, for languages like Python,
> Perl and Tcl, if you already have a numeric function written in C, the best
> way to run it is to wrap it, not rewrite it.
Indeed. Interpreted languages are great for prototyping. Also when speed
is not important they are great because you have ease of writing and
debugging. When speed becomes important you want to have selective parts
done in a compiled language and use wrappers around those parts.
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
From: Chris Bitmead uid(x22068)
Subject: Re: [long] Numerical test in C, lisp, tcl, perl, python, java, & scheme
Date:
Message-ID: <BITMEADC.97May13142419@Alcatel.com.au>
In article <··········@cwi.nl> ···@cwi.nl (Dik T. Winter) writes:
>Indeed. Interpreted languages are great for prototyping. Also when speed
>is not important they are great because you have ease of writing and
>debugging. When speed becomes important you want to have selective parts
>done in a compiled language and use wrappers around those parts.
And isn't it cool when your language can be compiled or interpreted!
From: ···@dante.mh.lucent.com
Subject: Re: [long] Numerical test in C, lisp, tcl, perl, python, java, & scheme
Date:
Message-ID: <863181362.3967@dejanews.com>
In article <·············@alcoa.com>,
John Watton <···········@alcoa.com> wrote:
> Happy computing!
>
> MACHINE-TYPE: SPARC
> SOFTWARE-VERSION: SunOS 5.4 Generic_101945-27 sun4m
>
> native compiler          time (sec)   lang          html
> ---------------          ----------   ----          ----
> gcc -O3 (2.7.2)               6.0     C             www.gnu.ai.mit.edu
> gcc -O2 (2.7.2)               6.1     C
> Franz Allegro Lisp 4.3        8.4     Common Lisp   www.franz.com
> cc -O (SC3.0.1)               8.5     C
> cc (SC3.0.1)                 15.4     C
>
> interpreter
> -----------
> Sun's Java 1.0.2            170.0     Java          www.javasoft.com
> CINT (C/C++ interpr.)       530.0     C             hpsalo.cern.ch/root/Cint
> Python1.4                  1450.0     Python        www.python.org
> Guile1.0                   2060.0     Scheme        www.gnu.ai.mit.edu
> Tcl8.0a2                   2480.0     Tcl           sunscript.sun.com
> Perl5.003                  2600.0     Perl          www.perl.com
> Tcl7.4                    21350.0     Tcl
Pythonic observations:
0) I'm actually rather pleased, especially since math in bare Python
is probably its weakest point, but...
1) Darn, couldn't you wait for Python 1.5, which reports as much
as a 2x speedup for some types of processing (rsn).
2) I'm not sure, but I believe your example might map nicely into
the Python Numerics extensions, where you might see nearer
compiled speed (maybe you'd consider this cheating, but I don't).
3) Comparing free Python against commercial Lisp or GPL Guile is
not too meaningful in many real-world contexts, for legal reasons.
Also note that Python runs in more environments than most of
the languages compared.
> def pnpoly(npol, xp, yp, x, y):
>     c = 0
>     i = 0
>     j = npol-1
>     while i < npol:
>         if ((((yp[i]<=y) and (y<yp[j])) or
>              ((yp[j]<=y) and (y<yp[i]))) and
>             (x < (xp[j] - xp[i]) * (y - yp[i]) / (yp[j] - yp[i]) + xp[i])):
>             c = not(c)
>         j = i
>         i = i+1
>     return c
I think it should read:

def pnpoly(npol, xp, yp, x, y):
    c = 0
    j = npol-1
    xpj = xp[j]
    ypj = yp[j]
    for i in xrange(npol):
        ypi = yp[i]
        xpi = xp[i]
        if ((ypi <= y < ypj or ypj <= y < ypi) and
                x < (xpj - xpi) * (y - ypi) / (ypj - ypi) + xpi):
            c = not c
        xpj = xpi
        ypj = ypi
    return c

It may not be too much faster, but it's easier on the eyes. I may
also have introduced typos, so please sanity check it!
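One idiom worth calling out in the rewrite above is Python's chained
comparison: `ypi <= y < ypj` means `ypi <= y and y < ypj`, with `y`
evaluated only once. A quick illustration (not from the original post):

```python
# Chained comparison: "lo <= y < hi" groups as "(lo <= y) and (y < hi)",
# with y evaluated exactly once and left-to-right short-circuiting.
y = 1.5
lo, hi = 1.0, 2.0

chained = lo <= y < hi
spelled_out = (lo <= y) and (y < hi)

assert chained == spelled_out      # the two forms always agree
assert not (hi <= y < lo)          # an empty range is simply false
```

This is why the `ypi <= y < ypj or ypj <= y < ypi` test needs no extra
parentheses around the individual comparisons.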
for the python list: will floats in 1.5 have the same malloc optimizations
as exist for integers? I think this might speed up floats in python-core
quite a bit (?). (interested? http://www.python.org)
-- Aaron Watters
===
I know, it's only rock and roll
but I like it. -- Stones
-------------------==== Posted via Deja News ====-----------------------
http://www.dejanews.com/ Search, Read, Post to Usenet
From: Bruno Bienfait
Subject: Re: [long] Numerical test in C, lisp, tcl, perl, python, java, & scheme
Date:
Message-ID: <3373CB6F.2847@helix.nih.gov>
John D. Watton wrote:
>- just in case you want to write a
> version for a language I didn't get to.
Yes. I tried Smalltalk. Here are the results:
MACHINE-TYPE: Pentium 90 Mhz
SOFTWARE-VERSION: Win 95
native compiler time (sec) lang html
--------------- ---------- ---- ----
gcc -O2 (2.7.2) 13.5 C www.gnu.ai.mit.edu
LearningWorks 198 Smalltalk www.parcplace.com
LearningWorks is based on VisualWorks 2.5. It uses a bytecode-to-machine-code
compiler (a JIT, in Java terms). Stack overflow, array bounds, and numeric
under/overflows are always checked.
Here is the Smalltalk code:
------------------------------------------------------------------
| pnpoly npol count xp yp time |
npol := 20. count := 0.
xp := #(0.0d 1.0d 1.0d 0.0d 0.0d 1.0d -0.5d -1.0d -1.0d -2.0d
        -2.5d -2.0d -1.5d -0.5d 1.0d 1.0d 0.0d -0.5d -1.0d
        -0.5d).
yp := #(0.0d 0.0d 1.0d 1.0d 2.0d 3.0d 2.0d 3.0d 0.0d -0.5d
        -1.0d -1.5d -2.0d -2.0d -1.5d -1.0d -0.5d -1.0d -1.0d
        -0.5d).
pnpoly := [ :npol :x :y |
    | j c |
    j := npol.
    c := false.
    1 to: npol do:
        [ :i |
        (
            (
                ((yp at: i) <= y and: [ y < (yp at: j)]) or:
                [(yp at: j) <= y and: [ y < (yp at: i)]]
            ) and:
            [x <
                ( (xp at: i)
                  +
                  (
                      ((xp at: j) - (xp at: i))
                      *
                      (y - (yp at: i))
                      /
                      ((yp at: j) - (yp at: i))
                  )
                )
            ]
        ) ifTrue: [ c := c not].
        j := i.
        ]. "end loop"
    c
].
time := Time millisecondsToRun: [
    100000 timesRepeat: [
        (pnpoly value: npol value: 0.5d value: 0.5d)
            ifTrue: [count := count + 1].
        (pnpoly value: npol value: 0.5d value: 1.5d)
            ifTrue: [count := count + 1].
        (pnpoly value: npol value: -0.5d value: 1.5d)
            ifTrue: [count := count + 1].
        (pnpoly value: npol value: 0.75d value: 2.25d)
            ifTrue: [count := count + 1].
        (pnpoly value: npol value: 0.0d value: 2.01d)
            ifTrue: [count := count + 1].
        (pnpoly value: npol value: -0.5d value: 2.5d)
            ifTrue: [count := count + 1].
        (pnpoly value: npol value: -1.0d value: -0.5d)
            ifTrue: [count := count + 1].
        (pnpoly value: npol value: -1.5d value: 0.5d)
            ifTrue: [count := count + 1].
        (pnpoly value: npol value: -2.25d value: -1.0d)
            ifTrue: [count := count + 1].
        (pnpoly value: npol value: 0.5d value: -0.25d)
            ifTrue: [count := count + 1].
        (pnpoly value: npol value: 0.5d value: -1.25d)
            ifTrue: [count := count + 1].
        (pnpoly value: npol value: -0.5d value: -2.5d)
            ifTrue: [count := count + 1].
    ]].
Array with: count with: time / 1000.0
---------------------------------------------------------------------
Note: the code above uses a block as a function (pnpoly). Blocks are
limited to 3 arguments in VW 2.5. The simplest thing to do is not to
pass the two arrays as arguments. I don't think this would affect the
timing.
> MACHINE-TYPE: SPARC
> SOFTWARE-VERSION: SunOS 5.4 Generic_101945-27 sun4m
>
> native compiler          time (sec)   lang          html
> ---------------          ----------   ----          ----
> gcc -O3 (2.7.2)               6.0     C             www.gnu.ai.mit.edu
> gcc -O2 (2.7.2)               6.1     C
> Franz Allegro Lisp 4.3        8.4     Common Lisp   www.franz.com
> cc -O (SC3.0.1)               8.5     C
> cc (SC3.0.1)                 15.4     C
>
> interpreter
> -----------
> Sun's Java 1.0.2            170.0     Java          www.javasoft.com
> CINT (C/C++ interpr.)       530.0     C             hpsalo.cern.ch/root/Cint
> Python1.4                  1450.0     Python        www.python.org
> Guile1.0                   2060.0     Scheme        www.gnu.ai.mit.edu
> Tcl8.0a2                   2480.0     Tcl           sunscript.sun.com
> Perl5.003                  2600.0     Perl          www.perl.com
> Tcl7.4                    21350.0     Tcl
Bruno
--
[ Bruno Bienfait, Ph. D. Laboratory of Medicinal Chemistry ]
[ National Cancer Institute ]
[ Email : ······@helix.nih.gov National Institutes of Health ]
[ Phone : (301) 402-3111 Building 37, Room 5B20 ]
[ Fax : (301) 496-5839 Bethesda Maryland 20892 , USA ]
[ WWW : http://schiele.organik.uni-erlangen.de/Bruno_Bienfait ]
From: David N. Smith
Subject: Re: [long] Numerical test in C, lisp, tcl, perl, python, java, & scheme
Date:
Message-ID: <5l7p7q$1g7v$1@watnews1.watson.ibm.com>
In article <·············@helix.nih.gov> Bruno Bienfait,
······@helix.nih.gov writes:
> [Bruno's Smalltalk pnpoly block and timing harness, quoted in full -
> snipped; see his post above]
==================================================================
This isn't how a Smalltalk programmer would write this code. It should be
a class with at least two methods, one containing the block contents and
one to call it. You repeatedly reindex the arrays and should probably
index them once and save the values. (It's even a good idea in C.)
Note that blocks can take more than three parameters, but the syntax
changes:
[ :a :b :c :d :e :f | a + b + c + d + e + f ] valueWithArguments: #(1 2 3
4 5 6 )
21
However, using blocks may overwhelm the test with extraneous overhead,
depending on the compiler, on whether variables used in the block are
defined outside it, and on what contortions are needed to invoke a
block. In this case variables are defined outside the block but used
inside it, and the resulting block might be much slower.
At a minimum, this should also be run with an empty block in order to
measure other computation costs.
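Subtracting an empty-loop baseline is a generally useful trick in any
of the languages tested. A sketch of the idea, in Python rather than
Smalltalk purely for brevity (`time_it` is an illustrative name, not
something from this thread):

```python
import time

def time_it(body, n):
    # Time n calls of body, then subtract the cost of an equally
    # long loop whose body does nothing, isolating the work itself.
    t0 = time.time()
    for _ in range(n):
        body()
    full = time.time() - t0

    t0 = time.time()
    for _ in range(n):
        pass
    empty = time.time() - t0

    # Clamp at zero: timer jitter can make the baseline exceed
    # a very cheap body.
    return max(full - empty, 0.0)

elapsed = time_it(lambda: sum(range(100)), 1000)
```

The same subtraction applies to the Smalltalk harness: run the
`timesRepeat:` loop once with an empty block and subtract that time.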
There is no reason to believe that this test measures anything useful.
Can you justify it as a useful measure of something?
==================================================================
For what it's worth, here is an IBM Smalltalk version run with a single
timing. Note that the 100000 on the #timesRepeat: was changed to 10000 to
get a quick answer for testing.
Note that in IBM Smalltalk, all floats are double, so I had to remove the
'd' from the literal floats. Also note that the first parameter to the
block is named the same as a variable in the outer scope; this is not
allowed in many Smalltalk systems and I had to change that.
==================================================================
| pnpoly npol count xp yp time|
npol := 20. count :=0.
xp := #(0.0 1.0 1.0 0.0 0.0 1.0 -0.5 -1.0 -1.0 -2.0
-2.5 -2.0 -1.5 -0.5 1.0 1.0 0.0 -0.5 -1.0 -0.5).
yp := #(0.0 0.0 1.0 1.0 2.0 3.0 2.0 3.0 0.0 -0.5
-1.0 -1.5 -2.0 -2.0 -1.5 -1.0 -0.5 -1.0 -1.0 -0.5).
pnpoly := [ :nnpol :x :y|
| j c |
j := nnpol.
c := false.
1 to: nnpol do:
[ :i |
(
(
((yp at: i) <= y and:[ y < (yp at: j)]) or:
[(yp at: j) <= y and:[ y < (yp at: i)]]
) and:
[x <
( (xp at: i)
+
(
((xp at: j) - (xp at: i))
*
(y - (yp at: i))
/
((yp at: j) - (yp at: i))
)
)
]
) ifTrue:[ c := c not].
j:= i.
]. "end loop"
c
].
time := Time millisecondsToRun: [
10000 timesRepeat:[
(pnpoly value: npol value: 0.5 value: 0.5)
ifTrue:[count := count + 1].
(pnpoly value: npol value: 0.5 value: 1.5)
ifTrue:[count := count + 1].
(pnpoly value: npol value: -0.5 value: 1.5)
ifTrue:[count := count + 1].
(pnpoly value: npol value: 0.75 value: 2.25)
ifTrue:[count := count + 1].
(pnpoly value: npol value: 0.0 value: 2.01)
ifTrue:[count := count + 1].
(pnpoly value: npol value: -0.5 value: 2.5)
ifTrue:[count := count + 1].
(pnpoly value: npol value: -1.0 value: -0.5)
ifTrue:[count := count + 1].
(pnpoly value: npol value: -1.5 value: 0.5)
ifTrue:[count := count + 1].
(pnpoly value: npol value: -2.25 value: -1.0)
ifTrue:[count := count + 1].
(pnpoly value: npol value: 0.5 value:-0.25)
ifTrue:[count := count + 1].
(pnpoly value: npol value: 0.5 value:-1.25)
ifTrue:[count := count + 1].
(pnpoly value: npol value: -0.5 value:-2.5)
ifTrue:[count := count + 1].
]].
Array with: count with: time /1000.0
(60000 19.083)
==================================================================
And here is a class-based version that seems to run with about the same
timings.
Object subclass: #Atest
instanceVariableNames: 'npol xp yp '
classVariableNames: ''
poolDictionaries: ''!
!Atest class publicMethods !
new
^ super new initialize! !
!Atest publicMethods !
initialize
npol := 20.
xp := #(0.0 1.0 1.0 0.0 0.0 1.0 -0.5 -1.0 -1.0 -2.0
-2.5 -2.0 -1.5 -0.5 1.0 1.0 0.0 -0.5 -1.0 -0.5).
yp := #(0.0 0.0 1.0 1.0 2.0 3.0 2.0 3.0 0.0 -0.5
-1.0 -1.5 -2.0 -2.0 -1.5 -1.0 -0.5 -1.0 -1.0 -0.5)!
test
| count time|
count :=0.
time := Time millisecondsToRun: [
10000 timesRepeat:[
(self value: 0.5 value: 0.5)
ifTrue:[count := count + 1].
(self value: 0.5 value: 1.5)
ifTrue:[count := count + 1].
(self value: -0.5 value: 1.5)
ifTrue:[count := count + 1].
(self value: 0.75 value: 2.25)
ifTrue:[count := count + 1].
(self value: 0.0 value: 2.01)
ifTrue:[count := count + 1].
(self value: -0.5 value: 2.5)
ifTrue:[count := count + 1].
(self value: -1.0 value: -0.5)
ifTrue:[count := count + 1].
(self value: -1.5 value: 0.5)
ifTrue:[count := count + 1].
(self value: -2.25 value: -1.0)
ifTrue:[count := count + 1].
(self value: 0.5 value:-0.25)
ifTrue:[count := count + 1].
(self value: 0.5 value:-1.25)
ifTrue:[count := count + 1].
(self value: -0.5 value:-2.5)
ifTrue:[count := count + 1].
]].
^ Array with: count with: time /1000.0
!
value: x value: y
| j c |
j := npol.
c := false.
1 to: npol do:
[ :i |
(
(
((yp at: i) <= y and:[ y < (yp at: j)]) or:
[(yp at: j) <= y and:[ y < (yp at: i)]]
) and:
[x <
( (xp at: i)
+
(
((xp at: j) - (xp at: i))
*
(y - (yp at: i))
/
((yp at: j) - (yp at: i))
)
)
]
) ifTrue:[ c := c not].
j:= i.
].
^ c
! !
Atest initializeAfterLoad!
Dave
_____________________________________________
David N. Smith
IBM T J Watson Research Center, Hawthorne, NY
Mailto: ·······@watson.ibm.com
Home Page: http://www.dnsmith.com/
_____________________________________________
Any opinions or recommendations are those
of the author and not of his employer.
From: ······@wartech.com
Subject: Re: [long] Numerical test in C, lisp, tcl, perl, python, java, & scheme
Date:
Message-ID: <5lcq3j$912@masters0.InterNex.Net>
In <·············@watnews1.watson.ibm.com>, David N. Smith <·······@watson.ibm.com> writes:
David: and all our brother programmers:
There is no single language which is ideal for everything. Smalltalk is best, in my
view, for GUIs, but it never makes a good number crunching language -- nothing
really measures up to native Assembly Language for that. Lists are best done with
LISP, and systems kernels with one of C/Modula/Oberon etc.
In the best of all possible worlds, we'd all have a chance to be fluent in many
languages, and pull out our tools -- our languages -- as an auto mechanic does
his wrenches. He doesn't use a screwdriver on a bolt head, nor a clamp tool
on the ignition.
We would do well, all of us, to keep this in mind -- that all languages that survive
have significant strengths, but that none of them will ever be all things to all
people.
I have an in-progress Smalltalk application that solves simultaneous systems of
differential equations. How fast does it run? Faster than anyone's existing product
done in C. This is because I used a C compiler, and the escape to the ASM keyword,
to get access to the floating point processor directly, and put the computations in
a DLL. It still drags a 200 MHz Pentium to its knees for several seconds if one wants
a final output correct to the third decimal place. But in native Smalltalk,
it would take not ten seconds, but probably all day.
I think your best response to this message, David, would be that problems like this
are what OSObject and OSPtr are for. Kind of a pain to set them up, but once one
takes the time, they run as fast as C (because they ARE C) or, in my case, by
using the escape to the native math co-processor, even FASTER in Smalltalk than
in C :-)
Smalltalk? Faster than C?
Yes, if you know how to do it :-)
In article <··········@masters0.InterNex.Net>, ······@wartech.com wrote:
> In <·············@watnews1.watson.ibm.com>, David N. Smith
<·······@watson.ibm.com> writes:
>
> David: and all our brother programmers:
>
> There is no single language which is ideal for everything. Smalltalk is
best, in my
> view, for GUIs, but it never makes a good number crunching language -- nothing
> really measures up to native Assembly Language for that. Lists are best
done with
> LISP, and systems kernels with one of C/Modula/Oberon etc.
Come on - you're kidding.
--
http://www.lavielle.com/~joswig/
From: Travis Griggs
Subject: Re: [long] Numerical test in C, lisp, tcl, perl, python, java, & scheme
Date:
Message-ID: <337BF01D.D31@bmi.net>
Rainer Joswig wrote:
>
> In article <··········@masters0.InterNex.Net>, ······@wartech.com wrote:
>
> > In <·············@watnews1.watson.ibm.com>, David N. Smith
> <·······@watson.ibm.com> writes:
> >
> > David: and all our brother programmers:
> >
> > There is no single language which is ideal for everything. Smalltalk is
> best, in my
> > view, for GUIs, but it never makes a good number crunching language -- nothing
> > really measures up to native Assembly Language for that. Lists are best
> done with
> > LISP, and systems kernels with one of C/Modula/Oberon etc.
>
> Come on - you're kidding.
I didn't really pay too close attention to the beginning of this thread,
but am I correct in noticing that it's about a test of which is the
quickest number basher, and Fortran was excluded? Fortran may seem like a
bygone relic of the dinosaurs to many, but in the real
science/engineering netherworlds it's still king of the hill. And
there's a reason: when it comes to raw looping, arrays, and number
bashing, it's second only to custom (and well-written) assembler. Anytime
I had to do any *serious* number bashing, I'd use nothing but Fortran. I
found this out years ago when, coming from a Fortran background, I
learned C and discovered that in number crunching, Fortran produced
programs that were as much as an order of magnitude faster than their C
counterparts.
Smalltalk *is* good for putting together GUIs (as long as they don't
require heavy-duty real-time animation and such, though if you know what
you're doing it's even good there). But I've seen many another good GUI
builder for other languages as well. Smalltalk's real power lies in its
ability to map so closely to real-world problems. I have found nothing
better for modeling real-world *things*. IMHO, that's because real-world
things are quite heterogeneous, very loosely typed, and way polymorphic.
So is ST.
--
Travis or Kerrin Griggs
Key Technology (509) 529-2161
···@bmi.net (509) 527-8743
From: ···@watson.atc.alcoa.com
Subject: Re: [long] Numerical test in C, lisp, tcl, perl, python, java, & scheme
Date:
Message-ID: <kdupvurbmpv.fsf@watson.atc.alcoa.com>
Travis Griggs <···@bmi.net> writes:
> I didn't really pay to close attention to the beginning of this thread,
> but am I correct in noticing that it's about a test of which is the
> quickest number basher and Fortran was excluded? Fortran may seem like a
> foregone system of the dinosaurs to many, but in the real
> science/engineering netherworlds, it's still king of the hill. And
> there's a reason, when it comes to raw looping, arrays, and number
> bashing, its second only to custom (and well) written assembler. Anytime
> I had to do any *serious* number bashing, I'd use nothing but Fortran. I
> found this out years ago, when coming from a Fortran background, I
> learned C, and discovered that in number crunching, Fortran produced
> programs that were as much as an order of magnitude faster than C
> counterparts.
>
I am the original poster. Please excuse the alternate email address
since my Windows NT machine died and I am now posting from my
Sparcstation.
First, the original test was not about what is the fastest number-crunching
language. It grew from a conversion of the C code for pnpoly
to Common Lisp, which I did for a project. I did my best to make the
Lisp version competitive with the C version performance-wise. I
declare success in this. Then I converted it to these other languages
out of boredom: Tcl, Perl, Scheme, Python, and Java. I wanted to get a
feel for how they compare to each other and how they compare to
languages with a native compiler. Out of the goodness of my heart I
posted these results with source so that others could benefit.
Now, what about Fortran? It usually compiles to fast code, but it is
not my experience that it has anything on C, except maybe those
exceptionally descriptive 6-character variable names :-) For you and
all the others living in the past, here are the results with a Fortran
version.
MACHINE-TYPE: SPARCstation10 100MHz 128Mb RAM
SOFTWARE-VERSION: SunOS 5.4 Generic_101945-27 sun4m
native compiler time (sec) lang html
--------------- ---------- ---- ----
gcc -O3 (2.7.2) 6.0 C www.gnu.ai.mit.edu
gcc -O2 (2.7.2) 6.1 C
f77 -O (SC3.0.1) 7.5 Fortran
Franz Allegro Lisp 4.3 8.4 Common Lisp www.franz.com
cc -O (SC3.0.1) 8.5 C
cc (SC3.0.1) 15.4 C
interpreter
-----------
Kaffe 0.8.4 77.0 Java www.kaffe.org
Sun's JDK 1.1.1 80.0 Java www.javasoft.com
Sun's JDK 1.0.2 163.0 Java www.javasoft.com
CINT (C/C++ interpr.) 530.0 C hpsalo.cern.ch/root/Cint
Python1.4 1280.0 Python www.python.org
Perl5.003 1710.0 Perl www.perl.com
Guile1.0 2060.0 Scheme www.gnu.ai.mit.edu
Tcl8.0a2 2240.0 Tcl sunscript.sun.com
Tcl7.4 24600.0 Tcl
FORTRAN-FORTRAN-FORTRAN-FORTRAN-FORTRAN-FORTRAN-FORTRAN-FORTRAN-FORTRAN-FORTRAN-FORTRAN
      logical function pnpoly(npol, xp, yp, x, y)
      integer npol
      double precision x, y, xp(*), yp(*)
      pnpoly = .false.
      j = npol
      do 10 i = 1, npol
         if ((((yp(i).le.y).and.(y.lt.yp(j))).or.
     1       ((yp(j).le.y).and.(y.lt.yp(i)))).and.
     2       (x.lt.((xp(j)-xp(i))*((y-yp(i))/(yp(j)-yp(i)))+xp(i))))
     3       pnpoly = .not.pnpoly
         j = i
   10 continue
      end

      program pntest
      integer npol, count
      logical pnpoly
      double precision xp(20), yp(20)
      data npol, count /20, 0/
      data xp / 0.0d0, 1.0d0, 1.0d0, 0.0d0, 0.0d0, 1.0d0, -0.5d0,
     1    -1.0d0, -1.0d0, -2.0d0, -2.5d0, -2.0d0, -1.5d0, -0.5d0,
     2     1.0d0, 1.0d0, 0.0d0, -0.5d0, -1.0d0, -0.5d0 /
      data yp / 0.0d0, 0.0d0, 1.0d0, 1.0d0, 2.0d0, 3.0d0, 2.0d0,
     1     3.0d0, 0.0d0, -0.5d0, -1.0d0, -1.5d0, -2.0d0, -2.0d0,
     2    -1.5d0, -1.0d0, -0.5d0, -1.0d0, -1.0d0, -0.5d0 /
      do 20 i = 1, 100000
         if (pnpoly(npol,xp,yp, 0.5d0, 0.5d0)) count = count + 1
         if (pnpoly(npol,xp,yp, 0.5d0, 1.5d0)) count = count + 1
         if (pnpoly(npol,xp,yp,-0.5d0, 1.5d0)) count = count + 1
         if (pnpoly(npol,xp,yp, 0.75d0, 2.25d0)) count = count + 1
         if (pnpoly(npol,xp,yp, 0.0d0, 2.01d0)) count = count + 1
         if (pnpoly(npol,xp,yp,-0.5d0, 2.5d0)) count = count + 1
         if (pnpoly(npol,xp,yp,-1.0d0,-0.5d0)) count = count + 1
         if (pnpoly(npol,xp,yp,-1.5d0, 0.5d0)) count = count + 1
         if (pnpoly(npol,xp,yp,-2.25d0,-1.0d0)) count = count + 1
         if (pnpoly(npol,xp,yp, 0.5d0,-0.25d0)) count = count + 1
         if (pnpoly(npol,xp,yp, 0.5d0,-1.25d0)) count = count + 1
         if (pnpoly(npol,xp,yp,-0.5d0,-2.5d0)) count = count + 1
   20 continue
      print *, 'count = ', count
      end
--
John D. Watton
Aluminum Company of America
Alcoa Technical Center
100 Technical Drive, Alcoa Center PA 15069
(412) 337-2165 (wk)
(412) 325-2814 (hm)
From: Bernhard Pfahringer
Subject: Re: [long] Numerical test in C, lisp, tcl, perl, python, java, & scheme
Date:
Message-ID: <5kua0b$75v@borg.cs.waikato.ac.nz>
John Watton <···········@alcoa.com> wrote:
>MACHINE-TYPE: SPARC
>SOFTWARE-VERSION: SunOS 5.4 Generic_101945-27 sun4m
>
>native compiler time (sec) lang html
>--------------- ---------- ---- ----
>gcc -O3 (2.7.2) 6.0 C www.gnu.ai.mit.edu
>gcc -O2 (2.7.2) 6.1 C
>Franz Allegro Lisp 4.3 8.4 Common Lisp www.franz.com
>cc -O (SC3.0.1) 8.5 C
>cc (SC3.0.1) 15.4 C
>
another data point from a linux-pc (times are user-times):
MACHINE-TYPE: Pentium
SOFTWARE-VERSION: Linux 2.0.30
native compiler time (sec) lang html
--------------- ---------- ---- ----
gcc -O3 (2.7.2) 7.1 C www.gnu.ai.mit.edu
Franz Allegro Lisp 4.3 5.5 Common Lisp www.franz.com
--
Bernhard Pfahringer
···············@cs.waikato.ac.nz http://www.ai.univie.ac.at/~bernhard
John Watton <···········@alcoa.com> wrote in article
<·············@alcoa.com>...
>The test is actually some very useful code in computer graphics. It is a
> function to determine if a 2D point is inside a closed 2D polygon.
I suppose that the conclusion to draw from the test is that if you are
doing
numerical programming you should use a good C compiler?
But then how do you pick an interpreter or component assembly
technology?
From: ···@dante.mh.lucent.com
Subject: Re: [long] Numerical test in C, lisp, tcl, perl, python, java, & scheme
Date:
Message-ID: <863437448.18465@dejanews.com>
In article <··························@gjchome.nis.newscorp.com>,
"George J. Carrette" <···@delphi.com> wrote:
>
> John Watton <···········@alcoa.com> wrote in article
> <·············@alcoa.com>...
> >The test is actually some very useful code in computer graphics. It is a
> > function to determine if a 2D point is inside a closed 2D polygon.
>
> I suppose that the conclusion to draw from the test is that if you are
> doing
> numerical programming you should use a good C compiler?
>
> But then how do you pick an interpreter or component assembly
> technology?
Oh, that's easy. Obviously you use Python! :)
-- Aaron Watters
===
Pay no attention to the man behind the curtain!
From: Howard R. Stearns
Subject: Re: [long] Numerical test in C, lisp, tcl, perl, python, java, & scheme
Date:
Message-ID: <33787E4C.59E2B600@elwoodcorp.com>
George J. Carrette wrote:
>
> John Watton <···········@alcoa.com> wrote in article
> <·············@alcoa.com>...
> >The test is actually some very useful code in computer graphics. It is a
> > function to determine if a 2D point is inside a closed 2D polygon.
>
> I suppose that the conclusion to draw from the test is that if you are
> doing
> numerical programming you should use a good C compiler?
>
> But then how do you pick an interpreter or component assembly
> technology?
>
If I understand correctly, John's (and other people's) results show C
and Common Lisp to be a few orders of magnitude faster for some tasks
than some other languages, variously described as "interpreted",
"component assembly" or "scripting" languages.
Any language (including C and Common Lisp) may or may not have features
that make it well suited for a particular task.
One feature that people sometimes ask for is the ability to evaluate
language expressions and see the results immediately, without going
through an edit/compile/re-execute cycle. Sometimes languages which
support this are called "interpreted", though this may be confusing
because there is no reason that this feature cannot be supported by
having the implementation compile the code, dynamically load it into the
running program, and execute it.
Providing the ability to evaluate language expressions opens up many
other issues:
- How are existing function definitions updated? Do old compiled
callers of newly defined functions see the new definitions?
- What happens when an object definition (in object oriented languages)
gets redefined? Do existing instances of the old object see the new
definition? What if new slots/attributes/operations are added, or what
if they disappear? What about definitions of classes that inherit from
the redefined class?
- How are new polymorphic operations added? When a new
operation/method is added, should existing code call the new method?
In some language research communities, these issues go under the heading
of "dynamic" language issues, rather than "interpreted" or "scripting"
issues. Different languages provide varying degrees of dynamism. For
example, Java provides some dynamic language features, but not others.
In choosing a language, it may be useful to note that ANSI Common Lisp
(such as the Franz Allegro Lisp implementation in John's original study)
provides a very high degree of dynamism. (I am not personally aware
of any language which provides more dynamism, or of any dynamic feature
which is not provided for in Lisp. Perhaps other people can come up
with something. It should be noted, though, that ANSI Common Lisp does
not provide standardized ways of programmatically LIMITING dynamism,
as is provided to some degree in Java.) Since Common Lisp
is as dynamic as anything else, and as fast as anything else (with some
implementations being faster than some C implementations for certain
tasks, and others being slower), it seems to me that it is the clear
choice for combining speed and dynamism. Of course there are
reasons for choosing a language other than speed and dynamism...
From: Steve Frazee
Subject: Re: [long] Numerical test in C, lisp, tcl, perl, python, java, & scheme
Date:
Message-ID: <01bc5cee$dd081ec0$LocalHost@blackhole>
The following deals only with the Java code, although it may apply to other
languages:
You may want to try the following change. I haven't had the time to try it
out, but it may make a difference; it depends how (or if) the code gets
optimized. Assign the array values that are used frequently to locals
right off the bat, and then use the variable references instead of the
array references. It may save a little time over recalculating the array
reference each time.
Original code follows: (Sorry for the reformatting, but I am stuck in my
ways and need to view it this way to keep my sanity. I am always
reformatting others' code to this style; it just seems so much more
readable to me.)
>public class Pnpoly
>{
> public boolean pnpoly(int npol, double[] xp, double[] yp, double x,
double y)
> {
> boolean c = false;
> for (int i = 0, j = npol - 1; i < npol; j = i++)
> {
> if ((((yp[i] <= y) && (y < yp[j])) ||
> ((yp[j] <= y) && (y < yp[i]))) &&
> (x < (xp[j] - xp[i]) * (y - yp[i]) / (yp[j] - yp[i]) + xp[i]))
> {
> c = !c;
> }
> }
> return c;
> }
>}
Slightly modified code follows:
public class Pnpoly
{
    public boolean pnpoly(int npol, double[] xp, double[] yp,
                          double x, double y)
    {
        boolean c = false;
        for (int i = 0, j = npol - 1; i < npol; j = i++)
        {
            double fastypi = yp[i]; // May or may not help speed of calculations
            double fastypj = yp[j]; // May or may not help speed of calculations
            double fastxpi = xp[i]; // May or may not help speed of calculations
            if ((((fastypi <= y) && (y < fastypj)) ||
                 ((fastypj <= y) && (y < fastypi))) &&
                (x < (xp[j] - fastxpi) * (y - fastypi) / (fastypj - fastypi)
                     + fastxpi))
            {
                c = !c;
            }
        }
        return c;
    }
}
<snip>
John Watton <···········@alcoa.com> wrote in article
<·············@alcoa.com>...
What follows is the results of a small study I did to compare various
computing languages, especially those interpreted types that seem to
get so much interest for one reason or another. I would like to know
how they compare to each other syntax wise and performance wise.
<snip>