In article <··········@euas20.eua.ericsson.se> ···@erix.ericsson.se (Joe Armstrong) writes:
In article <···············@netcom.com>, ·····@netcom.com (John Nagle) writes:
|> Lately, I've been looking at interpreters suitable for use as
|> extension languages in a control application. I need something that
|> can do computation reasonably fast, say no worse than 1/10 of the
|> speed of compiled C code. Interpreters have been written in that speed
|> range quite often in the past. But when I try a few of the interpreters
|> available on the Mac, performance is terrible.
|>
|> My basic test is to run something equivalent to
|>
|> int i; double x = 0.0;
|> for (i = 0; i < 1000000; i++) x = x + 1.0;
|>
|> The Smalltalk and Python versions are slower than the C version by factors
|> of greater than 1000. This is excessive. LISP interpreters do a bit
|> better, but still don't reach 1/10 of C. What interpreters do a decent
|> job on computation?
|>
|> John Nagle
Well, one reason could be that Smalltalk usually gets integer
arithmetic right and C doesn't. Multiply two integers in C that
overflow 32 bits and you get the wrong answer; many interpreted
languages coerce automatically to bignums and get the answers right.
Another reason could be the lack of type information. Lisps
etc. are usually dynamically typed, so the contents of a variable can
change type during execution. This involves a good deal of run-time
tag checking. C compilers can do a good job of compilation because the
programmers have to specify a lot of staggeringly boring little
details. (This is why C programming is "fun": it takes a good deal of
time, trouble, and skill to get all the details right.)
It depends what you want - speed of execution or ease of writing.
Since my machine is blindingly fast I'd go for ease of writing!
The compromise is to add type declarations in the very few tight loops
(such as the one above) where programs typically spend most of their
time but very little of their functionality. I compiled and ran the
following on a Sparc LX:
int main(void)
{
    int i;
    double x = 0.0;

    for (i = 0; i < 1000000; ++i)
        x = x + 1.0;
    return x;    /* implicitly truncated to int */
}
The user time was 0.61 sec.
I then compiled and ran the following on a Common Lisp implementation
on that same Sparc LX:
(defun my-float-loop-double ()
  (declare (optimize (speed 3) (safety 0) (debug 0) (space 0)))
  (do ((i 0 (the fixnum (1+ i)))
       (x 0d0 (the double-float (+ 1d0 x))))
      ((>= i 1000000) x)
    (declare (fixnum i) (double-float x))))
The user time was 0.23 sec. The Common Lisp code was close to three
times faster! The function returns the correct value, and its
disassembly includes the telltale "add.d %f30,%f28,%f30", branch
instructions, etc., so it's not optimizing away the loop.
--
Lawrence G. Mayka
AT&T Bell Laboratories
···@ieain.att.com
Standard disclaimer.
[Groups comp.programming, comp.lang.smalltalk, and comp.lang.python
removed.]
In article <·················@polaris.ih.att.com>, Lawrence G. Mayka <···@polaris.ih.att.com> wrote:
>(defun my-float-loop-double ()
>  (declare (optimize (speed 3) (safety 0) (debug 0) (space 0)))
>  (do ((i 0 (the fixnum (1+ i)))
>       (x 0d0 (the double-float (+ 1d0 x))))
>      ((>= i 1000000) x)
>    (declare (fixnum i) (double-float x))))
I find it disturbing that one would need any declarations for this
snippet of code, except maybe (SAFETY 0) to turn off floating point
overflow traps. Assuming 1000000 is in fixnum range, the 1+ will
always return a fixnum, and sans traps the float add will return a
double float. Heck, the whole piece of code should be constant
propagated away. Looks like our compilers have a way to go.
--David Gadbois
* David Gadbois wrote:
> wrote:
>> (defun my-float-loop-double ()
>>   (declare (optimize (speed 3) (safety 0) (debug 0) (space 0)))
>>   (do ((i 0 (the fixnum (1+ i)))
>>        (x 0d0 (the double-float (+ 1d0 x))))
>>       ((>= i 1000000) x)
>>     (declare (fixnum i) (double-float x))))
> I find it disturbing that one would need any declarations for this
> snippet of code, except maybe (SAFETY 0) to turn off floating point
> overflow traps. Assuming 1000000 is in fixnum range, the 1+ will
> always return a fixnum, and sans traps the float add will return a
> double float. Heck, the whole piece of code should be constant
> propagated away. Looks like our compilers have a way to go.
This:
(defun my-float-loop-double ()
  (declare (optimize (speed 3) (safety 0) (debug 0) (space 0)))
  (let ((x 0d0))
    (declare (double-float x))
    (dotimes (i 1000000 x)
      (setq x (1+ x)))))
compiles pretty well in CMUCL (better than the original one). Turning
off all the optimization doesn't seem to make any difference (I
presume it doesn't need to generate overflow-checking code since the
machine will trap anyway). It's not clear to me why you still need to
tell it about X's type, but you do.
--tim
In article <·················@polaris.ih.att.com> ···@polaris.ih.att.com (Lawrence G. Mayka) writes:
int main(void)
{
    int i;
    double x = 0.0;

    for (i = 0; i < 1000000; ++i)
        x = x + 1.0;
    return x;    /* implicitly truncated to int */
}
The user time was 0.61 sec.
Oops, I forgot optimization. With '-O', this C program runs in 0.15
sec.
(defun my-float-loop-double ()
  (declare (optimize (speed 3) (safety 0) (debug 0) (space 0)))
  (do ((i 0 (the fixnum (1+ i)))
       (x 0d0 (the double-float (+ 1d0 x))))
      ((>= i 1000000) x)
    (declare (fixnum i) (double-float x))))
The user time was 0.23 sec.
A slight modification to this reduces the time on Allegro Common Lisp
to 0.15 sec, the same as the C version:
(defun my-float-loop-double-faster ()
  (declare (optimize (speed 3) (safety 0) (debug 0) (space 0)))
  (let ((y 1d0))
    (declare (double-float y))
    (do ((i 0 (the fixnum (1+ i)))
         (x 0d0 (the double-float (+ y x))))
        ((>= i 1000000) x)
      (declare (fixnum i) (double-float x)))))
--
Lawrence G. Mayka
AT&T Bell Laboratories
···@ieain.att.com
Standard disclaimer.