From: Dima Zinoviev
Subject: C/Lisp/Python/Tcl - yet another performance comparison
Date: 
Message-ID: <DMITRY.96Jan16220951@pavel.physics.sunysb.edu>
Hi folks!

The following performance comparison of several more or less popular
interpreted languages (including C++ and GNU awk as reference points)
was done using the code Chris Trimble posted a few days ago to some of
these newsgroups (available at
http://vizlab.beckman.uiuc.edu/people/trimble/all_test_code.txt; the
"system" test has been omitted), plus my own code for Lisp and Tcl
(let me know if you want it!).


---------- The machine -------------

IBM PC
Pentium 100 MHz CPU
32M main memory
1G IDE HDD
Linux 1.2.13

---------- The languages -----------

The following languages have been tested: GNU C++ 2.6.3, GNU awk 2.15,
Perl 4.0.1.8, Xlisp 2.1g (both byte-compiled and plain), Python 1.2,
CLISP 2.6 (both byte-compiled and plain) and Tcl 7.3.


---------- The tests ---------------

The following tests have been done:
ARRAY -- 1M array references
B.ARTH -- 4M base arithmetic operations (+-*/)
E.ARTH -- 4M advanced arithmetic operations (sqrt/sin)
FIO -- 200k file accesses
LOOP -- 3.2M nested loop iterations
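
For those who don't want to fetch the test code, here is a rough
Python sketch of what two of these tests measure.  This is only an
illustration of the shape of the benchmark, not Trimble's actual code:
the operation counts follow the list above, the details are guessed.

    import math, time

    def b_arth(n=4000000):
        # B.ARTH: n basic arithmetic operations (+ - * /)
        t0 = time.time()
        x = 1.0
        for i in range(n // 4):
            x = x + 1.0
            x = x - 1.0
            x = x * 2.0
            x = x / 2.0
        return time.time() - t0

    def e_arth(n=4000000):
        # E.ARTH: n advanced arithmetic operations (sqrt/sin)
        t0 = time.time()
        x = 0.0
        for i in range(n // 2):
            x = math.sqrt(2.0) + math.sin(2.0)
        return time.time() - t0

    print("B.ARTH: %.2f s" % b_arth())
    print("E.ARTH: %.2f s" % e_arth())

(Note that Trimble's version runs the work at module level rather
than inside functions, which matters for some of these languages.)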

---------- The results -------------
The upper number is the execution time, in seconds.
The lower number is the ratio of that time to the C++ time
(the "badness" of the interpreter).

------------------------------------
		ARRAY	B.ARTH	E.ARTH	FIO	LOOP	OVERALL

g++		0.44	0.17	4.4	7.78	0.49	13.3
2.6.3		1	1	1	1	1	1

gawk		26	26	39	13	24	128
2.15		59	153	8.9	1.7	49	9.6

xlisp		29	11	54	33	20	147
2.1g cmp	66	65	12.3	4.2	41	11

Perl		23	28	38	27	37	153
4.0.1.8		52	165	8.6	3.5	76	11.5

xlisp		52	87	100	41	34	314
2.1g		118	512	22.7	5.3	69.4	23.6

Python		66	104	309	69	114	662
1.2		150	612	70.2	8.9	233	50

clisp		10.9	4.1	520	215	7.8	758
2.6 cmp		24.8	24.1	118.3	27.6	15.9	57

clisp		80.5	174	757	252	249	1,513
2.6		183	1,018	172	32.4	508	114

tcl		287	898	1,041	90.3	498	2,817
7.3		652	5,282	237	11.6	1,017	211.8

--------------------------
Cheers,
Dmitry
--
     Error 666: REALITY.SYS corrupted. Reboot Universe (Y/n)?   
-------------------------------------------------------------------------------
     Dmitry Zinoviev <http://pavel.physics.sunysb.edu>, at your service!

From: Aaron Watters
Subject: Re: C/Lisp/Python/Tcl - yet another performance comparison
Date: 
Message-ID: <4ditv6$7ig@nntpa.cb.att.com>
Yes, executing loops and stuff at the python global level
is both bad style and slow.  Global python variable accesses
translate to hash-table lookups (due to python's dynamic nature).
Good coding style would do most of the work inside functions,
and in python this is much faster as well.
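
To make the point concrete, here is a minimal sketch of the effect
(my own toy example, not the benchmark itself): the same counting
loop at module level and inside a function.  In CPython, names local
to a function are resolved by position in a fixed slot table, while
globals go through a dictionary lookup on every access, so the
function version runs noticeably faster.

    import time

    t0 = time.time()
    x = 0
    for i in range(1000000):    # x and i are globals here:
        x = x + 1               # every access is a dict lookup
    t_global = time.time() - t0

    def f():
        x = 0
        for i in range(1000000):    # x and i are locals here:
            x = x + 1               # accessed by slot index
        return x

    t0 = time.time()
    f()
    t_local = time.time() - t0

    print("module level: %.2f s   in a function: %.2f s"
          % (t_global, t_local))

Exact numbers will vary by machine and interpreter version, but the
ratio is what matters.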

I put the benchmark inside a python function, and here are the
results:

dnn-pc: python bench1.py
Python performance test
Performing 1000000 index accesses in 68 seconds
Performing 1000 system calls in 28 seconds
Performing 1000000 basic arithmetic functions in 111 seconds
Performing 2000000 complex arithmetic functions in 332 seconds
Performing 2000 local file accesses in 89 seconds
Performing 20 nested loops in 120 seconds
Overall test executed in 748 seconds

dnn-pc: python bench1.py
Python performance test
Performing 1000000 index accesses in 24 seconds
Performing 1000 system calls in 28 seconds
Performing 1000000 basic arithmetic functions in 54 seconds
Performing 2000000 complex arithmetic functions in 216 seconds
Performing 2000 local file accesses in 67 seconds
Performing 20 nested loops in 44 seconds
Overall test executed in 433 seconds

748 vs 433... why, that's a 42% cut in run time!

Please use good coding style in your benchmarks and repost your
conclusions.  There are other obvious improvements on your python
style that would speed things up even more, but I'll stop now.
Didn't you know that good programmers do almost nothing at the global
level?		-Aaron Watters
===
Never confuse efficiency with a liver complaint.
  -- From Disney's "Mary Poppins"
From: Dima Zinoviev
Subject: Re: C/Lisp/Python/Tcl - yet another performance comparison
Date: 
Message-ID: <DMITRY.96Jan17150631@pavel.physics.sunysb.edu>
In article <4ditv6$7ig@nntpa.cb.att.com> Aaron Watters writes:
----- Stuff deleted ---------
>
>   Please use good coding style in your benchmarks and repost your
>   conclusions.  There are other obvious improvements on your python
>   style that would speed things up even more, but I'll stop now.
>   Didn't you know that good programmers do almost nothing at the global
>   level?		-Aaron Watters
>   ===
>   Never confuse efficiency with a liver complaint.
>     -- From Disney's "Mary Poppins"

Didn't I say that I simply used the code Chris Trimble posted to this
very newsgroup on January 12?  Well, I know that good programmers do
almost nothing at the global level.  However, if Python is intended to
be used not only as a scripting language, but as a command-line
interpreter as well, you must admit that most of the variables will be
global.  Moreover, none of the other tested languages
(Perl/Lisp/Tcl/awk) minded using global variables; only Python did :-)

--
     Error 666: REALITY.SYS corrupted. Reboot Universe (Y/n)?   
-------------------------------------------------------------------------------
     Dmitry Zinoviev <http://pavel.physics.sunysb.edu>, at your service!
From: Chris Trimble
Subject: Re: C/Lisp/Python/Tcl - yet another performance comparison
Date: 
Message-ID: <4e9jhk$m24@news1.panix.com>
In article <...@netlabs.com>
...@netlabs.com (Larry Wall) writes:

[About 1...1000000 array creations]
> Actually, it's about as optimal as you can get, given that you really
> *do* want to construct an array.  But it's far from the optimal way to
> write a loop, for sure.  This is documented.

 It's very surprising to hear that, actually, because I get the opposite
results due to swapping.  On an Indy R4400 with 64M of RAM and about 160M
of swap, my machine even had to go into *virtual swap* when I tried
creating an array in this manner -- bringing the machine to its knees. 
However, using a loop and "push"ing the array together remains very
manageable in memory and hardly swaps at all.

 Theoretically, this may be the fastest way of creating large arrays in
Perl.  But as far as I can tell from experimentation, it isn't the
fastest in practice -- unless you have a quarter gig of real memory on
the machine.
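
 To spell out the contrast for anyone who missed the earlier posts
(my paraphrase, not the exact code, and written in Python only because
that is the thread's common ground -- the memory behavior at issue is
specific to how Perl handles the flattened 1..1000000 list):

    # Style 1: build the whole array in one expression
    # (the Perl analogue is  @a = (1 .. 1000000)  )
    a = list(range(1, 1000001))

    # Style 2: grow the array element by element
    # (the Perl analogue is pushing onto @a inside a loop)
    b = []
    for i in range(1, 1000001):
        b.append(i)

    assert a == b   # both end with the same million-element list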


> I also agree with the poster who pointed out that it's not entirely
> fair to compare a heavily optimizing C or C++ compiler with a
> load-and-go compiler like Perl's or Python's (at least if you're going
> to feed them untuned code).  The users of such languages are expected
> to worry a little more about that sort of thing, if speed is what they
> happen to be optimizing for at the moment.

 It's true, but it did matter to us.  One of our considerations in
looking at these languages was being able to prototype code in the
interpreted language and then port that code to C++ for heavily used,
dynamically linked components.  In these situations, I think it does
mean something to experiment with what kind of performance you'll see
when you port to C++.  Then you can make a rough guess about whether
you'll need to optimize heavily before you even get into C++.

 - Chris

--
Chris Trimble  --  ·······@panix.com