From: David Steuber
Subject: Speaking of Bignum arithmetic
Date: 
Message-ID: <m28yhngnfv.fsf@david-steuber.com>
How well do Lisp implementations do in comparison to other language
implementations?

For a more concrete example, suppose I want to calculate the
factorial of 100,000 using CMUCL and using Sun's JVM and standard
Java libraries.  How will CMUCL fare?  What if I coded it in C
compiled with GCC?

One thing I can see doing with C code is using the same fixed size
arrays to hold the values of the calculations.  So consing would not
be an issue like it is with Lisp or Java.

One real win I see for Lisp here is that for normal type integer
arithmetic, you don't have to worry about overflow.  You get a nice,
smooth transition into bignums that you do not get in C or Java.

For day to day programming, I think that is likely to be a huge win
indeed.
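
A quick sketch (mine, not from any particular implementation) of what
that looks like on the Java side, where int arithmetic silently wraps:

```java
public class OverflowDemo {
    public static void main(String[] args) {
        int n = 2147483647;            // Integer.MAX_VALUE
        System.out.println(n + 1);     // wraps silently to -2147483648
        // Common Lisp instead promotes: (+ 2147483647 1) => 2147483648
    }
}
```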

-- 
It would not be too unfair to any language to refer to Java as a
stripped down Lisp or Smalltalk with a C syntax.
--- Ken Anderson
    http://openmap.bbn.com/~kanderso/performance/java/index.html

From: Jens Axel Søgaard
Subject: Re: Speaking of Bignum arithmetic
Date: 
Message-ID: <406542f1$0$220$edfadb0f@dread11.news.tele.dk>
David Steuber wrote:

> How well do Lisp implementations do in comparison to other language
> implementations?
> 
> For a more concrete example, suppose I want to calculate the
> factorial of 100,000 using CMUCL and using Sun's JVM and standard
> Java libraries.  How will CMUCL fare?  What if I coded it in C
> compiled with GCC?

Calculating the value of 100000! is not a typical use of bignums.
Most bignum calculations are on numbers just a little greater
than the largest fixnums.

Jaffer ran the following experiments with the symbolic math
system JACAL:

     <http://swissnet.ai.mit.edu/~jaffer/CNS/DIMPA>

At the time of writing the above server doesn't respond, so here
is the Google cache link:

<http://216.239.59.104/search?q=cache:eDE3dos53uAJ:swissnet.ai.mit.edu/~jaffer/CNS/DIMPA+jaffer+bignum>

which unfortunately doesn't have the distribution graphs.

-- 
Jens Axel Søgaard
From: Tim Bradshaw
Subject: Re: Speaking of Bignum arithmetic
Date: 
Message-ID: <ey3n061gfur.fsf@cley.com>
* Jens Axel Søgaard wrote:

> Calculating the value of 100000! is not a typical use of bignums.
> Most bignum calculations are on numbers just a little greater
> than the largest fixnums.

I think this is absolutely right.  Another issue is that number
systems that support bignums natively generally also support correct
arithmetic natively (rationals &c), so applications which need that
are more likely to work.
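
To make that concrete, here is a rough sketch (my own, not from any
library) of what you have to build by hand in Java to get the exact
rational arithmetic that Lisp gives you for free.  Positive
denominators are assumed:

```java
import java.math.BigInteger;

public class Rational {
    // a/b + c/d = (a*d + c*b) / (b*d), reduced by the gcd.
    // Assumes b and d are positive.
    public static BigInteger[] add(long a, long b, long c, long d) {
        BigInteger num = BigInteger.valueOf(a).multiply(BigInteger.valueOf(d))
                .add(BigInteger.valueOf(c).multiply(BigInteger.valueOf(b)));
        BigInteger den = BigInteger.valueOf(b).multiply(BigInteger.valueOf(d));
        BigInteger g = num.gcd(den);
        return new BigInteger[] { num.divide(g), den.divide(g) };
    }

    public static void main(String[] args) {
        BigInteger[] r = add(1, 3, 1, 6);   // 1/3 + 1/6 = 1/2
        System.out.println(r[0] + "/" + r[1]);
    }
}
```

In Lisp the whole class collapses to (+ 1/3 1/6).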

--tim
From: Adam Warner
Subject: Re: Speaking of Bignum arithmetic
Date: 
Message-ID: <pan.2004.03.30.11.33.16.854941@consulting.net.nz>
Hi David Steuber,

> How well do Lisp implementations do in comparison to other language
> implementations?
> 
> For a more concrete example, suppose I want to calculate the
> factorial of 100,000 using CMUCL and using Sun's JVM and standard
> Java libraries.  How will CMUCL fare?  What if I coded it in C
> compiled with GCC?

Here's something similar. 10,000! computed 10 times. Below is the Lisp and
Java code. I'm using iteration instead of recursion.

==============
factorial.lisp
==============

(defun factorial (n)
  (declare (fixnum n))
  (let ((total 1))
    (declare (integer total))
    (loop for i from 2 to n do
	  (setf total (* total i)))
    total))

(let ((evaluate 10000) (times 10))
  (format t "Evaluation of ~A! ~A times took ~A seconds.~%"
	  evaluate times
	  (/ (- (prog1 (get-internal-real-time)
		  (dotimes (i times)
		    (factorial evaluate)))
		(get-internal-real-time))
	     internal-time-units-per-second
	     -1.0)))

==============
Factorial.java
==============

import java.math.BigInteger;

class Factorial {
    public static void main(String[] args) {
	long start_time = System.currentTimeMillis();
	long evaluate = 10000L;
        int times = 10;
	for (int i = 1; i<=times; ++i) {
	    factorial(evaluate);
	}
	float time = (System.currentTimeMillis()-start_time)/1000.0f;
	System.out.println("Evaluation of "+evaluate+"! "+times+
			   " times took "+time+" seconds.");
    }

    public static BigInteger factorial(long n) {
	BigInteger product = BigInteger.ONE;
	for (long i = 2; i<=n; ++i) {
	    product=product.multiply(BigInteger.valueOf(i));
	}
	return product;
    }
}


I iterated over a long instead of an int in the Java version because
BigInteger only appears to supply a method to convert a long to
BigInteger (not an int to BigInteger).

On my system "java Factorial" runs almost exactly three times slower than
the CMUCL compiled version. With the runtime arguments "java -server
-XX:+UseConcMarkSweepGC -Xms512m -Xmx512m Factorial" I can get this down
to twice as slow as CMUCL (using the Sun 1.5.0 beta 1 JVM).

Dealing with Java BigInteger objects is horrible compared to the
transparency of Lisp integers. Not only do you have to perform explicit
type conversions but all the infix syntax is thrown out the window.
(* product i) is elegant. product.multiply(BigInteger.valueOf(i)) isn't.

One thing to consider is that many real-world numbers will not be larger
than signed 64-bit integers (Java's long), and Java is much faster than
CMUCL at long arithmetic.

Compare (time (sum)):

(defun sum ()
  (declare (optimize (speed 3) (safety 0)))
  (let ((total 4294967296))
    (declare (type (signed-byte 64) total))
    (dotimes (i 10000000)
      (setf total (+ total i)))
    total))

With:

class Add {
    public static void main(String[] args) {
        long start_time = System.currentTimeMillis();
        long total = 4294967296L;
        for (long i = 0; i<10000000; ++i) {
            total=total+i;
        }
        System.out.println(total+" in "+
                           ((System.currentTimeMillis()-start_time)/1000.0f)
                           +" seconds.");
    }
}

On my system the above Java code is 25 times faster. Or 50 times faster
with the -server option.
 
> One thing I can see doing with C code is using the same fixed size
> arrays to hold the values of the calculations.  So consing would not
> be an issue like it is with Lisp or Java.

You cannot fit an arbitrarily sized integer into a fixed-size array.

> One real win I see for Lisp here is that for normal type integer
> arithmetic, you don't have to worry about overflow.  You get a nice,
> smooth transition into bignums that you do not get in C or Java.

Indeed. The JVM specification also mandates that no exception is thrown
for integer overflow. There isn't even an overflow flag that can be
checked. Checking may require converting each int to long, performing the
calculations, seeing whether the resulting long fits into an int and if so
converting the long back to an int.
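
A sketch of that check (one possible shape, not taken from any
particular Lisp-on-JVM implementation):

```java
public class CheckedInt {
    // Add two ints with an overflow check, by doing the arithmetic
    // in long and testing whether the result still fits in an int.
    public static int addChecked(int a, int b) {
        long r = (long) a + (long) b;
        if (r < Integer.MIN_VALUE || r > Integer.MAX_VALUE)
            throw new ArithmeticException("int overflow");
        return (int) r;
    }
}
```

Note that this is extra work on every single addition, which is exactly
the cost being described.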

It is so expensive that undeclared integer Lisp code running on a JVM
is usually implemented solely with bignum-like objects (i.e. without the
benefit of int or long data types as a fixnum representation).

Regards,
Adam
From: Pete Kirkham
Subject: Re: Speaking of Bignum arithmetic
Date: 
Message-ID: <4069cfc5$0$6542$cc9e4d1f@news-text.dial.pipex.com>
Adam Warner wrote:
> Indeed. The JVM specification also mandates that no exception is thrown
> for integer overflow. There isn't even an overflow flag that can be
> checked. Checking may require converting each int to long, performing the
> calculations, seeing whether the resulting long fits into an int and if so
> converting the long back to an int.

Following a comparison of Fibonacci numbers in Scheme vs C++, I played 
a bit with a representation of big integers different from Java's 
BigInteger, which uses long promotion.

You can represent digits using the lowest 62 bits of a long, allowing 
four additions between each overflow check, and multiplication as four 
31-bit multiplications. For bigger than 62 bits, an object holding an 
array of longs using that technique takes ~40% less time than 
BigInteger for addition and ~45% for multiplication, but only for quite 
big integer values (a few hundred digits). (see 
http://kin.sourceforge.net/bignum.html ).
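
Not Pete's actual code (that is at the URL above), but the flavour of
the trick: keeping each limb below 2^62 leaves headroom in a signed
long, so a limb addition cannot overflow and the carry is just the bits
above bit 61:

```java
public class Limb62 {
    static final long MASK62 = (1L << 62) - 1;

    // Add two limbs each held in the low 62 bits of a long.  Since
    // both are below 2^62, the signed sum stays below 2^63 and cannot
    // overflow; the carry lands above bit 61 and is cheap to extract.
    static long[] add(long x, long y) {
        long s = x + y;
        return new long[] { s & MASK62, s >>> 62 };
    }
}
```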

As I didn't actually have a need for integers that big, however fast 
they are (and pointless bit twiddling has so much entertainment value), 
I didn't get round to implementing division.  I would suspect that the 
gains are smaller on a 64-bit JVM, as there is less cost for promotion 
of int to long.

If you want something that is as fast as lisp and a full, production 
quality implementation, then use lisp.


Pete
From: Adam Warner
Subject: Re: Speaking of Bignum arithmetic
Date: 
Message-ID: <pan.2004.04.02.14.20.32.153544@consulting.net.nz>
Hi Pete Kirkham,

> Following a comparison for fibbonacci numbers of scheme vs C++, I played
> a bit with a different representation of big integers from the
> BigInteger in Java, which uses long promotion.
> 
> You can represent using the lowest 62 bits of a long, allowing 4
> additions between each overflow check, and multiplication as four 31 bit
> multiplications. For bigger than 62 bits, an object holding an array of
> longs using that technique takes ~40% less time than BigInteger for
> addition and ~45% for multiplication, but only for quite big integer
> values (a few hundred digits). (see
> http://kin.sourceforge.net/bignum.html ).

Given that you wrote your library in plain Java (as it appears Sun
Microsystems did with their BigInteger library) I find the performance I
benchmarked with the Sun library (2 to 3 times slower than compiled CMUCL
code) quite impressive. It is clear that some Java implementations are
becoming rather fast.

I suspect if one tried to similarly implement big integers in say Python
or CLISP (using 32- or 64-bit integers and without calling a C library)
the performance would be abysmal.

So while the Java Virtual Machine may not provide abstraction potential
for generating high level and high speed code to the same degree that a
native code compiling Lisp does, Sun's implementation appears to be doing
surprisingly well.

Regards,
Adam
From: Pete Kirkham
Subject: Re: Speaking of Bignum arithmetic
Date: 
Message-ID: <406db022$0$6549$cc9e4d1f@news-text.dial.pipex.com>
Adam Warner wrote:
> Given that you wrote your library in plain Java (as it appears Sun
> Microsystems did with their BigInteger library) I find the performance I
> benchmarked with the Sun library (2 to 3 times slower than compiled CMUCL
> code) quite impressive. It is clear that some Java implementations are
> becoming rather fast.

For anything without virtual dispatch or exceptions, it is often 
somewhere between MSVC++ 6 and gcc C++ in speed, assuming you use 
checked arrays in both languages (i.e. std::vector<double>). Often you 
get to spend the time saved by developing in a safe memory-management 
environment on performance tuning.

> So while the Java Virtual Machine may not provide abstraction potential
> for generating high level and high speed code to the same degree that a
> native code compiling Lisp does, Sun's implementation appears to be doing
> surprisingly well.

Yes; each time I tried to do division with the same optimisations I'd 
done with addition and multiplication, I'd get lost; from where I'm 
standing the limiting factor is the lack of any higher level features in 
the language that don't involve method calls, rather than the virtual 
machine.


Pete
From: Richard Fateman
Subject: Re: Speaking of Bignum arithmetic
Date: 
Message-ID: <uEEhc.39557$WA1.35507@newssvr29.news.prodigy.com>
If you want to compare Lisp bignums to a good implementation,
compare them to GMP.  Some Lisps actually use GMP, which, so far
as I can tell from benchmarking performed by DJ Bernstein, is
about as fast as you can get.

I've used GMP with Allegro CL, which does NOT natively use GMP.
For large enough numbers, GMP is distinctly faster than Allegro.
For smallish numbers, Allegro is faster.

RJF


Pete Kirkham wrote:
> Adam Warner wrote:
> 
>> Given that you wrote your library in plain Java (as it appears Sun
>> Microsystems did with their BigInteger library) I find the performance I
>> benchmarked with the Sun library (2 to 3 times slower than compiled CMUCL
>> code) quite impressive. It is clear that some Java implementations are
>> becoming rather fast.
> 
> 
> For anything without virtual dispatch or exceptions, it is often 
> somewhere between MSVC++ 6 and gcc C++ in speed, assuming you use 
> checked arrays in both languages (i.e. std::vector<double>). Often you 
> get to spend the time saved by developing in a safe memory-management 
> environment on performance tuning.
> 
>> So while the Java Virtual Machine may not provide abstraction potential
>> for generating high level and high speed code to the same degree that a
>> native code compiling Lisp does, Sun's implementation appears to be doing
>> surprisingly well.
> 
> 
> Yes; each time I tried to do division with the same optimisations I'd 
> done with addition and multiplication, I'd get lost; from where I'm 
> standing the limiting factor is the lack of any higher level features in 
> the language that don't involve method calls, rather than the virtual 
> machine.
> 
> 
> Pete