From: Frank Buss
Subject: Lisp implementation problems with high memory usage
Message-ID: <kcvneopbo8wd$.1lk9fw5cqgvwb.dlg@40tude.net>
I've tried this in LispWorks 4.3.7 on Windows, with 2 GB of physical RAM and
1.6 GB available before LispWorks was started.

(proclaim '(optimize (speed 3) (debug 0) (safety 0)))

(defun test ()
  (let ((list (loop for i from 1 to 50000000 collect i)))
    (loop for i in list sum i)))

It's not very useful, but compiled, this is the result of (time (test)):
Timing the evaluation of (TEST)
Error: Failed to allocate object of size 16

The task manager shows a memory usage of 949 MB for the LispWorks process and
I have 2 GB of physical memory. How much memory do I need for a list with 50
million numbers?

Now with SBCL 1.0.7, 32 bit version, on a Debian Linux, with a 2.6 kernel:

unix:~# free
             total       used       free     shared    buffers     cached
Mem:        906732     197428     709304          0      18948     155228
-/+ buffers/cache:      23252     883480
Swap:            0          0          0
[...]
* (time (test))
Heap exhausted during garbage collection: 0 bytes available, 8 requested.
[...]
   Total bytes allocated=536856072
GC control variables:
          *GC-INHIBIT* = false
          *GC-PENDING* = true
 *STOP-FOR-GC-PENDING* = false
fatal error encountered in SBCL pid 1708(tid 1075342464):
Heap exhausted, game over.
LDB monitor
ldb>

Is this only my Linux installation, or is there a limit at about 512 MB?

Ok, let's try it on a nice dual-core AMD Opteron 64-bit Linux machine with
4 GB RAM and 2.6 GHz clock speed. The 64-bit version of SBCL 1.0.7:

* (time (test))

Evaluation took:
  11.222 seconds of real time
  8.140509 seconds of user run time
  3.072192 seconds of system run time
  [Run times include 10.049 seconds GC run time.]
  0 calls to %EVAL
  0 page faults and
  800,008,128 bytes consed.
1250000025000000

This looks very good. Let's play a bit with it:

unix:~# free
             total       used       free     shared    buffers     cached
Mem:       4034312    3997004      37308          0        128      11244
-/+ buffers/cache:    3985632      48680
Swap:      1959920    1010876     949044

Ok, I have 4 GB of RAM plus about 2 GB of swap. What does SBCL do if I try to
allocate more:

* (progn (defparameter *m* (make-array (truncate 10e9) :initial-element 0
:element-type '(unsigned-byte 8))) nil)
Heap exhausted during allocation: 8552558592 bytes available, 10000000016
requested.

Hey, it says there is 8.5 GB available :-) Let's try it:

* (progn (defparameter *m* (make-array (truncate 8e9) :initial-element 0
:element-type '(unsigned-byte 8))) nil)

After some minutes:

Killed
unix:~#

Stressing the GC a bit with this program:

(null
 (dotimes (i 100)
   (with-output-to-string (s)
     (dotimes (j 10000000)
       (write-char #\c s)))))

works on the 64 bit version of SBCL 1.0.7, but on the 32 bit version it
fails with a "Heap exhausted during allocation" (and someone on my ICFP
team reported a "violates gc invariant", but I think this was an earlier
version of SBCL). It looks like it works if a "(sb-ext:gc :full t)" is
added in the loop.

SBCL on Windows is unusable. Maybe this is a nice test case, because the test
function from above crashes every time with this message:

* (test)
fatal error encountered in SBCL pid 7808:
GC invariant lost, file "gencgc.c", line 832

LDB monitor
ldb>

The reason I posted this is that my team had many more difficulties
with SBCL while trying to write a program for the ICFP contest. It looks like
most of the problems are due to the limitations of the 32-bit versions.
Do I need a 64-bit computer to use more memory?

BTW: a more optimized C++ version is not much faster:

#include <cstdlib>
#include <iostream>

using namespace std;

struct Cons {
	long long value;
	Cons* next;
};

int main(int argc, char** argv) {
	Cons* start = new Cons();
	start->value = 1;
	Cons* next = start;
	for (int i = 2; i <= 50000000; i++) {
		next->next = new Cons();
		next = next->next;
		next->value = i;
	}
	next->next = NULL;
	next = start;
	long long sum = 0;
	while (next) {
		sum += next->value;
		next = next->next;
	}
	cout << sum << endl;
	return 0;
}

Tested on the 64 bit machine, with "g++ -O2 -o test test.cpp":

unix:~# time ./test
1250000025000000

real    0m4.094s
user    0m2.616s
sys     0m1.476s

But it works on my 32-bit Windows machine, too, unlike every Lisp
implementation I have tested on it.

Any plans to improve memory handling in SBCL on Windows?

-- 
Frank Buss, ··@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de

From: Matthias Benkard
Subject: Re: Lisp implementation problems with high memory usage
Message-ID: <1187124937.238699.296340@g4g2000hsf.googlegroups.com>
Hi,

> Is this only my Linux installation, or is there a limit at about 512 MB?

On my 32-bit x86 Debian system:

$ sbcl --dynamic-space-size 2048
This is SBCL 1.0.7, an implementation of ANSI Common Lisp.
More information about SBCL is available at <http://www.sbcl.org/>.

SBCL is free software, provided as is, with absolutely no warranty.
It is mostly in the public domain; some portions are provided under
BSD-style licenses.  See the CREDITS and COPYING files in the
distribution for more information.
* (defun test ()
    (declare (optimize speed space (safety 0) (debug 0)))
    (let ((list (loop for i from 1 to 50000000 collect i)))
      (loop for i in list sum i)))
; ...
; ... <lots of optimization notes> ...
; ...

TEST
* (time (test))

Evaluation took:
  68.98 seconds of real time
  17.325083 seconds of user run time
  2.104131 seconds of system run time
  [Run times include 13.317 seconds GC run time.]
  0 calls to %EVAL
  109 page faults and
  1,599,318,168 bytes consed.
1250000025000000

It thrashes a lot, because I don't have that much memory, but hey, the
result seems to be correct.

Bye-bye,
Matthias
From: Matthias Buelow
Subject: Re: Lisp implementation problems with high memory usage
Message-ID: <5ielmmF3pv6l0U1@mid.dfncis.de>
Frank Buss <··@frank-buss.de> wrote:

> Is this only my Linux installation, or is there a limit at about 512 MB?

What does ulimit -d say?

> BTW: a more optimized C++ version is not much faster:
[...]

Just to be pedantic: this version doesn't look very optimized to me,
since allocating cons-like structures for consing up a list can be done
a lot better than new'ing (or malloc'ing) each Cons structure as you did
(that is, by using a fast special-purpose allocator for such commonly used
cells).
From: Pummelo
Subject: Re: Lisp implementation problems with high memory usage
Message-ID: <f9v9i5$pkt$1@atlantis.news.tpi.pl>
"Frank Buss" <··@frank-buss.de> wrote:

> BTW: a more optimized C++ version is not much faster:

This version is broken - it doesn't free the allocated cells.
From: Alex Mizrahi
Subject: Re: Lisp implementation problems with high memory usage
Message-ID: <46c409e0$0$90263$14726298@news.sunsite.dk>
(message (Hello 'Frank)
(you :wrote  :on '(Tue, 14 Aug 2007 22:39:02 +0200))
(

 FB> I've tried this in LispWorks 4.3.7 on Windows, with 2 GB physical RAM,
 FB> and 1.6 GB available, before LispWorks was started.

 FB> The taskmanager shows a memory usage of 949 MB of the LispWorks process
 FB> and I have 2 GB physical memory. How many memory do I need for a list
 FB> with 50 million numbers?

it has nothing to do with physical memory -- it's an address space limitation, 
plus address space fragmentation: the address space is fragmented because lots 
of stuff floats around in it, like DLLs, so it's not possible to obtain a 
contiguous region.

 FB> Now with SBCL 1.0.7, 32 bit version, on a Debian Linux, with a 2.6
 FB> kernel:
 FB> Heap exhausted, game over.

this really freaks me out about SBCL -- i don't want to use stuff like that in 
production!

 FB> contest. Looks like most of the problems are because of the limitations
 FB> of the 32 bit versions. Do I need a 64 bit computer for using more
 FB> memory?

yes

 FB> But it works on my 32 bit Windows machine, too, unlike every other Lisp
 FB> implementation I have tested on it.

C has much simpler memory management -- it doesn't need to allocate 
contiguous regions in memory.

it would be interesting to compare Java -- it also has a GC.
a test of ABCL (CL running on the JVM) on Windows shows that it didn't do 
better than LispWorks -- although it allowed creating a heap of up to 1.5 GB, 
it failed when committing something like 1.2 GB, with a freaky error message:

Exception java.lang.OutOfMemoryError: requested 131072000 bytes for GrET* in 
C:/BUILD_AREA/jdk1.5.0/ho
tspot\src\share\vm\utilities\growableArray.cpp. Out of swap space?

just like SBCL :). perhaps Windows did something unexpected.. (i didn't know 
there was such a thing as overcommit on windows..)
but with smaller heaps -- 1 GB in my case -- it nicely signals an error.

 FB> Any plans to improve memory handling in SBCL on Windows?

i'd like them to first get rid of the "game over" thing -- or is there some 
magic switch that will tell SBCL to signal an error instead of game-overing?

)
(With-best-regards '(Alex Mizrahi) :aka 'killer_storm)
"choose no life") 
From: Espen Vestre
Subject: Re: Lisp implementation problems with high memory usage
Message-ID: <m1absreuh8.fsf@gazonk.vestre.net>
"Alex Mizrahi" <········@users.sourceforge.net> writes:

>  FB> The taskmanager shows a memory usage of 949 MB of the LispWorks process
>  FB> and I have 2 GB physical memory. How many memory do I need for a list
>  FB> with 50 million numbers?
>
> it has nothing to do with physical memory -- that's because of address space 
> limitations. and address space fragmentation -- it's fragmented because lots 
> of stuff floating there, like DLLs, and it's not possible to obtain 
> continuous space region.

This used to be very tricky with LW prior to version 4.3 (we had to
build relocated images to be able to run LW 4.2 on linux installations
with the CONFIG_2GB kernel flag set), but starting with LW 4.3 this
should really not be a big problem.  Our general experience is that LW
4.3 and 4.4 are able to grow to about 1.4 GB. LW 5 can grow to 2 GB,
though, and running a global GC on large images is much faster than in
LW 4.4 (I observe this difference every day; we still run some debian
woody servers that can't use LW 5-made deliverables because of the old
libc). 
-- 
  (espen)
From: Rainer Joswig
Subject: Re: Lisp implementation problems with high memory usage
Message-ID: <joswig-0AE120.10530716082007@news-europe.giganews.com>
In article <··············@gazonk.vestre.net>,
 Espen Vestre <·····@vestre.net> wrote:

> "Alex Mizrahi" <········@users.sourceforge.net> writes:
> 
> >  FB> The taskmanager shows a memory usage of 949 MB of the LispWorks process
> >  FB> and I have 2 GB physical memory. How many memory do I need for a list
> >  FB> with 50 million numbers?
> >
> > it has nothing to do with physical memory -- that's because of address space 
> > limitations. and address space fragmentation -- it's fragmented because lots 
> > of stuff floating there, like DLLs, and it's not possible to obtain 
> > continuous space region.
> 
> This used to be very tricky with LW prior to version 4.3 (we had to
> build relocated images to be able to run LW 4.2 on linux installation
> with the CONFIG_2GB kernel flag set), but starting with LW 4.3 this
> should really not be a big problem.  Our general experience is that LW
> 4.3 and 4.4 is able to grow to about 1.4GB. LW 5 can grow to 2GB,
> though, and running global GC on large images is much faster than in
> LW 4.4 (I observe this difference every day, we still run som debian
> woody servers that can't use LW 5-made deliverables because of the old
> libc).

Plus, LW 5 now has 64-bit versions. LW 4 was 32-bit only, IIRC.
From: Espen Vestre
Subject: Re: Lisp implementation problems with high memory usage
Message-ID: <m1643fetjf.fsf@gazonk.vestre.net>
Rainer Joswig <······@lisp.de> writes:

> Plus LW5 now has 64bit versions. LW4 was 32bit only, IIRC.

Sure, we've already moved some of our stuff to 64 bit linux servers.
The 64 bit version (*) of LW 5 is very fast and rock solid. My
most memory-hungry server app has run for 3 months non-stop now, and
hasn't leaked a single byte of memory.

(*) we use the linux version, but the mac version seems similar
-- 
  (espen)
From: Rainer Joswig
Subject: Re: Lisp implementation problems with high memory usage
Message-ID: <joswig-91F85C.11121716082007@news-europe.giganews.com>
In article <··············@gazonk.vestre.net>,
 Espen Vestre <·····@vestre.net> wrote:

> Rainer Joswig <······@lisp.de> writes:
> 
> > Plus LW5 now has 64bit versions. LW4 was 32bit only, IIRC.
> 
> Sure, we've already moved some of our stuff to 64 bit linux servers.
> The 64 bit version (*) of LW 5 is very fast and rock solid. My
> most memory-hungry server app has run for 3 months non-stop now, and
> hasn't leaked a single byte of memory.

This also got much better with LW5, right.

> (*) we use the linux version, but the mac version seems similar