This is the program:
(defvar *objects* nil)

(defclass objeto ()
  ((nombre :initform (gensym "")
           :initarg :nombre
           :reader nombre)
   (fuerza :initform 0
           :initarg :fuerza
           :reader fuerza)
   (relaciones :initform nil
               :initarg :relaciones
               :reader relaciones)
   (degeneracion :initform 0
                 :initarg :degeneracion
                 :reader degeneracion)
   (ultimo-acceso :initform (get-universal-time))))

(dotimes (n 10000000)
  (push (make-instance 'objeto) *objects*))
SBCL, after a few seconds of doing the initializations, terminates
saying the following:
Heap exhausted during garbage collection: 0 bytes available, 32
requested.
[....]
fatal error encountered in SBCL pid 4939(tid 3085137584):
Heap exhausted, game over.
Yes, that is, "game over". ;-)
I have tested the same code in CLISP and it works.
The heap was exhausted well before I ran out of memory (SBCL was using
490 MB, and my computer has 2 GB). I'm using SBCL 1.0.11 on Debian.
Is this a bug in SBCL, or is it my fault?
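For scale, a back-of-the-envelope estimate (the per-object byte counts
here are assumptions for illustration, not measured SBCL figures):

```lisp
;; Rough estimate only -- the per-object costs are assumed figures,
;; not measurements of SBCL's actual instance representation.
(let* ((instances 10000000)
       (bytes-per-instance 32)  ; assumed: instance header + 5 slots
       (bytes-per-cons 8)       ; assumed: one list link per PUSH
       (total-bytes (* instances (+ bytes-per-instance bytes-per-cons))))
  (floor total-bytes (expt 2 20)))  ; megabytes of live data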
J> The heap was exhausted well before I ran out of memory (SBCL was using
J> 490 MB, and my computer has 2 GB). I'm using SBCL 1.0.11 on Debian.
iirc it pre-allocates memory, you can increase size:
--dynamic-space-size <megabytes>
Size of the dynamic space reserved on startup in megabytes.
Default value is platform dependent.
man sbcl
btw it's not that important how much RAM your computer has, since virtual
memory is used and it doesn't distinguish between RAM and swap; however, on
32-bit platforms virtual memory is limited to less than 4 GB.
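for instance, the suggested fix might look like this (the 1600 MB figure
is just a guess for a 2 GB machine and test.lisp is a placeholder; only
the flag itself is from the manual):

```shell
# Sketch: build an SBCL command line with a larger dynamic space.
# 1600 is an assumed megabyte figure; the flag is per `man sbcl`.
CMD="sbcl --dynamic-space-size 1600 --load test.lisp"
echo "$CMD"
```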
On 20 dic, 20:59, "Alex Mizrahi" <········@users.sourceforge.net>
wrote:
> J> The heap was exhausted well before I ran out of memory (SBCL was using
> J> 490 MB, and my computer has 2 GB). I'm using SBCL 1.0.11 on Debian.
>
> iirc it pre-allocates memory, you can increase size:
>
> --dynamic-space-size <megabytes>
> Size of the dynamic space reserved on startup in megabytes.
> Default value is platform dependent.
Isn't SBCL able to adjust that automatically?
J>>> The heap was exhausted well before I ran out of memory (SBCL was using
J>>> 490 MB, and my computer has 2 GB). I'm using SBCL 1.0.11 on Debian.
??>>
??>> iirc it pre-allocates memory, you can increase size:
??>>
??>> --dynamic-space-size <megabytes>
??>> Size of the dynamic space reserved on startup in
??>> megabytes. Default value is platform dependent.
J> SBCL is not able to automatically adjust that?
i don't know, why should it?
it's _reserved_ size, not allocated -- memory is used as needed
automatically.
i think it's quite normal that there is a limit; otherwise some program bug
can render the whole system unresponsive due to swap thrashing.. do you want
this?
i know beforehand that the data i'm working with should not exceed, say, 500
MB, and if it exceeds that, it's a bug.
however, i find SBCL's reaction on hitting limits unfriendly.
i'm working with large data sets in the Java-based ABCL (Java also has a
heap-limit command-line parameter), and when i hit the limit it throws an
exception, unwinds the stack, and frees memory. so i get an error reported
when i'm doing something wrong.
of course i have to adjust the heap-size parameter according to the size of
the data i'm currently working with.
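the Java-side flag being referred to is -Xmx (the jar name and the 512 MB
value here are assumptions for illustration):

```shell
# Sketch: cap the JVM heap when running ABCL; -Xmx is Java's standard
# maximum-heap flag, the jar name and size are assumed for illustration.
CMD="java -Xmx512m -jar abcl.jar"
echo "$CMD"
```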
"Alex Mizrahi" <········@users.sourceforge.net> writes:
>
> i think it's quite normal that there is a limit, otherwise some
> program bug can blow whole system to be unresponsive due to swap
> thrashing.. do you want this?
I'd rather slow down the whole system due to thrashing than crash the
whole system...
--
Robert Uhl <http://public.xdi.org/=ruhl>
I still can't see a wasp without thinking '400K 1W.'
--Derek Potter, uk.misc
On Dec 21, 2:26 pm, Robert Uhl <·········@NOSPAMgmail.com> wrote:
> "Alex Mizrahi" <········@users.sourceforge.net> writes:
>
> > i think it's quite normal that there is a limit, otherwise some
> > program bug can blow whole system to be unresponsive due to swap
> > thrashing.. do you want this?
>
> I'd rather slow down the whole system due to thrashing than crash the
> whole system...
Limiting the virtual memory use of an application prevents both
situations. (Unless of course, you consider the application the
``whole system'', which is probably true if you're doing everything in
Lisp).
In practice there is little difference between a system crash and
thrashing. Hence the saying:
``Thrashing is virtual crashing.''
Kaz Kylheku wrote:
>
> In practice there is little difference between a system crash and
> thrashing. Hence the saying:
>
> ``Thrashing is virtual crashing.''
Just to point out (like a broken record...), there's an extra level of
horror on linux:
sysctl vm.overcommit_memory and the OOM Killer. Linux overcommits
memory by default - "out-of-box" it will typically merrily grant an
application's request for more vm space than would be possible to back
up with real backing store (ram+swap) if that space were ever to be
used. Fortunately the behaviour can be tuned by the aforementioned
sysctl.
If you want (relatively) graceful thrashing followed by an out-of-memory
failure in the application, rather than the Linux OOM Killer going on a
sudden shooting spree and perhaps killing innocent bystanders (like, oh,
sshd, perhaps locking you out of the box...), disallow or rein in overcommit
(overcommit_memory=2) - that way, the kernel won't promise sbcl more memory
than it can possibly deliver.
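As a sysctl.conf fragment (a sketch; mode 2 is the strict-accounting
setting described above):

```
# /etc/sysctl.conf: strict overcommit accounting -- allocations fail
# up front instead of feeding the OOM killer later
vm.overcommit_memory = 2
```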
Robert Uhl <·········@NOSPAMgmail.com> writes:
> "Alex Mizrahi" <········@users.sourceforge.net> writes:
>>
>> i think it's quite normal that there is a limit, otherwise some
>> program bug can blow whole system to be unresponsive due to swap
>> thrashing.. do you want this?
>
> I'd rather slow down the whole system due to thrashing than crash the
> whole system...
So, you want infinite swap space?
??>> i think it's quite normal that there is a limit, otherwise some
??>> program bug can blow whole system to be unresponsive due to swap
??>> thrashing.. do you want this?
RU> I'd rather slow down the whole system due to thrashing than crash the
RU> whole system...
do you mean "whole system" = "Lisp application"? because normally a Lisp
application cannot crash the "whole system"; it will be terminated when it
asks for too much (Windows will just deny the allocations).
we have two alternatives here: "being slow, then terminating" or
"terminating without being slow".
the "slow down and get a successful result" outcome has very low
probability, because you cannot know how much the unexpected allocations
will demand -- they can easily demand dozens of gigabytes, terabytes, or be
simply unbounded.
my experiments show that on 32-bit Debian GNU/Linux SBCL cannot allocate
more than 1.8 GB. say we have 1 GB of free RAM; we can either set the limit
to 1 GB, or to the 1.8 GB maximum.
if we set it to 1 GB, we'll get an error once the allocation exceeds that
size. if we set 1.8 GB, but the allocation requires 1.9 GB, we'll have some
30 minutes of hardcore thrashing and a totally unresponsive system, and then
the error.
what are the chances that unexpected allocations will not fit in 1 GB, but
will in 1.8 GB? i think those chances are quite low, and tormenting your
system for 30 minutes makes no sense.
actually i find terminating the Lisp application after an excessive
allocation unacceptable. in ABCL these limits work just fine -- when i have
some weird code that allocates too much, i instantly get an error and fix
it. no thrashing, no termination.
setting limits is also good for servers. once i was working with a server
with too little RAM, and was not able to log in via SSH because i had set
the heap limit too high, so my only option was rebooting it.
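the error-and-unwind behaviour described above might be sketched like
this, assuming the implementation signals heap exhaustion as a
STORAGE-CONDITION that can still be handled (SBCL's heap-exhausted error
is one, though catching it this cleanly in practice is not guaranteed):

```lisp
;; Sketch only: assumes heap exhaustion arrives as a handleable
;; STORAGE-CONDITION.  The handler unwinds the stack, the now-dead
;; list becomes garbage, and the image survives with an error report.
(handler-case
    (let ((acc '()))
      (loop (push (make-array 1000000) acc)))
  (storage-condition (c)
    (format t "allocation failed: ~a~%" c)))
```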
"Alex Mizrahi" <········@users.sourceforge.net> writes:
> ??>> i think it's quite normal that there is a limit, otherwise some
> ??>> program bug can blow whole system to be unresponsive due to swap
> ??>> thrashing.. do you want this?
>
> RU> I'd rather slow down the whole system due to thrashing than crash the
> RU> whole system...
>
> do you mean "whole system" = "Lisp application"?
Yup, because Lisp's natural state seems to be one large Lisp image for
everything.
> we have two alternatives here: "being slow, then terminate" or
> "terminate without being slow". "slow down and get succesful result"
> outcome has very low probability, because you cannot know how many
> does unexpected allocations demand -- they easily can demand dozens
> gigabytes, terabytes, or be just infinite.
I get SBCL memory messages pretty regularly. I think there's a memory
leak somewhere in my code/CLSQL/Hunchentoot, but can't figure out
exactly where. Since my code has few globals and none of 'em should be
holding onto lots of mem, I'm pretty sure that's not it--OTOH, CLSQL and
Hunchentoot are well-written whereas my own code's really not.
> my experiments show that on 32-bit Debian GNU/Linux SBCL cannot allocate
> more than 1.8 GB. say, we have 1 GB of free RAM, and can either set limit to
> 1 GB, or to 1.8 GB maximum.
> if we set it to 1 GB, we'll have error once it exceeds allocation size. if
> we set 1.8 GB, but allocation requires 1.9 GB, we'll have some 30 minutes of
> hardcore thrashing and totally unresponsive system, and then error.
Virtual memory's not really all that bad these days! Yes, if the access
patterns are bad enough, then thrashing will occur. But I regularly
allocate more memory than I have installed, and I see thrashing
_rarely_. It all depends on access patterns.
--
Robert Uhl <http://public.xdi.org/=ruhl>
A friend told me about an evening he spent drinking beer and discussing
language features to hack into C++ with a group that included Bjarne
Stroustrup. My reaction was, 'It all makes sense now. C++ looks exactly
like a language designed by drunk people in a bar.' --Steve VanDevender
* Robert Uhl<··············@latakia.dyndns.org>
Wrote on Fri, 21 Dec 2007 15:26:37 -0700:
|> i think it's quite normal that there is a limit, otherwise some
|> program bug can blow whole system to be unresponsive due to swap
|> thrashing.. do you want this?
|
| I'd rather slow down the whole system due to thrashing than crash the
| whole system...
On Linux, it means your system is essentially unusable until the errant
process runs out of swap and the OOM killer puts an end to it.
This is a real problem for leaky Lisps with generational garbage
collectors on Linux. Tuning both the kernel's scheduler and the GC have
yielded only limited improvements in responsiveness IME. [The idea was
to get enough responsiveness from the system so you can kill the process
yourself]
Note: If your machines are dedicated to just running the offending
application, it isn't much different from crashing.
I wonder how the professional lisp shops deal with this. [I suspect
they deal with it by doing rolling reboots/restarts in the grand old "If it
is good enough for Microsoft Windows it is good enough for us" tradition
:)
But that's just you restarting the system preemptively rather than after
the system crashes...]
--
Madhu
Madhu <·······@meer.net> writes:
> * Robert Uhl<··············@latakia.dyndns.org>
> Wrote on Fri, 21 Dec 2007 15:26:37 -0700:
>
> I wonder how the professional lisp shops deal with this. [I suspect
> they deal with it doing rolling reboots/restarts in the grand old "If it
> is good enough for microsoft windows it is good enough for us" tradition
> :)
Why reboot? Just exec() yourself. I have no idea whether any lisp app
actually does this, but it's not unheard of for daemon processes to do
this or something similar. See also apache httpd's strategy of forking
off child processes to do the work and shutting them down after some
number of requests or amount of time.
Joost.
M> On Linux, it means your system is essentially unusable until the errant
M> process runs out of swap and the OOM killer puts an end to it.
is it worth having swap at all nowadays?
with RAM sizes of about 2 GB typical, it's unlikely that moving unused stuff
to swap will have a big effect -- it's quite unlikely you'll find more than
100-200 MB of unused RAM. (certainly you can run a few programs and forget
about them, but why?)
but the negative impact from swap is big -- it's very likely that it will
find "unused" pages of your Lisp application, and that will cause a
significant delay during a full GC due to excessive HDD seeks.
so the negative effects outweigh the marginal benefits IMO, and people
should disable swap when they're working with Lisp or Java, unless their
situation is somehow special (they cannot get enough RAM for some reason).
On Sat, 22 Dec 2007 21:59:58 +0100, Alex Mizrahi wrote
<········@users.sourceforge.net>:
> M> On Linux, it means your system is essentially unusable until the
> M> errant process runs out of swap and the OOM killer puts an end to it.
>
> is it worth having swap at all nowadays?
yes.
> with RAM sizes of about 2 GB typical, it's unlikely that moving unused
> stuff to swap will have a big effect -- it's quite unlikely you'll find
> more than 100-200 MB of unused RAM. (certainly you can run a few programs
> and forget about them, but why?)
> but the negative impact from swap is big -- it's very likely that it will
> but negative impact from swap is big -- it's very likely that it will
You got it wrong. The positive effect of swap is big. By swapping out
unused programs/daemons/drivers, more of the space can be used for the disk
cache, which speeds up execution of routine tasks. Tons of the stuff that is
loaded at OS boot is rarely if ever used..
--------------
John Thingstad
??>> with RAM sizes of about 2 GB typical, it's unlikely that moving
??>> unused stuff to swap will have a big effect -- it's quite unlikely
??>> you'll find more than 100-200 MB of unused RAM. (certainly you can
??>> run a few programs and forget about them, but why?) but the negative
??>> impact from swap is big -- it's very likely that it will
JT> You got it wrong. The positive effect of swap is big. By swapping out
JT> unused programs/daemons/drivers, more of the space can be used for the
JT> disk cache, which speeds up execution of routine tasks. Tons of the
JT> stuff that is loaded at OS boot is rarely if ever used..
say i have 2 GB of RAM, and the stuff the OS loads is 200 MB. AT MOST i
will have 10% more space. that will not result in a 10% performance
increase -- cache increases performance only *sometimes*.
but the impact on GC time is tremendous -- i've seen the full-GC time of a
300 MB heap grow to about 30 seconds from about 1 second.
30 times slower GC for a 10% *potential* benefit -- is it worth it?
Alex Mizrahi wrote:
> but the impact on GC time is tremendous -- i've seen the full-GC time
> of a 300 MB heap grow to about 30 seconds from about 1 second.
> 30 times slower GC for a 10% *potential* benefit -- is it worth it?
I've never seen behaviour quite that bad, but anyway: vm.swappiness is
also a kernel tunable, isn't it? Might be worth playing with that a bit.
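For reference, that tunable as a sysctl.conf fragment (the value is
illustrative; lower values make the kernel less eager to swap out
application pages):

```
# /etc/sysctl.conf: reduce the kernel's eagerness to swap out
# process pages in favour of the page cache
vm.swappiness = 10
```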
Alex Mizrahi wrote:
> J> The heap was exhausted well before I ran out of memory (SBCL was using
> J> 490 MB, and my computer has 2 GB). I'm using SBCL 1.0.11 on Debian.
>
> iirc it pre-allocates memory, you can increase size:
>
> --dynamic-space-size <megabytes>
> Size of the dynamic space reserved on startup in megabytes.
> Default value is platform dependent.
>
> man sbcl
>
> btw it's not that important how much does your computer has, since virtual
> memory is used, it doesn't distinguish between RAM and swap; however on
> 32-bit platforms virt. memory is limited to less than 4 GB.
>
>
Unless you use the /3GB switch on XP and recompile SBCL with that hackery,
I don't think you can use more than 2 GB of virtual space on Windows
(32-bit).
I might be wrong, but George Neuner would know better (there was a
discussion about it here).
DmS> Unless you use the /3GB switch on XP, and recompile SBCL with that
DmS> hackery, I don't think you can use more than 2GB of virtual space on
DmS> Windows (32bit).
iirc Java was not able to allocate even a 1.5 GB heap on Windoze.
but the original post mentions Debian GNU/Linux, not Windows.