From: Aaron Sloman
Subject: Re: Object creation/deletion
Date: 
Message-ID: <1991Dec25.093606.4268@cs.bham.ac.uk>
In article <······················@microsoft.com>
    ·····@microsoft.com (Jim ADCOCK) writes:
> In article <·····@life.ai.mit.edu> ·······@rice-chex.ai.mit.edu
>   (Walter E. Gillett) writes:
> .......
> |......Lucid's
> |stuff is fairly fast for "Ephemeral" gc's, but "Dynamic" gc's mean
> |coffee time!
>

I suspect it's impossible to know what this means without knowing (a)
how large the heap was and (b) whether the machine had enough memory for
the GC to occur without paging. As the heap grows larger, scanning it
and moving things round will take longer. A full gc should not mean
"coffee time" on a reasonably configured machine with a well-designed
garbage collector, unless the heap has grown to monstrous size, and in
that case you probably don't really need all that space to be re-usable
via GC. (I appreciate that that last statement could be wrong in some
applications.)

I recently did some informal tests on the Poplog "stop and copy" garbage
collector. (Poplog provides Common Lisp, Prolog, Pop-11, Standard ML,
and an interface to X11R4). E.g. on a SPARCServer 470 with 32 MBytes
main memory, the Poplog garbage collector takes about 0.85 seconds to do
a full GC on a 2.5 Mbyte heap consisting mainly of vectors of (short)
strings. With an 8 Mbyte heap consisting mainly of highly circular
tangled lists, it takes about 6 seconds. When the heap is expanded to 15
Mbytes, still mostly tangled lists, the time taken is about 13 secs cpu
time, about 30-40 seconds real time, depending on what else is going on.

If the heap is "locked" by the user (the sort of thing you would do when
you know that most of what's currently in it is non-garbage, e.g. lots
of compiled functions and permanently required tables), then the GC
takes *under 2 seconds* for the 15 Mbyte heap.
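The effect of locking can be seen in a toy model: the collector treats
everything below a watermark as permanent and only scans what was
allocated afterwards, so GC cost is proportional to the unlocked tail
of the heap. This is an illustrative Python sketch, not Poplog's actual
interface; all the names below are made up.

```python
# Toy model of heap locking: GC work is proportional to the
# unlocked region only. Purely illustrative; not Poplog's API.

class ToyHeap:
    def __init__(self):
        self.objects = []      # allocation order
        self.locked = 0        # index below which GC never looks

    def alloc(self, obj):
        self.objects.append(obj)
        return obj

    def lock(self):
        """Treat everything allocated so far as permanent."""
        self.locked = len(self.objects)

    def collect(self, roots):
        """Scan only the unlocked tail; locked objects survive untouched."""
        live = [o for o in self.objects[self.locked:] if o in roots]
        scanned = len(self.objects) - self.locked
        self.objects = self.objects[:self.locked] + live
        return scanned         # work done, in objects examined

heap = ToyHeap()
for i in range(1000):
    heap.alloc(("permanent", i))
heap.lock()                        # e.g. after loading compiled code
for i in range(10):
    heap.alloc(("temporary", i))
work = heap.collect(roots=set())   # only the 10 unlocked objects scanned
```

This is why the locked 15 Mbyte heap collects so much faster than the
unlocked one: almost all of the heap is simply never examined.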

If paging were required because of shortage of main memory, these times
could be much larger.

When there's not enough space for the "stop and copy" garbage
collector, Poplog falls back on a non-copying GC algorithm, which
shuffles data around in place. It is slower than the copying collector
when you have enough memory, but it takes much less space, so if it
avoids paging it may be faster when you are short of memory.
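For readers unfamiliar with the trade-off: a "stop and copy" collector
moves live objects breadth-first from one semispace to another, which
needs twice the address space but touches only live data. The sketch
below is a minimal Python rendering of the classic Cheney-style scheme,
with the heap modelled as a list of cells whose fields are either
indices (pointers) or None; it is an illustration of the general
technique, not Poplog's implementation.

```python
# Minimal stop-and-copy (Cheney-style) collector over a toy heap.
# Cells are lists; an integer field is a pointer into from-space,
# None is a non-pointer field.

def stop_and_copy(from_space, roots):
    to_space = []
    forward = {}                  # from-space index -> to-space index

    def copy(i):
        if i not in forward:      # not yet evacuated: copy and record
            forward[i] = len(to_space)
            to_space.append(list(from_space[i]))
        return forward[i]

    new_roots = [copy(r) for r in roots]
    scan = 0
    while scan < len(to_space):   # Cheney scan pointer
        cell = to_space[scan]
        for k, field in enumerate(cell):
            if field is not None:             # pointer field
                cell[k] = copy(field)         # evacuate target, fix pointer
        scan += 1
    return to_space, new_roots

# Heap of 4 cells; cells 0 and 1 form a cycle, cells 2 and 3 are garbage.
heap = [[1, None], [0, None], [3, None], [2, None]]
new_heap, new_roots = stop_and_copy(heap, roots=[0])
# new_heap contains only the two live cells; the cycle is preserved.
```

Note that the garbage cells cost nothing: only live cells are copied
and scanned, which is why copying collection is fast when most of the
heap is garbage, at the price of keeping a second semispace free.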

I suspect that some lisp systems take longer than Poplog because they
try to be clever about partitioning the heap. Poplog puts everything
(lists, records, arrays, compiled functions, properties, bignums, double
floats, etc. etc.) into a single undifferentiated heap. This simplifies
store management when requirements change dynamically. Poplog also copes
with "holes" in the heap caused by memory allocated by "external"
procedures, e.g. C programs linked in to Poplog. For Poplog programs
that use "call-backs" from external procedures, you can specify that
certain structures should never be relocated by the garbage collector.
You can also attach "destroy" actions to garbage collectable items.
(E.g. if you want to tell X to remove a window from the screen whenever
a certain data structure becomes garbage, you can use a "destroy"
action.)
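Such "destroy" actions correspond to what many systems now call
finalizers. A rough Python analogue of the window example can be
written with weakref.finalize; this is only an analogy to illustrate
the idea, not Poplog's mechanism, and the names are invented.

```python
import weakref

destroyed = []                 # records which windows were torn down

class WindowHandle:
    """Stands in for a data structure tied to an X window."""
    def __init__(self, window_id):
        self.window_id = window_id

def destroy_window(window_id):
    # In the Poplog case this would tell the X server to remove the
    # window; here we just record that the action fired.
    destroyed.append(window_id)

h = WindowHandle(42)
# Run destroy_window(42) when h becomes garbage. The id is passed by
# value so the callback does not itself keep h alive.
weakref.finalize(h, destroy_window, h.window_id)
del h                          # handle is now garbage; CPython runs
                               # the finalizer immediately here
# destroyed == [42]
```

As in Poplog, the key point is that the cleanup is tied to the object's
lifetime as seen by the collector, not to any explicit call by the
programmer.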

The Poplog store management system was designed and implemented by
John Gibson, at Sussex University, where Poplog is developed.
-- 
Aaron Sloman,
School of Computer Science,
The University of Birmingham, B15 2TT, England
    EMAIL   ········@cs.bham.ac.uk