From: lawrence.g.mayka
Subject: Real-time garbage collection on stock hardware, for Lisp
Date: 
Message-ID: <3123@cbnewsc.ATT.COM>
Is any commercial Common Lisp vendor using, or planning to use,
a genuinely real-time garbage collector on a conventional processor?
For example, has any vendor implemented the promising algorithm
described by Appel, Ellis, and Li in "Real-time Concurrent Collection
on Stock Multiprocessors," Proceedings of the SIGPLAN '88 Conference
on Programming Language Design and Implementation, SIGPLAN Notices 23:7,
July 1988?  Their algorithm makes use of per-page access protections
to reclaim memory in a truly incremental fashion.  (Despite the article's
title, the algorithm does not require a multiprocessor, but it can
handle that case, too.)  Or has anyone installed such a garbage
collector into Kyoto Common Lisp?

For that matter, has any researcher attempted to implement an algorithm
such as that of Appel, Ellis, and Li for any dialect of Lisp?


	Lawrence G. Mayka
	AT&T Bell Laboratories
	···@ihlpf.att.com
From: Paul Wilson
Subject: Re: Real-time garbage collection on stock hardware, for Lisp
Date: 
Message-ID: <11818@polya.Stanford.EDU>
You might not want to consider the Appel-Ellis-Li gc (beautiful as it is)
quite real-time, for most people's purposes.  The unit of incrementality
is sometimes rather large -- scanning whole pages of memory and copying their
referents.  In the very worst (and extremely unlikely) case you could
cause such a fault at every memory reference, until you'd done a
whole gc the hard way.  (And since there's fault-handling overhead,
it would be slower than stop-and-copy.)

This doesn't appear likely in practice, but the faults you get may
be densely clustered after a flip.  (Very much like cold-start
cache misses.)  This apparently didn't do a lot of harm for
Appel, Ellis, and Li's non-generational implementation, but it
might be more of a problem for a generational system (a higher
percentage of the data to be copied may be accessed more often,
for the frequently-collected generations).  Then again, maybe not.
If it does turn out to be a problem, you might want to stop-and-copy
your youngest generation atomically and collect the older ones
incrementally.

To decrease the severity of the slowdowns, you might also want a
small page size -- which is increasingly hard to come by.

For my own gc, I went with stop-and-copy (generational), since it's
easier and I couldn't expect the incremental one to be seriously
real-time.  Either way, I advocate using clever scheduling to
hide the potential pauses (and increase gc efficiency by scavenging
when there's not likely to be much to copy).

Still, the A-E-L algorithm is very slick, especially if you need
concurrent multiprocessor collection.  And its principles of operation
are applicable to other things as well, like faulting objects
into memory from a huge persistent store.  (And translating persistent
pointers into regular pointers at page fault time, so there's
no continual overhead in checking and dereferencing pointers.
You can have big persistent identifiers on disk, but the running
program only ever sees real pointers and can whiz along without
knowing anything about the persistent store.)

Paul R. Wilson                         
Software Systems Laboratory               lab ph.: (312) 996-9216
U. of Illin. at C. EECS Dept. (M/C 154)   ······@carcoar.stanford.edu
Box 4348   Chicago,IL 60680