From: Paul Donnelly
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <873aliq2dz.fsf@plap.localdomain>
Francogrex <······@grex.org> writes:

> This quote below is from 'Successful Lisp:How to Understand and Use
> Common Lisp' by David B. Lamkins:
> "If you have dreams of writing the next great videogame, you've
> probably already discovered that you need a language that lets you
> program "close to the machine" -- If so, Lisp will disappoint you."
> I don't understand this completely. Does this mean that:
> A- It's harder (or less practical) to, for example, write a videogame
> software using common lisp than with other languages (C, CPP etc)? or
> B- Does he mean you cannot do it as good (like it will be a crappy
> game)? or
> C- You cannot do it at all in common lisp?
> I have no intention to write the next generation videogame, just
> wondering about the quote and the capabilities of lisp and also, has
> anyone ever written a videogame purely in lisp? Thanks.

It looked at first like a variation of the "Lisp is slow" myth, or the
"GC causes unpredicatable pauses" complaint, which is technically
true, though I am skeptical about whether the pauses are long or
frequent enough to disrupt gameplay--a 3d game does not need hard
real-time, but in Chapter 1 he debunks both of these. So all I can
figure is that he means it literally, saying that one needs low-level
access to the machine.

All you *really* need is the ability to interface with OpenGL and
something to read the keyboard, mouse, and joystick. You can do that
with CFFI and C libraries, with CFFI and a shim to make those C
libraries nicer, or by embedding Lisp in a host that will manage those
libraries. You could write a nice looking game more easily in Lisp
than in any of the usual languages, offloading whatever low-level or
micro-optimized stuff you need to a support codebase written in C.

The quote doesn't make much sense to me.

From: Slobodan Blazeski
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <60f3dd62-1c07-409a-88d7-251f2e9545e3@27g2000hsf.googlegroups.com>
On Aug 6, 4:54 am, Paul Donnelly <·············@sbcglobal.net> wrote:
> Francogrex <······@grex.org> writes:
> > This quote below is from 'Successful Lisp:How to Understand and Use
> > Common Lisp' by David B. Lamkins:
> > "If you have dreams of writing the next great videogame, you've
> > probably already discovered that you need a language that lets you
> > program "close to the machine" -- If so, Lisp will disappoint you."
> > I don't understand this completely. Does this mean that:
> > A- It's harder (or less practical) to, for example, write a videogame
> > software using common lisp than with other languages (C, CPP etc)? or
> > B- Does he mean you cannot do it as good (like it will be a crappy
> > game)? or
> > C- You cannot do it at all in common lisp?
> > I have no intention to write the next generation videogame, just
> > wondering about the quote and the capabilities of lisp and also, has
> > anyone ever written a videogame purely in lisp? Thanks.
>
> It looked at first like a variation of the "Lisp is slow" myth, or the
> "GC causes unpredicatable pauses" complaint, which is technically
> true, though I am skeptical about whether the pauses are long or
> frequent enough to disrupt gameplay--a 3d game does not need hard
> real-time, but in Chapter 1 he debunks both of these. So all I can
> figure is that he means it literally, saying that one needs low-level
> access to the machine.
>
> All you *really* need is the ability to interface with OpenGL and
> something to read the keyboard, mouse, and joystick. You can do that
> with CFFI and C libraries, with CFFI and a shim to make those C
> libraries nicer, or by embedding Lisp in a host that will manage those
> libraries. You could write a nice looking game more easily in Lisp
> than in any of the usual languages, offloading whatever low-level or
> micro-optimized stuff you need to a support codebase written in C.


I'm currently working on edi, my sort-of-interpreter for my sort-of-
lisp that's aimed toward working with DirectX.
It'll be interesthing to see how fast will it go without
optimizations. I believe that most of the time is actually spent in
the GPU so as soon I finish support for continuations, unification and
array processing will give it some benchmarking.

So it'll either show that:
A Speed difference is not that much
B I'm a lousy implementer :)

cheers
bobi
From: Pascal J. Bourguignon
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <87zlnqr9f6.fsf@hubble.informatimago.com>
Paul Donnelly <·············@sbcglobal.net> writes:

> Francogrex <······@grex.org> writes:
>
>> This quote below is from 'Successful Lisp:How to Understand and Use
>> Common Lisp' by David B. Lamkins:
>> "If you have dreams of writing the next great videogame, you've
>> probably already discovered that you need a language that lets you
>> program "close to the machine" -- If so, Lisp will disappoint you."
>> I don't understand this completely. Does this mean that:
>> A- It's harder (or less practical) to, for example, write a videogame
>> software using common lisp than with other languages (C, CPP etc)? or
>> B- Does he mean you cannot do it as good (like it will be a crappy
>> game)? or
>> C- You cannot do it at all in common lisp?
>> I have no intention to write the next generation videogame, just
>> wondering about the quote and the capabilities of lisp and also, has
>> anyone ever written a videogame purely in lisp? Thanks.

Somebody wrote a system, including device drivers, purely in lisp. Is
that close enough to the hardware?  (eg. the various Lisp Machines,
Movitz, and probably more I don't know about).

> It looked at first like a variation of the "Lisp is slow" myth, or the
> "GC causes unpredicatable pauses" complaint, which is technically
> true, though I am skeptical about whether the pauses are long or
> frequent enough to disrupt gameplay--a 3d game does not need hard
> real-time, but in Chapter 1 he debunks both of these. So all I can
> figure is that he means it literally, saying that one needs low-level
> access to the machine.

You may need low level access to the machine.  For example, you may
need to compile some rendering loop into your graphic card processor
code (yay! You've got Lisp, the best programming language to write
compilers in, to write your graphic coprocessor compiler), and then
you may need to send the binary code to the graphic card.  For this,
the package COMMON-LISP won't be of much help, but in most
implementations there are extensions that allow you to do it, or you
can add extensions to do it.


> All you *really* need is the ability to interface with OpenGL and
> something to read the keyboard, mouse, and joystick. You can do that
> with CFFI and C libraries, with CFFI and a shim to make those C
> libraries nicer, or by embedding Lisp in a host that will manage those
> libraries. You could write a nice looking game more easily in Lisp
> than in any of the usual languages, offloading whatever low-level or
> micro-optimized stuff you need to a support codebase written in C.
>
> The quote doesn't make much sense to me.

Indeed.  Sometimes lispers still get contamined by anti-lisp ideas.

-- 
__Pascal Bourguignon__                     http://www.informatimago.com/
From: Vend
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <e5e87388-c09b-40db-8d1d-625787a7e616@26g2000hsk.googlegroups.com>
On 6 Ago, 04:54, Paul Donnelly <·············@sbcglobal.net> wrote:
> Francogrex <······@grex.org> writes:
> > This quote below is from 'Successful Lisp:How to Understand and Use
> > Common Lisp' by David B. Lamkins:
> > "If you have dreams of writing the next great videogame, you've
> > probably already discovered that you need a language that lets you
> > program "close to the machine" -- If so, Lisp will disappoint you."
> > I don't understand this completely. Does this mean that:
> > A- It's harder (or less practical) to, for example, write a videogame
> > software using common lisp than with other languages (C, CPP etc)? or
> > B- Does he mean you cannot do it as good (like it will be a crappy
> > game)? or
> > C- You cannot do it at all in common lisp?
> > I have no intention to write the next generation videogame, just
> > wondering about the quote and the capabilities of lisp and also, has
> > anyone ever written a videogame purely in lisp? Thanks.
>
> It looked at first like a variation of the "Lisp is slow" myth, or the
> "GC causes unpredicatable pauses" complaint, which is technically
> true, though I am skeptical about whether the pauses are long or
> frequent enough to disrupt gameplay--a 3d game does not need hard
> real-time, but in Chapter 1 he debunks both of these.

But in some kinds of videogames a 1-2 seconds pause is unacceptable.

If I understand correctly, garbage collectors with low pause times
suitable for realtime exist, but they impose a performance loss, so
maybe they are unsuitable for modern games.

> So all I can
> figure is that he means it literally, saying that one needs low-level
> access to the machine.
>
> All you *really* need is the ability to interface with OpenGL and
> something to read the keyboard, mouse, and joystick. You can do that
> with CFFI and C libraries, with CFFI and a shim to make those C
> libraries nicer, or by embedding Lisp in a host that will manage those
> libraries. You could write a nice looking game more easily in Lisp
> than in any of the usual languages, offloading whatever low-level or
> micro-optimized stuff you need to a support codebase written in C.
>
> The quote doesn't make much sense to me.
From: Tamas K Papp
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <6g2g30Fdrc7mU2@mid.individual.net>
On Thu, 07 Aug 2008 23:26:16 -0700, Vend wrote:

> But in some kinds of videogames a 1-2 seconds pause is unacceptable.

I have never even seen a pause that long because of garbage collection.  

Tamas
From: Espen Vestre
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <m1hc9vls5i.fsf@gazonk.vestre.net>
Tamas K Papp <······@gmail.com> writes:

>> But in some kinds of videogames a 1-2 seconds pause is unacceptable.
>
> I have never even seen a pause that long because of garbage collection.  

You haven't seen much - I have seen a 16 hours pause ;-)

My most memory-hungry server program (written in 64-bits LW for linux)
uses 7 seconds to do a global GC, reducing its allocated memory from
5,2 GB to 3,8 GB. But this is a scheduled global gc for a long-running
server, not a normal automatic GC. 1-2 seconds of pause in a videogame
sounds very unlikely with modern lisps on modern hardware, you'll
probably rarely see more than 1/10 second.
-- 
  (espen)
From: Vend
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <40aee596-fc68-4d3a-b8ba-4908ddc7fcbf@r66g2000hsg.googlegroups.com>
On 8 Ago, 12:26, Espen Vestre <·····@vestre.net> wrote:
> Tamas K Papp <······@gmail.com> writes:
>
> >> But in some kinds of videogames a 1-2 seconds pause is unacceptable.
>
> > I have never even seen a pause that long because of garbage collection.
>
> You haven't seen much - I have seen a 16 hours pause ;-)
>
> My most memory-hungry server program (written in 64-bits LW for linux)
> uses 7 seconds to do a global GC, reducing its allocated memory from
> 5,2 GB to 3,8 GB. But this is a scheduled global gc for a long-running
> server, not a normal automatic GC. 1-2 seconds of pause in a videogame
> sounds very unlikely with modern lisps on modern hardware, you'll
> probably rarely see more than 1/10 second.

The professors at my college taught Robotics in Java, but advised
against using it to program actual robot. Apparently they did so and
the robot during a demonstration embarrassingly bumped into people as
it performed GC.
From: Vend
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <7fde655d-dcc5-496c-8390-853045d84af1@m36g2000hse.googlegroups.com>
On 8 Ago, 17:21, Vend <······@virgilio.it> wrote:
> On 8 Ago, 12:26, Espen Vestre <·····@vestre.net> wrote:
>
> > Tamas K Papp <······@gmail.com> writes:
>
> > >> But in some kinds of videogames a 1-2 seconds pause is unacceptable.
>
> > > I have never even seen a pause that long because of garbage collection.
>
> > You haven't seen much - I have seen a 16 hours pause ;-)
>
> > My most memory-hungry server program (written in 64-bits LW for linux)
> > uses 7 seconds to do a global GC, reducing its allocated memory from
> > 5,2 GB to 3,8 GB. But this is a scheduled global gc for a long-running
> > server, not a normal automatic GC. 1-2 seconds of pause in a videogame
> > sounds very unlikely with modern lisps on modern hardware, you'll
> > probably rarely see more than 1/10 second.
>
> The professors at my college taught Robotics in Java, but advised
> against using it to program actual robot. Apparently they did so and
> the robot during a demonstration embarrassingly bumped into people as
> it performed GC.

But in fairness this was years ago, so maybe it was an old JVM with a
stop-the-world collector.
From: ········@gmail.com
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <04c610bf-173a-4acc-902f-aa58b7d49c75@34g2000hsh.googlegroups.com>
On Aug 8, 11:25 am, Vend <······@virgilio.it> wrote:

> But in fairness this was years ago, so maybe it was an old JVM with a
> stop-the-world collector.

Such JVMs are still out there, though, in limited machines.  My
cellphone's J2ME has one; I can tell because I get significant GC
pauses (seconds) in at least one of the games I have on it.

My worst GC experience was a few jobs back, writing a compiler in
Java.  Until we got the memory usage down, the JVM required a heap of
about 1.3GB.  I was working from home one night, and discovered that a
20-minute compile had turned into 2 hours.  Turned out that (duh) GC
was interacting badly with virtual memory on my 512MB box.  It was JDK
1.3, and it apparently did use stop-the-world.

(On the plus side, it meant I could get my boss to buy me an extra gig
of RAM.  :-)
From: Tamas K Papp
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <6g377fFe0q5uU1@mid.individual.net>
On Fri, 08 Aug 2008 08:21:04 -0700, Vend wrote:

> On 8 Ago, 12:26, Espen Vestre <·····@vestre.net> wrote:
>> Tamas K Papp <······@gmail.com> writes:
>>
>> >> But in some kinds of videogames a 1-2 seconds pause is unacceptable.
>>
>> > I have never even seen a pause that long because of garbage
>> > collection.
>>
>> You haven't seen much - I have seen a 16 hours pause ;-)
>>
>> My most memory-hungry server program (written in 64-bits LW for linux)
>> uses 7 seconds to do a global GC, reducing its allocated memory from
>> 5,2 GB to 3,8 GB. But this is a scheduled global gc for a long-running
>> server, not a normal automatic GC. 1-2 seconds of pause in a videogame
>> sounds very unlikely with modern lisps on modern hardware, you'll
>> probably rarely see more than 1/10 second.
> 
> The professors at my college taught Robotics in Java, but advised
> against using it to program actual robot. Apparently they did so and the
> robot during a demonstration embarrassingly bumped into people as it
> performed GC.

I have been using SBCL for numerical computations for two years now, and 
I can't say I have noticed the GC.  Maybe my experience is indeed 
limited, but the GC of modern Lisps appears to be quite sophisticated and 
mostly unnoticeable.

Yes, I can imagine that it is possible to torture the GC with some 
applications designed for this, but do others notice the GC when they 
work with Lisp?  Or is this just the perpetuation of an old myth?

Tamas
From: Thomas F. Burdick
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <fe1dc9bf-930e-43e4-9d7e-63e0543d880e@y21g2000hsf.googlegroups.com>
On Aug 8, 5:32 pm, Tamas K Papp <······@gmail.com> wrote:
> On Fri, 08 Aug 2008 08:21:04 -0700, Vend wrote:
> > On 8 Ago, 12:26, Espen Vestre <·····@vestre.net> wrote:
> >> Tamas K Papp <······@gmail.com> writes:
>
> >> >> But in some kinds of videogames a 1-2 seconds pause is unacceptable.
>
> >> > I have never even seen a pause that long because of garbage
> >> > collection.
>
> >> You haven't seen much - I have seen a 16 hours pause ;-)
>
> >> My most memory-hungry server program (written in 64-bits LW for linux)
> >> uses 7 seconds to do a global GC, reducing its allocated memory from
> >> 5,2 GB to 3,8 GB. But this is a scheduled global gc for a long-running
> >> server, not a normal automatic GC. 1-2 seconds of pause in a videogame
> >> sounds very unlikely with modern lisps on modern hardware, you'll
> >> probably rarely see more than 1/10 second.
>
> > The professors at my college taught Robotics in Java, but advised
> > against using it to program actual robot. Apparently they did so and the
> > robot during a demonstration embarrassingly bumped into people as it
> > performed GC.
>
> I have been using SBCL for numerical computations for two years now, and
> I can't say I have noticed the GC.  Maybe my experience is indeed
> limited, but the GC of modern Lisps appears to be quite sophisticated and
> mostly unnoticeable.
>
> Yes, I can imagine that it is possible to torture the GC with some
> applications designed for this, but do others notice the GC when they
> work with Lisp?  Or is this just the perpetuation of an old myth?
>

Modern, ha ha ha! In terms of GC, SBCL hasn't changed from the CMUCL
that was ported on x86 in the early 90's. And no, that's not better
than it sounds, the GC is really ancient.

But no objection to the rest. GC pauses are barely measurable, and
don't make any difference even in highly-consing interactive GUI
programs. SBCL's GC technology kinda sucks, but Moore's law saved us
here and made it irrelevant. It's just not worth it to move from an
okay GC to a good one, when there are other things like Unicode,
threads, callbacks, CLOS, various transforms ... generally speaking,
no major work has happened with the GC because it's just not worth it.
Good enough is good enough, and you'd have to try hard to see a
difference even with the .NET CLR, which is about as good as it gets.
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <rem-2008aug11-002@yahoo.com>
> From: Tamas K Papp <······@gmail.com>
> the GC of modern Lisps appears to be quite sophisticated and
> mostly unnoticeable. ... do others notice the GC when they work
> with Lisp?  Or is this just the perpetuation of an old myth?

Many years ago, when I used Lisp 1.6 on SU-AI, or UCI-Lisp on same,
or MacLisp on MIT-MC, or MACL 1.2.2 on my Macintosh Plus (purchased
in early 1990), I could notice the pause when a GC happened. But
running CMUCL 18b on my Unix shell account in recent years, I
haven't particularily noticed when GCs happen, except if I have it
in verbose mode whereupon multiple GC reports spew out faster than
the 19200 BPS modem can handle them, causing loss of data because
VT100 dialup modem isn't flow-regulated/controlled. (The modem
itself can handle standard 28k and nonstandard 56k, but the "Baud"
menu in the VT100 emulator I use doesn't have an option for
anything faster than 19200.)

But using PowerLisp 2.01 68k (copyright 1996) on my Macintosh
Performa, just a day or so ago I noticed a GC happening. It paused,
and the status line said it was garbage collecting, so after a few
seconds I pulled down the Misc. menu to select the option to watch
memory consumption, and I saw Heap and/or Nodes near the maximum
and slowly creeping downward. After a minute or so it finally
finished the GC and got back to my application. I guess the
transition from stop-the-world-GC to modern-GC was sometime after
1996.
From: Brian
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <6f7af77c-3ee0-4482-bab8-3d45c18c3969@c58g2000hsc.googlegroups.com>
On Aug 8, 5:26 am, Espen Vestre <·····@vestre.net> wrote:
> 1-2 seconds of pause in a videogame sounds very
> unlikely with modern lisps on modern hardware,
> you'll probably rarely see more than 1/10 second.
> --
> (espen)
If the console has multiple cores it may be possible
to run GC in the background.  If the game is played
online you would already have some lag so GC probably
wouldn't be so noticeable.
From: Dimiter "malkia" Stanev
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <6g3kfdFe4no0U1@mid.individual.net>
Brian wrote:
> If the console has multiple cores it may be possible
> to run GC in the background.  If the game is played
> online you would already have some lag so GC probably
> wouldn't be so noticeable.

Not with games, maybe with a web server, or any other non-real-time 
(video games are kind of soft-real-time applications).

You can't, because you have to synchronize quite a lot of things, for 
example you would start various working threads across your cpus, or 
whatever you have, at some point you'll have to wait for them, and maybe 
at the same time you were drawing the previous frame, while you are 
collecting data for the new one.. That to be said, garbage collecting in 
one of your threads, although not stopping the rest of them, is actually 
stopping them, if someone expects that thread to finish. I mean a 
minimum 30fps game, with 33ms frame-time, everything would have to 
finish on time (well not everything, your server game code might be 
running on say 10fps, or even less, kind of independet from the game).

Also having memory manager per core might seem like a good idea, but 
overall hard to implement, and balance.

But, most of the games I've been working on, do not really have much 
dynamic allocation throughout the game - most of it is preallocated, so 
if you have to do it in Lisp, you have to be careful maybe only for 
boxing/unboxing the damn floats :) -  Or you can always use the latter 
through arrays.

Off course this is not the only approach, and granted for some specific 
games, even more relaxed situations could be done. For example a turned 
based strategy game such as chess, Heroes of Might and Magic, or 
Disciples type of game - would be totally fine with garbage collecting.
From: Pascal J. Bourguignon
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <87d4kjz2vv.fsf@hubble.informatimago.com>
"Dimiter \"malkia\" Stanev" <······@gmail.com> writes:
> [...]
> Also having memory manager per core might seem like a good idea, but
> overall hard to implement, and balance.
>
> But, most of the games I've been working on, do not really have much
> dynamic allocation throughout the game - most of it is preallocated,
> so if you have to do it in Lisp, you have to be careful maybe only for
> boxing/unboxing the damn floats :) -  Or you can always use the latter
> through arrays.
> [...]

A CL implementation should be only 17 special operators away anyways.
If you have special needs, it would be perfectly acceptable to have a
specialy developed CL implementation to fulfill them.  An
implementation that wouldn't box/unbox floats, and with a garbage
collector that would be designed in accordance to the application
constraints.  Or even, without a garbage collector, after all it's not
even mandatory.


-- 
__Pascal Bourguignon__                     http://www.informatimago.com/

"Our users will know fear and cower before our software! Ship it!
Ship it and let them flee like the dogs they are!"
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <rem-2008aug11-003@yahoo.com>
> From: ····@informatimago.com (Pascal J. Bourguignon)
> Or even, without a garbage collector, after all it's not even mandatory.

Here's an alternative idea: At the start of each "game", you
expunge all toplevel references then expunge the entire heap at
once. Then you install the initial-state of the game in the heap,
using only a tiny portion of it. Then as the game runs, it
allocates memory, using reference counts to keep track of what
temporary structures can be immediately recycled. The occurrance of
pointer loops, whereby something can become "orphaned" yet the
reference count never goes to zero so it can't be reclaimed, should
be small enough that over the course of a week or so playing a
single game the heap would never fill up due to such "memory leak",
and even two weeks of the same game it'd maybe not fill up, and
nobody plays a single game more than two hours anyway so the
possibility of the heap *ever* filling up during a single game is a
non-issue. Most of the time a "game" per this model is just one
"level" of the overall game, which lasts maybe five or ten minutes,
nowhere near long enough for "memory leak" to exhaust the heap.

You could even have a visual indicator at the bottom of the screen
showing what fraction of the heap is currently "in use" (not yet
recycled), but disguise it to be labeled as time allowed for a
game, in unstated units, where if you don't finish the game before
the bar fills up then the game quits and you get zero points for
the game (for that "level" of a multi-level game, so you have to
re-start that "level" again to see if you can finish it in less
than two weeks next time).

Do I qualify as a "hacker" in the sense of somebody who can figure
out a non-obvious (to everyone else) ugly solution which nevertheless
solves the *real* problem and doesn't have too high a cost?

I'm still looking for employment. See the URL in my From: address
and click on the "Looking for Employment" link at the top.


-
Nobody in their right mind likes spammers, nor their automated assistants.
To open an account here, you must demonstrate you're not one of them.
Please spend a few seconds to study the 'vacation' message I received:

/----------------------------------------------------------------------------\
|   Thanks for your email, I will be out of office on Thuursday afternoon,   |
|   July 24th returning Friday, July 25th.                                   |
\----------------------------------------------------------------------------/

One of the words is mis-spelled. Enter it exactly as it appears
there (5-10 chars), without correction, into this TextField:
          +----------+
          |          |
          +----------+
Ignore the bad punctuation. You may be asked about that later.
From: Vend
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <08c0ce1d-60b3-4819-931f-b9da802d65de@25g2000hsx.googlegroups.com>
On 11 Ago, 23:37, ··················@spamgourmet.com.remove (Robert
Maas, http://tinyurl.com/uh3t) wrote:
> > From: ····@informatimago.com (Pascal J. Bourguignon)
> > Or even, without a garbage collector, after all it's not even mandatory.
>
> Here's an alternative idea: At the start of each "game", you
> expunge all toplevel references then expunge the entire heap at
> once. Then you install the initial-state of the game in the heap,
> using only a tiny portion of it. Then as the game runs, it
> allocates memory, using reference counts to keep track of what
> temporary structures can be immediately recycled. The occurrance of
> pointer loops, whereby something can become "orphaned" yet the
> reference count never goes to zero so it can't be reclaimed, should
> be small enough that over the course of a week or so playing a
> single game the heap would never fill up due to such "memory leak",
> and even two weeks of the same game it'd maybe not fill up, and
> nobody plays a single game more than two hours anyway so the
> possibility of the heap *ever* filling up during a single game is a
> non-issue. Most of the time a "game" per this model is just one
> "level" of the overall game, which lasts maybe five or ten minutes,
> nowhere near long enough for "memory leak" to exhaust the heap.

Naive reference counting requires an expensive read barrier, which
probably makes it unsuitable for cpu-intensive games (it has also a
theoretically unbounded pause time on reference destruction, but I
think this is not an issue in games).

Deferred reference counting requires scanning the root set, thus I
think has similar issues of other marking collectors.

I'm not sure whether reference counting is appropriate in multi-thread/
multi-core systems (most modern games are multi-threaded, and modern
consoles have multiple cores).
I would think reference counting requires expensive synchronization to
update and check the counters in these systems (I've read that atomic
increment and decrement operations are usually expensive).

> You could even have a visual indicator at the bottom of the screen
> showing what fraction of the heap is currently "in use" (not yet
> recycled), but disguise it to be labeled as time allowed for a
> game, in unstated units, where if you don't finish the game before
> the bar fills up then the game quits and you get zero points for
> the game (for that "level" of a multi-level game, so you have to
> re-start that "level" again to see if you can finish it in less
> than two weeks next time).
>
> Do I qualify as a "hacker" in the sense of somebody who can figure
> out a non-obvious (to everyone else) ugly solution which nevertheless
> solves the *real* problem and doesn't have too high a cost?
>
> I'm still looking for employment. See the URL in my From: address
> and click on the "Looking for Employment" link at the top.
>
> -
> Nobody in their right mind likes spammers, nor their automated assistants.
> To open an account here, you must demonstrate you're not one of them.
> Please spend a few seconds to study the 'vacation' message I received:
>
> /----------------------------------------------------------------------------\
> |   Thanks for your email, I will be out of office on Thuursday afternoon,   |
> |   July 24th returning Friday, July 25th.                                   |
> \----------------------------------------------------------------------------/
>
> One of the words is mis-spelled. Enter it exactly as it appears
> there (5-10 chars), without correction, into this TextField:
>           +----------+
>           |          |
>           +----------+
> Ignore the bad punctuation. You may be asked about that later.
From: ········@gmail.com
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <f2e6c6b3-c4f3-4ee5-841e-4bd6a1b6e424@34g2000hsh.googlegroups.com>
On Aug 12, 4:14 am, Vend <······@virgilio.it> wrote:

> Naive reference counting requires an expensive read barrier,

This can be made less expensive, though, by using atomic CPU
instructions instead of OS-level mutexes.  It's not magic, but it is
pretty cheap in the absence of contention.

I think.
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <rem-2008sep01-001@yahoo.com>
> From: Vend <······@virgilio.it>
> Naive reference counting requires an expensive read barrier,

I don't understand why this should be true. It seems to me that the
only places related to reference count where you need to wait until
one operation is done before another can be started are:
- Set initial reference count = 1 --> decrement reference count and
   check if it reached zero.
- Two different changes to reference count, such as increasing and
   decreasing or two of either.
With good coding of low-level algorithms, all changes to reference
count should be rare enough that you can afford to spawn a new
green thread each time you need to do it, so the main game threads
never get tied up waiting for reference count management to finish.
Does that sound like it would work?

> it has also a theoretically unbounded pause time on reference
> destruction

I think my method of lazy-processing of reference destruction,
using two primary registers and a daisy-chain of CONS cells for the
queue of pending destructions (and of course another daisy-chain of
CONS cells for the freelist itself) would allow a main process to
start reference decrement in fixed time and each step of checking
if count has reached zero and if so initiating a cascade of
reference destruction and additional decrement-test-zero to be
fixed time also. (Where fixed time is very very short time.)

Note that all arrays would be emulated by self-balancing binary
search trees, thus avoiding the need for any dynamic storage other
than CONS cells and atomic values. An array of size N with initial
value a constant or a computable function can be created in fixed
(very short) time by using a sparse array where you make an empty
array initially and add new elements only when somebody tries to
access an element that hasn't yet been initialized. Thus each
single-element array access takes log(n) time regardless of whether
it's the access that creates that element for the first time or
some later access. The only **actual** array would be the screen
paint/refresh buffer, but that's static, allocated once at system
boot, so we don't have to worry about allocating it from the
reference-count system. (That remark is for a video game or other
application that requires a local screen.)

> Deferred reference counting requires scanning the root set

Not the way I'd do it. Primitive operations on the root registers
would be strictly controlled. If you want to set a value in a
register, you must first erase it to a non-pointer value, then you
can set it to the new pointer value you want. Thus you first do
lazy decrement-and-test-zero of the reference count of its old
value (if it wasn't already empty) then do increment of reference
count of new value second. Whether those root registers are actual
CPU registers, or just a startup-time-allocated array in RAM,
doesn't matter. They are a special place in the machine, the only
places that are controlled manually (the program always knows which
of them are in use hence anything they point to is accessible, and
which are not in use hence they don't point to anything so nothing
additional is deemed accessible on their account). At all times
(except in the middle of a primitive erase or set operation on a
root register), the reference count is consistent all the way down
from the root registers, except in cells which are currently in the
middle of lazy reference count updating/destruction, where the
reference count might be one larger than correct at this moment
because the decrement-and-test operation on that cell has been
queued but not yet started.

> I'm not sure whether reference counting is appropriate in
> multi-thread/ multi-core systems (most modern games are
> multi-threaded, and modern consoles have multiple cores).

You definitely need to design your application so that
decrement-and-test-zero (DATZ) is relatively rare, so that you
don't have three or more different threads simultantaeously
requesting DATZ service where two of them are waiting for the first
to finish. With good algorithms, you **know** that RPLACA/RPLACD
will never occur in the middle of a chain from your toplevel
pointer to some lower place in a structure you're traversing, so
you can safely *not* keep track of reference counts when simply
traversing within a structure for which you are holding a toplevel
pointer. Here's an example, with reference counts shown on each cell:
Start of traverse:
  Top>--->[1;h]->[1;e]->[1;l]->[1;l]->[1;o]->NIL
All reference counts equal 1 because Top is the only pointer to
first cell, first cell is only pointer to second cell, etc.
  Scan>---+
          V
  Top>--->[1;h]->[1;e]->[1;l]->[1;l]->[1;o]->NIL
Scan is subservient to Top, so it doesn't count as an extra
reference, otherwise we'd need that first cell to get reference
count incremented to 2.
  Scan>----------+
                 V
  Top>--->[1;h]->[1;e]->[1;l]->[1;l]->[1;o]->NIL
Scan is subservient to Top, so it doesn't count as an extra
reference, otherwise we'd need that first cell to get reference
count decremented from 2 back to 1 and tested to see if it reached
zero, and we'd need that second cell to get reference count
incremented to 2.

It's illegal under this rule to discard the Top pointer until after
all local pointers subservient to it have been previously
discarded.

At any point when we've found what we're looking for and want to
*return* the value of Scan to the caller or *pass* the value of
Scan to a called function, we must register Scan in some toplevel
structure so that a path from root registers to whatever Scan
points to will exist. This will cause whichever CONS cell Scan
points to to get its reference count incremented. If a function is
called, it can assume the pointer it's given is registered, so it
doesn't have to anything special, unless it stores that pointer
value somewhere or passes it further along. Upon return, Scan can
be de-registered, so it's back to being just a subservient pointer.
But if Scan was registered because it's being returned to caller,
then it's the job of the caller to de-register it eventually. This
is a bit complicated, and I think it will take some thought to get
it designed in a foolproof way. Maybe in fact there's an easier
way: Whenever calling a function or returning from a function,
there's a special CONS cell that contains either the return values
or the parameters. This special CONS cell is always registered in a
root register. So any time you pass parameters or return values,
the reference-count adjustment automatically happens when you build
the list of parameters or the list of return values. After the call
or return is completed, so you're now in the called function or out
from the previously-called function to its caller, that function
you're now in must save local copies of all parameters or return
values, then manually increment the reference count for all master
pointers but *not* for subservient pointers, and finally dispose
the list that was given to it. So a single lazy DATS is invoked at
each function call or function return. Since every call or return
has at least one non-subservient pointer, *that* pointer gets
reference count decreased during destruction of the parameter or
return values list but won't go to zero because of the extra
increment that occurred during the call or return mechanism. Oh,
one extra thing that must happen during a return, but not during a
call, *after* building the return values list, is invoking a lazy
DATS for every non-subservient local pointer, to undo the
increment(s) that happened during the original call that started
this function running. OK, I think *that* design will actually work!
What do you think?

> I would think reference counting requires expensive
> synchronization to update and check the counters in these systems

I'm thinking that my design reduces the needed synchronization to a
bare minimum, with fixed very-small time used by the very few
operations that require synchronization.

> (I've read that atomic increment and decrement operations are
>  usually expensive).

With my design, that expensive operation is performed only within
the RC (reference count) thread, so the only expense that affects
any main application thread is the interlock needed to prevent an
application thread from getting such service until after the RC
thread has finished with the previous atomic incr or DATZ
operation. Hmm, actually the way I have it designed, DATZ isn't
atomic. It works like this:
- Request for DATZ from other thread, specifying address of the
   cell to be processed. This locks the RC thread busy as part of
   the atomic request;
- RC DATZ server grabs a cell from a free-list to use to push the
   address onto the list of cells that have been queued for DATZ;
- RC turns busy status back off, is now available for another request;
- Whenever RC has free time, no pending requests, it does this:
   - Lock itself busy;
   - Check what's the most urgent thing to do, such as start a
      DATZ, or start a INCR, or continue an in-progress DATZ, etc.
   - If there's any such thing at all to do, it does just one
      atomic step of that process, which usually involves popping
      something off a to-do queue, adjusting or testing the
      reference count, and pushing the item back onto a different
      queue that reflects what needs to be done next.
   - Unlock itself, can accept a new service request for a brief moment now.
Broken into separate atomic steps, DATZ works like this:
- Pop off need-to-start-DATZ queue, decrement reference count,
   push onto need-to-test-zero queue.
- Pop off need-to-test-zero queue, test zero:
   - If zero, push onto need-to-destruct queue;
   - If nonzero, just discard it, no further work on this address is needed.
- Pop off need-to-destruct queue, test whether CONS or atomic:
   - If atomic, push onto freelist;
   - If CONS cell, push onto need-recurse-CAR+CDR queue.
- Pop off need-recurse-CAR+CDR queue, push CDR onto
   need-to-start-DATZ queue, use the cell itself to link its CAR
   into need-to-start-DATZ queue;
Note that the RC system itself keeps its own private free list
which is used most of the time. When anything is popped off a
queue, the cell is immediately moved to the private free list.
Whenever anything is pushed onto a queue, the cell to be used is
gotten from the private free list, unless that free list has gone
empty in which case a cell is gotten from the application freelist.
Ideally the application freelist never goes empty, because the
application is small enough to fit in available RAM with some room
to spare for lazy DATZ to hold onto some cells for a while before
they are really needed by the application. The private free list
never gets really big because only a small number of DATZs are
pending at any time. The worse case is probably when a large SBBST
is suddenly disposed, when log(n) simultaneous CDR-DATZs might be
in the various queues at various stages of their recursive
disposal, forcing log(n) cells from application freelist to be
moved to private free list then never returned. CAR-DATZs don't
require anything from any free list because they are linked into a
queue using the original cell undergoing DATZ. Hmm, I now realize
this process releases N cells, of which log(n) need extra cells
from private queue, but the rest carry their own cells, but *none*
of those are ever recycled to the application freelist by this
algorithm. So we need a counter of how large the private free list
is and whenever it is at maximum design size we pop one item from
it and push onto application freelist, so that the private (RC
DATZ) free list never gets too large, so the application freelist
never is missing too many cells it could otherwise have.
Alternately, whenever the application freelist is running almost
dry, send a request to the RC system to please give back one of the
cells that had been in the private free list too long. I'm not sure
which way is best.

Of course all that assumes games are of unbounded length so we
really do need to recycle memory. As I mentionned earlier, for some
types of games, only a little amount of dynamic memory is ever
allocated during the entire course of a game, so we might run with
no GC system at all for such games. Most video-action games should
fall into this category. So maybe we don't have a problem that
needs solving. Still I think my design for lazy DATS is nice and
worth emulating to see if it really works, and if it does then
worth building an actual software environment using that system and
comparing performance against other GC designs.
From: Espen Vestre
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <m17iarcqch.fsf@gazonk.vestre.net>
Brian <··············@gmail.com> writes:

> If the console has multiple cores it may be possible
> to run GC in the background.  

If you have a system that can run GC its own thread while the other
lisp threads keep running, GC won't be recognizable anyway...
-- 
  (espen)
From: Dimiter "malkia" Stanev
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <6g3k3kFe4j41U1@mid.individual.net>
> But in some kinds of videogames a 1-2 seconds pause is unacceptable.

It's acceptable as long as you don't play, but if you are console game, 
you need to sustain 30fps at least on the PS3 (there are exceptions). A 
slowdown from 100-200ms (less than 5-10fps) or more, even if it's 
occasional (but during game play), might get your game not approved, but 
it might get it too.

Off course there are tricks, if you know in advance that some bad 
operation is going to happen - you can somehow lie to the player - The 
famous one was Jack&Daxter Trip&Hop - when it was streaming the world, 
if the new piece of the world hasn't been loaded fully, the character 
was trip & hopping, slowing down it's speed (instead of saying 
"LOADING..." or ... hahaha "GARBAGE COLLECTING...").
From: Paul Donnelly
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <87r68z9jdl.fsf@plap.localdomain>
Vend <······@virgilio.it> writes:

> On 6 Ago, 04:54, Paul Donnelly <·············@sbcglobal.net> wrote:
>> Francogrex <······@grex.org> writes:
>> > This quote below is from 'Successful Lisp:How to Understand and Use
>> > Common Lisp' by David B. Lamkins:
>> > "If you have dreams of writing the next great videogame, you've
>> > probably already discovered that you need a language that lets you
>> > program "close to the machine" -- If so, Lisp will disappoint you."
>> > I don't understand this completely. Does this mean that:
>> > A- It's harder (or less practical) to, for example, write a videogame
>> > software using common lisp than with other languages (C, CPP etc)? or
>> > B- Does he mean you cannot do it as good (like it will be a crappy
>> > game)? or
>> > C- You cannot do it at all in common lisp?
>> > I have no intention to write the next generation videogame, just
>> > wondering about the quote and the capabilities of lisp and also, has
>> > anyone ever written a videogame purely in lisp? Thanks.
>>
>> It looked at first like a variation of the "Lisp is slow" myth, or the
>> "GC causes unpredicatable pauses" complaint, which is technically
>> true, though I am skeptical about whether the pauses are long or
>> frequent enough to disrupt gameplay--a 3d game does not need hard
>> real-time, but in Chapter 1 he debunks both of these.
>
> But in some kinds of videogames a 1-2 seconds pause is unacceptable.

How often? Even if GC did cause a pause that long, if it only happened
once each hour of gameplay, it would be nothing to worry about. It's
not like no game has ever lagged before, or like airplanes will fly
into the ground when Generic Shooter Sequel 8 hiccups. I'd prefer no
pauses at all, but considering the amount of bad video game behaviour
everyone puts up with, pausing for a second is not the end of the
world.
From: Dimiter "malkia" Stanev
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <6g45u4Fe4c1lU1@mid.individual.net>
> How often? Even if GC did cause a pause that long, if it only happened
> once each hour of gameplay, it would be nothing to worry about. It's
> not like no game has ever lagged before, or like airplanes will fly
> into the ground when Generic Shooter Sequel 8 hiccups. I'd prefer no
> pauses at all, but considering the amount of bad video game behaviour
> everyone puts up with, pausing for a second is not the end of the
> world.

For PC games anything is almost acceptable :), but for consoles it's up 
to the big guys, whether they like that occasional hiccup or not 
(honestly if it happens once in one hour, it's just fine, but if it 
happens every minute or so, then it's bad). Right now the money are in 
the consoles, exception being the for the casual & MMO games on the PC.
From: George Neuner
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <tbgp941vn7p329dapq6skq9kaid3pr6q9a@4ax.com>
On Thu, 7 Aug 2008 23:26:16 -0700 (PDT), Vend <······@virgilio.it>
wrote:

>If I understand correctly, garbage collectors with low pause times
>suitable for realtime exist, but they impose a performance loss, so
>maybe they are unsuitable for modern games.

There are several designs for RTGC, but so far the only one that can
reliably achieve hard microsecond pause times (on appropriate
hardware) is Brook's variation of Baker's treadmill design.  It's
problem is not performance, but rather memory overhead - it requires a
per-object header with 2 pointers.  You can amortize the header by
choosing a reasonable grain size for small objects and you can play
games with page indexing rather using pointers, but the overhead for
small objects remains high.

George
From: Dimiter "malkia" Stanev
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <6g40qiFe46fhU1@mid.individual.net>
George Neuner wrote:
> On Thu, 7 Aug 2008 23:26:16 -0700 (PDT), Vend <······@virgilio.it>
> wrote:
> 
>> If I understand correctly, garbage collectors with low pause times
>> suitable for realtime exist, but they impose a performance loss, so
>> maybe they are unsuitable for modern games.
> 
> There are several designs for RTGC, but so far the only one that can
> reliably achieve hard microsecond pause times (on appropriate
> hardware) is Brook's variation of Baker's treadmill design.  It's
> problem is not performance, but rather memory overhead - it requires a
> per-object header with 2 pointers.  You can amortize the header by
> choosing a reasonable grain size for small objects and you can play
> games with page indexing rather using pointers, but the overhead for
> small objects remains high.
> 
> George

Hi George,

Any paper, or just google it (Brook's variation of Baker's treadmill)?

Thanks,
Dimiter "malkia" Stanev.
From: George Neuner
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <limp94lsl2sp0mf7nsfu5ge73pv1qb3p84@4ax.com>
On Fri, 08 Aug 2008 15:48:50 -0700, "Dimiter \"malkia\" Stanev"
<······@gmail.com> wrote:

>George Neuner wrote:
>> On Thu, 7 Aug 2008 23:26:16 -0700 (PDT), Vend <······@virgilio.it>
>> wrote:
>> 
>>> If I understand correctly, garbage collectors with low pause times
>>> suitable for realtime exist, but they impose a performance loss, so
>>> maybe they are unsuitable for modern games.
>> 
>> There are several designs for RTGC, but so far the only one that can
>> reliably achieve hard microsecond pause times (on appropriate
>> hardware) is Brook's variation of Baker's treadmill design.  It's
>> problem is not performance, but rather memory overhead - it requires a
>> per-object header with 2 pointers.  You can amortize the header by
>> choosing a reasonable grain size for small objects and you can play
>> games with page indexing rather using pointers, but the overhead for
>> small objects remains high.
>> 
>> George
>
>Hi George,
>
>Any paper, or just google it (Brook's variation of Baker's treadmill)?
>
>Thanks,
>Dimiter "malkia" Stanev.

I don't have a cite offhand.  Baker's treadmill is easy to find, but
Baker favors read barriers which are unnecessary in a non-moving
collector.  Brook's variation replaced the read barrier with a write
barrier.

George
From: Vend
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <61227c75-c892-4d34-9c39-b8f85c643a15@v57g2000hse.googlegroups.com>
On 9 Ago, 01:47, George Neuner <········@comcast.net> wrote:
> On Fri, 08 Aug 2008 15:48:50 -0700, "Dimiter \"malkia\" Stanev"
>
>
>
> <······@gmail.com> wrote:
> >George Neuner wrote:
> >> On Thu, 7 Aug 2008 23:26:16 -0700 (PDT), Vend <······@virgilio.it>
> >> wrote:
>
> >>> If I understand correctly, garbage collectors with low pause times
> >>> suitable for realtime exist, but they impose a performance loss, so
> >>> maybe they are unsuitable for modern games.
>
> >> There are several designs for RTGC, but so far the only one that can
> >> reliably achieve hard microsecond pause times (on appropriate
> >> hardware) is Brook's variation of Baker's treadmill design.  It's
> >> problem is not performance, but rather memory overhead - it requires a
> >> per-object header with 2 pointers.  You can amortize the header by
> >> choosing a reasonable grain size for small objects and you can play
> >> games with page indexing rather using pointers, but the overhead for
> >> small objects remains high.
>
> >> George
>
> >Hi George,
>
> >Any paper, or just google it (Brook's variation of Baker's treadmill)?
>
> >Thanks,
> >Dimiter "malkia" Stanev.
>
> I don't have a cite offhand.  Baker's treadmill is easy to find, but
> Baker favors read barriers which are unnecessary in a non-moving
> collector.  Brook's variation replaced the read barrier with a write
> barrier.

Is this one: "Trading data space for reduced time and code space in
real-time garbage collection on stock hardware"?
From: George Neuner
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <7r6u9413te3kmd1gk0ia7v8cmndhdv9dlv@4ax.com>
On Sun, 10 Aug 2008 00:07:12 -0700 (PDT), Vend <······@virgilio.it>
wrote:

>On 9 Ago, 01:47, George Neuner <········@comcast.net> wrote:
>> On Fri, 08 Aug 2008 15:48:50 -0700, "Dimiter \"malkia\" Stanev"
>>
>>
>>
>> <······@gmail.com> wrote:
>> >George Neuner wrote:
>> >> On Thu, 7 Aug 2008 23:26:16 -0700 (PDT), Vend <······@virgilio.it>
>> >> wrote:
>>
>> >>> If I understand correctly, garbage collectors with low pause times
>> >>> suitable for realtime exist, but they impose a performance loss, so
>> >>> maybe they are unsuitable for modern games.
>>
>> >> There are several designs for RTGC, but so far the only one that can
>> >> reliably achieve hard microsecond pause times (on appropriate
>> >> hardware) is Brook's variation of Baker's treadmill design.  It's
>> >> problem is not performance, but rather memory overhead - it requires a
>> >> per-object header with 2 pointers.  You can amortize the header by
>> >> choosing a reasonable grain size for small objects and you can play
>> >> games with page indexing rather using pointers, but the overhead for
>> >> small objects remains high.
>>
>> >> George
>>
>> >Hi George,
>>
>> >Any paper, or just google it (Brook's variation of Baker's treadmill)?
>>
>> >Thanks,
>> >Dimiter "malkia" Stanev.
>>
>> I don't have a cite offhand.  Baker's treadmill is easy to find, but
>> Baker favors read barriers which are unnecessary in a non-moving
>> collector.  Brook's variation replaced the read barrier with a write
>> barrier.
>
>Is this one: "Trading data space for reduced time and code space in
>real-time garbage collection on stock hardware"?

No, sorry.  That's the right Brooks (Rodney A.), but that particular
paper is about tweaking Baker's copying collector with a write barrier
(Baker is great but he's in love with read-barriers which aren't
always appropriate or necessary).  Brooks did a similar analysis of
Baker's treadmill around 1990 - I just can't find it.

Wilson and Johnstone based a collector for C++ on a write-barrier
treadmill. See Wilson and Johnstone, 1993, "Truly real-time
non-copying garbage collection".  They might cite Brooks's work if it
happened early enough, but I haven't found this paper yet either.


It's actually hard to search for RTGC because most researchers
consider incremental collectors to be "real time", so most of the
material available is concerned only with "soft" RT where delays are
not welcome but occasional short ones can be tolerated.  Most RTGC
work is about impact on user experience or on maximizing system
throughput.  Very few researchers are working on "hard" RT where
unplanned delays can cause failures and cannot be tolerated at all.
GC time in an HRT system must be tightly bounded, may have to be
completely accounted for, and definitely has to be factored into the
design.

George
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <rem-2008aug11-004@yahoo.com>
> From: George Neuner <········@comcast.net>
> Very few researchers are working on "hard" RT where unplanned
> delays can cause failures and cannot be tolerated at all. GC time
> in an HRT system must be tightly bounded, may have to be completely
> accounted for, and definitely has to be factored into the design.

Are there any *absolutly-for-all-time* HRT systems? I imagine most
systems would work fine in practice if we always had at least 20
minutes warning before we ran out of memory. For example:
- Nuclear power plant: Call supervisor, have supervisor order
   station taken off the power grid, then drop in the control rods
   to slow the reaction to where the DUMB system can manage things
   just fine, then go into system logs to see which application is
   consuming memory but not returning it, and if it's critical the
   restart the system, or if non-critical then just disable that
   application until more memory is available.
- Jetliner in flight: Order pilot to immediately divert out from
   the storm, then put plane on autopilot, then re-boot the
   computer.
- Space shuttle coming in for a landing: Not to worry you'll be on
   the ground at Edwards AFB already before your computer runs out
   of memory.
- MAD (Mutual Assured Destruction) during actual attack: All your
   facilities will be vaporized in less than 20 minutes, before you
   might run out of memory, so out-of-memory condition isn't going
   to be a problem!!

I believe that a reference-count system for live collection (with
each allocation unit tagged as to which application asked for it to
be allocated), and a mark-and-sweep collector running in background
to reclaim anything that has reference loops and hence "leaks"
through the reference-count system, plus something that monitors
memory to see if "memory leaks" are running too far ahead of M+S
GC, in which case the 20-minute warning would be issued, plus a
"dead man's alarm" system to make sure the memory-monitor is still
actively running, should suffice.


Do I qualify as a "hacker" in the sense of somebody who can figure
out a non-obvious (to everyone else) ugly solution which nevertheless
solves the *real* problem and doesn't have too high a cost?

I'm still looking for employment. See the URL in my From: address
and click on the "Looking for Employment" link at the top.


-
Nobody in their right mind likes spammers, nor their automated assistants.
To open an account here, you must demonstrate you're not one of them.
Please spend a few seconds to study the 'vacation' message I received:

/------------------------------------------------------------------\
|    By the way, if I am not there or if I dont answer my phone,   |
|    than please leave me a message.                               |
\------------------------------------------------------------------/

Two of the words are mis-spelled. Enter them exactly as they appear (without
correction), separated by a space (total 5-15 chars), into this TextField:
          +---------------+
          |               |
          +---------------+
Ignore any bad punctuation you see. You may be asked about that later.
From: Greg Menke
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <86profowrm.fsf@apshai.pienet>
··················@spamgourmet.com.remove (Robert Maas, http://tinyurl.com/uh3t) writes:
>> From: George Neuner <········@comcast.net>
>> Very few researchers are working on "hard" RT where unplanned
>> delays can cause failures and cannot be tolerated at all. GC time
>> in an HRT system must be tightly bounded, may have to be completely
>> accounted for, and definitely has to be factored into the design.
>
> Are there any *absolutly-for-all-time* HRT systems? I imagine most
> systems would work fine in practice if we always had at least 20
> minutes warning before we ran out of memory. For example:
> - Nuclear power plant: Call supervisor, have supervisor order
>    station taken off the power grid, then drop in the control rods
>    to slow the reaction to where the DUMB system can manage things
>    just fine, then go into system logs to see which application is
>    consuming memory but not returning it, and if it's critical the
>    restart the system, or if non-critical then just disable that
>    application until more memory is available.
> - Jetliner in flight: Order pilot to immediately divert out from
>    the storm, then put plane on autopilot, then re-boot the
>    computer.
> - Space shuttle coming in for a landing: Not to worry you'll be on
>    the ground at Edwards AFB already before your computer runs out
>    of memory.
> - MAD (Mutual Assured Destruction) during actual attack: All your
>    facilities will be vaporized in less than 20 minutes, before you
>    might run out of memory, so out-of-memory condition isn't going
>    to be a problem!!


HRT is all about deadlines- 20 minutes is about the same as 20
milliseconds, if you can't make the processing deadline then you're
sunk.  So for the gc not to mess up the realtime processing then the
algorithm has to guarantee that it won't delay the app layer by "x
seconds" where x generally specifies the "hardness".  1 millisecond on
some given target OS on some target hardware might be hard enough
realtime for some apps.  If the gc algorithm can't make that but can
make 50 ms, then its use would generally be relegated to "softer" apps.
You can test the conformance by plotting the latency jitter of the app
wrt its deadlines over time.

Specifications about the worst-case latency that a gc algorithm incurs
feeds into the risk analysis which goes into the systems engineering.
Once (and if) the risks are evaluated then you can talk about safety
margins.  You can't just assume that a Sufficiently Smart, Well-Informed
and Equipped Operator is going to intervene- or that the operator will
even be alive to hit the reset button.  Taking the ops computer offline
during some maneuver isn't going to cut it- what are you going to do
with the accumulated ops state, control loops, etc?  Can't just magic it
out of nowhere with all the errors factored out when the computer comes
back online.

How do you guarantee the 20 minute limit?  Slope of the memory
allocation curve- whats the sampling rate?  What happens if the
allocation rate has a big spike at the worst possible moment due to
unforseen circumstances, taking you from 25% free to 1% with 10 +/- 30
seconds left before the app dies because of no memory?

There are several commandments levied where I work, top of the list is
"Thou Shalt Not Dynamically Allocate Memory".  Which means no allocation
and so no leaks.  The position is sometimes moderated to allow
start-time allocation of memory which is then incorporated into linked
lists and kept there.  Faced with a Lisp solution I would probably make
the core critical safe-keeping functions in plain old C (or C++ w/ no
'new'), and put Lisp off the realtime path.  I tend to like that
approach because C/C++ is miserable from a high-level perspective and
I'd like to have Common Lisp for that stuff.  Off the time critical
path, CL can do its thing with some latitude and its considerable
leverage does some good.


> I believe that a reference-count system for live collection (with
> each allocation unit tagged as to which application asked for it to
> be allocated), and a mark-and-sweep collector running in background
> to reclaim anything that has reference loops and hence "leaks"
> through the reference-count system, plus something that monitors
> memory to see if "memory leaks" are running too far ahead of M+S
> GC, in which case the 20-minute warning would be issued, plus a
> "dead man's alarm" system to make sure the memory-monitor is still
> actively running, should suffice.

All fine for a "regular" app where failure means the print job might be
delayed a bit, but it amounts to a huge non-linear and non-reversable
increase in risk (because the gc latency can hit almost any time no
matter what the warnings say) at some ill-defined point in the app's
execution path.

Gregm
From: George Neuner
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <90r1a4tg4a3iduutujlmevbiodo27f34nd@4ax.com>
On Mon, 11 Aug 2008 15:18:31 -0700,
··················@spamgourmet.com.remove (Robert Maas,
http://tinyurl.com/uh3t) wrote:

>> From: George Neuner <········@comcast.net>
>> Very few researchers are working on "hard" RT where unplanned
>> delays can cause failures and cannot be tolerated at all. GC time
>> in an HRT system must be tightly bounded, may have to be completely
>> accounted for, and definitely has to be factored into the design.
>
>Are there any *absolutly-for-all-time* HRT systems? I imagine most
>systems would work fine in practice if we always had at least 20
>minutes warning before we ran out of memory. 
>
>For example:
>- Nuclear power plant: Call supervisor, have supervisor order
>   station taken off the power grid, then drop in the control rods
>   to slow the reaction to where the DUMB system can manage things
>   just fine, then go into system logs to see which application is
>   consuming memory but not returning it, and if it's critical the
>   restart the system, or if non-critical then just disable that
>   application until more memory is available.
>- Jetliner in flight: Order pilot to immediately divert out from
>   the storm, then put plane on autopilot, then re-boot the
>   computer.
>- Space shuttle coming in for a landing: Not to worry you'll be on
>   the ground at Edwards AFB already before your computer runs out
>   of memory.
>- MAD (Mutual Assured Destruction) during actual attack: All your
>   facilities will be vaporized in less than 20 minutes, before you
>   might run out of memory, so out-of-memory condition isn't going
>   to be a problem!!
>
>I believe that a reference-count system for live collection (with
>each allocation unit tagged as to which application asked for it to
>be allocated), and a mark-and-sweep collector running in background
>to reclaim anything that has reference loops and hence "leaks"
>through the reference-count system, plus something that monitors
>memory to see if "memory leaks" are running too far ahead of M+S
>GC, in which case the 20-minute warning would be issued, plus a
>"dead man's alarm" system to make sure the memory-monitor is still
>actively running, should suffice.

Assuming you have 20 minutes or even 20 seconds is naive.  

HRT problems come in many forms, but the more interesting ones are the
problems that have very short result windows (typically measured in
milliseconds or even in microseconds) and where somebody gets hurt or
dies if an answer is not delivered within the required result window.

Examples involving nuclear plants melting down or almost anything with
planes (I am a pilot, btw) make for great stories but are bad examples
because the problems are almost always _not_ time critical.  They have
human backup for their computer controls, almost all mistakes are
correctable and there is usually plenty of time to make the
correction.  I once landed a Cessna 130 ahead of an approaching storm.
About 10 feet above the runway, going 90mph and about 2 seconds from
touchdown, I caught a gust that tilted the plane over about 30 degrees
and turned it about 45 degrees sideways.  In just a couple of seconds,
I was able to right the plane and put it down on the wheels.  Not a
perfect landing ... I bounced once or twice ... but not a disaster
either.  Passenger planes don't land on autopilot.

Instead of grandiose mega-disasters, think instead about things that
are actually likely to happen.  More and more we're seeing autonomous
robots in offices and warehouses and such.  Sooner or later they will
be used in close proximity to lots of people.  What do you think would
happen if a robot forklift maneuvering through a crowded store
"blinked" or robot vehicle changing lanes on the highway in traffic
"paused to think about it".

Most radiation therapy devices are computer controlled.  In the 80's
there were a series of lethal accidents with an electron beam therapy
device called the Therac-25.  Although the Therac's problems were not
caused by the machine "going away" while running, you can well imagine
the effect of unexpected pauses on modern machines which are more
powerful and more precisely controlled.  Think about a few megavolts
going into your optic nerves or your gonads while the machine that is
supposed to be irradiating some less sensitive part of you has a
sudden stroke.

George
From: Robert Maas, http://tinyurl.com/uh3t
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <rem-2008sep01-002@yahoo.com>
> From: George Neuner <········@comcast.net>
> Assuming you have 20 minutes or even 20 seconds is naive.

I'm not assuming that from the time the system stops working you
have 20 minutes until anyone gets hurt. I'm assuming that every
process that allocates new memory is throttled as to how rapidly it
can allocate new memory, and the normal condition of operation is
that there's at least an hour of allocating memory at maximum rate
with no memory ever getting recycled before the system runs out of
memory. Accordingly if for any reason in fact memory stops getting
recycled, as soon as we're to the point where worst-case is we run
out of memory in 20 minutes, we declare an emergency and start
doing a safe shutdown where nobody gets hurt. If during that 20
minutes whoever was hogging memory suddenly releases 40+ minutes
worth of memory, and we're back to at least 50 minutes before we
could possibly run out of memory, we cancel the emergency and wait
to see if we ever reach that 20-minute point again.

Of course if we can't safely shut down in my estimate of 20
minutes, then we set a more appropriate threshold for how many
minutes before we *might* (worst case) run out of memory, and
declare an emergency at *that* point, giving us just enough time
for a guaranteed safe shutdown in worst case of memory hog doesn't
ever return memory during that whole emergency time span.

The rest of what you say is talking past me because you totally
misunderstood what my 20 minutes are for.

> Instead of grandiose mega-disasters, think instead about things
> that are actually likely to happen.  More and more we're seeing
> autonomous robots in offices and warehouses and such.  Sooner or
> later they will be used in close proximity to lots of people.  What
> do you think would happen if a robot forklift maneuvering through a
> crowded store "blinked" or robot vehicle changing lanes on the
> highway in traffic "paused to think about it".

With 20 minutes advanced warning that the robot will run out of
memory and need to *PAUSE* (blink), I am sure the robot can be
guided to a safe shut-down that hurt anybody. Even if the robot
vehicle is stuck in a traffic jam where it takes more than 20
minutes to get to the next off-ramp, surely it can signal an
emergency with a bright red flashing light and just **STOP** in the
middle of traffic in less than 20 minutes and **STAY** there until
the problem is fixed. No way it'll suddenly decide to change lanes
20 minutes after it already knows it's supposed to **STOP** and do
a system shut-down and wait for assistance to re-start the system.

> Most radiation therapy devices are computer controlled.

Same thing here. I'm sure if there's 20 minutes advanced warning
that the machine is running low of memory, if it's near the end of
the procedure then it can be completed safely but no later
procedure started, while if it's near the beginning the procedure
can be safely aborted.
From: Greg Menke
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <86r68454xb.fsf@apshai.pienet>
··················@spamgourmet.com.remove (Robert Maas, http://tinyurl.com/uh3t) writes:
>> From: George Neuner <········@comcast.net>
>> Assuming you have 20 minutes or even 20 seconds is naive.
>
> I'm not assuming that from the time the system stops working you
> have 20 minutes until anyone gets hurt. I'm assuming that every
> process that allocates new memory is throttled as to how rapidly it
> can allocate new memory, and the normal condition of operation is
> that there's at least an hour of allocating memory at maximum rate
> with no memory ever getting recycled before the system runs out of
> memory. Accordingly if for any reason in fact memory stops getting
> recycled, as soon as we're to the point where worst-case is we run
> out of memory in 20 minutes, we declare an emergency and start
> doing a safe shutdown where nobody gets hurt. If during that 20
> minutes whoever was hogging memory suddenly releases 40+ minutes
> worth of memory, and we're back to at least 50 minutes before we
> could possibly run out of memory, we cancel the emergency and wait
> to see if we ever reach that 20-minute point again.
>
> Of course if we can't safely shut down in my estimate of 20
> minutes, then we set a more appropriate threshold for how many
> minutes before we *might* (worst case) run out of memory, and
> declare an emergency at *that* point, giving us just enough time
> for a guaranteed safe shutdown in worst case of memory hog doesn't
> ever return memory during that whole emergency time span.
>
> The rest of what you say is talking past me because you totally
> misunderstood what my 20 minutes are for.

A failure model where software runs out of memory and necessitates major
intervention (with the costs, risks and loss time/state/whatever) is a
non-starter.  Just because the software gives you 20 minutes or 200
minutes of warning doesn't mean that you'll be able to have a guaranteed
shutdown whatever that means (who's making the guarantee, and on what
basis?).  If you're flying a spacecraft and have to safe-hold it because
of an out-of-memory warning then you have a big design problem.

But lets assume software written with the 20 minute allocation warning,
and that there is no avenue by which sudden allocations can starve the
app short of that deadline.

- How do we handle fail allocations that exceed the defined rate- or are
  we causing the allocations to be delayed to maintain the rate?

- Do we know all program states that cause allocations and can we
  constrain them all?

- What if the controllers change control loop parameters so sampling &
  processing rates change and thus affect allocation rates- how much
  margin in the allocation rate do we have before we start failing to
  allocate?  What happens when we have the spacecraft in orbit and the
  parameters change such that the 20 minute warning is now 5 minutes?
  What happens when we have a ram controller failure and half of our
  memory disappears?

- Who pays for all the extra memory when the complement of ram is barely
  enough for the software as it is?  Every ounce of ram costs weight and
  power and means more stuff to fail- gotta justify it all.

- What happens at 15 minutes past the warning and suddenly lots of
  memory is released, and the memory recovery routines delay the control
  loops?  Do you also specify a worst-case impact of gc on the program
  flow?


What happens if we get the 20 minute warning and spend 15 minutes
safe-holding the system, only to have the memory recovered at minute 16-
so all that effort was wasted?  What if it takes a full day to shut
something down and start-up takes several days and lots of
equipment/scheduling?

Who is going to explain to the scientists that their observations were
cancelled and data lost over a warning that got cancelled?

I think you underappreciate the consequences of declaring and cancelling
emergencies- neither is lightly done or undone and upper management
tends to be interested.

Gregm
From: Andrew Reilly
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <6gc75fFf1jidU1@mid.individual.net>
On Mon, 11 Aug 2008 15:18:31 -0700, Robert Maas, http://tinyurl.com/uh3t
wrote:

> Are there any *absolutly-for-all-time* HRT systems?

Of course.

> I imagine most
> systems would work fine in practice if we always had at least 20 minutes
> warning before we ran out of memory.

Just a few examples, off the top of my head:

Anything controlling medical equipment.

In fact, any "control" application that is not trivial in terms of risk 
or liability.  This includes significant chunks of most modern cars, 
systems driving multi-kilowatt amplifiers for live music perofmances, and 
most industrial plant control.  You may not care if DVD or CD playback 
glitches occasionally on your personal PC (although it *should* annoy you 
mightily), but you can't have that in the middle of a Rolling Stones 
concert.  Admittedly concert performances aren't usually an always, 
forever system, but there are many, many control systems that are 
required to be operational always, forever.  Without changing 
perspectives too much from the concert PA example, emergency PA systems 
have that requirement.

I'm pretty sure that the people actually responsible for any of the 
examples in your post (nuclear reactor, flight control (including space 
craft and missile guidance)) would hold a different view than yours about 
the ability to take their systems off-line at twenty minutes notice.  You 
might want to read up on what is involved in restarting a steel mill or 
aluminium smelter, should either ever be forced off line...

Cheers,

-- 
Andrew
From: Scott Burson
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <0d2209a8-7a52-4661-b189-2ce3fc78e422@k36g2000pri.googlegroups.com>
On Aug 8, 3:30 pm, George Neuner <········@comcast.net> wrote:
> There are several designs for RTGC, but so far the only one that can
> reliably achieve hard microsecond pause times (on appropriate
> hardware) is Brook's variation of Baker's treadmill design.

There's a new one.  Check out Pizlo et al. in the PLDI 2008
proceedings.

-- Scott
From: George Neuner
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <m5is94l633ie7g11c8ibc6m469kusv5bem@4ax.com>
On Sat, 9 Aug 2008 09:25:41 -0700 (PDT), Scott Burson
<········@gmail.com> wrote:

>On Aug 8, 3:30 pm, George Neuner <········@comcast.net> wrote:
>> There are several designs for RTGC, but so far the only one that can
>> reliably achieve hard microsecond pause times (on appropriate
>> hardware) is Brook's variation of Baker's treadmill design.
>
>There's a new one.  Check out Pizlo et al. in the PLDI 2008
>proceedings.
>
>-- Scott

Is it viable for *hard* real time or soft RT?  Most RTGCs are only
good for soft RT.

George
From: Scott Burson
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <82a9cd8b-aebf-455f-be3f-03f44a08464e@v26g2000prm.googlegroups.com>
On Aug 9, 6:50 pm, George Neuner <········@comcast.net> wrote:
> On Sat, 9 Aug 2008 09:25:41 -0700 (PDT), Scott Burson
>
> >There's a new one.  Check out Pizlo et al. in the PLDI 2008
> >proceedings.
>
> Is it viable for *hard* real time or soft RT?  Most RTGCs are only
> good for soft RT.

Is a 1 millisecond worst-case pause time hard enough for you?

-- Scott
From: Bakul Shah
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <48A0C191.5090305@bitblocks.com>
Scott Burson wrote:
> On Aug 9, 6:50 pm, George Neuner <········@comcast.net> wrote:
>> On Sat, 9 Aug 2008 09:25:41 -0700 (PDT), Scott Burson
>>
>>> There's a new one.  Check out Pizlo et al. in the PLDI 2008
>>> proceedings.
>> Is it viable for *hard* real time or soft RT?  Most RTGCs are only
>> good for soft RT.
> 
> Is a 1 millisecond worst-case pause time hard enough for you?
> 
> -- Scott
> 

The PLDI paper you refer to says none of the three GC algorithms
are *provably* hard real time. The 1ms worst-case pause seems to
be a measured quantity for a particular benchmark. Still, pretty
clever algorithms!
From: Slobodan Blazeski
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <8cf196e7-4338-4045-b02f-9a97c8883d1c@p25g2000hsf.googlegroups.com>
On Aug 9, 6:25 pm, Scott Burson <········@gmail.com> wrote:
> On Aug 8, 3:30 pm, George Neuner <········@comcast.net> wrote:
>
> > There are several designs for RTGC, but so far the only one that can
> > reliably achieve hard microsecond pause times (on appropriate
> > hardware) is Brook's variation of Baker's treadmill design.
>
> There's a new one.  Check out Pizlo et al. in the PLDI 2008
> proceedings.
>
> -- Scott

Thanks for the link, interesthing reading.
Does anybody has experience with Cheney copying GC in lisp
implementations?  Mark and sweep seems to be more favoured but Cheney
is easier to implement and upgrade to  generational one.

bobi
From: George Neuner
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <ngf4a4l1bl2pjbhk61ehr7qt9d824hqgo1@4ax.com>
On Sat, 9 Aug 2008 09:25:41 -0700 (PDT), Scott Burson
<········@gmail.com> wrote:

>On Aug 8, 3:30 pm, George Neuner <········@comcast.net> wrote:
>> There are several designs for RTGC, but so far the only one that can
>> reliably achieve hard microsecond pause times (on appropriate
>> hardware) is Brook's variation of Baker's treadmill design.
>
>There's a new one.  Check out Pizlo et al. in the PLDI 2008
>proceedings.
>
>-- Scott

Sorry ... Pizlo's work is interesting, but like most everything else
it is SRT, not HRT.  Being fast does not make a collector HRT - having
a set of operations for which both the time and frequency of use is
completely predictable given the application makes a collector HRT.

Baker's treadmill is completely predictable.  Brook's version (on
uniprocessor at least) reduces time spent in GC and changes where it
is incurred WRT Baker, but does not alter predictability of the
design.

Pizlo's collectors all have potentially unbounded object copy times
during which the object is inaccessible (odd because Pizlo claims to
use Brook's indirection method which does not require this) and the
STOPLESS and CHICKEN collectors require handshaking with all threads
observing the object to be copied - it is unclear whether the CLOVER
collector does as well.  STOPLESS does not guarantee termination,
CHICKEN does not guarantee to copy all live objects, and Pizlo did not
present the version of CLOVER which guarantees both (the version
presented has no termination guarantee).

George
From: D Herring
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <z9KdnbhuSb236D_VnZ2dnUVZ_s_inZ2d@comcast.com>
George Neuner wrote:
> Baker's treadmill is completely predictable.  Brook's version (on
> uniprocessor at least) reduces time spent in GC and changes where it
> is incurred WRT Baker, but does not alter predictability of the
> design.

In his treadmill paper, Baker mentions a difficulty with allocating 
variable-sized objects.  Has that been resolved?  Has anyone tried 
running multiple treadmills in parallel, with the kth having 2^k-byte 
chunks?

Just curious,
Daniel
From: George Neuner
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <ltl8a45qe7lvrpiljqr99cvcg75oggo170@4ax.com>
On Wed, 13 Aug 2008 01:46:07 -0400, D Herring
<········@at.tentpost.dot.com> wrote:

>George Neuner wrote:
>> Baker's treadmill is completely predictable.  Brook's version (on
>> uniprocessor at least) reduces time spent in GC and changes where it
>> is incurred WRT Baker, but does not alter predictability of the
>> design.
>
>In his treadmill paper, Baker mentions a difficulty with allocating 
>variable-sized objects.  Has that been resolved?  Has anyone tried 
>running multiple treadmills in parallel, with the kth having 2^k-byte 
>chunks?
>
>Just curious,

Yes, treadmill deals only with fixed sized blocks so you have to round
up allocations to the block size and expect some waste - that's the
price for predictable operation.

As far as using multiple treadmills, Wilson's and Johnstone's C++
collector use treadmills in several typical allocation sizes and a
catch-all region for large objects.

The thing about HRT systems is that they are not likely to ever
allocate large objects while in operation - any large objects needed
will be pre-allocated at the beginning and reused as necessary.  What
makes HRT programming tough is not having convenient small temporary
allocations.  HRT systems that require dynamic allocation mostly use
one of 2 methods: preallocate all objects (sometimes managed by list)
or mark-release heap(s) which forces allocation to obey either stack
or linear (throwaway) discipline because "release" nukes the whole
heap region.

George
From: Dan Weinreb
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <8e96b926-0374-407b-98fc-75a27ea0a083@y21g2000hsf.googlegroups.com>
On Aug 8, 2:26 am, Vend <······@virgilio.it> wrote:
> On 6 Ago, 04:54, Paul Donnelly <·············@sbcglobal.net> wrote:
>
>
>
> > Francogrex <······@grex.org> writes:
> > > This quote below is from 'Successful Lisp:How to Understand and Use
> > > Common Lisp' by David B. Lamkins:
> > > "If you have dreams of writing the next great videogame, you've
> > > probably already discovered that you need a language that lets you
> > > program "close to the machine" -- If so, Lisp will disappoint you."
> > > I don't understand this completely. Does this mean that:
> > > A- It's harder (or less practical) to, for example, write a videogame
> > > software using common lisp than with other languages (C, CPP etc)? or
> > > B- Does he mean you cannot do it as good (like it will be a crappy
> > > game)? or
> > > C- You cannot do it at all in common lisp?
> > > I have no intention to write the next generation videogame, just
> > > wondering about the quote and the capabilities of lisp and also, has
> > > anyone ever written a videogame purely in lisp? Thanks.
>
> > It looked at first like a variation of the "Lisp is slow" myth, or the
> > "GC causes unpredicatable pauses" complaint, which is technically
> > true, though I am skeptical about whether the pauses are long or
> > frequent enough to disrupt gameplay--a 3d game does not need hard
> > real-time, but in Chapter 1 he debunks both of these.
>
> But in some kinds of videogames a 1-2 seconds pause is unacceptable.
>
> If I understand correctly, garbage collectors with low pause times
> suitable for realtime exist, but they impose a performance loss, so
> maybe they are unsuitable for modern games.
>
> > So all I can
> > figure is that he means it literally, saying that one needs low-level
> > access to the machine.
>
> > All you *really* need is the ability to interface with OpenGL and
> > something to read the keyboard, mouse, and joystick. You can do that
> > with CFFI and C libraries, with CFFI and a shim to make those C
> > libraries nicer, or by embedding Lisp in a host that will manage those
> > libraries. You could write a nice looking game more easily in Lisp
> > than in any of the usual languages, offloading whatever low-level or
> > micro-optimized stuff you need to a support codebase written in C.
>
> > The quote doesn't make much sense to me.
>
>

It's definitely possible to write interactive software, with very nice
graphics, and excellent response time, and no discernable GC pauses.

InspireData is written in LispWorks.  It uses OpenGL, and the
LispWorks package that lets you get at the operating system's menus
and drop-downs and such.  It's a shrink-wrapped product costing less
than $100 and has been very successful.

http://www.inspiration.com/productinfo/inspiredata/index.cfm

There were a very few places that they had to write some C code,
mainly to enhance the Lisp-to-OpenGL slightly.

They didn't do anything very special about the GC.  Evidently the
LispWorks GC just works very well.

Heavy-duty video games use very complex graphics, dealing with the
capabilities of powerful graphics processors.  I have not heard of any
Lisp program that does this.  However, the heavy lifting for the
graphics is being done by the graphics card, not by the main
processor.  So could the part of the game that runs in the main
processor be written in Lisp?  I don't know enough out game programs
to say, but it doesn't seem implausible.
From: Marek Kubica
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <g858h7$rva$2@hoshi.visyn.net>
Hi,

On Thu, 07 Aug 2008 23:26:16 -0700, Vend wrote:

> But in some kinds of videogames a 1-2 seconds pause is unacceptable.
> 
> If I understand correctly, garbage collectors with low pause times
> suitable for realtime exist, but they impose a performance loss, so
> maybe they are unsuitable for modern games.

There is Ypsilon Scheme <http://code.google.com/p/ypsilon/> which reduces 
GC pause times. Maybe it is related to the fact that it is made by 
LittleWing folks which are normally doing fast paced pinball games :)

regards,
Marek
From: Marek Kubica
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <g858vt$rva$3@hoshi.visyn.net>
On Sat, 16 Aug 2008 00:55:03 +0000, Marek Kubica wrote:

> There is Ypsilon Scheme <http://code.google.com/p/ypsilon/> which
> reduces GC pause times. Maybe it is related to the fact that it is made
> by LittleWing folks which are normally doing fast paced pinball games :)

Oh, wait, there's more: <http://egachine.berlios.de/sgachine/> (I know it 
is Scheme but there surely are also CL ones available somewhere).

regards,
Marek
From: Geoffrey Summerhayes
Subject: Re: Program "close to the machine"
Date: 
Message-ID: <dfe5a34c-1e9f-41c4-b48c-ec3feddc502c@a70g2000hsh.googlegroups.com>
On Aug 15, 9:02 pm, Marek Kubica <·····@xivilization.net> wrote:
> On Sat, 16 Aug 2008 00:55:03 +0000, Marek Kubica wrote:
> > There is Ypsilon Scheme <http://code.google.com/p/ypsilon/> which
> > reduces GC pause times. Maybe it is related to the fact that it is made
> > by LittleWing folks which are normally doing fast paced pinball games :)
>
> Oh, wait, there's more: <http://egachine.berlios.de/sgachine/> (I know it
> is Scheme but there surely are also CL ones available somewhere).

Well, considering that modern games seem to be using more
sophisticated memory caching algorithms for resource management,
scripting engines like Lua having their own GC algorithms, plus
having examples of companies that have used Lisp to develop
successful games, it seems all this discussion over garbage
collection in Lisp is like the old professor who said,

"I know the damn thing works in practice, but does it work in theory?"

----
Geoff