From: Fernando Rodríguez
Subject: Silly GC question
Date: 
Message-ID: <J8ib5.117$Ve3.1868@m2newsread.uni2.es>
Excuse me if the answer to this is too obvious, but can't the GC work on
a different thread (while the app is running) to avoid GC pauses, or is it
absolutely necessary for the application to stop making changes to memory
while the GC works? O:-)

TIA

--


---------------------------------------
Fernando Rodríguez

From: Martin Cracauer
Subject: Re: Silly GC question
Date: 
Message-ID: <8kki7o$17o0$1@counter.bik-gmbh.de>
"Fernando Rodríguez" <···@mindless.com> writes:

>Excuse me if the answer to this is too obvious, but can't the GC work on
>a different thread (while the app is running) to avoid GC pauses, or is it
>absolutely necessary for the application to stop making changes to memory
>while the GC works? O:-)

The problem is that the working thread may manipulate data that is
relevant to the GC.

Missing some memory that might be freed is not so bad, but the working
thread may shift a block of data containing pointers to objects in the
direction opposite to the GC's scan, so that the GC misses the
references to these objects and kills them, although they are
still in use.

These problems can be solved, but the solutions require very expensive
locking that slows down non-GC activity, and probably compiler support
(with resulting larger code).  So if you have threads that are spread
across multiple CPUs in your Lisp system, you are probably better off
parallelizing your working code.
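The race can be made concrete with a toy incremental marker. This is a sketch in Python, not any real collector's design; `Obj`, `incremental_mark`, and `barrier_log` are all invented names. The working thread hides a live object from the scan mid-collection, and only re-scanning the stores a write barrier recorded saves it:

```python
# Toy model of an incremental mark phase: the mutator runs once in the
# middle of the scan and can hide a live object from the collector.

class Obj:
    def __init__(self, name):
        self.name = name
        self.refs = []

def incremental_mark(root, mutate, barrier_log=None):
    """Mark objects reachable from root, letting the working thread
    (mutate) run once partway through the scan."""
    marked, gray, mutated = set(), [root], False
    while gray:
        obj = gray.pop()
        if obj not in marked:
            marked.add(obj)
            gray.extend(obj.refs)
        if not mutated:          # the app thread runs mid-collection
            mutate()
            mutated = True
    # A write barrier would have recorded stores made during the scan;
    # re-scanning them recovers objects the mutator hid.
    if barrier_log is not None:
        gray = list(barrier_log)
        while gray:
            obj = gray.pop()
            if obj not in marked:
                marked.add(obj)
                gray.extend(obj.refs)
    return marked
```

Build A -> B -> C; mid-scan the mutator stores C into the already-scanned A and severs B -> C. Without the barrier, C is never marked and would be reclaimed while still live.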

There is research on this; I'm sure you can find material in Paul
Wilson's GC paper collection:
http://www.cs.utexas.edu/users/oops/papers.html

Martin
-- 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Martin Cracauer <········@bik-gmbh.de> http://www.bik-gmbh.de/~cracauer/
FreeBSD - where you want to go. Today. http://www.freebsd.org/
From: Barry Margolin
Subject: Re: Silly GC question
Date: 
Message-ID: <Gzkb5.3$BL6.325@burlma1-snr2>
In article <··················@m2newsread.uni2.es>,
Fernando Rodríguez <···@mindless.com> wrote:
>Excuse me if the answer to this is too obvious, but can't the GC work on
>a different thread (while the app is running) to avoid GC pauses, or is it
>absolutely necessary for the application to stop making changes to memory
>while the GC works? O:-)

It depends on how the implementation's memory management and GC are
designed.

When most current Lisp implementations were designed, Unix and Windows
didn't have threads, so the designers couldn't take advantage of them and
didn't design their memory management with them in mind.  Doing so now
could require a significant redesign.

-- 
Barry Margolin, ······@genuity.net
Genuity, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Robert Monfera
Subject: Re: Silly GC question
Date: 
Message-ID: <396E538E.EDAA4214@fisec.com>
Barry Margolin wrote:

> When most current Lisp implementations were designed, Unix and 
> Windows didn't have threads, so the designers couldn't take advantage
> of them and didn't design their memory management with them in mind.
> Doing so now could require a significant redesign.

It sounds like blaming the lack of investment in Lisp compilers during
the last 10 years, so I hope it is not the main factor.  Also, during
the design of Corman Lisp, ACL for Windows 5.0, and LispWorks for Windows,
lightweight threads were being thought of (all of these support them), and
these were new engines (except maybe ACL, which inherited a lot from
release 4).

Robert
From: Barry Margolin
Subject: Re: Silly GC question
Date: 
Message-ID: <cvtb5.50$BL6.1418@burlma1-snr2>
In article <·················@fisec.com>,
Robert Monfera  <·······@fisec.com> wrote:
>Barry Margolin wrote:
>
>> When most current Lisp implementations were designed, Unix and 
>> Windows didn't have threads, so the designers couldn't take advantage
>> of them and didn't design their memory management with them in mind.
>> Doing so now could require a significant redesign.
>
>It sounds like blaming the lack of investment in Lisp compilers during
>the last 10 years, so I hope it is not the main factor.  Also, during

I doubt that most of the vendors have redesigned their memory management
during this time.  I suspect the last major revamp that was done by the
vendors that have been around since the 80's was generational GC.  Since
then, I think most of the changes have been at higher levels, such as UI
features.

>the design of Corman Lisp, ACL for Windows 5.0 and Lispworks for Windows
>lightweight threads were being thought of (all these support them), and
>these were new engines (except maybe ACL, which inherited a lot from
>release 4).

Most Lisps provide application-level threading, but I don't think they
generally make use of the OS's threading features.  Application threading
allows for multiple threads of Lisp code, but the GC runs underneath all
this.
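The split Barry describes can be sketched with Python generators standing in for application-level threads. This is a toy model with invented names, not any vendor's runtime: because the runtime itself chooses every switch point, it can run the collector between steps, when no thread is halfway through mutating the heap, with no locking at all.

```python
# Toy cooperative scheduler: generators stand in for "green" threads.
# The scheduler controls every switch point, so gc() runs only at safe
# points, with the whole "world" implicitly stopped.

def run_green_threads(threads, gc, gc_every=3):
    steps = 0
    while threads:
        thread = threads.pop(0)
        try:
            next(thread)          # run one step of this green thread
            threads.append(thread)
        except StopIteration:
            pass                  # thread finished
        steps += 1
        if steps % gc_every == 0:
            gc()                  # safe point: no thread is mid-mutation
```

Two workers interleave and the "collector" runs between their steps; contrast with OS threads, where the GC can be preempted at any instruction.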

-- 
Barry Margolin, ······@genuity.net
Genuity, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Christopher Browne
Subject: Threading Noninvestment
Date: 
Message-ID: <slrn8msqr2.nsd.cbbrowne@knuth.brownes.org>
Centuries ago, Nostradamus foresaw a time when Barry Margolin would say:
>In article <·················@fisec.com>,
>Robert Monfera  <·······@fisec.com> wrote:
>>Barry Margolin wrote:
>>
>>> When most current Lisp implementations were designed, Unix and 
>>> Windows didn't have threads, so the designers couldn't take advantage
>>> of them and didn't design their memory management with them in mind.
>>> Doing so now could require a significant redesign.
>>
>>It sounds like blaming the lack of investment in Lisp compilers during
>>the last 10 years, so I hope it is not the main factor.  Also, during
>
>I doubt that most of the vendors have redesigned their memory management
>during this time.  I suspect the last major revamp that was done by the
>vendors that have been around since the 80's was generational GC.  Since
>then, I think most of the changes have been at higher levels, such as UI
>features.
>
>>the design of Corman Lisp, ACL for Windows 5.0 and Lispworks for Windows
>>lightweight threads were being thought of (all these support them), and
>>these were new engines (except maybe ACL, which inherited a lot from
>>release 4).
>
>Most Lisps provide application-level threading, but I don't think they
>generally make use of the OS's threading features.  Application threading
>allows for multiple threads of Lisp code, but the GC runs underneath all
>this.

... And it's really a trade-off:

--> Asking the OS kernel to do a thread switch is a _lot_ more expensive
    than the Lisp implementation internalizing it.

--> OS threading may allow farming out work to multiple CPUs on an SMP
    system, but provides _NO_ performance benefits unless you've got
    multiple CPUs.

--> The _other_ main reason I'd expect OS threading to prove useful is if
    it allows you to have Lisp code communicate, via threads, with other
    specific non-Lisp code.

Given these considerations, it's hard to see the complexity of figuring
out how to manage GC with OS threads turning out to be _massively_
worthwhile.

I know a few people that feel that SMP is likely to become _massively_
important, but I don't really believe this.  Motherboards offering
more than two Pentium CPUs (of whatever generation) get very expensive
very quickly, so it just does not seem economical to me.  It makes more
sense to me that faster network hardware will encourage the use, for
heavy computational work, of Beowulf-like "clusters," where OS
threading seems unlikely to me to be _vastly_ interesting.

Anyone feel free to disagree with whichever bits you'd like; 
the killer question is: Just which OS threading model do you think
is most likely to be dominant five years from now, and thus worth
investing implementation time on?

I'd be completely unshocked to hear that implementing a Lisp version of
_any_ OS threading scheme would cost on the order of $1M; unless that
results in selling $Millions more worth of compilers as a result, it
will be of questionable value.  Alternatively, from the "free software"
perspective, if the functionality _isn't_ heavily used, then it would
have been better to invest efforts in something else, say, in improving
the CORBA implementation.
-- 
·····@freenet.carleton.ca - <http://www.hex.net/~cbbrowne/lsf.html>
Rules of the Evil Overlord #28. "My pet monster will be kept in a
secure cage from which it cannot escape and into which I could not
accidentally stumble." <http://www.eviloverlord.com/>
From: William Deakin
Subject: Re: Threading Noninvestment
Date: 
Message-ID: <396ED25A.4CCC45C8@pindar.com>
Christopher Browne wrote:
> I know a few people that feel that SMP is likely to become _massively_
> important, but I don't really believe this.  Motherboards offering
> more than two Pentium CPUs (of whatever generation) get very expensive
> very quickly, so it just does not seem economical to me.  
I think this is interesting: if I were a large bank, telecoms
company, computer hardware manufacturer, or whatever, I might want
something a bit beefier than a dual Pentium box.

Where I work there are 25+ servers running Unix with multiple
processors.
And we're not that big.

> It makes more sense to me that faster network hardware will encourage the
> use, for heavy computational work, of Beowulf-like "clusters," where OS
> threading seems unlikely to me to be _vastly_ interesting.
Yes! Roll on fibre networking (or whatever it will be called).

Cheers,

Will
From: Frank A. Adrian
Subject: Re: Threading Noninvestment
Date: 
Message-ID: <esVb5.1119$AC5.492045@news.uswest.net>
William Deakin <········@pindar.com> wrote in message
······················@pindar.com...
> Christopher Browne wrote:
> > I know a few people that feel that SMP is likely to become _massively_
> > important, but I don't really believe this.  Motherboards offering
> > more than two Pentium CPUs (of whatever generation) get very expensive
> > very quickly, so it just does not seem economical to me.
> I think this is interesting: if I were a large bank, telecoms
> company, computer hardware manufacturer, or whatever, I might want
> something a bit beefier than a dual Pentium box.

And if you do, you buy a machine that actually can stay up for years at a
time - a mainframe.  You don't piddle about with a silly wonky UNIX box,
either.  Sorry, but I've been working with AS/400's and S/390's for the last
9 months or so.  Those things are RELIABLE.  Unix still can't touch them WRT
uptime.  And we won't even discuss Windows - it is to laugh.

The only sad part is that it's hard to find Lisps for either :-(.  BTW I
once looked at possibly porting a Lisp to an AS/400.  Most of the systems
I've seen the source for seem to assume 32-bit pointers and the 128-bit
AS/400 pointers throw them for a loop.  256-bits!  Now that's a cons cell.

faa
From: Fernando Rodr�guez
Subject: Re: Threading Noninvestment
Date: 
Message-ID: <VyUb5.12$zx3.383@m2newsread.uni2.es>
"Frank A. Adrian" <·······@uswest.net> wrote in message
··························@news.uswest.net...
> William Deakin <········@pindar.com> wrote in message
> ······················@pindar.com...
> And if you do, you buy a machine that actually can stay up for years at a
> time - a mainframe.

And you spend all those years coding in cobol, rexx and God knows what
else... ;-)
From: Tim Bradshaw
Subject: Re: Threading Noninvestment
Date: 
Message-ID: <ey3k8enybp9.fsf@cley.com>
* Frank A Adrian wrote:
> William Deakin <········@pindar.com> wrote in message
>> I think this is interesting: if I were a large bank, telecoms
>> company, computer hardware manufacturer, or whatever, I might want
>> something a bit beefier than a dual Pentium box.

> And if you do, you buy a machine that actually can stay up for years at a
> time - a mainframe.  You don't piddle about with a silly wonky UNIX box,
> either.  Sorry, but I've been working with AS/400's and S/390's for the last
> 9 months or so.  Those things are RELIABLE.  Unix still can't touch them WRT
> uptime.  And we won't even discuss Windows - it is to laugh.

Plenty of banks & telcos buy large Unix boxes. And actually, they're
quite reliable now, too, thanks to worse-is-better (to bring it back
on topic...)

--tim
From: Vinodh Das
Subject: Re: Threading Noninvestment
Date: 
Message-ID: <8kpt11$mdj$1@news6.jaring.my>
>
> And if you do, you buy a machine that actually can stay up for years at a
> time - a mainframe.  You don't piddle about with a silly wonky UNIX box,
> either.  Sorry, but I've been working with AS/400's and S/390's for the last
> 9 months or so.  Those things are RELIABLE.  Unix still can't touch them WRT
> uptime.  And we won't even discuss Windows - it is to laugh.
>
> The only sad part is that it's hard to find Lisps for either :-(.  BTW I
> once looked at possibly porting a Lisp to an AS/400.

Do you feel that reliability of AS/400s can be duplicated by Compaq's
Nonstop (tm) Himalaya servers (which apparently will run Tru64 Unix by 2001:
http://www.tandem.com/pres_rel/intg64pl/intg64pl.htm )?  I don't think that
Lisp is available on this platform either, but it seems that there may be
fewer hurdles for existing vendors (if they were interested in a porting
effort) to overcome.

-Vinodh
From: Fernando Rodríguez
Subject: Re: Threading Noninvestment
Date: 
Message-ID: <1D1c5.270$zx3.3834@m2newsread.uni2.es>
"Vinodh Das" <·······@pd.jaring.my> wrote in message
·················@news6.jaring.my...
> Do you feel that reliability of AS/400s can be duplicated by Compaq's
> Nonstop (tm) Himalaya servers (which apparently will run Tru64 Unix by 2001:
> http://www.tandem.com/pres_rel/intg64pl/intg64pl.htm )?  I don't think that
> Lisp is available on this platform either, but it seems that there may be

Isn't Open Genera available for it? :-?
From: Paolo Amoroso
Subject: Re: Threading Noninvestment
Date: 
Message-ID: <iZ5xOYRvmWR+fytSvSbQKvMMLlhs@4ax.com>
On Sat, 15 Jul 2000 17:58:21 GMT, "Fernando Rodríguez" <···@mindless.com>
wrote:

> Isn't Open Genera available for it? :-?

Probably not. But the Symbolics, Inc. home page:

  http://www.symbolics.com/

states that they are contemplating porting Genera to other platforms, and
they solicit suggestions from users.


Paolo
-- 
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://cvs2.cons.org:8000/cmucl/doc/EncyCMUCLopedia/
From: Arun Welch
Subject: Re: Threading Noninvestment
Date: 
Message-ID: <3970ae57.1088873@news.earthlink.net>
On Fri, 14 Jul 2000 23:40:11 -0700, "Frank A. Adrian"
<·······@uswest.net> wrote:

>And if you do, you buy a machine that actually can stay up for years at a
>time - a mainframe.  You don't piddle about with a silly wonky UNIX box,
>either.  Sorry, but I've been working with AS/400's and S/390's for the last
>9 months or so.  

>The only sad part is that it's hard to find Lisps for either :-(. 

Somewhere in a box at the back of the garage I think I've still got a
tape of the Interlisp port to the 360... Of all the Interlisps I've
used, this one had the worst user interface :-).

...arun
From: Duane Rettig
Subject: Re: Threading Noninvestment
Date: 
Message-ID: <4wvinmd6u.fsf@beta.franz.com>
·····@anzus.com (Arun Welch) writes:

> On Fri, 14 Jul 2000 23:40:11 -0700, "Frank A. Adrian"
> <·······@uswest.net> wrote:
> 
> >And if you do, you buy a machine that actually can stay up for years at a
> >time - a mainframe.  You don't piddle about with a silly wonky UNIX box,
> >either.  Sorry, but I've been working with AS/400's and S/390's for the last
> >9 months or so.  
> 
> >The only sad part is that it's hard to find Lisps for either :-(. 
> 
> Somewhere in a box at the back of the garage I think I've still got a
> tape of the Interlisp port to the 360... Of all the Interlisps I've
> used, this one had the worst user interface :-).
> 
> ...arun

And on my shelf I have a tape of an old version of Allegro CL for S/370.
The 360/370 architecture has had several lisps ported to it; most have
died due to lack of interest.  And I'm not talking about interest in a
technical sense, but in a "are you interested enough to put your money
down?" sense.


-- 
Duane Rettig          Franz Inc.            http://www.franz.com/ (www)
1995 University Ave Suite 275  Berkeley, CA 94704
Phone: (510) 548-3600; FAX: (510) 548-8253   ·····@Franz.COM (internet)
From: Arun Welch
Subject: Re: Threading Noninvestment
Date: 
Message-ID: <39712aeb.1296228@news.earthlink.net>
On 15 Jul 2000 12:00:09 -0700, Duane Rettig <·····@franz.com> wrote:

>And on my shelf I have a tape of an old version of Allegro CL for S/370.
>The 360/370 architecture has had several lisps ported to it; most have
>died due to lack of interest.  And I'm not talking about interest in a
>technical sense, but in a "are you interested enough to put your money
>down?" sense.
>

I seem to remember that the pricing scheme for commercial Lisps for
high-performance machines (360's, Crays, etc) used to be significantly
higher than that for "regular" machines, so the vendor's perceived
lack of interest might have been self-inflicted. 

...arun
From: Duane Rettig
Subject: Re: Threading Noninvestment
Date: 
Message-ID: <4vgy6my5w.fsf@beta.franz.com>
·····@anzus.com (Arun Welch) writes:

> On 15 Jul 2000 12:00:09 -0700, Duane Rettig <·····@franz.com> wrote:
> 
> >And on my shelf I have a tape of an old version of Allegro CL for S/370.
> >The 360/370 architecture has had several lisps ported to it; most have
> >died due to lack of interest.  And I'm not talking about interest in a
> >technical sense, but in a "are you interested enough to put your money
> >down?" sense.
> >
> 
> I seem to remember that the pricing scheme for commercial Lisps for
> high-performance machines (360's, Crays, etc) used to be significantly
> higher than that for "regular" machines, so the vendor's perceived
> lack of interest might have been self-inflicted. 

True enough in fact, but incorrect in spirit; you'd have to replace "Lisp"
in your paragraph with "software" to be truly correct.  15 to 20 years
ago _any_ software on mainframes that was less than $10K-$40K (USD) was
looked at suspiciously as junk.  Also, remember that back then, the
"regular" machines were not workstations but minicomputers, and the
"real" computers were the mainframes.  The workstations were considered
toys at the time, and just starting to come into their own.  At least,
that is what those of us who worked for mainframe manufacturers thought
at the time ...

I do agree that the wane of the mainframe was somewhat self-inflicted,
but inevitable, due to the trends in hardware.

-- 
Duane Rettig          Franz Inc.            http://www.franz.com/ (www)
1995 University Ave Suite 275  Berkeley, CA 94704
Phone: (510) 548-3600; FAX: (510) 548-8253   ·····@Franz.COM (internet)
From: Frank A. Adrian
Subject: Re: Threading Noninvestment
Date: 
Message-ID: <_Iec5.2478$I23.422857@news.uswest.net>
Duane Rettig <·····@franz.com> wrote in message
··················@beta.franz.com...
> I do agree that the wane of the mainframe was somewhat self-inflicted,
> but inevitable, due to the trends in hardware.

What wane?  Sales of mainframes continue to grow.  Maybe not at the rate of
those souped up microcontrollers everyone has on their desktop :-O, but they
continue to grow nonetheless.

faa
From: Duane Rettig
Subject: Re: Threading Noninvestment
Date: 
Message-ID: <4snt8es5o.fsf@beta.franz.com>
"Frank A. Adrian" <·······@uswest.net> writes:
> Duane Rettig <·····@franz.com> wrote in message
> ··················@beta.franz.com...
> > I do agree that the wane of the mainframe was somewhat self-inflicted,
> > but inevitable, due to the trends in hardware.
> 
> What wane?  Sales of mainframes continue to grow.  Maybe not at the rate of
> those souped up microcontrollers everyone has on their desktop :-O, but they
> continue to grow nonetheless.

I should explain what I mean by wane, since it has a couple of meanings
and I am using the second meaning in my dictionary.  I deliberately chose
to stay away from the words "death" and "decline"; being part of the lisp
industry has sensitized me to this kind of nearsightedness.  And indeed,
the first definition of wane does mean "to lose size, become smaller,
diminish" [World Book Dictionary, 1982], and is most used in the phrase
"wax and wane".  However, the meaning I intended was the second: "to lose
power, influence, or importance; (ex: Many great empires have waned.)"
[ibid].

I could not have intelligently used the first definition of the term
unless I thought the mainframe industry was dead or dying.  In modern
business, if a company isn't growing by at least 7% to 10%, and if it
doesn't have a "sugar daddy" (a source of income that has nothing to
do with the business at hand), then it is bound for destruction.

-- 
Duane Rettig          Franz Inc.            http://www.franz.com/ (www)
1995 University Ave Suite 275  Berkeley, CA 94704
Phone: (510) 548-3600; FAX: (510) 548-8253   ·····@Franz.COM (internet)
From: William Deakin
Subject: Re: Threading Noninvestment
Date: 
Message-ID: <39732635.E751F32B@pindar.com>
Duane Rettig wrote:
> I should explain what I mean by wane, since it has a couple of meanings
> and I am using the second meaning in my dictionary... 
And I thought you were referring to the fact that the mainframe-moon was
descending into Neptune.

;)will
From:  e 4 5 5 @ y a h o o . c o m
Subject: Mainframe programming (Re: Threading Noninvestment)
Date: 
Message-ID: <6ih1ns0irvdq3sd28k9ienbfbr8rn6rgr5@4ax.com>
On Fri, 14 Jul 2000 23:40:11 -0700, "Frank A. Adrian" <·······@uswest.net>
wrote:

>The only sad part is that it's hard to find Lisps for either :-(.  BTW I

For those who want to do mainframe programming and don't want to use Cobol
etc., a good compromise might be to use Smalltalk.  Smalltalk is one of IBM's
favorite programming languages, and you can get an extremely powerful and
robust Smalltalk development system from them.

Lisp vs Smalltalk is a complicated tradeoff.  Most Lisp programmers would be
dismayed at the lack of their favorite language features, but might find some
long term advantages that might not seem obvious at first.  One big advantage
is that you can hire talented programmers who use any language at all, and
they can train themselves to use Smalltalk in a short time, much shorter than
Lisp.  Another is the fact that big companies such as IBM have an
overwhelming preference for Smalltalk over Lisp, so it's much easier to work
with those companies, and therefore much easier to earn money from them.

In the biggest and most important differences between Lispy languages and C++
et al, Smalltalk is in the Lisp camp.  It has garbage collection, dynamic
typing, closures, etc.  And it's easy to add new features, which is exactly
what a lot of Lisp programmers would probably do.
From: Frank A. Adrian
Subject: Re: Mainframe programming (Re: Threading Noninvestment)
Date: 
Message-ID: <do8c5.813$E97.364943@news.uswest.net>
e 4 5 5 @ y a h o o . c o m <······@nospam.com> wrote in message
·······································@4ax.com...
> On Fri, 14 Jul 2000 23:40:11 -0700, "Frank A. Adrian" <·······@uswest.net>
> wrote:
>
> >The only sad part is that it's hard to find Lisps for either :-(.  BTW I
>
> For those who want to do mainframe programming and don't want to use Cobol
> etc., a good compromise might be to use Smalltalk.  Smalltalk is one of IBM's
> favorite programming languages, and you can get an extremely powerful and
> robust Smalltalk development system from them.

Except they stopped selling VAST for the AS/400 last year.  I wouldn't be
surprised if S/390 support is next to go.

> Lisp vs Smalltalk is a complicated tradeoff.  Most Lisp programmers would be
> dismayed at the lack of their favorite language features, but might find some
> long term advantages that might not seem obvious at first.  One big advantage
> is that you can hire talented programmers who use any language at all, and
> they can train themselves to use Smalltalk in a short time, much shorter than
> Lisp.  Another is the fact that big companies such as IBM have an
> overwhelming preference for Smalltalk over Lisp, so it's much easier to work
> with those companies, and therefore much easier to earn money from them.

It is a complicated tradeoff.  Having been a Smalltalk programmer (in a past
life), I can attest to the superiority of Smalltalk over C, C++, or Java
hideousness.  But I still like Lisp better.  I do wonder, though, why
Smalltalk seems to be more in favor than Lisp.  I don't think that it's an
issue of Smalltalk being easier to learn to use.  I found that both Lisp and
Smalltalk came very naturally.  Maybe it's the parens :-).  Maybe it's the
fact that some sort of GUI and database interface comes bundled with every
Smalltalk system, but you may not get them with a Lisp system without paying
extra.  In any case, I still find Lisp more powerful and more in line with
how I think.  Note that this is a *personal* preference.  As I say, "The
worst day programming with a dynamic language is better than the best day
programming with a static language."

> In the biggest and most important differences between Lispy languages and C++
> et al, Smalltalk is in the Lisp camp.  It has garbage collection, dynamic
> typing, closures, etc.  And it's easy to add new features, which is exactly
> what a lot of Lisp programmers would probably do.

Yes.  I liked extending the language and generating code when I used
Smalltalk.  It's just a lot more convenient to do so in Lisp.

faa
From: Barry Margolin
Subject: Re: Threading Noninvestment
Date: 
Message-ID: <icGb5.77$BL6.1817@burlma1-snr2>
In article <·················@pindar.com>,
William Deakin  <········@pindar.com> wrote:
>Where I work there are 25+ servers running Unix with multiple
>processors. 
>And we're not that big.

Unless the server is running just a single application program, you still
get benefit from multiple processors even if the language doesn't support
farming threads out to different processors.  If multiple application
processes are running, they can each be on a different CPU.

-- 
Barry Margolin, ······@genuity.net
Genuity, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: William Deakin
Subject: Re: Threading Noninvestment
Date: 
Message-ID: <39730E0F.E0A7567E@pindar.com>
Barry Margolin wrote:
> Unless the server is running just a single application program, you still
> get benefit from multiple processors even if the language doesn't support
> farming threads out to different processors.
True. However, this is not quite what I was talking about (I must have
explained myself badly).

> If multiple application processes are running, they can each be on a 
> different CPU.
Agreed. It's just that if I write a server application and want some kind
of concurrent access to a single Lisp process, some kind of SMP would be
nice.

Cheers,

Will
From: Tim Bradshaw
Subject: Re: Threading Noninvestment
Date: 
Message-ID: <ey3wvikx2hs.fsf@cley.com>
* William Deakin wrote:
> Agreed. It's just that if I write a server application and want some kind
> of concurrent access to a single Lisp process, some kind of SMP would be
> nice.

It's always seemed to me that one of Lisp's great advantages is that
you can deal with really large, complex data structures without it
becoming just a nightmare, because the low-level memory management
issues are largely solved by Lisp, and dynamic typing helps a lot too.

Obviously you can have multiple lisp images running on a big machine,
perhaps communicating by some kind of message passing or OS
shared-memory segment, but if you do that then you lose a great deal
of the Lisp win, because it becomes hard to have some large shared
data structure which all the images can see without either writing
your own memory management system or serializing access to the
structure.
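The "serializing access" fallback can be sketched like this (a toy in Python using OS threads and one lock; `SharedTable` and its methods are invented for illustration): the structure stays shared and consistent, but every worker queues up behind the same lock, which gives away much of the concurrency you were hoping to win.

```python
import threading

# Toy shared structure: every access is serialized through a single lock,
# so workers never see it mid-update -- and never run on it in parallel.

class SharedTable:
    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}

    def update(self, key, fn, default=0):
        """Apply fn to the current value of key, atomically."""
        with self._lock:              # one reader/writer at a time
            self._data[key] = fn(self._data.get(key, default))

    def get(self, key):
        with self._lock:
            return self._data.get(key)
```

Four workers hammering one counter stay consistent, but only because they take turns at the lock.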

It might be the case that the only large shared structure that's
really interesting is the relational database, and that's kind of a
solved problem, or at least trying to get market share from Oracle
might be hard.  It seems to me an RDBMS must be quite easy to sell
because it's fairly easy to see how all sorts of mundane commercial
data will go into it.  And even if you have a lisp which will run on a
multiprocessor you still need to do lots of work to get it to meet the
ACID criteria -- transactions for instance look like pretty hard work
to do efficiently, though probably easier in Lisp than C++...
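Transactions can at least be sketched in miniature. Here is a toy all-or-nothing update in Python (`Store`, `transact`, and the `_MISSING` sentinel are invented names) showing only the atomicity corner of ACID; the parts Tim calls hard work (durability, isolation, doing it efficiently) are exactly what's left out:

```python
_MISSING = object()   # sentinel: key absent before the transaction

class Store:
    """Toy in-memory store with atomic multi-key updates: either every
    operation applies, or the store is rolled back to its prior state."""
    def __init__(self):
        self.data = {}

    def transact(self, ops):
        """ops: list of (key, fn) updates, applied all-or-nothing."""
        undo = []
        try:
            for key, fn in ops:
                undo.append((key, self.data.get(key, _MISSING)))
                self.data[key] = fn(self.data.get(key))
            return True
        except Exception:
            # Roll back in reverse order, restoring or deleting keys.
            for key, old in reversed(undo):
                if old is _MISSING:
                    self.data.pop(key, None)
                else:
                    self.data[key] = old
            return False
```

If any step raises, earlier steps in the same transaction are undone, so a half-applied update is never visible afterwards.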

--tim
From: William Deakin
Subject: Re: Threading Noninvestment
Date: 
Message-ID: <397325D1.AB8D86C1@pindar.com>
Tim wrote:
> Obviously you can have multiple lisp images running on a big machine,
> perhaps communicating by some kind of message passing or OS
> shared-memory segment, but if you do that then you lose a great deal
> of the Lisp win, because it becomes hard to have some large shared
> data structure which all the images can see without either writing
> your own memory management system or serializing access to the
> structure.
(Puts on stripy shirt) Maybe this is what I was thinking about. However,
I would have thought that such a memory management system, once written,
would be of some small value.
 
> It might be the case that the only large shared structure that's
> really interesting is the relational database, and that's kind of a
> solved problem, or at least trying to get market share from Oracle
> might be hard.
Yes, *if* the only large shared structure is a relational database. 
(The fact that I can't think of any more is more an example of my lack of
imagination than definite proof that such a thing does not exist.)

> It seems to me an RDBMS must be quite easy to sell because it's fairly 
> easy to see how all sorts of mundane commercial data will go into it.  
But the world of computing revolves round such mundane commercial data
:)

> And even if you have a lisp which will run on a multiprocessor you still 
> need to do lots of work to get it to meet the  ACID criteria -- 
> transactions for instance look like pretty hard work to do efficiently, 
(sigh) true

> though probably easier in Lisp than C++...
No argument here,

:)will
From: Vinodh Das
Subject: Re: Threading Noninvestment
Date: 
Message-ID: <8l1944$bpd$1@news6.jaring.my>
Tim Bradshaw <···@cley.com> wrote:

>                  And even if you have a lisp which will run on a
> multiprocessor you still need to do lots of work to get it to meet the
> ACID criteria -- transactions for instance look like pretty hard work
> to do efficiently, though probably easier in Lisp than C++...
>
> --tim

I wondered whether you (or any of the other readers in this group) know of
businesses or products that make use of Lisp for transaction processing.

With the availability of ORBs from both high-end vendors, it seems to be
a natural application domain.  I'm not sure whether this is accurate, and
so I'd like to hear what you all think.


-Vinodh
From: Tim Bradshaw
Subject: Re: Threading Noninvestment
Date: 
Message-ID: <ey37laj3ebi.fsf@cley.com>
* Vinodh Das wrote:
> I wondered whether you (or any of the other readers in this group) know of
> businesses or products that make use of Lisp for transaction processing.

I'm not aware of any, though I'm sure there are some.

> With the availability of ORBs from both high-end vendors, it seems to be
> a natural application domain.  I'm not sure whether this is accurate, and
> so I'd like to hear what you all think.

Perhaps I don't know what an ORB is, but aren't they just very
glamorised and buzzword-compliant foreign function interfaces? If so
then you still have the problem of either writing a TP system in Lisp
which is probably hard if you want efficiency (as it's hard in any
language...), or using Lisp to talk to some backend database.

--tim
From: Philip Lijnzaad
Subject: Re: Threading Noninvestment
Date: 
Message-ID: <u7zonfbq0d.fsf@o2-3.ebi.ac.uk>
Tim> Perhaps I don't know what an ORB is, but aren't they just very
Tim> glamorised and buzzword-compliant foreign function interfaces? 

an ORB (Object Request Broker) is the communication component used when using
CORBA (prepend Common, append Architecture), which can indeed be used as a
foreign function interface. But the main aim of CORBA is to enable
distributed objects in a standardized, language and platform independent
fashion. The FFI is a nice side effect of this (although I don't think it's
being used for that very often).

CORBA, not starting with a 'J' or an 'X', doesn't seem to have achieved the
status of buzzword, nor do I think it ever will. 
                                                                      Philip
-- 
Ban GM foods! Long live the Mesolithicum, pesticides and starvation
-----------------------------------------------------------------------------
Philip Lijnzaad, ········@ebi.ac.uk \ European Bioinformatics Institute,rm A2-24
+44 (0)1223 49 4639                 / Wellcome Trust Genome Campus, Hinxton
+44 (0)1223 49 4468 (fax)           \ Cambridgeshire CB10 1SD,  GREAT BRITAIN
PGP fingerprint: E1 03 BF 80 94 61 B6 FC  50 3D 1F 64 40 75 FB 53
From: Jason Trenouth
Subject: Re: Threading Noninvestment
Date: 
Message-ID: <2pk8ns462o5fqe0s5otmrfr34sgovee4c2@4ax.com>
On 18 Jul 2000 11:47:29 +0100, Tim Bradshaw <···@cley.com> wrote:

> * Vinodh Das wrote:
> > I wondered whether you (or any of the other readers in this group) know of
> > businesses or products that make use of Lisp for transaction processing.
> 
> I'm not aware of any, though I'm sure there are some.
> 
> > With the availability of ORBs from both high-end vendors, it seems to be
> > a natural application domain.  I'm not sure whether this accurate and, so,
> > I'd like to hear what you all think.
> 
> Perhaps I don't know what an ORB is, but aren't they just very
> glamorised and buzzword-compliant foreign function interfaces? If so
> then you still have the problem of either writing a TP system in Lisp
> which is probably hard if you want efficiency (as it's hard in any
> language...), or using Lisp to talk to some backend database.

TP systems can be as hard or as easy as you want them to be depending on the
quality of service that you want. Common Lisp's exception handling might make
some parts easier.

Allegedly, the _really_ hard bit is convincing customers who care about TP to
trust your new system instead one of the 'big' three (or four):

	Transarc's Encina
	IBM's CICS
	BEA's Tuxedo
	( Microsoft's MTS )

__Jason
From: Vinodh Das
Subject: Re: Threading Noninvestment
Date: 
Message-ID: <8l22ps$pr9$1@news6.jaring.my>
Tim Bradshaw <···@cley.com> wrote in message
····················@cley.com...
> * Vinodh Das wrote:
> > I wondered whether you (or any of the other readers in this group) know
of
> > businesses or products that make use of Lisp for transaction processing.
>
> I'm not aware of any, though I'm sure there are some.
>
> > With the availability of ORBs from both high-end vendors, it seems to be
> > a natural application domain.  I'm not sure whether this accurate and,
so,
> > I'd like to hear what you all think.
>
> Perhaps I don't know what an ORB is, but aren't they just very
> glamorised and buzzword-compliant foreign function interfaces? If so
> then you still have the problem of either writing a TP system in Lisp
> which is probably hard if you want efficiency (as it's hard in any
> language...), or using Lisp to talk to some backend database.
>
> --tim

I'm not very clear myself on the specifics of what CORBA is supposed to do.
There is, apparently, an extension to CORBA called the Object Transaction
Service (an example of an OTS-compliant product is Hitachi's TPBroker
http://www.hitachisoftware.com/tpbroker/  or Arjuna's OTSArjuna (this site
is easier to understand) http://www.arjuna.com/products/OTS/tech/ ).

 I was looking for a layer of abstraction to hide the complexities of
transaction processing from a (potential) Lisp application.  But now it
seems that no vendor offers OTS extensions for their Lisp CORBA (or am I
wrong?).

-Vinodh Das
From: Vinodh Das
Subject: Re: Threading Noninvestment
Date: 
Message-ID: <8l3eb3$ror$1@news4.jaring.my>
Vinodh Das <·······@pd.jaring.my> wrote in message
·················@news6.jaring.my...
>
> Tim Bradshaw <···@cley.com> wrote in message
> ····················@cley.com...
> >
> > Perhaps I don't know what an ORB is, but aren't they just very
> > glamorised and buzzword-compliant foreign function interfaces? If so
> > then you still have the problem of either writing a TP system in Lisp
> > which is probably hard if you want efficiency (as it's hard in any
> > language...), or using Lisp to talk to some backend database.
> >
> > --tim
>
> I'm not very clear myself on the specifics of what CORBA is supposed to
do.
> There is, apparently, as extension to CORBA called the Object Transaction
> Service (an example of a OTS compliant product is Hitachi's TPBroker
> http://www.hitachisoftware.com/tpbroker/  or Arjuana's  OTSArjuna (this
site
> is easier to understand) http://www.arjuna.com/products/OTS/tech/ ).
>
>  I was looking for a layer of abstraction to hide the complexities of
> transaction processing from a (potential) Lisp application.  But now it
> seems that no vendor offers OTS extensions for their Lisp CORBA (or am I
> wrong?).
>
> -Vinodh Das
>
>

Sorry for being so terse and opaque. Let me give you a realistic example:

Consider a parent company P that has three subsidiaries S1, S2 and S3.  Each
subsidiary has two further subsidiaries (S1a, S1b, S2a, S2b, S3a, S3b).
There are 10 separate companies in all.  Each company (including the parent)
has 9 branches in addition to the main office.  There are 100 separate
reporting entities in all.  Many of the reporting entities are scattered
over several countries that have different ways of doing business and
preparing accounts.  Each reporting entity is equipped with thin clients
that send all their accounting transactions to a central server where they
are committed to a central database.

Let's say you have to fulfill a statutory requirement to prepare
consolidated accounts.  Consolidated accounts look at the group of companies
as a whole - as a single entity.  For example, sales and purchases between
members of the group are cancelled out (because you can't sell things to
yourself for a profit).  Some transactions require complex adjustments.
It's ok to do these things by hand, but on a large scale it's better to
automate most of the work.
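A toy sketch of the elimination step described above. The company names,
the flat transaction format, and the function name are illustrative
assumptions, not from any real accounting product:

```python
# Toy sketch of intercompany elimination for consolidated accounts.
# All names and the transaction format are illustrative assumptions.

GROUP = {"P", "S1", "S2", "S3", "S1a", "S1b", "S2a", "S2b", "S3a", "S3b"}

def eliminate_intercompany(transactions, group=GROUP):
    """Drop sales/purchases where both parties are group members:
    the group cannot sell things to itself for a profit."""
    return [t for t in transactions
            if not (t["seller"] in group and t["buyer"] in group)]

sales = [
    {"seller": "S1a", "buyer": "S2b",  "amount": 100},  # intra-group: eliminated
    {"seller": "S1a", "buyer": "Acme", "amount": 250},  # external sale: kept
    {"seller": "Acme", "buyer": "S3",  "amount": 80},   # external purchase: kept
]

consolidated = eliminate_intercompany(sales)
assert len(consolidated) == 2
assert sum(t["amount"] for t in consolidated) == 330
```

The complex adjustments mentioned above would be further passes over the
transaction stream; the point here is only that the consolidation rule is
a pure function of the data, which is why it maps so cleanly onto Lisp.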

Lisp has proved clean and tidy for producing the consolidated accounts.  But
what about the front end extending from the thin client to the central
database?  Is it possible to write the front end in Lisp (along with the
thin client) and communicate with the central database solely with a CORBA
OTS (or equivalent) with Lisp bindings?

As Jason Trenouth pointed out, there are several vendors that sell TP
systems, but I think that they are rather heavy for what I have in mind.  I
know that C++ CORBA OTS products exist, but I feel uneasy mixing languages.
I'm not really sure what good practice is in this area, so I would like to
hear readers' opinions.

-Vinodh
From: Craig Brozefsky
Subject: Re: Threading Noninvestment
Date: 
Message-ID: <87n1jebqip.fsf@piracy.red-bean.com>
"Vinodh Das" <·······@pd.jaring.my> writes:

> Lisp has been clean and tidy for producing the consolidated accounts.  But
> what about the front end extending from the thin client to the central
> database?  Is it possible to write the front end in Lisp (along with the
> thin client) and communicate with the central database solely with a CORBA
> OTS (or equivalent) with Lisp bindings ?

We have found that using a java enabled browser for the thin client,
communicating with the backend via a simplified RPC mechanism sitting
on top of HTTP is suitable.  If you had more time to develop that
portion of the infrastructure, either using something like CORBA, or
XML/RPC, or SOAP, then you could perhaps get better transaction
control.  Depending on the level at which you wanted to make the split
between the backend and the thin clients, you could select an RPC
mechanism of suitable sophistication to match.  We did basic Java
serialization of CLOS objects in response to HTTP requests.  The CVS
version of IMHO[1] has a very simple Free Software implementation of
Java serialization.  There are more sophisticated packages available,
but they don't have source available.

> As Jason Trenouth pointed out, there are several vendors that sell TP
> systems, but I think that they are rather heavy for what I have in mind.  I
> know that C++ CORBA OTS products exist, but I feel uneasy mixing languages.
> I'm not really sure what good practice is in this area, so I would like to
> hear reader's opinions.

What would be the purpose of a full TP system here?  Would you really
need something beyond what you could build yourself trivially, or beyond
what a reasonable database backend would provide?


[1] http://alpha.onshore.com/lisp-software

-- 
Craig Brozefsky               <·····@red-bean.com>
Lisp Web Dev List  http://www.red-bean.com/lispweb
---  The only good lisper is a coding lisper.  ---
From: Jason Trenouth
Subject: Re: Threading Noninvestment
Date: 
Message-ID: <h58bnsolsbi2hfj5o28egffbcgu7cskv7e@4ax.com>
On Wed, 19 Jul 2000 13:24:05 +0800, "Vinodh Das" <·······@pd.jaring.my> wrote:

> [Do I need CORBA OTS in order to use Lisp to update a remote database?]

No.

Distributed transaction management is only really needed if you have multiple
transactional resources (eg databases) that you need to update together.

Your example has only a single database (presumably with its own transaction
management). A solution using existing Common Lisp technology should be
straightforward:

	client (eg on PC):

Common Lisp GUI (eg CAPI or CLIM) invoking operations on CORBA-defined
interfaces.

	middleware:

Common Lisp ORB (eg HCL-ORB or ORBLink)

	server (eg on Unix):

Common Lisp application implementing CORBA-defined interfaces and updating the
database (eg via Common SQL).

Given the CORBA interface you can swap the CL client for a Java one, possibly
in a browser. The middleware would then include a Java ORB (in the browser).

CORBA is just a lowest-common-denominator distributed OO standard for
invoking and implementing methods across languages.

__Jason
From: Paolo Amoroso
Subject: Re: Threading Noninvestment
Date: 
Message-ID: <wop0OWHOuuwTjYASUWmXZ+n0=eGd@4ax.com>
On Tue, 18 Jul 2000 17:45:20 +0800, "Vinodh Das" <·······@pd.jaring.my>
wrote:

> I wondered whether you (or any of the other readers in this group) know of
> businesses or products that make use of Lisp for transaction processing.

This paper might be relevant:

  "SDTP - A Multilevel-Secure Distributed Transaction Processing System"
  Fred Gilham and David Shih (SRI International)
  gilham | shih AT sdl DOT sri DOT com
  Proceedings of the Lisp User Group Meeting '99
  October 1999

  Abstract:
  In this paper we describe SDTP, a multilevel-secure distributed 
  transaction-processing system that was written largely in Lisp [CMU CL 
  and PostgreSQL under FreeBSD - Paolo], and two applications built on top 
  of the SDTP system. We also discuss the experience of building the 
  system.
  We feel the system is of interest because
  * It is moderately large.
  * It attempts to implement and extend a significant published standard 
    (the X/Open DTP standard [look ma, non CORBA :) - Paolo]).
  * It uses a wide variety of facilities.
  * It illustrates some of the advantages of using Lisp.


Paolo
-- 
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://cvs2.cons.org:8000/cmucl/doc/EncyCMUCLopedia/
From: Pierre R. Mai
Subject: Re: Threading Noninvestment
Date: 
Message-ID: <87lmyy8git.fsf@orion.bln.pmsf.de>
Paolo Amoroso <·······@mclink.it> writes:

> > I wondered whether you (or any of the other readers in this group) know of
> > businesses or products that make use of Lisp for transaction processing.
> 
> This paper might be relevant:
> 
>   "SDTP - A Multilevel-Secure Distributed Transaction Processing System"
>   Fred Gilham and David Shih (SRI International)
>   gilham | shih AT sdl DOT sri DOT com
>   Proceedings of the Lisp User Group Meeting '99
>   October 1999

For those not in possession of the ELUGM Proceedings (which are
available from Franz' sales department, see www.franz.com for contact
information), the paper is also available online at the SRI site
under:

http://www.sdl.sri.com/dsa/publis.html

Regs, Pierre.

-- 
Pierre Mai <····@acm.org>         PGP and GPG keys at your nearest Keyserver
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Bob Riemenschneider
Subject: Re: Threading Noninvestment
Date: 
Message-ID: <tpzondetsr.fsf@coyote.csl.sri.com>
····@acm.org (Pierre R. Mai) writes:

> Paolo Amoroso <·······@mclink.it> writes:
> 
> > > I wondered whether you (or any of the other readers in this group) know of
> > > businesses or products that make use of Lisp for transaction processing.
> > 
> > This paper might be relevant:
> > 
> >   "SDTP - A Multilevel-Secure Distributed Transaction Processing System"
> >   Fred Gilham and David Shih (SRI International)
> >   gilham | shih AT sdl DOT sri DOT com
> >   Proceedings of the Lisp User Group Meeting '99
> >   October 1999
> 
> For those not in posession of the ELUGM Proceedings (which are
> available from Franz' sales department, see www.franz.com for contact
> information), the paper is also available online at the SRI site
> under:
> 
> http://www.sdl.sri.com/dsa/publis.html

Just thought I'd mention that the SRI web site has a number of other
papers describing this work, but it may not be entirely clear which
ones they are from the titles.  If you take a look at

  -- Mark Moriconi, Xiaolei Qian, R. A. Riemenschneider, and Li
     Gong, "Secure Software Architectures", and

  -- F. Gilham, R. A. Riemenschneider, and V. Stavridou, "Secure
     Interoperability of Secure Distributed Databases: An
     Architecture Verification Case Study"

as well as the LUGM paper, you'll get a pretty good overview.  (If
you're interested in the security proof, you might want to take a look
at "Checking the Correctness of Architectural Transformation Steps via
Proof-Carrying Architectures" too.)

							-- rar
From: Tim Bradshaw
Subject: Re: Threading Noninvestment
Date: 
Message-ID: <ey33dlczx7i.fsf@cley.com>
* Christopher Browne wrote:
> I know a few people that feel that SMP is likely to become _massively_
> important, but I don't really believe this.  Motherboards offering
> more than two Pentium CPUs (of whatever generation) get very expensive
> very quickly, so it just does not seem economical to me.  It makes more
> sense to me that faster network hardware will encourage the use, for
> heavy computational work, of Beowulf-like "clusters," where OS
> threading seems unlikely to me to be _vastly_ interesting.

I don't think that multiprocessor machines will become dominant in any
kind of desktop marketplace any time soon (except in the
uninteresting-for-this-purpose sense that machines already have
amazing special-purpose processors in graphics chips and so on).

But who cares about the desktop market, really?  I'm sure some people
do, but I don't see any real hope for Lisp winning big there.

But there's another marketplace.  Consider Sun and Oracle for a start.
Where do Sun make their money?  Large multiprocessor enterprise
servers.  What machines does Oracle run on?  Large multiprocessor
enterprise servers.  And those people are making plenty of money.

And all of that stuff is large shared-memory multiprocessor machines;
other systems have just died in this market, for reasonably good
reasons.  Sure, there are shared-nothing supercomputers, but that
market is tiny and mostly supported by government money: not a really
good place to spend effort.

--tim
From: Robert Monfera
Subject: Re: Threading Noninvestment
Date: 
Message-ID: <3970824E.A4CB8D8B@fisec.com>
Tim Bradshaw wrote:

> But there's another marketplace.  Consider Sun and Oracle for a
> start.
> Where do Sun make their money?  Large multiprocessor enterprise
> servers.  What machines does Oracle run on?  Large multiprocessor
> enterprise servers.  And those people are making plenty of money.

You hit the nail on the head.

SAP and other Enterprise Resource Planning, Customer Relationship
Management and data mining / EIS applications sit on top of these,
making additional piles of money.  It is not uncommon for corporations
to spend hundreds of millions of dollars on an implementation effort.
These typically run on dozens of processors per server, and use 4GB -
24GB of physical memory.  With Lisp, they may not need that much, but
current CL implementations seem better suited to the desktop.

Robert
From: Tim Bradshaw
Subject: Re: Threading Noninvestment
Date: 
Message-ID: <ey3d7kexlnh.fsf@cley.com>
* Robert Monfera wrote:
> SAP and other Enterprise Resource Planning, Customer Relationship
> Management and data mining / EIS applications lie on the top of these,
> making additional piles of money.  It is not uncommon for corporations
> to pay hundreds of millions of dollars on an implementation effort. 
> These typically run on dozens of processors per server, and use 4GB -
> 24GB physical memory.  With Lisp, they may not need that much, but the
> current CL implementations seem ever better suited for the desktop.

Memory must be very cheap compared with other costs -- actually all HW
is very cheap compared with other costs for these kinds of applications.
My guess is that development costs are dominant, and since development
costs are what Lisp brings down, it could win big in this kind of market.
Whether it will is another matter: historically Lisp people have been
very good at chasing last year's big money-maker.

--tim
From: Joe Marshall
Subject: Re: Silly GC question
Date: 
Message-ID: <zonl26r6.fsf@alum.mit.edu>
"Fernando Rodríguez" <···@mindless.com> writes:

> Excuse me if the answer for this is too obvious, but can't the GC work on
> a different thread (while the app is runing) to avoid GC pauses, or is it
> absolutely necesary for the application to stop making changes to memory
> while the GC works? O:-)

The app has to synchronize with the GC periodically.

An `incremental' GC allows the `mutator' (the app) to run in parallel
with the GC.  Incremental GC algorithms are complicated and don't seem
to enhance performance for `regular' users, so most Lisp implementations
on stock hardware don't bother with them.
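A minimal sketch of the tri-colour idea behind such incremental collectors:
the mutator's pointer stores go through a write barrier so the collector
never loses a reachable object between its increments of work. The object
model and names here are toy assumptions, not any real implementation:

```python
# Toy tri-colour incremental mark: white = unvisited, grey = queued for
# scanning, black = fully scanned.  The write barrier maintains the
# invariant that no black object points to a white one (Dijkstra-style).

class Obj:
    def __init__(self, name):
        self.name, self.refs, self.colour = name, [], "white"

grey = []  # the collector's worklist

def shade(obj):
    if obj.colour == "white":
        obj.colour = "grey"
        grey.append(obj)

def write_barrier(src, target):
    """Mutator stores a reference; shade the target so the collector
    cannot miss it even if src has already been scanned (black)."""
    src.refs.append(target)
    if src.colour == "black":
        shade(target)

def mark_step():
    """One increment of collector work, interleaved with the mutator."""
    if grey:
        obj = grey.pop()
        for r in obj.refs:
            shade(r)
        obj.colour = "black"

a, b, c = Obj("a"), Obj("b"), Obj("c")
shade(a)               # a is a root
mark_step()            # a becomes black
write_barrier(a, b)    # mutator runs between collector increments
while grey:
    mark_step()
assert b.colour == "black"   # b was saved by the barrier
assert c.colour == "white"   # c is unreachable: garbage
```

The per-store barrier cost is exactly the "complication" referred to
above: every mutator write pays a little so the collector can be paused
and resumed at will.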
From: Lieven Marchand
Subject: Re: Silly GC question
Date: 
Message-ID: <m3sntet1lm.fsf@localhost.localdomain>
"Fernando Rodríguez" <···@mindless.com> writes:

> Excuse me if the answer for this is too obvious, but can't the GC work on
> a different thread (while the app is runing) to avoid GC pauses, or is it
> absolutely necesary for the application to stop making changes to memory
> while the GC works? O:-)

Sure. Apart from the survey paper by Wilson already mentioned, you can
consult the book "Garbage Collection: Algorithms for Automatic Dynamic
Memory Management" by Jones and Lins. There are barrier methods, but
these need hardware support, such as the Symbolics or Explorer had, to
be effective. Things like Appel-Ellis-Li, the GC for Concurrent Caml
Light by Doligez and Leroy, and Baker's treadmill seem perfectly
workable.


-- 
Lieven Marchand <···@bewoner.dma.be>
When C++ is your hammer, everything looks like a thumb.      Steven M. Haflich
From: Rob Warnock
Subject: Re: Silly GC question
Date: 
Message-ID: <8km1gk$531tg$1@fido.engr.sgi.com>
Lieven Marchand  <···@bewoner.dma.be> wrote:
+---------------
| You have barrier methods, but these have to have hardware support
| like the Symbolics or Explorer had to be effective.
+---------------

Well, perhaps this might have been "obvious" at one time, but I suggest
that this old canard is worth re-examining in light of recent CPU speeds
and the continually worsening ratio of memory latency to CPU speed. Even
purely *software* read or write barriers may well be worth the overhead
these days, especially if it allows you to make effective *use* of a larger
number of CPUs.

Likewise, ISTR reading some papers [sorry, can't find the refs] that suggest
that explicit software card-marking write barriers (for generational copying
collectors) may now be *more* efficient than hardware-based write barriers,
if the latter are (1) only at page-sized granularity, and (2) require an
operating system trap on a write attempt of an unmarked page. A recommended
card size of 256 or 512 bytes sticks in my memory...
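A sketch of the software card-marking idea above: the heap is divided into
fixed-size cards with a side table of dirty bytes, and every pointer store
marks its card, so a generational collector need only scan dirty cards for
old-to-young references. The flat byte-heap model and all sizes here are
illustrative assumptions:

```python
# Software card-marking write barrier, sketched over a flat byte heap.
CARD_SIZE = 512                    # one of the card sizes mentioned above
HEAP_SIZE = 64 * 1024
heap = bytearray(HEAP_SIZE)
card_table = bytearray(HEAP_SIZE // CARD_SIZE)  # one dirty byte per card

def store(addr, value):
    """Each store pays a couple of extra instructions: divide the
    address down to a card index and dirty that card -- no OS trap."""
    heap[addr] = value
    card_table[addr // CARD_SIZE] = 1

def dirty_cards():
    """At minor-GC time only the dirty cards are scanned for
    old-to-young pointers; the table would then be cleared."""
    return [i for i, d in enumerate(card_table) if d]

store(0, 7)      # card 0
store(515, 9)    # card 1
store(1030, 3)   # card 2
assert dirty_cards() == [0, 1, 2]
```

The contrast with a page-protection barrier is visible in `store`: the
cost is a fixed, tiny instruction sequence per write, rather than a
rare-but-expensive kernel trap at page granularity.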


-Rob

-----
Rob Warnock, 41L-955		····@sgi.com
Applied Networking		http://reality.sgi.com/rpw3/
Silicon Graphics, Inc.		Phone: 650-933-1673
1600 Amphitheatre Pkwy.		PP-ASEL-IA
Mountain View, CA  94043
From: Pekka P. Pirinen
Subject: Re: Silly GC question
Date: 
Message-ID: <ixaeff2py6.fsf@harlequin.co.uk>
····@rigden.engr.sgi.com (Rob Warnock) writes:
> Even purely *software* read or write barriers may well be worth the
> overhead these days, especially if it allows you to make effective
> *use* of a larger number of CPUs.

On conventional hardware, there aren't that many alternatives.  Using
VM page protection can work with some OSs and GC algorithms.
LispWorks uses software barriers with good results (in the real-time
version, too).  It helps that compile-time analysis can often omit the
barrier code.

> Likewise, ISTR reading some papers [sorry, can't find the refs] that suggest
> that explicit software card-marking write barriers (for generational copying
> collectors) may now be *more* efficient than hardware-based write barriers,
> if the latter are (1) only at page-sized granularity, and (2) require an
> operating system trap on a write attempt of an unmarked page.

For refs, one could check
<URL:http://www.xanalys.com/software_tools/mm/glossary/c.html#card.marking>.
However, it's missing a good paper on this presented at ISMM98, that we
haven't added to the bibliography yet. -- No, it's there,
<URL:http://www.xanalys.com/software_tools/mm/bib/full.html#akpy98>,
just forgot to link it to the card-marking article.  It's a good
technique.
-- 
Pekka P. Pirinen, Adaptive Memory Management Group, Harlequin Limited
Graduate students are like geese: We imprint on the first good idea we
see, and spend the rest of our careers chasing it.  - Alan Kay
From: Pekka P. Pirinen
Subject: Re: Silly GC question
Date: 
Message-ID: <ixhf9sub0h.fsf@harlequin.co.uk>
Lieven Marchand <···@bewoner.dma.be> writes:
> "Fernando Rodríguez" <···@mindless.com> writes:
> > Excuse me if the answer for this is too obvious, but can't the GC work on
> > a different thread (while the app is runing) to avoid GC pauses, or is it
> > absolutely necesary for the application to stop making changes to memory
> > while the GC works? O:-)
> 
> Sure. Apart from the survey paper by Wilson already mentioned, you can
> consult the book "Garbage Collection, Algorithms for Automatic Dynamic
> Memory Management" by Jones and Lins.

Excellent book.

> You have barrier methods, but these have to have hardware support
> like the Symbolics or Explorer had to be effective. Things like the
> Appel-Ellis-Li, the GC for Concurrent Caml Light by Doligez and
> Leroy and Baker's treadmill seem perfectly workable.

Those all have barriers as well.  The Appel-Ellis-Li work was about
efficient hardware barriers on standard architectures (using the VM
page protection), so you don't need special hardware.  Functional
Developer (ex-Harlequin Dylan) uses the AEL scheme, and it's quite
efficient on Pentium hardware.

However, most work to reduce pause times has opted for incremental
collection (stop all threads but for shorter periods), rather than
true concurrent collection.  This is because concurrent collection is
harder to code, and might well have a larger overhead due to added
synchronization costs.  Of the above, only Doligez-Leroy-Gonthier is a
full concurrent technique, and it helps that ML has lots of immutable
objects.
-- 
Pekka P. Pirinen, Adaptive Memory Management Group, Harlequin Limited
 The Memory Management Reference: articles, bibliography, glossary, news
 <URL:http://www.harlequin.com/mm/reference/>
From: David Bakhash
Subject: Re: Silly GC question
Date: 
Message-ID: <c297lapn0tj.fsf@nerd-xing.mit.edu>
"Fernando Rodríguez" <···@mindless.com> writes:

> can't the GC work on a different thread (while the app is runing) to
> avoid GC pauses, or is it absolutely necesary for the application to
> stop making changes to memory while the GC works? O:-)

As others have mentioned, it can.  I don't know the details, but I've
asked a similar question before:

Why is Java's GC a separate thread while every CL implementation I've
used seems to pause during GC?

I guess it's kinda okay that CL implementations GC the way they do.  For
UI stuff, it can certainly be annoying -- just like it can be annoying
when my XEmacs GCs for too long (though I haven't noticed it for a
while).  The tradeoff is that it's much harder to implement this
GC-in-a-separate-thread thing, and it's probably less efficient overall,
if you look at the overall speed of the app and the time spent in GC.
But for UI stuff, it might be useful.

Another approach is to delve into your implementation's GC mechanism, and
see if you can tweak it to improve its performance, or to make it less
intrusive.

dave
From: Barry Margolin
Subject: Re: Silly GC question
Date: 
Message-ID: <Gnrb5.42$BL6.1147@burlma1-snr2>
In article <···············@nerd-xing.mit.edu>,
David Bakhash  <·····@alum.mit.edu> wrote:
>Why is Java's GC a separate thread while every CL implementation I've
>used seems to pause during GC?

Are historical contingencies so hard to understand?  Lisp implementations
were written in the 80's, before threading was commonly available.  Java
implementations were written in the late 90's, by which time threading was
a standard feature of every OS.

-- 
Barry Margolin, ······@genuity.net
Genuity, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Tim Bradshaw
Subject: Re: Silly GC question
Date: 
Message-ID: <ey3aeflzlwt.fsf@cley.com>
* David Bakhash wrote:

> Why is Java's GC a separate thread while every CL implementation I've
> used seems to pause during GC?

Most stock-hardware lisp systems were written before threading was
available from the OS.  Even now OS threading is nothing like
standardised (ask anyone who has tried to write portable nontrivial
posix threads code...) unless you live on a monoculture like Windows
(and even then there are *major* differences between win9x and NT
which can cause applications to totally fail to run on one or the
other).

Ground-up implementations, like Genera, had GC in separate threads.

> I guess it's kinda okay that CL implementations GC the way they do.  For 
> UI stuff, it can certainly be annoying -- just like it can be annoying
> when my XEmacs GCs for too long (though I havn't noticed it for a
> while).  The tradeoff is that it's much harder to implement this
> GC-in-a-separate-thread thing, and it's probably less efficient overall, 
> if you look at the overall speed of the app, and the time spent in GC.
> But for UI stuff, it might be useful.  

I think this is being overcome by events.  Generational GCs and fast
machines mean that GC pauses can be easily less than 1/10 sec even for
a stop-the-world GC, probably much less if you tune.  I never really
notice GC any more even in Emacs which has about as bad a GC as you
could want.

There are other reasons to want GC to be concurrent with processing --
my pet one being multiprocessor machines -- but I suspect that the
long pauses of old are not really a problem now.  Of course, people
get fussy... when I first used Lisp we used to take tea breaks while
it GCd.

--tim
From: Paul F. Dietz
Subject: Re: Silly GC question
Date: 
Message-ID: <396E6FBA.1ED42BA4@interaccess.com>
Tim Bradshaw wrote:
 
> There are other reasons to want GC to be concurrent with processing --
> my pet one being multiprocessor machines -- but I suspect that the
> long pauses of old are not really a problem now.

Maybe it would do less violence to current lisp implementations
to make the garbage collector itself run with several OS threads,
but still stop all non-GC activity while it runs.

	Paul
From: Pierre R. Mai
Subject: Re: Silly GC question
Date: 
Message-ID: <87bt00zt84.fsf@orion.bln.pmsf.de>
"Paul F. Dietz" <·····@interaccess.com> writes:

> Tim Bradshaw wrote:
>  
> > There are other reasons to want GC to be concurrent with processing --
> > my pet one being multiprocessor machines -- but I suspect that the
> > long pauses of old are not really a problem now.
> 
> Maybe it would do less violence to current lisp implementations
> to make the garbage collector itself run with several OS threads,
> but still stop all non-GC activity while it runs.

And what advantage would that give you?

-- 
Pierre Mai <····@acm.org>         PGP and GPG keys at your nearest Keyserver
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Barry Margolin
Subject: Re: Silly GC question
Date: 
Message-ID: <OhGb5.79$BL6.1836@burlma1-snr2>
In article <··············@orion.bln.pmsf.de>,
Pierre R. Mai <····@acm.org> wrote:
>"Paul F. Dietz" <·····@interaccess.com> writes:
>
>> Tim Bradshaw wrote:
>>  
>> > There are other reasons to want GC to be concurrent with processing --
>> > my pet one being multiprocessor machines -- but I suspect that the
>> > long pauses of old are not really a problem now.
>> 
>> Maybe it would do less violence to current lisp implementations
>> to make the garbage collector itself run with several OS threads,
>> but still stop all non-GC activity while it runs.
>
>And what advantage would that give you?

It would presumably provide an advantage on multi-processor machines, since
the GC would be sped up proportionally to the number of processors being
used.

-- 
Barry Margolin, ······@genuity.net
Genuity, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Pierre R. Mai
Subject: Re: Silly GC question
Date: 
Message-ID: <87wvioxyyj.fsf@orion.bln.pmsf.de>
Barry Margolin <······@genuity.net> writes:

> In article <··············@orion.bln.pmsf.de>,
> Pierre R. Mai <····@acm.org> wrote:
> >"Paul F. Dietz" <·····@interaccess.com> writes:
> >
> >> Tim Bradshaw wrote:
> >>  
> >> > There are other reasons to want GC to be concurrent with processing --
> >> > my pet one being multiprocessor machines -- but I suspect that the
> >> > long pauses of old are not really a problem now.
> >> 
> >> Maybe it would do less violence to current lisp implementations
> >> to make the garbage collector itself run with several OS threads,
> >> but still stop all non-GC activity while it runs.
> >
> >And what advantage would that give you?
> 
> It would presumably provide an advantage on multi-processor machines, since
> the GC would be sped up proportionally to the number of processors being
> used.

For that advantage to exist, two preconditions would have to be met:

- The effort required to do garbage collection must be a large part of
  the total effort expended by the application.  I severely doubt that
  any non-trivial application will qualify here, since the garbage
  generated must first have been allocated, and _used_.  Practical
  evidence suggests that applications that spend more than 1/4 of
  their runtime in GC have serious problems in the design department
  (or use a very badly tuned GC).  If you can only speed up less than
  a quarter of your application, the speedup can at most be a factor
  of 1.33.  There are better ways of achieving that speedup than going
  the SMP way, even if you only consider hardware solutions.

- The work done by the garbage collector can be parallelized very
  well.  While I haven't looked into this in detail, I have my doubts
  that this is the case.  In any case even with perfect
  parallelization, the overall speedup will be:

  # CPUs:        1     2     4     8     16     32     64
  Speedup:    1.00  1.14  1.23  1.28   1.31   1.32   1.33
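
[The table above is just Amdahl's law with a parallelizable fraction of p = 1/4. A short Python sketch (illustrative, not from the original post; the function name is made up) reproduces the figures:]

```python
# Amdahl's law: overall speedup when only a fraction p of the work
# (here GC, assumed to be 1/4 of total runtime) is parallelized
# perfectly across n CPUs; the remaining 1 - p stays serial.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

if __name__ == "__main__":
    p = 0.25  # GC assumed to take a quarter of total runtime
    for n in (1, 2, 4, 8, 16, 32, 64):
        print(f"{n:3d} CPUs: speedup {amdahl_speedup(p, n):.2f}")
    # As n grows, the speedup approaches 1 / (1 - p) = 1.33.
```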

So even under perfect conditions, I don't think that a speedup of 1.14
or 1.23 will even pay for the hardware, let alone the additional
software costs.  Hence my question...

Regs, Pierre.

-- 
Pierre Mai <····@acm.org>         PGP and GPG keys at your nearest Keyserver
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Joseph Oswald
Subject: Re: Silly GC question
Date: 
Message-ID: <o5gu2ds6mmv.fsf@is05.fas.harvard.edu>
····@acm.org (Pierre R. Mai) writes:
> [quoting Barry Margolin]
> > It would presumably provide an advantage on multi-processor machines, since
> > the GC would be sped up proportionally to the number of processors being
> > used.
> 
> For that advantage to exist, two preconditions would have to be met:
> 

[Amdahl's law snipped]

I'm no expert in GC, but in my blue-sky view, the big potential gain of 
applying multiple processors to GC is in *latency*: your "computation 
processor" doesn't have to stop to collect its own garbage, because the
other processor has swept up. Just like a teenager living at home.

I.e., you give up big quasi-unpredictable pauses for lots of little 
quasi-predictable pauses when your cleaning processor has locked you out
of some structure for a short while.

Otherwise, it seems that Pierre is right, that you would really have
to be spending much of your *computational effort* (as opposed to 
memory bandwidth) in GC to get a win on overall speed.

--Joe Oswald
From: Barry Margolin
Subject: Re: Silly GC question
Date: 
Message-ID: <wZFc5.12$R%.710@burlma1-snr2>
In article <···············@is05.fas.harvard.edu>,
Joseph Oswald  <········@is05.fas.harvard.edu> wrote:
>I.e., you give up big quasi-unpredictable pauses for lots of little 
>quasi-predictable pauses when your cleaning processor has locked you out
>of some structure for a short while.

On the other hand, you may impose a performance penalty on every structure
access to check for GC lock-out.  However, I haven't thought hard about the
design of a concurrent GC, so it might be possible to implement this
cheaply using hardware read barriers rather than locking checks.
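
[One classic software form of the read barrier Barry mentions is a Brooks-style forwarding pointer. A toy Python model (illustrative only; class and function names are invented, and a real implementation would be compiler-emitted machine code):]

```python
# Toy model of a Brooks-style read barrier: every object carries a
# forwarding pointer, initially to itself.  When the collector moves
# an object it installs a forwarding pointer in the old copy; the
# mutator routes every access through the pointer, so it always sees
# the current copy.  The per-access cost Barry mentions is exactly
# this extra indirection.
class Obj:
    def __init__(self, value):
        self.value = value
        self.forward = self  # points to the current copy of the object

def read_barrier(obj):
    # Follow forwarding pointers (one hop per relocation).
    while obj.forward is not obj:
        obj = obj.forward
    return obj

def gc_relocate(obj):
    # The collector copies the object and leaves a forwarding pointer.
    new = Obj(obj.value)
    obj.forward = new
    return new
```

[Usage: after `gc_relocate(a)`, a stale reference to `a` still yields the live copy via `read_barrier(a)`.]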

From: Jens Kilian
Subject: Re: Silly GC question
Date: 
Message-ID: <sfn1jfriqu.fsf@bstde026.bbn.hp.com>
Barry Margolin <······@genuity.net> writes:
> On the other hand, you may impose a performance penalty on every structure
> access to check for GC lock-out.  However, I haven't thought hard about the
> design of a concurrent GC, so it might be possible to implement this
> cheaply using hardware read barriers rather than locking checks.

There is at least one concurrent GC which limits the lock-outs to a specific
(and comparatively short) phase of its cycle.  I don't remember if it is
capable of real-time operation.  A paper that describes this
algorithm and proves its correctness is at

    http://www.acm.org/pubs/citations/proceedings/plan/174675/p70-doligez/

HTH,
	Jens.
-- 
··········@acm.org                 phone:+49-7031-464-7698 (HP TELNET 778-7698)
  http://www.bawue.de/~jjk/          fax:+49-7031-464-7351
PGP:       06 04 1C 35 7B DC 1F 26 As the air to a bird, or the sea to a fish,
0x555DA8B5 BB A2 F0 66 77 75 E1 08 so is contempt to the contemptible. [Blake]
From: Robert Monfera
Subject: Re: Silly GC question
Date: 
Message-ID: <396E6EB0.8CA83024@fisec.com>
Tim Bradshaw wrote:

> I think this is being overcome by events.  Generational GCs and fast
> machines mean that GC pauses can be easily less than 1/10 sec even for
> a stop-the-world GC, probably much less if you tune.

As processors get faster and more numerous (in a machine), physical
memory and images grow bigger as well.  Don't they cancel each other's
effect?

Robert
From: Tim Bradshaw
Subject: Re: Silly GC question
Date: 
Message-ID: <ey3sntcy6vd.fsf@cley.com>
* Robert Monfera wrote:

> As processors get faster and more numerous (in a machine), physical
> memory and images grow bigger as well.  Don't they cancel each
> other's effect?

No, I don't think so, not entirely.  I think (I may be wrong) that
some things have significantly changed the landscape: memory has got
really very cheap even in relative terms, which means that even quite
large systems can now live entirely in core -- never having to touch
the disk is a huge win.  Generational GCs also mean that you never, or
hardly ever, have to do the old `GC the whole image' thing.  As well
as this the minimum time for a GC has dropped into the noise (well
below 0.1 second), so that a generational GC which is well-matched to
the problem should never have to pause for any human-noticeable time.
10-15 years ago, I think the minimum GC time was probably a second or
something, which really was noticeable.

--tim
From: Barry Margolin
Subject: Re: Silly GC question
Date: 
Message-ID: <zgGb5.78$BL6.1607@burlma1-snr2>
In article <·················@fisec.com>,
Robert Monfera  <·······@fisec.com> wrote:
>Tim Bradshaw wrote:
>
>> I think this is being overcome by events.  Generational GCs and fast
>> machines mean that GC pauses can be easily less than 1/10 sec even for
>> a stop-the-world GC, probably much less if you tune.
>
>As processors get faster and more numerous (in a machine), physical
>memory and images grow bigger as well.  Don't they cancel each other's
>effect?

Not as much as you may think.  Generational GC works by only scanning small
sections of memory most of the time.  No matter how big the entire memory
footprint gets, the size of the youngest generations can be kept constant,
so as processors get faster the time to scan it decreases.  On the other
hand, the time it takes to fill each generation also may decrease
similarly, so you may get more frequent scans.
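
[A back-of-envelope model of Barry's point (all numbers below are illustrative, not measurements): the minor-GC pause scales with the fixed nursery size and the CPU's scan rate, independent of total heap size, while the collection frequency scales with the allocation rate.]

```python
# Pause for one minor collection: scan a fixed-size nursery.
# Total heap size does not appear in this formula at all.
def minor_pause_ms(nursery_mb, scan_rate_mb_per_s):
    return nursery_mb / scan_rate_mb_per_s * 1000.0

# How often minor collections happen: the nursery fills at the
# allocation rate, so faster programs collect more often.
def minor_gcs_per_second(alloc_rate_mb_per_s, nursery_mb):
    return alloc_rate_mb_per_s / nursery_mb

# A CPU that scans twice as fast halves the pause ...
slow = minor_pause_ms(4, 200)   # 4 MB nursery, 200 MB/s scan
fast = minor_pause_ms(4, 400)   # same nursery, twice the scan rate
# ... but if it also allocates twice as fast, scans come twice as often.
```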

From: Martin Cracauer
Subject: Re: Silly GC question
Date: 
Message-ID: <8kmvc6$11j1$1@counter.bik-gmbh.de>
David Bakhash <·····@alum.mit.edu> writes:

>Why is Java's GC a separate thread while every CL implementation I've
>used seems to pause during GC?

In Java you can change the (virtual) machine so that memory accesses
are either ordered ahead of the GC or harmless to it.  In Lisp you
need compiler support for the same effect.

[...]
>Another way about it is to delve into your implement's GC mechanism, and 
>see if you can tweak it to improve its performance, or to make it less
>of intrusive.  

A GC running in a parallel thread will always slow normal non-GC code
down.  That means the application as a whole needs more time, although
latency within it will be reduced.
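
[A concrete picture of that per-access cost is a software write barrier, the kind of thing the compiler support discussed above would emit at every pointer store. A toy Python sketch (names invented; a real barrier is a few inlined instructions, not a function call):]

```python
# Toy write barrier: the compiler routes every pointer store through
# write_ref(), which records the mutated object in a remembered set
# so a concurrent (or generational) collector can re-examine it
# instead of missing the newly stored reference.  This bookkeeping on
# every store is the slowdown of non-GC code that Martin describes.
remembered_set = set()

class Cell:
    def __init__(self, ref=None):
        self.ref = ref

def write_ref(obj, new_ref):
    remembered_set.add(id(obj))  # tell the GC this object was mutated
    obj.ref = new_ref            # then perform the actual store
```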

Martin
-- 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Martin Cracauer <········@bik-gmbh.de> http://www.bik-gmbh.de/~cracauer/
FreeBSD - where you want to go. Today. http://www.freebsd.org/