From: Rayan Zachariassen
Subject: Re: LISP compiler?
Date: 
Message-ID: <89Nov7.234038est.2758@neat.cs.toronto.edu>
·····@apple.com (Paul Snively) writes:

>A future version of MACL WILL do a treewalking dead-code stripper, reducing 
>application sizes tremendously.

yah, well, this is nice but a bit shortsighted as a general solution.

- What I really want is to be able to run small standalone lisp applications.
I want to write cat in lisp (for example) without it ending up as a 2Meg
binary.  This is addressed by the pruner you mention (someone from Lucid
told me they have one or are planning one too).

- I want to run *many* different standalone lisp programs.  Imagine
30% of unix commands being implemented in lisp.  See where the code stripping
idea falls short?  You'll have all this code duplicated in most commands.

Solution?  Put the Lisp runtime support into a shared image, which the
applications link to when they start up, like (sun/svr4) shared libraries.
That is in fact what it is: the Lisp equivalent of libc.  All lisp applications
would share the runtime text, and have independent data pages per process.

One would think this isn't too complicated to do.
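For a sense of scale, cat itself really is tiny; a minimal sketch in
portable Common Lisp (line-oriented, no error handling):

```lisp
;; Bare-bones "cat": copy standard input to standard output,
;; line by line.  With a shared Lisp runtime, the delivered
;; binary would need little more than this one function.
(defun cat ()
  (loop for line = (read-line *standard-input* nil nil)
        while line
        do (write-line line *standard-output*)))
```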

rayan

From: Paul Snively
Subject: Re: LISP compiler?
Date: 
Message-ID: <5081@internal.Apple.COM>
In article <·····················@neat.cs.toronto.edu> 
·····@cs.toronto.edu (Rayan Zachariassen) writes:
> ·····@apple.com (Paul Snively) writes:
> 
> >A future version of MACL WILL do a treewalking dead-code stripper, reducing
> >application sizes tremendously.
> 
> yah, well, this is nice but a bit shortsighted as a general solution.
> 
> - What I really want is to be able to run small standalone lisp applications.
> I want to write cat in lisp (for example) without it ending up as a 2Meg
> binary.  This is addressed by the pruner you mention (someone from Lucid
> told me they have one or are planning one too).
> 
> - I want to run *many* different standalone lisp programs.  Imagine
> 30% of unix commands being implemented in lisp.  See where the code stripping
> idea falls short?  You'll have all this code duplicated in most commands.
> 
> Solution?  Put the Lisp runtime support into a shared image, which the
> applications link to when they start up, like (sun/svr4) shared libraries.
> That is in fact what it is: the Lisp equivalent of libc.  All lisp applications
> would share the runtime text, and have independent data pages per process.
> 
> One would think this isn't too complicated to do.

Not only is it difficult to do under the current Macintosh operating 
system, it's impossible.

Sorry 'bout that.

__________________________________________________________________________
Just because I work for Apple Computer, Inc. doesn't mean that they 
believe what I believe or vice-versa.
__________________________________________________________________________
C++ -- The language in which only friends can access your private members.
__________________________________________________________________________
From: Mark Interrante
Subject: Re: LISP compiler? (really future of MACL)
Date: 
Message-ID: <21192@uflorida.cis.ufl.EDU>
>> ·····@apple.com (Paul Snively) writes:
>> 
>> >A future version of MACL WILL do a treewalking dead-code stripper, reducing
>> >application sizes tremendously.

Now that we are talking about future versions of MACL, I am interested
in knowing what Apple's vision of MACL's future is.  (Broad strokes.)

What do other MACL users need and want?  Here is a short list of
things that I would like to see incorporated into MACL.  I view MACL
as the prototyping language for Mac applications.  Apple almost
developed Smalltalk for the very same reason.  As such I need a fast,
nice object system such as Apple CLOS, not PCL.  I need all the great
examples that MACL has given out converted and consolidated to work
with the new system.  I just want high-level objects that correspond
to Mac interface parts.  In addition, I would like to see improved
support for printing, and System 7 stuff.

I think it is important that Apple keep MACL progressing in the
direction of new features like CLOS and CLIM.  CLIM is important.
Apple is in the interface/easy-to-use camp, and CLIM is a UIMS; it seems
like a natural fit.  BUT, Apple needs to devote at least one person to
enhancing the current tools.  The inspector doesn't allow editing yet,
FRED hasn't changed, the manual needs revising, and the interface generator
tool is embarrassing in comparison to the NeXT.

These are just my suggestions.

-----------------------------------------------------------------------------
Mark Interrante   		  Software Engineering Research Center
···@beach.cis.ufl.edu		  CIS Department, University of Florida 32611
-----------------------------------------------------------------------------
"X is just raster-op on wheels" - Bill Joy, January 1987
From: Paul Snively
Subject: Re: LISP compiler? (really future of MACL)
Date: 
Message-ID: <5130@internal.Apple.COM>
In article <·····@uflorida.cis.ufl.EDU> ···@serc.cis.ufl.edu (Mark 
Interrante) writes:
> I view MACL
> as the prototyping language for Mac applications.

Apple views MACL as an environment for developing shippable applications, 
not merely for prototyping them.

In article <·····@uflorida.cis.ufl.EDU> ···@serc.cis.ufl.edu (Mark 
Interrante) writes:
> As such I need a fast,
> nice object system such as Apple CLOS, not PCL.

Of course we'll have our own CLOS, just not in 1.3. :-)

In article <·····@uflorida.cis.ufl.EDU> ···@serc.cis.ufl.edu (Mark 
Interrante) writes:
> I need all the great
> examples that MACL has given out converted and consolidated to work
> with the new system.

By "new system," I presume you mean CLOS.  Of course we'll do that too.

In article <·····@uflorida.cis.ufl.EDU> ···@serc.cis.ufl.edu (Mark 
Interrante) writes:
> I just want high level objects that correspond
> to Mac interface parts.

We've already got that, and will keep it.

In article <·····@uflorida.cis.ufl.EDU> ···@serc.cis.ufl.edu (Mark 
Interrante) writes:
> In addition, I would like to see improved
> support for printing, and System 7 stuff.

What exactly does this mean?  We will definitely support System 7.  What 
do you need relative to printing?

In article <·····@uflorida.cis.ufl.EDU> ···@serc.cis.ufl.edu (Mark 
Interrante) writes:
> I think it is important that Apple keep MACL progressing in the
> direction of new features like CLOS and CLIM.  CLIM is important.
> Apple is in the interface/easy-to-use camp, and CLIM is a UIMS; it seems
> like a natural fit.

We certainly agree about CLOS.  CLIM may or may not be offered as an 
option for the system.  It's not a priority for us, as we already have a 
user environment that we'd like to keep consistent, and which is, in our 
collective opinions, in many ways better than CLIM.

In article <·····@uflorida.cis.ufl.EDU> ···@serc.cis.ufl.edu (Mark 
Interrante) writes:
> The inspector doesn't allow editing yet

It does if it can find the source code.

In article <·····@uflorida.cis.ufl.EDU> ···@serc.cis.ufl.edu (Mark 
Interrante) writes:
> FRED hasn't changed

Actually, it has, but I need to know more about what you want from it.

In article <·····@uflorida.cis.ufl.EDU> ···@serc.cis.ufl.edu (Mark 
Interrante) writes:
> the manual needs revising

Done, as of 1.3.

In article <·····@uflorida.cis.ufl.EDU> ···@serc.cis.ufl.edu (Mark 
Interrante) writes:
> the interface generator
> tool is embarrassing in comparison to the NeXT.

Also vastly improved as of 1.3.

MACL 1.3 should be at APDA within the next two weeks or so (I just got the 
frozen code a few days ago).

__________________________________________________________________________
Just because I work for Apple Computer, Inc. doesn't mean that they 
believe what I believe or vice-versa.
__________________________________________________________________________
C++ -- The language in which only friends can access your private members.
__________________________________________________________________________
From: Paul Snively
Subject: Re: LISP compiler? (really future of MACL)
Date: 
Message-ID: <5149@internal.Apple.COM>
In article <····@internal.Apple.COM> ·····@apple.com (that's me) wrote:
> CLIM may or may not be offered as an 
> option for the system.  It's not a priority for us, as we already have a 
> user environment that we'd like to keep consistent, and which is, in our 
> collective opinions, in many ways better than CLIM.

Ouch.  The first sentence is true; everything after that, I'm told, is 
quasi-religious nonsense.  Apple is more interested now than we ever have 
been before in connectivity (witness our relatively-new TCP/IP support and 
our already-announced-but-not-yet-shipping X-Windows server for the Mac 
OS).

Mea culpa, mea maxima culpa.  My apologies to CLIM fans everywhere.  By 
the way, in the future, I'll make every attempt to address technical 
concerns regarding MACL here (since that's my job), and every attempt to 
let the MACL product manager address "future directions" issues (since 
that's her job).

__________________________________________________________________________
Just because I work for Apple Computer, Inc. doesn't mean that they 
believe what I believe or vice-versa.
__________________________________________________________________________
C++ -- The language in which only friends can access your private members.
__________________________________________________________________________
From: alan geller
Subject: Re: LISP compiler? (really future of MACL)
Date: 
Message-ID: <616@pyuxf.UUCP>
As long as this newsgroup is temporarily becoming a forum for requesting
enhancements to Mac Allegro Common Lisp, for the 2.0 release:

1) Would it be possible to use an incremental garbage collector, rather than
	the current stop-and-copy?  Given that MACL 2.0 will usually be
	running under System 7.0, we'll have lots of virtual address space
	for from-space and to-space, maybe even multiple generations.
	For running MACL as a prototyping environment, this isn't a big
	deal, but in a real application it annoys users when their cursor
	changes to 'GC' and their system freezes for 10 seconds.
	Even if there were a 'MACL 2.0 extended' that had an incremental
	garbage collector as an extra-cost option, that would be better than
	nothing.

2) Serious rumor has it that MACL 2.0 will support CLOS directly, rather than
	by using a generic implementation like PCL.  I applaud this, but I
	have a couple of requests along this line:  please make sure that
	print and describe are implemented as generic functions!  For those
	of us who use objects extensively, this is very important.  Since
	the current Object Lisp does this right, I assume that MACL 2.0 will
	also, but I just want to stress its importance.

Alan Geller
Regal Data Systems
...!{rutgers,princeton}!bellcore!pyuxf!asg
From: Barry Margolin
Subject: Re: LISP compiler? (really future of MACL)
Date: 
Message-ID: <31689@news.Think.COM>
In article <···@pyuxf.UUCP> ···@pyuxf.UUCP (alan geller) writes:
>2) Serious rumor has it that MACL 2.0 will support CLOS directly, rather than
>	by using a generic implementation like PCL.  I applaud this, but I
>	have a couple of requests along this line:  please make sure that
>	print and describe are implemented as generic functions!

The current state of things in X3J13 specifies that DESCRIBE and PRINT call
the generic functions DESCRIBE-OBJECT and PRINT-OBJECT.  So, it's not CLOS
unless they do this.
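In other words, a user class hooks into printing by specializing
PRINT-OBJECT.  A standard-CLOS illustration (the POINT class is just an
example, nothing vendor-specific):

```lisp
(defclass point ()
  ((x :initarg :x :reader point-x)
   (y :initarg :y :reader point-y)))

;; PRINT, PRIN1, FORMAT ~S, etc. all go through PRINT-OBJECT,
;; so this one method changes how every POINT prints.
(defmethod print-object ((p point) stream)
  (print-unreadable-object (p stream :type t)
    (format stream ":x ~S :y ~S" (point-x p) (point-y p))))
```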
Barry Margolin, Thinking Machines Corp.

······@think.com
{uunet,harvard}!think!barmar
From: John Carroll
Subject: Re: LISP compiler? (really future of MACL)
Date: 
Message-ID: <1659@gannet.cl.cam.ac.uk>
In article <···@pyuxf.UUCP> ···@pyuxf.UUCP (alan geller) writes:
>
>2) Serious rumor has it that MACL 2.0 will support CLOS directly, rather than
>	by using a generic implementation like PCL.  I applaud this, but I
>
>...

Why are people waiting for MACL 2.0 in order to get CLOS on the Mac?  2.0
must be some way off, since 1.3 has only just been released.

PROCYON Common Lisp already has a directly-supported implementation of CLOS
(which is integrated into all the graphics stuff). Procyon's CLOS came out
in the early summer and they claim it to be fast.

(US distributors are Expertelligence  805 9671797
Rest of world are Procyon Research Ltd  +44 223 421221)

John Carroll
···@cl.cam.ac.uk
From: Frank Malczewski
Subject: Re: LISP compiler? (really future of MACL)
Date: 
Message-ID: <6567@merlin.usc.edu>
In article <····@gannet.cl.cam.ac.uk> ···@cl.cam.ac.uk (John Carroll) writes:
>In article <···@pyuxf.UUCP> ···@pyuxf.UUCP (alan geller) writes:
>>
>>2) Serious rumor has it that MACL 2.0 will support CLOS directly, rather than
>>	by using a generic implementation like PCL.  I applaud this, but I
>>
>>...
>
>Why are people waiting for MACL 2.0 in order to get CLOS on the Mac? 2.0
>must some way off since 1.3 has only just been released.
>
>PROCYON Common Lisp already has a directly-supported implementation of CLOS
>(which is integrated into all the graphics stuff). Procyon's CLOS came out
>in the early summer and they claim it to be fast.
>
>(US distributors are Expertelligence  805 9671797
>Rest of world are Procyon Research Ltd  +44 223 421221)
>
>John Carroll
>···@cl.cam.ac.uk



Yes, Procyon CL has just about everything, at a price.  It is quite expensive
in comparison to MACL, so some of us choose to wait a (hopefully not too long)
while for MACL to catch up.

--

  -- Frank Malczewski		(········@girtab.usc.edu)
From: Andrew Ginter
Subject: Re: LISP compiler? (really future of MACL)
Date: 
Message-ID: <2159@cs-spool.calgary.UUCP>
In article <···@pyuxf.UUCP>, ···@pyuxf.UUCP (alan geller) writes:
> 1) Would it be possible to use an incremental garbage collector, rather than
> 	the current stop-and-copy?  Given that MACL 2.0 will usually be
> 	running under System 7.0, we'll have lots of virtual address space
> 	for from-space and to-space, maybe even multiple generations.

A real time (parallel or incremental) garbage collector is roughly
twice as costly as a comparable stop-and-collect collector.  The real
time version has to collect more frequently than stop-and-copy because
the real time version has to start its collection well before memory
is exhausted.  A real time "two space copying" collector is even more
expensive because of the cost of checking for a forwarding pointer on
every reference to GC managed memory.  Real time collectors penalize
real performance while improving some aspects of perceived
performance.

So if you're going to add a real time collector, make it an option.
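The per-reference cost above is the read barrier of a Baker-style
incremental copying collector.  A rough sketch in Lisp-flavored
pseudocode (FORWARDED-P, FORWARDING-ADDRESS, IN-FROMSPACE-P and
COPY-TO-TOSPACE are hypothetical collector internals, not anybody's
actual API):

```lisp
;; Every fetch of a heap pointer pays for this check, which is
;; where the extra cost of the "two space copying" scheme comes from.
(defun barrier-read (ptr)
  (cond ((forwarded-p ptr)           ; already copied:
         (forwarding-address ptr))   ;   follow the forwarding pointer
        ((in-fromspace-p ptr)        ; not yet copied:
         (copy-to-tospace ptr))      ;   copy now, return new address
        (t ptr)))                    ; already safely in to-space
```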

Andrew Ginter, 403-282-2984, ·······@CPSC.UCALGARY.CA, ······@UNCAMULT.BITNET
From: Mike Bonham
Subject: Re: LISP compiler? (really future of MACL)
Date: 
Message-ID: <4@wglen.UUCP>
In article <····@cs-spool.calgary.UUCP>, ·······@cpsc.ucalgary.ca (Andrew Ginter) writes:
> 
> A real time (parallel or incremental) garbage collector is roughly
> twice as costly as a comparable stop-and-collect collector.  The real
> time version has to collect more frequently than stop-and-copy because
> the real time version has to start its collection well before memory
> is exhausted.  A real time "two space copying" collector is even more
> expensive because of the cost of checking for a forwarding pointer on
> every reference to GC managed memory.  Real time collectors penalize
> real performance while improving some aspects of perceived
> performance.
> 
> So if you're going to add a real time collector, make it an option.
> 
As Mr. Ginter correctly points out, the total number of CPU cycles expended
to perform incremental garbage collection is substantially greater than for
an equivalent stop-and-collect algorithm.

But as he also mentions (but perhaps doesn't emphasize enough, IMHO), if an
application involves an interactive user interface (as many Lisp programs
do!!!), then incremental GC can greatly improve the *perceived* responsiveness.
This is primarily due to two factors:

Firstly, the system can do garbage collection while waiting for the next
user-supplied command or input event, when CPU cycles would otherwise go
to waste.

Secondly, when the next user-event does come in, incremental garbage collectors
can be suspended immediately, whereas stop-and-collect garbage collectors
must run to completion (or else be aborted and rolled back, if that's
possible and preferable).

There is no reason one can't have a two-collector system: a background,
incremental GC can clean up whatever it can during user interface rest-stops,
but when heavy computations run completely out of space the heavy-duty,
highly efficient, stop-and-collect GC is called in.

There *are* incremental GC methods (e.g. linear mark-and-sweep) that don't
impose the continuous run-time penalty of checking for forwarding pointers.
Fragmentation can be dealt with periodically during a stop-and-copy GC.

Properly-done background incremental GC ought to be a win in typical
interactive applications, and no great burden in compute-bound applications.
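The idle-time half of this can be sketched in a few lines of
Lisp-flavored pseudocode (EVENT-PENDING-P, NEXT-EVENT, DISPATCH and
GC-STEP are hypothetical names; a real system would use its own event
loop and an incremental collector that does bounded units of work):

```lisp
;; Event loop that spends otherwise-wasted wait time on GC:
;; do a small, bounded increment of collection whenever no user
;; event is pending, and stop the instant one arrives.
(defun event-loop ()
  (loop
    (if (event-pending-p)
        (dispatch (next-event))   ; the user always comes first
        (gc-step 1000))))         ; else collect a little (bounded work)
```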

-- Mike Bonham       CPSC.UCalgary.CA!wglen!bonham
-- 
From: Carl R. Manning
Subject: Re: LISP compiler? (really future of MACL) (Really Incremental GC)
Date: 
Message-ID: <5145@wheat-chex.ai.mit.edu>
In article <·@wglen.UUCP> ······@wglen.UUCP (Mike Bonham) writes:
>...if an
>application involves an interactive user interface (as many Lisp programs
>do!!!), then incremental GC can greatly improve the *perceived* responsiveness.
>...
>-- Mike Bonham       CPSC.UCalgary.CA!wglen!bonham
>-- 
If you're just dealing with human interfaces rather than real time
requirements, then incremental garbage collection isn't the only way
to improve perceived performance --- instead you can try to schedule the long
scavenges during otherwise unresponsive periods, e.g. during
compute-bound operations or during idle periods.  For more on this idea,
see Section 4: Scavenge Scheduling, (p 29) of: 

   Design of the Opportunistic Garbage Collector
   Paul R. Wilson and Thomas G. Moher
   OOPSLA'89 Proceedings, 1-6 October, 1989
   SIGPLAN Notices vol24 n10, p 23-35

(This paper also includes some other ideas for implementing
generational GC on stock hardware and some performance measurements.)

(Just passing on a reference -- I have no experience using it.)

···········@ai.mit.edu
From: Paul Wilson
Subject: Re: LISP compiler? (really Incremental & Opportunistic GCs)
Date: 
Message-ID: <1989Nov28.202758.5021@Neon.Stanford.EDU>
In article <····@wheat-chex.ai.mit.edu> ···········@ai.mit.edu writes:
>In article <·@wglen.UUCP> ······@wglen.UUCP (Mike Bonham) writes:
>If you're just dealing with human interfaces rather than real time
>requirements, then incremental garbage collection isn't the only way
>to improve perceived performance --- instead you can try to schedule the long
>scavenges during otherwise unresponsive periods, e.g. during
>compute-bound operations or during idle periods.  For more on this idea,
>see Section 4: Scavenge Scheduling, (p 29) of: 
>
>   Design of the Opportunistic Garbage Collector
>   Paul R. Wilson and Thomas G. Moher
>   OOPSLA'89 Proceedings, 1-6 October, 1989
>   SIGPLAN Notices vol24 n10, p 23-35
>
>(This paper also includes some other ideas for implementing
>generational GC on stock hardware and some performance measurements.)
>
>(Just passing on a reference -- I have no experience using it.)
>
>···········@ai.mit.edu

  I should probably point out that ours is not the first gc to improve
  perceived responsiveness by opportunistic scheduling.  On the other
  hand, we use opportunism both to improve responsiveness *and* to
  actually increase efficiency.

  Jon L White has reminded me that the Interlisp gc used the strategy
  of preferentially scheduling gc work during long user pauses ("think
  time").

  Also, the Symbolics systems make the background scavenger of their
  incremental gc work harder during pauses than during program execution.
  (Even though the system is incremental, performance can be noticeably
  degraded by the background scavenger, so it's better for it to do
  its job when the user doesn't care.)

  Our gc is different in that it schedules gc's preferentially when
  there is likely to be less work to do. It's a copying collector,
  so the work it does is proportional to the amount of *live* data.

  By scheduling scavenges between major computations, we not only
  hide pauses but *reduce* them, since there are usually fewer
  live objects (holding intermediate results of ongoing computations)
  then.  This didn't happen with the Interlisp gc, because it
  wasn't a copy collector.  It also doesn't happen with Symbolics
  gc because it is the scheduling of *flips* (root set scans) that
  primarily determines the amount of data seen as live by the gc.


  There are really several different issues here.  One is efficiency,
  another is disruptiveness, and another is real-time response.  Only
  a fine-grained incremental gc can guarantee short pauses and
  real-time response.  A non-incremental generational gc can usually
  come close enough for most people's purposes.  (If you're willing
  to do a little tuning, you may be able to guarantee acceptable
  pauses for a  particular application by forcing scavenges of
  the right numbers of generations at the right times.  Our system
  attempts to do this acceptably well automatically, but may fail.)

  Appel, Ellis, and Li's gc avoids the continual checking for
  forwarding pointers that a fine-grained incremental gc requires.
  I wouldn't quite call it real-time, though, because the unit of
  work is fairly large (copying the referents of all of the pointers
  in a page) and these units may come in bursts.  It may not work
  as well for the youngest generation of a generational system,
  because a higher percentage of pages in younger generations is
  likely to be touched in a short period of time.  So you still
  may need tuning or opportunism, and you might want to use
  a hybrid scheme.  (For example, opportunistic generation scavenging
  for the youngest generation, and Appel-Ellis-Li style incremental
  scavenging of older generations.)  For difficult real-time
  applications, you'll still need a fine-grained incremental gc
  with its pointer-checking overhead.


  Another issue our paper dealt with is the number of generations a
  generational gc should have.  We maintain that it's at least 3.
  Our own numbers at this point are still rather preliminary, but
  we've gotten several anecdotal reports lately of real-world
  programs that a two-generation scheme falls down on.  (That is,
  they have too much data with intermediate lifetimes, which forces
  full gc's too often.)  Having more generations can have drawbacks
  too, but they seem less severe, and can usually be handled with
  a teeny bit of opportunistic scheduling.  (Note: your mileage
  *will* vary.  A two-generation scheme seems to be absolutely
  fine for most people's programs.  It's just some troublemakers
  out there... :-).


  Also, our current view is that a mixed strategy is probably best
  for tracking intergenerational pointers.  For most programs,
  Ungar's remembered set scheme incurs less overhead than our
  "card marking," but is occasionally bad for large, long-lived objects.
  (E.g., you may get a 4-megabyte array in your remembered set and
  scan it entirely at every scavenge!).  Probably the way to go
  is to segregate large objects into a separate "large object
  area" as Tektronix Smalltalk does, and to use card marking in
  this area (only) to avoid scanning the whole objects.
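  For concreteness, the card-marking write barrier amounts to something
  like the following Lisp-flavored pseudocode (*CARD-TABLE*,
  OBJECT-ADDRESS and the card size are illustrative, not from the paper):

```lisp
(defconstant +card-shift+ 8)   ; illustrative: 256-byte cards

;; On every pointer store, dirty the card holding the updated slot;
;; at scavenge time only dirty cards are scanned, so a 4-megabyte
;; array costs scanning only in the cards actually written to.
(defun barrier-write (object slot new-value)
  (setf (aref *card-table*
              (ash (object-address object) (- +card-shift+)))
        1)
  (setf (slot-value object slot) new-value))
```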

      -- Paul 


Paul R. "Garbage Man" Wilson                         
Software Systems Laboratory               lab ph.: (312) 996-9216
U. of Illin. at C. EECS Dept. (M/C 154)   ······@carcoar.stanford.edu
Box 4348   Chicago,IL 60680 
From: Ken Dickey
Subject: Re: LISP compiler? (really future of MACL)
Date: 
Message-ID: <5117@tekcrl.LABS.TEK.COM>
> A real time (parallel or incremental) garbage collector is roughly
> twice as costly as a comparable stop-and-collect collector.  The real
> time version has to collect more frequently than stop-and-copy because
> the real time version has to start its collection well before memory
> is exhausted.  A real time "two space copying" collector is even more
> expensive because of the cost of checking for a forwarding pointer on
> every reference to GC managed memory.  Real time collectors penalize
> real performance while improving some aspects of perceived
> performance.

This is not the case if you use your VM hardware.  See "Real-time
Concurrent Collection on Stock Multiprocessors" by Appel, Ellis, &
Li; Princeton U. tech. report: CS-TR-133-88.  The method used also
allows multiple allocators.


-Ken Dickey		····@mrloog.WR.TEK.COM
From: Andrew Ginter
Subject: Re: LISP compiler? (really future of MACL)
Date: 
Message-ID: <2203@cs-spool.calgary.UUCP>
In article <····@tekcrl.LABS.TEK.COM>, Ken Dickey writes:
> > A real time (parallel or incremental) garbage collector is roughly
> > twice as costly as a comparable stop-and-collect collector ...
>
> This is not the case if you use your VM hardware.  See "Real-time
> Concurrent Collection on Stock Multiprocessors" by Appel, Ellis, &
> Li; Princeton U. tech. report: CS-TR-133-88.

Appel, Ellis & Li's technique reduces the cost of checking for
forwarding pointers.  It does nothing to address the performance
penalty associated with the increased frequency of garbage collection
in a real time or incremental system (Philip L. Wadler, CACM, Sept/76).
Wadler concludes that parallel collectors consume twice the resources
of serial collectors, almost all of the time, even when using algorithms
without any forwarding pointers.

Andrew Ginter, 403-282-2984, ·······@CPSC.UCALGARY.CA, ······@UNCAMULT.BITNET
From: Paul Wilson
Subject: Re: LISP compiler? (really future of MACL)
Date: 
Message-ID: <1989Dec6.221540.17239@Neon.Stanford.EDU>
In article <····@cs-spool.calgary.UUCP> ·······@cpsc.ucalgary.ca (Andrew Ginter) writes:
>In article <····@tekcrl.LABS.TEK.COM>, Ken Dickey writes:
>> > A real time (parallel or incremental) garbage collector is roughly
>> > twice as costly as a comparable stop-and-collect collector ...
>>
>> This is not the case if you use your VM hardware.  See "Real-time
>> Concurrent Collection on Stock Multiprocessors" by Appel, Ellis, &
>> Li; Princeton U. tech. report: CS-TR-133-88.
>
>Appel, Ellis & Li's technique reduces the cost of checking for
>forwarding pointers.  It does nothing to address the performance
>penalty associated with the increased frequency of garbage collection
>in a real time or incremental system (Philip L. Wadler, CACM, Sept/76).
>Wadler concludes that parallel collectors consume twice the resources
>of serial collectors, almost all of the time, even when using algorithms
>without any forwarding pointers.

  Maybe not.  You need more memory, sure, but it doesn't have to be real
  memory.  The locality characteristics of garbage-collected systems
  are pretty strange, and you can use an incremental garbage collector
  to *improve* locality dynamically.  (See Jon L. White's paper on
  the basic idea in the 1980 Lisp conference proceedings, and Bob
  Courts' incorporation of this principle into a generational gc 
  [CACM, Sept. 88].)

  I'm not saying incremental gc's are cheap, but I'm pretty sure
  nobody's done all of the relevant experiments.


Paul R. Wilson                         
Software Systems Laboratory               lab ph.: (312) 996-9216
U. of Illin. at C. EECS Dept. (M/C 154)   ······@carcoar.stanford.edu
Box 4348   Chicago,IL 60680 
From: Ken Dickey
Subject: Re: real time collector costs
Date: 
Message-ID: <5241@tekcrl.LABS.TEK.COM>
In article <····@cs-spool.calgary.UUCP> ·······@cpsc.ucalgary.ca (Andrew Ginter) writes:

>In article <····@tekcrl.LABS.TEK.COM>, Ken Dickey writes:
>> > A real time (parallel or incremental) garbage collector is roughly
>> > twice as costly as a comparable stop-and-collect collector ...
>>
>> This is not the case if you use your VM hardware.  See "Real-time
>> Concurrent Collection on Stock Multiprocessors" by Appel, Ellis, &
>> Li; Princeton U. tech. report: CS-TR-133-88.
>
>Appel, Ellis & Li's technique reduces the cost of checking for
>forwarding pointers.  It does nothing to address the performance
>penalty associated with the increased frequency of garbage collection
>in a real time or incremental system (Philip L. Wadler, CACM, Sept/76).
>Wadler concludes that parallel collectors consume twice the resources
>of serial collectors, almost all of the time, even when using algorithms
>without any forwarding pointers.
>
>Andrew Ginter, 403-282-2984, ·······@CPSC.UCALGARY.CA, ······@UNCAMULT.BITNET


In looking at Wadler's model, I see that the `twice as costly' result
applies only to the mark-sweep collection strategy presented in the
paper.  There are other assumptions violated as well (GC now is 2-3%
of CPU, not 10-30%, because these assumptions have changed).

Increased time to `context-switch' between the collector and the
mutator is the only time difference involved between Appel-Ellis-Li
and stop-and-copy.  The reported benchmark was "the sequential
real-time version is 11% slower than the more efficient stop-and-copy,
while the concurrent version was 18% faster" (see above, p. 15).

All of the above is moot if a particular collector is fast enough for
your particular real-time system.  Not everyone uses CONS.

-Ken
From: Ken Dickey
Subject: Re: LISP compiler?
Date: 
Message-ID: <5031@tekcrl.LABS.TEK.COM>
Reply-To: ····@mrloog.WR.TEK.COM (Ken Dickey)
Followup-To: a.out
Distribution: 
Organization: Tektronix, Inc., Beaverton,  OR.
Keywords: Lisp, Scheme, linking

In article <····@internal.Apple.COM> ·····@apple.com (Paul Snively) writes:
>In article <·····················@neat.cs.toronto.edu> 
>·····@cs.toronto.edu (Rayan Zachariassen) writes:
>> ·····@apple.com (Paul Snively) writes:
>> 
>> >A future version of MACL WILL do a treewalking dead-code stripper, 
>reducing 
>> >application sizes tremendously.
>> 
>> yah, well, this is nice but a bit shortsighted as a general solution.
>> 
>> - What I really want is to be able to run small standalone lisp 
>applications.
>> I want to write cat in list (for example) without it ending up as a 2Meg
>> binary.  This is addressed by the pruner you mention (someone from Lucid
>> told me they have one or are planning one too).
>> 
>> - I want to run *many* different standalone lisp programs.  Imagine
>> 30% of unix commands being implemented in lisp.  See where the code 
>stripping
>> idea falls short?  You'll have all this code duplicated in most commands.
>> 
>> Solution?  Put the Lisp runtime support into a shared image, which the
>> application link to when they start up, like (sun/svr4) shared libraries.
>> That is in fact what it is, the Lisp equivalent to libc.  All lisp 
>applications
>> would share the runtime text, and have independent data pages per 
>process.
>> 
>> One would think this isn't too complicated to do.
>
>Not only is it difficult to do under the current Macintosh operating 
>system, it's impossible.
>

Let's separate out the two concerns:

  The key to being able to link an application--rather than GC and save
  an image--is not calling EVAL or the compiler.  Either of these
  requires full runtime support, since arbitrary code may be
  introduced.  Without them, linking is relatively straightforward.

  The Mac OS's problems (no multitasking, and friends) are of a more
  severe nature.


A couple of data points:
  
  Standalone Mac applications can be generated by LightShip's
  MacScheme+ToolSmith. 

  Cadence Research Systems' Chez Scheme creates sharable binaries
  (under Unix; I don't know whether their VMS or Domain systems do
  this).  They have an `Application Builder' which generates standalone
  binaries.


I hope this helps.

-Ken Dickey			····@mrloog.WR.TEK.COM
From: Lou Steinberg
Subject: Re: LISP compiler?
Date: 
Message-ID: <Nov.10.13.33.46.1989.3812@bearcat.rutgers.edu>
In article <····@tekcrl.LABS.TEK.COM> ····@tekchips.LABS.TEK.COM (Ken Dickey) writes:

>   The key to being able to link an application--rather than GC and save
>   an image--is not to call EVAL or the compiler.  Either of these must
>   have a full runtime support since arbitrary code may be introduced.
>   Without them, linking is relatively straight-forward.
> 
> -Ken Dickey			····@mrloog.WR.TEK.COM

Yes, but....

Even if you don't call EVAL, your runtime may contain surprising calls
to EVAL.  E.g., in Common Lisp, if you use READ and don't remove the
"#." read macro (which causes read-time evaluation of the next
s-expression read) from your readtable, then the "application maker"
can't remove any part of Lisp.
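For concreteness, here is one way (a sketch, using the standard
SET-DISPATCH-MACRO-CHARACTER interface) to neutralize "#." so the
application maker need not retain EVAL:

```lisp
;; A sketch: replace the standard #. dispatch macro with one that
;; signals an error, so READ can never invoke the evaluator.
(set-dispatch-macro-character
 #\# #\.
 #'(lambda (stream subchar arg)
     (declare (ignore stream subchar arg))
     (error "Read-time evaluation (#.) is disabled in this application.")))
```

ANSI Common Lisp eventually standardized a *READ-EVAL* variable which,
when bound to NIL, has the same effect.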

Worse, what about functions like APPLY?  If you just give compiled
functions to apply you are OK, but APPLY can also take a LAMBDA
expression, i.e. it needs to be able to call the interpreter,
depending on the type of argument it is given.  It may not be so easy
for a compiler to determine whether a given argument will, at runtime,
be a lambda expression.  And suppose the argument is a symbol as in
(apply 'foo ...).  How easily can the compiler tell that at runtime
foo's definition will be compiled code rather than uncompiled code?
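To make the ambiguity concrete, a sketch (FOO here is a hypothetical
example function):

```lisp
;; Three shapes of functional argument a compiler may see at a call
;; to APPLY.  FOO is a hypothetical example function.
(defun foo (x y) (list x y))

(apply #'+ '(1 2 3))          ; a function object: no interpreter needed
(apply 'foo '(1 2))           ; a symbol: FOO's definition is looked up
                              ; at run time, compiled or not
(let ((fn (if (zerop (random 2)) 'foo '+)))
  (apply fn '(3 4)))          ; which function?  Only known at run time
```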

Well, suppose we do without APPLY.  Unpleasant, but maybe worth it?
But then what about all the other functions that take a function as an
argument, like MAPCAR and SORT?

About the only solution I see is to invent some kind of declaration
that lets the programmer inform the compiler / application maker that
it can safely remove EVAL.
-- 
					Lou Steinberg

uucp:   {pretty much any major site}!rutgers!aramis.rutgers.edu!lou 
arpa:   ···@cs.rutgers.edu
From: Paul Snively
Subject: Re: LISP compiler?
Date: 
Message-ID: <5140@internal.Apple.COM>
In article <·························@bearcat.rutgers.edu> 
···@bearcat.rutgers.edu (Lou Steinberg) writes:
[All kinds of stuff about functional arguments]

Actually, it's not as bad as it sounds; the whole point behind having a 
treewalker is to be able to determine whether or not the parameters to 
some of these "metafunctions" are completely (deterministically) specified 
within the context of the Lisp environment or not; (EVAL (READ)) is just 
the most obvious example of one that isn't.

I'm not sure that the example of passing a LAMBDA expression to APPLY is 
valid; there's nothing whatsoever to prevent LAMBDA expressions from being 
compiled (I'm pretty sure that LAMBDA expressions in MACL are compiled, at 
any rate).

__________________________________________________________________________
Just because I work for Apple Computer, Inc. doesn't mean that they 
believe what I believe or vice-versa.
__________________________________________________________________________
C++ -- The language in which only friends can access your private members.
__________________________________________________________________________
From: Tim Moore
Subject: Re: LISP compiler?
Date: 
Message-ID: <1989Nov10.154943.18376@hellgate.utah.edu>
In article <····@internal.Apple.COM> ·····@apple.com (Paul Snively) writes:
>I'm not sure that the example of passing a LAMBDA expression to APPLY is 
>valid; there's nothing whatsoever to prevent LAMBDA expressions from being 
>compiled (I'm pretty sure that LAMBDA expressions in MACL are compiled, at 
>any rate).

The problem is that the argument to APPLY and FUNCALL can be a lambda
expression, which can be an arbitrary list consed up at runtime.  With
the appropriate use of THE, you could specify that this will never
happen, or the compiler might be able to figure that out itself
through data-flow analysis.  Good luck with the general case.

This is another example of why saying 
'(lambda (foo bar) (list foo bar)) 
when you mean
#'(lambda (foo bar) (list foo bar))
is bad; the compiler can't do anything with the former (not to mention
lexical scoping issues.)
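A quick illustration of the difference (a sketch; the first form
assumes lexically scoped Common Lisp):

```lisp
;; #'(LAMBDA ...) yields a real (compilable) function that closes
;; over the lexical variable N:
(let ((n 10))
  (mapcar #'(lambda (x) (+ x n)) '(1 2 3)))   ; => (11 12 13)

;; '(LAMBDA ...) is merely a quoted list -- data, not code.  N would
;; not be visible to it even in dialects that accept it as a function.
(consp '(lambda (x) (+ x n)))                  ; => T
```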
Tim Moore                     ·····@cs.utah.edu {bellcore,hplabs}!utah-cs!moore
"Ah, youth. Ah, statute of limitations."
		-John Waters
From: Barry Margolin
Subject: Re: LISP compiler?
Date: 
Message-ID: <31645@news.Think.COM>
In article <······················@hellgate.utah.edu> ··················@cs.utah.edu (Tim Moore) writes:
>The problem is that the argument to APPLY and FUNCALL is a lambda
>expression, which can be an arbitrary list, consed up at runtime.

This particular feature is being taken out of ANSI Common Lisp.  The
definition of the FUNCTION data type is being tightened, and it will refer
to a specific, distinguishable data type.  The FUNCTION special form and
the SYMBOL-FUNCTION function will be guaranteed to return objects of this
type.  Functions that take functional arguments will accept either a
function or a symbol (automatically calling SYMBOL-FUNCTION on the latter);
they will no longer take lambda expressions.  Lambda expressions will
be coercible to functions with (COERCE <lambda exp> 'FUNCTION); this is
defined to be equivalent to (EVAL '(FUNCTION <lambda exp>)).
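Under that tightened definition, the coercion looks like this (a
sketch, assuming the proposal is adopted as described):

```lisp
;; A lambda expression stays data until explicitly coerced; the
;; coercion is the one place the evaluator is needed.
(let ((expr '(lambda (x) (1+ x))))
  (funcall (coerce expr 'function) 41))   ; => 42
```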

So, returning to the issue of the code walker that removes unnecessary
runtime functions, APPLY, FUNCALL, MAP, etc. will not force inclusion of
everything.  EVAL and COMPILE definitely will, and COERCE may if its second
argument is non-constant or the symbol FUNCTION.  READ also may force
inclusion of everything, unless you've disabled "#.".
Barry Margolin, Thinking Machines Corp.

······@think.com
{uunet,harvard}!think!barmar
From: Lou Steinberg
Subject: Re: LISP compiler?
Date: 
Message-ID: <Nov.13.16.59.55.1989.4053@bearcat.rutgers.edu>
In article <·····@news.Think.COM> ······@leander.think.com (Barry Margolin) writes:

> Functions that take functional arguments will accept either a
> function or a symbol (automatically calling SYMBOL-FUNCTION on the latter);
> they will no longer take lambda lists.

You still have to be sure somehow that the user hasn't consed up a
list at runtime and used SETF of SYMBOL-FUNCTION to store it as the
definition of some symbol.

(By the way, as at least one poster has pointed out, in my original
message when I spoke about "lambda expressions" causing problems, what
I meant was "lists consed up at runtime being used as function
definitions".  My sloppy wording appears to have confused several
people, for which I apologize.)
-- 
					Lou Steinberg

uucp:   {pretty much any major site}!rutgers!aramis.rutgers.edu!lou 
arpa:   ···@cs.rutgers.edu
From: Barry Margolin
Subject: Re: LISP compiler?
Date: 
Message-ID: <31672@news.Think.COM>
In article <·························@bearcat.rutgers.edu> ···@bearcat.rutgers.edu (Lou Steinberg) writes:
>In article <·····@news.Think.COM> ······@leander.think.com (Barry Margolin) writes:
>> Functions that take functional arguments will accept either a
>> function or a symbol (automatically calling SYMBOL-FUNCTION on the latter);
>> they will no longer take lambda lists.
>You still have to be sure somehow that the user hasn't consed up a
>runtime list and used setf of symbol-definition to store it as the
>definition of some symbol.

Nope, that won't be permitted.  When you do 

	(setf (symbol-function <symbol>) <expression>)

the value of <expression> will have to be a function object; a list will
not be acceptable in portable code.  If <expression> is a list, then you'll
have to do

	(setf (symbol-function <symbol>) (coerce <expression> 'function))
Barry Margolin, Thinking Machines Corp.

······@think.com
{uunet,harvard}!think!barmar
From: John Gateley
Subject: Linking
Date: 
Message-ID: <97923@ti-csl.csc.ti.com>
In a bunch of articles, people have been mentioning the problem
with linking: read, apply, eval/compile etc. They have missed
a couple, and actually, the ones they missed are the heart of the
problem:

Intern, and symbol-function.

Intern is particularly bad because you can intern the string
"CAR" in the LISP package and get a symbol which has a function
definition.  Intern is what makes READ and all the other guys bad.

Symbol-function is kind of bad too: it means that you have to include
the function definitions for all symbols used as data in your program.
But unless intern appears, all the symbols are there to be examined.
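Spelling out the CAR example (a sketch; modern implementations name
the package COMMON-LISP rather than LISP):

```lisp
;; No literal call to CAR appears anywhere -- the symbol is built from
;; a string at run time, so no static analysis can prune it away.
(let ((sym (intern "CAR" "COMMON-LISP")))
  (funcall (symbol-function sym) '(1 2 3)))   ; => 1
```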

John
·······@m2.csc.ti.com
From: Barry Margolin
Subject: Re: Linking
Date: 
Message-ID: <31669@news.Think.COM>
In article <·····@ti-csl.csc.ti.com> ·······@m2.UUCP (John Gateley) writes:
>symbol-function is kind of bad. It means that you have to include
>all the function definition for all symbols used as data in you program.
>But, unless intern appears, all the symbols are there to be examined.

You were right about INTERN (and its brother, FIND-SYMBOL), but I don't
think SYMBOL-FUNCTION is as bad.  It's likely that if a symbol with a
function definition is used as data that its function definition is the
interesting thing about it, so those function definitions are probably not
wasting space.  CAR is probably the only exception, and only in automotive
applications.

SYMBOL-FUNCTION is no worse than FUNCALL, APPLY, or any other function that
takes a functional argument, since they can all invoke SYMBOL-FUNCTION
implicitly.

Making Lisp easy to compile into small binaries was never a goal of X3J13.
Much of what makes Lisp as great as it is goes contrary to that goal.
Barry Margolin, Thinking Machines Corp.

······@think.com
{uunet,harvard}!think!barmar
From: Dan Weinreb
Subject: Re: LISP compiler?
Date: 
Message-ID: <1989Nov10.072242.16801@odi.com>
At least two implementations of Lisp on time-sharing systems in which
the runtime was shared between different user processes (only one copy
on disk, only one copy in main memory) existed over ten years ago:
PDP-10 Maclisp running under the ITS time-sharing system on PDP-10's
and DecSystem-20's at MIT, and Multics Maclisp running under Multics
on Honeywell 6180's at several sites.  Both worked essentially like
Sun shared libraries.  Unfortunately, my knowledge about modern Lisp
implementations is less detailed.