From: David Steuber
Subject: Tell Me About Lisp Machines
Date: 
Message-ID: <87llx0unvc.fsf@verizon.net>
I have seen the numerous LispM-related threads, but they are quite over
my head.  About all I've gleaned is that a Lisp Machine is something
that has a CPU optimized for running Lisp code and an OS written in
Lisp.  Beyond that, I know next to nothing.  I have heard references
to big, slow, and hot.  I assume that refers to the ancient hardware
technology.

I would be interested in knowing what is so special about these
machines.  There seem to be many fond memories of them.  I would be
particularly interested in screen shots and photos of the hardware.

I would also assume that today's hardware can run Lisp programs many
times faster than a Lisp Machine.  I've seen talk about emulation
environments and such.  I would expect the market to be more open to
running programs written in Lisp on Unix, Mac, or Windows in a way
that is rather more transparent than running a Java app.

I know that debate on LispMs has been going on for centuries now.
Mainly, I am interested in a simple summary for the complete
neophyte.

TIA.

-- 
(describe 'describe)

From: Joe Marshall
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <el2sf72g.fsf@ccs.neu.edu>
David Steuber <·············@verizon.net> writes:

> I have seen the numerous LispM related threads but the are quite over
> my head.  About all I've gleaned is that a Lisp Machine is something
> that has a CPU optimized for running Lisp code and an OS written in
> Lisp.  Beyond that, I know next to nothing.  I have heard references
> to big, slow, and hot.  I assume that refers to the ancient hardware
> technology.

``Ancient''?!  There are some of us that used to *work* with Lisp
machines on a daily basis that are still alive.  A few of us still
have HAIR!

> I would be interested in knowing what is so special about these
> machines.  There seem to be many fond memories of them.  I would be
> particularly interested in screen shots and photos of the hardware.

Type `lisp machine' into the Google image search and ogle away.

> I would also assume that today's hardware can run Lisp programs many
> times faster than a Lisp Machine.  

Yes.

> I am interested in a simple summary for the complete neophyte.

Google.
From: David Steuber
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <877k8kf6cd.fsf@verizon.net>
Joe Marshall <···@ccs.neu.edu> writes:

> > to big, slow, and hot.  I assume that refers to the ancient hardware
> > technology.
> 
> ``Ancient''?!  There are some of us that used to *work* with Lisp
> machines on a daily basis that are still alive.  A few of us still
> have HAIR!

Is it grey yet? :-p

> > I am interested in a simple summary for the complete neophyte.
> 
> Google.

Someday, I hope to be able to write programs that are just as
concise.  I may need a little more functionality, but I guess that is
a matter of inventing more new words. :-p

Thanks.

-- 
(describe 'describe)
From: Joe Marshall
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <wugkdrnt.fsf@ccs.neu.edu>
David Steuber <·············@verizon.net> writes:

> Joe Marshall <···@ccs.neu.edu> writes:
> 
> > > to big, slow, and hot.  I assume that refers to the ancient hardware
> > > technology.
> > 
> > ``Ancient''?!  There are some of us that used to *work* with Lisp
> > machines on a daily basis that are still alive.  A few of us still
> > have HAIR!
> 
> Is it grey yet? :-p

Not all of it.

> > > I am interested in a simple summary for the complete neophyte.
> > 
> > Google.
> 
> Someday, I hope to be able to write programs that are just as
> concise.  I may need a little more functionality, but I guess that is
> a matter of inventing more new words. :-p

Just choose the appropriate domain-specific language.
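
For instance, here's a toy sketch of the "new words" idea in plain Common
Lisp (everything below is invented for illustration, not from any LispM):

  ;; A two-line DSL for units of length, built with one macro.
  (defmacro define-unit (name meters-per-unit)
    `(defun ,name (x) (* x ,meters-per-unit)))

  (define-unit meters 1)
  (define-unit kilometers 1000)
  (define-unit feet 3048/10000)

  ;; (+ (kilometers 2) (feet 30))  =>  251143/125, about 2009.14 meters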
From: Barry Margolin
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <%L4za.2$Kf1.102@paloalto-snr1.gtei.net>
In article <············@ccs.neu.edu>, Joe Marshall  <···@ccs.neu.edu> wrote:
>David Steuber <·············@verizon.net> writes:
>
>> I have seen the numerous LispM related threads but the are quite over
>> my head.  About all I've gleaned is that a Lisp Machine is something
>> that has a CPU optimized for running Lisp code and an OS written in
>> Lisp.  Beyond that, I know next to nothing.  I have heard references
>> to big, slow, and hot.  I assume that refers to the ancient hardware
>> technology.
>
>``Ancient''?!  There are some of us that used to *work* with Lisp
>machines on a daily basis that are still alive.  A few of us still
>have HAIR!

No new Lisp Machine hardware has been developed for at least 10 years.  In
the computer industry, that's not quite prehistoric, but it certainly
qualifies as very old.

>> I would also assume that today's hardware can run Lisp programs many
>> times faster than a Lisp Machine.  
>
>Yes.

I think when Sparcstation 2's came out, they were competitive with
Symbolics 3600-series Lisp Machines that were out at the same time, but I
think the XL-series gave the advantage back to the Lispms for a couple of
years.

These days, I'd imagine some machines can run a Lisp Machine emulator at
least as fast as the real Lispms.

-- 
Barry Margolin, ··············@level3.com
Level(3), Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: c hore
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <ca167c61.0305221928.663cb0fb@posting.google.com>
> ``Ancient''?!  There are some of us that used to *work* with Lisp
> machines on a daily basis that are still alive.  A few of us still
> have HAIR!

Reminds me of the name of that band (of Gabriel) --- "Not Dead Yet".

I wonder, though, who among the list of Lisp luminaries has passed on
(besides, presumably, Church).
From: Franz Kafka
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <OjQya.675$KU3.413@news02.roc.ny.frontiernet.net>
"David Steuber" <·············@verizon.net> wrote in message
···················@verizon.net...
> I have seen the numerous LispM related threads but the are quite over
> my head.  About all I've gleaned is that a Lisp Machine is something
> that has a CPU optimized for running Lisp code and an OS written in
> Lisp.  Beyond that, I know next to nothing.  I have heard references
> to big, slow, and hot.  I assume that refers to the ancient hardware
> technology.
>
> I would be interested in knowing what is so special about these
> machines.  There seem to be many fond memories of them.  I would be
> particularly interested in screen shots and photos of the hardware.
>
> I would also assume that today's hardware can run Lisp programs many
> times faster than a Lisp Machine.  I've seen talk about emulation
> environments and such.  I would expect the market to be more open to
> running programs written in Lisp on Unix, Mac, or Windows in a way
> that is rather more transparent than running a Java app.
>
> I know that debate on LispMs has been going on for centuries now.
> Mainly, I am interested in a simple summary for the complete
> neophyte.
>
> TIA.
>
> --
> (describe 'describe)
>

The basics are these:

The OS of a Lisp Machine is written in Lisp.

In an ideal Lisp Machine, the Lisp runtime would have
been implemented in hardware (the GC, type checking,
support for lists, support for CLOS classes).  I don't
know how close the actual processors came to that ideal.

But with VLSI tools, someone could make that ideal CPU,
or embed it on a board for Mac/Win/Linux.

The cool thing about them is that you could rewrite the
OS if you knew Lisp.  Linux is written in C.  But since
Lisp is interactive, you could interactively rewrite or
debug parts of the OS.

And all of the tools for using Lisp were themselves written in Lisp: disk
access, network access, graphics, I/O, file access, compiler tools, editors,
debuggers.  And they all ran at once.

So you could edit/debug/run Lisp code in the same environment.
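
For example, a minimal sketch of that interactivity in plain Common Lisp
(nothing LispM-specific, and the function name is made up; on a LispM the
same trick applied to parts of the OS itself):

  ;; Define a function, call it, then redefine it at the same REPL --
  ;; no recompile of the world, no reboot; callers pick up the change.
  (defun greet (name)
    (format t "Hello, ~A!~%" name))

  (greet "world")        ; prints "Hello, world!"

  (defun greet (name)    ; fixed/extended version, loaded live
    (format t "Hello, ~A, welcome back!~%" name))

  (greet "world")        ; new behavior, nothing restarted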

The most killer part was the VM: with 8 meg of physical RAM you could get
100+ meg of logical VM.

The only part of the LispM that was not in Lisp was the microcode.  The
machine code/assembly code was Lisp, and every other part was in Lisp too.

;( I don't see why people argue against building an emulator :(
It would clear up a lot of myths about Lisp being too big and slow
for systems programming.

It would stare those myths in the face and say, byte me -- Lisp can
do a lot.  Look at me: I'm an OS, and I'm written in Lisp.
From: Joe Marshall
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <3cj8f6ax.fsf@ccs.neu.edu>
"Franz Kafka" <Symbolics _ XL1201 _ Sebek _ Budo _ Kafka @ hotmail . com> writes:

> The most killer part was the VM with 8Meg RAM Phyisical you could get 100+
> Meg RAM logical VM

Virtual memory is not a technique specific to the LispM.
From: Franz Kafka
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <KxQya.8651$L37.3131@news01.roc.ny.frontiernet.net>
"Joe Marshall" <···@ccs.neu.edu> wrote in message
·················@ccs.neu.edu...
> "Franz Kafka" <Symbolics _ XL1201 _ Sebek _ Budo _ Kafka @ hotmail . com>
writes:
>
> > The most killer part was the VM with 8Meg RAM Phyisical you could get
> > 100+ Meg RAM logical VM
>
> Virtual memory is not a technique specific to the LispM.

The LispM did a better job.  Better garbage collection.  Hardware type
checking.  More VM per physical RAM.  A better swap
file; it does not crash often :)

The LispM did the best VM this side of Monday.  We need to learn from how the
LispM did it; the fact that it was written in Lisp, and that Lisp
programmers could rewrite/improve it, also helped a lot.
From: Joe Marshall
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <r86sdqhi.fsf@ccs.neu.edu>
"Franz Kafka" <Symbolics _ XL1201 _ Sebek _ Budo _ Kafka @ hotmail . com> writes:

> "Joe Marshall" <···@ccs.neu.edu> wrote in message
> ·················@ccs.neu.edu...
> > "Franz Kafka" <Symbolics _ XL1201 _ Sebek _ Budo _ Kafka @ hotmail . com>
> writes:
> >
> > > The most killer part was the VM with 8Meg RAM Phyisical you could get
> > > 100+ Meg RAM logical VM
> >
> > Virtual memory is not a technique specific to the LispM.
> 
> The Lispm did a better job. Better Garbage Collection. Hardware Type
> Checking. More VM per Physical RAM. A better swap
> file; it does not crash often :)

While I appreciate your enthusiasm for the LispM, the virtual memory
system wasn't that innovative.  The GC was aware of the VM, but the VM
didn't really need to know much about the GC.  (The VM arranged for
the GC to have a peek at a page as it was going out to disk, but I
think that's about it.)  

More VM than PM was common.

> The LispM did the best VM this side of Monday. We need to learn from how the
> LispM did it, and the fact that it was written in Lisp, and that Lisp
> programmers could rewrite/improve it also helped alot.

The virtual memory in the LispM was written in microcode, not in
Lisp.
From: Scott McKay
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <DY3za.226262$pa5.223881@rwcrnsc52.ops.asp.att.net>
"Joe Marshall" <···@ccs.neu.edu> wrote in message
·················@ccs.neu.edu...
> "Franz Kafka" <Symbolics _ XL1201 _ Sebek _ Budo _ Kafka @ hotmail . com>
writes:
>
> > "Joe Marshall" <···@ccs.neu.edu> wrote in message
> > ·················@ccs.neu.edu...
> > > "Franz Kafka" <Symbolics _ XL1201 _ Sebek _ Budo _ Kafka @ hotmail . com>
> > > writes:
> > >
> > > > The most killer part was the VM with 8Meg RAM Phyisical you could get
> > > > 100+ Meg RAM logical VM
> > >
> > > Virtual memory is not a technique specific to the LispM.
> >
> > The Lispm did a better job. Better Garbage Collection. Hardware Type
> > Checking. More VM per Physical RAM. A better swap
> > file; it does not crash often :)
>
> While I appreciate your enthusiasm for the LispM, the virtual memory
> system wasn't that innovative.  The GC was aware of the VM, but the VM
> didn't really need to know much about the GC.  (The VM arranged for
> the GC to have a peek at a page as it was going out to disk, but I
> think that's about it.)
>
> More VM than PM was common.

The Symbolics Lisp Machine, of all the computers I have ever
used, had the best implementation of large VM I have ever used.

> > The LispM did the best VM this side of Monday. We need to learn from how the
> > LispM did it, and the fact that it was written in Lisp, and that Lisp
> > programmers could rewrite/improve it also helped alot.
>
> The virtual memory in the LispM was written in microcode, not in
> Lisp.

This is not true of the Symbolics machine.  The VM was written
entirely in (mostly) ordinary Lisp.
From: Joe Marshall
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <4r3ndroi.fsf@ccs.neu.edu>
"Scott McKay" <···@attbi.com> writes:

> The Symbolics Lisp Machine, of all the computers I have ever
> used, had the best implementation of large VM I have ever used.

> "Joe Marshall" <···@ccs.neu.edu> wrote in message
>
> > The virtual memory in the LispM was written in microcode, not in
> > Lisp.
> 
> This is not true of the Symbolics machine.  The VM was written
> entirely in (mostly) ordinary Lisp.

I stand corrected.

My experience was with the CADR, Lambda, and Explorer series.
From: c hore
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <ca167c61.0305221914.29954c1e@posting.google.com>
"Scott McKay" wrote
> "Joe Marshall" wrote
> > More VM than PM was common.
> 
> The Symbolics Lisp Machine, of all the computers I have ever
> used, had the best implementation of large VM I have ever used.

From Symbolics, I became used to the idea that VM >> PM,
and that in the equation VM=n*PM, n > 1, n could be whatever you
wanted, depending on the characteristics of your applications.

Now, when I install FreeBSD, the Help documentation
seems to recommend that swap space be about 2-3x
RAM.  It seems strange to me that there is a recommendation
for a specific value, or range of values, for n.
Am I misreading that recommendation?
Is that recommendation more of a minimum n, such that FreeBSD's
VM would cope just fine if you chose n much larger than 2-3, say?
From: Christopher Browne
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <bak9gu$m5tt$1@ID-125932.news.dfncis.de>
In the last exciting episode, ·······@yahoo.com (c hore) wrote:
> Now, when I install FreeBSD, the Help documentation
> seems to recommend that swap space be about 2-3x
> RAM.  It seems strange to me that there is a recommendation
> for a specific value, or range or values, for n.
> Am I misreading that recommendation?
> Is that recommendation more of a minimum n, and that FreeBSD
> VM would cope just fine if you chose n much larger than 2-3, say.

The point isn't that having n >> 3 would make the system behave badly.

It instead is that if you *actually use* that much VM, performance
will Suck Badly.

If you have 1 GB of physical memory, and add to that 9GB of swap, but
never really use the swap, then the swap space is wasted.

If, on the other hand, you are actively using the 9GB of swap, then
the system will definitely be thousands of times slower than it would
be if you had 10GB of physical memory to work with.
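
A rough back-of-envelope shows why (the latencies below are assumptions for
illustration only -- roughly 100ns for RAM, 5ms for a disk access -- and only
the ratio matters):

  ;; Mean cost per memory reference when a fraction P of them go to disk.
  (defun effective-access-time (p &key (ram 100e-9) (disk 5e-3))
    (+ (* (- 1 p) ram) (* p disk)))

  ;; (/ (effective-access-time 0.01) (effective-access-time 0))
  ;; => about 500: even a 1% fault rate costs a factor of hundreds,
  ;;    and actively working out of swap is far worse than 1%.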

I find the claims about LispM "VM efficiency" quite curious.  The only
way in which I can imagine it working well with the huge multiples
claimed is if the garbage collectors involved were steadily working at
the task of improving "locality of reference," so that objects that
get used get moved together, those that don't disappear into
"swap," and, even there, locality of "less common reference" means
that if you need to draw one object in from disk, you probably pulled
in, on the same page, the neighbours that you were about to access next.

Of course, I'd rather have lots of real RAM.  (For which reason I
upgraded this machine to have 1GB today...)
-- 
let name="cbbrowne" and tld="acm.org" in String.concat ·@" [name;tld];;
http://www.ntlug.org/~cbbrowne/linuxxian.html
"I really only meant to point out how nice InterOp was for someone who
doesn't  have the weight of the  Pentagon behind him.   I really don't
imagine that the Air Force will ever be  able to operate like a small,
competitive enterprise like GM or IBM." -- Kent England
From: Joe Marshall
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <llwxbxjc.fsf@ccs.neu.edu>
Christopher Browne <········@acm.org> writes:

> I find the claims about LispM "VM efficiency" quite curious.  

Me, too.

> The only way in which I can imagine it working well with the huge multiples
> claimed is if the garbage collectors involved were steadily working at
> the task of improving "locality of reference," so that objects that
> that get used get moved together, those that don't disappear into
> "swap," and that even there, locality of "less common reference" means
> that if you need to draw one object in from disk, you probably pulled
> the neighbours that you were about to also access in the same page.

The GC was a copying GC, and (at least on the LMI Lambda) it had a
limited stack which made it approximately depth-first (unless the
stack overflowed).  This tended to move common objects towards the
same page.  It was a fairly important effect.  
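
The shape of that, as a toy (this is only a simulation of copy order over a
Lisp object graph, not the actual scavenger or microcode):

  ;; Copying with a bounded explicit stack: mostly depth-first, so
  ;; parents land next to their children in to-space; when the stack
  ;; fills up, work spills into a FIFO queue (breadth-first-ish).
  (defstruct node name children copied)

  (defun copy-order (root &key (stack-limit 8))
    (let ((order '()) (stack (list root)) (queue '()))
      (loop while (or stack queue) do
        (let ((n (if stack (pop stack) (pop queue))))
          (unless (node-copied n)
            (setf (node-copied n) t)
            (push (node-name n) order)
            (dolist (c (node-children n))
              (if (< (length stack) stack-limit)
                  (push c stack)
                  (setf queue (nconc queue (list c))))))))
      (nreverse order)))

  ;; With a generous STACK-LIMIT the order is depth-first; with a limit
  ;; of 0 it degrades toward breadth-first, spreading relatives apart.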

I tended to write code that `consed' more than the other hackers, so I
relied on the GC to a greater extent.  Over time, I noticed a distinct
decrease in performance.  It turns out that since I was GC'ing more
frequently, it did a better job at localizing my data.  But there was
a latent bug in the virtual memory system.  The VM used a simple hash
table to implement the mapping of virtual memory to physical memory
when the hardware memory map overflowed.  The hashing algorithm was a
simple one, and it hadn't been modified from the original CADR.
However, the Lambda had an order of magnitude more RAM than the CADR.
The hashing algorithm didn't scale, so between the GC moving objects
closer together and there being more pages in the hash table, there
started being an unexpected number of hash collisions and the virtual
memory performance suffered as a result.

I rewrote the VM hashing function to scale to the larger amount of
memory and to more evenly disperse close virtual pages.  When the GC
moved objects closer to each other, this increased the efficiency of
the VM hash table and improved the performance of the machine.
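
Purely as a generic illustration of the idea (not the actual LMI code): the
point is to disperse nearby virtual page numbers instead of letting them
collide in lockstep.

  (defconstant +table-size+ 4096)        ; must be a power of two here

  ;; Low-bits hash: pages that differ by a multiple of the table size
  ;; all land in the same bucket.
  (defun naive-page-hash (vpn)
    (logand vpn (1- +table-size+)))

  ;; Multiplicative (Fibonacci-style) hash: nearby and periodic page
  ;; numbers get spread across the whole table; the constant is arbitrary.
  (defun dispersing-page-hash (vpn)
    (logand (ash (* vpn 2654435761) -16) (1- +table-size+)))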

It seems that depth-first scavenging can get you around a 10%
performance increase over breadth-first.  (Someone else measured this,
too and got about the same number.)

TI went even further.  They `flipped' newspace into oldspace very
early in the GC cycle, but refrained from scavenging until later.  Thus
the bulk of the objects were moved into newspace because the `mutator'
touched them.  Presumably, this is better than depth-first because it
uses the actual reference patterns of the user process.  I don't know
if that made up for the added complexity and the added overhead of
having a lot of oldspace around.

In any case, the real performance gain came because the ephemeral set
approximates the working set of the processor and both are smaller
than RAM, so you can GC without going to disk.  If your working set
exceeds your RAM space, you are going to thrash, no matter how clever
you think you are.
From: Mario S. Mommer
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <fzadddkcbg.fsf@cupid.igpm.rwth-aachen.de>
Joe Marshall <···@ccs.neu.edu> writes:
> I rewrote the VM hashing function to scale to the larger amount of
> memory and to more evenly disperse close virtual pages.

...and you did that interactively, I gather. No need even to reboot
the thing. ;)

Mario.
From: Joe Marshall
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <ptm9agb5.fsf@ccs.neu.edu>
Mario S. Mommer <········@yahoo.com> writes:

> Joe Marshall <···@ccs.neu.edu> writes:
> > I rewrote the VM hashing function to scale to the larger amount of
> > memory and to more evenly disperse close virtual pages.
> 
> ...and you did that interactively, I gather. No need even to reboot
> the thing. ;)

The LMI Lambda had pageable microcode, but the VM hashing code was
wired down.  Sigh.

I did, however, turn my Lisp Machine into a DES cracking machine by
swapping out the Lisp register set and using the register memory for
S-boxes.  I'd swap back Lisp after running several thousand rounds of
DES.  I added that as a dynamic macroinstruction without having to
reboot.
From: Franz Kafka
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <h3rza.180$4t.35@news01.roc.ny.frontiernet.net>
"Joe Marshall" <···@ccs.neu.edu> wrote in message
·················@ccs.neu.edu...

>
> I did, however, turn my Lisp Machine into a DES cracking machine by
> swapping out the Lisp register set and using the register memory for
> S-boxes.  I'd swap back Lisp after running several thousand rounds of
> DES.  I added that as a dynamic macroinstruction without having to
> reboot.

That's interesting. Tell me more :)

Can you e-mail me some code example about how to do this? Can this be done
with a Symbolics machine?

Thanx, I'm just curious.

BTW, in my address, ignore all the white space but keep the _'s;
this reduces spam to about 5 messages per month :)
From: Joe Marshall
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <r86pjrrw.fsf@ccs.neu.edu>
> "Joe Marshall" <···@ccs.neu.edu> wrote in message
> > I did, however, turn my Lisp Machine into a DES cracking machine by
> > swapping out the Lisp register set and using the register memory for
> > S-boxes.  I'd swap back Lisp after running several thousand rounds of
> > DES.  I added that as a dynamic macroinstruction without having to
> > reboot.

"Franz Kafka" <Symbolics _ XL1201 _ Sebek _ Budo _ Kafka @ hotmail . com> writes:
> 
> That's intresting. Tell me more :)

The LMI Lambda was implemented with 4Kx1 bit static RAM for the
register set.  The microcode could only address 2K of it, so the top
bit was connected to a control register.  Normally, this would be set
to zero and never changed.  If you set it to 1, you'd get an empty set
of registers.  This is where I put the S-Boxes.

The Lambda also had pageable microcode.  You could hand write some
microcode and copy it down into the microcode swap space.  Several
unused macroinstructions would dispatch to a table in the swap space
so you could dynamically add microcode.

> Can you e-mail me some code example about how to do this? 

No.  That was about 18 years ago and I've long since forgotten the
details. 

> Can this be done with a Symbolics machine?

I don't know.
From: c hore
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <ca167c61.0305241927.6c60ab3@posting.google.com>
Joe Marshall wrote:
> It seems that depth-first scavenging can get you around a 10%
> performance increase over breadth-first.  (Someone else measured this,
> too and got about the same number.)
> 
> TI went even further.  They `flipped' newspace into oldspace very
> early in the GC cycle, but refrained from scavening until later.  Thus
> the bulk of the objects were moved into newspace because the `mutator'
> touched them.  Presumably, this is better than depth-first because it
> uses the actual reference patterns of the user process.  I don't know
> if that made up for the added complexity and the added overhead of
> having a lot of oldspace around.
> 
> In any case, the real performance gain came because the ephemeral set
> approximates the working set of the processor and both are smaller
> than RAM, so you can GC without going to disk.  If your working set
> exceeds your RAM space, you are going to thrash, no matter how clever
> you think you are.

That approach of allowing the normal scavenger to be disabled (under
user or program control) and instead relying on the mutator to
discover and cause copying of the "active" subset of the reachable
set was also developed and adapted for the Symbolics garbage
collector, although not by Symbolics, but by an independent
party as part of a research study.

As I recall, the results were similar: it did help, but I don't
remember the figures anymore.  The idea, in both the TI
and Symbolics implementations, was to copy not just the reachable
objects but the reachable-and-active objects.  The assumption was
that being accessed by the mutator is an indication of
"activeness".  I suppose there are cases where this is
the wrong assumption, where what you just accessed will be
the thing accessed again furthest in the future.
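
In sketch form, the heart of both schemes is a read barrier that evacuates
an object the moment the mutator touches it.  This is purely illustrative
Lisp, not the TI or Symbolics code; the real systems did this per memory
reference in microcode/hardware:

  (defstruct cell value forward)         ; FORWARD = copy in newspace, if any

  (defvar *newspace* '())                ; toy stand-in for newspace

  (defun evacuate (cell)
    (or (cell-forward cell)
        (let ((copy (make-cell :value (cell-value cell))))
          (push copy *newspace*)
          (setf (cell-forward cell) copy))))

  (defun barrier-ref (cell)
    "Mutator access: touching an object is exactly what gets it copied."
    (cell-value (evacuate cell)))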
From: Joe Marshall
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <r86pbyjw.fsf@ccs.neu.edu>
·······@yahoo.com (c hore) writes:

> "Scott McKay" wrote
> > "Joe Marshall" wrote
> > > More VM than PM was common.
> > 
> > The Symbolics Lisp Machine, of all the computers I have ever
> > used, had the best implementation of large VM I have ever used.
> 
> From Symbolics, I became used to the idea that VM >> PM,
> and that in the equation VM=n*PM, n > 1, n could be whatever you
> wanted, depending on the characteristics of your applications.

At LMI, we had a much smaller address space (2^25 32-bit words), so
naturally we had excuses why you didn't need that much virtual
memory.  Of course it was a lot of BS, but there was one point that
did make sense.  You usually end up using huge amounts of virtual
memory when you are dealing with huge array structures, like bitmaps,
etc.  It is convenient to just allocate them in memory somewhere and
ignore the fact that they don't actually fit in RAM.  LMI's argument
was that this is sub-optimal because you are relying on the VM paging
algorithm to manage access to your structures, but it knows nothing
about the access patterns and is likely to perform worse than manual
management.  (That's probably even true, but on the other hand, the
ease of programming might make up for the loss in performance.)
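
The "manual management" being argued for amounts to ordering your own
accesses for locality rather than letting the pager guess.  A trivial sketch
of the general point (not LMI's code; array contents and sizes are assumed):

  ;; Common Lisp arrays are row-major, so keeping the last index in the
  ;; inner loop touches consecutive words -- and consecutive pages -- of
  ;; a bitmap that is bigger than RAM.
  (defun sum-bitmap (bitmap)
    (let ((rows (array-dimension bitmap 0))
          (cols (array-dimension bitmap 1))
          (sum 0))
      (dotimes (r rows sum)
        (dotimes (c cols)
          (incf sum (aref bitmap r c))))))

  ;; Swapping the two DOTIMES loops strides across ROWS separate pages
  ;; per column and will thrash once the array exceeds physical memory.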
From: Lord Isildur
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <Pine.GSO.4.55L-032.0305231536590.4788@unix2.andrew.cmu.edu>
This recommendation is almost arbitrary. Its origins trace back almost
20 years to some guidelines that were drawn up at Berkeley _then_..
some guys did some experiments on a couple of timesharing VAXen in the early
80s and came up with the rule of thumb of swap being roughly 2-3 times the
size of core.. They found that any bigger and it was a waste of disk, and
any smaller and sometimes somebody ran out..
The installation scripts for all the BSD-derived systems
generally use the same general rule of thumb, even though most machines are
effectively single-user workstations now with half a gig of core that
don't really need a gig of swap.. :) the values are only recommendations;
you can run a system with no swap at all if you like.

isildur

On Thu, 22 May 2003, c hore wrote:
> From Symbolics, I became used to the idea that VM >> PM,
> and that in the equation VM=n*PM, n > 1, n could be whatever you
> wanted, depending on the characteristics of your applications.
>
> Now, when I install FreeBSD, the Help documentation
> seems to recommend that swap space be about 2-3x
> RAM.  It seems strange to me that there is a recommendation
> for a specific value, or range or values, for n.
> Am I misreading that recommendation?
> Is that recommendation more of a minimum n, and that FreeBSD
> VM would cope just fine if you chose n much larger than 2-3, say.
>
From: Madhu
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <m3llww35mh.fsf@robolove.meer.net>
Helu

* "Scott McKay" <·······················@rwcrnsc52.ops.asp.att.net> :

| The Symbolics Lisp Machine, of all the computers I have ever
| used, had the best implementation of large VM I have ever used.

This technology wouldn't carry over if, say, each Lisp "process" were
given its own address space (like UNIX processes are), would it?

Was its performance on large VM the consequence of its shared
address space?  (Other than as a consequence of its being designed to
be bit-efficient.)


Regards
Madhu


--
Open system or closed system, enlightenment or ideology, those are the
questions.	         	    "John C. Mallery" <····@ai.mit>
From: Christopher C. Stacy
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <uy90wbheh.fsf@dtpq.com>
>>>>> On Sat, 24 May 2003 12:18:30 GMT, Madhu  ("Madhu") writes:

 Madhu> Helu
 Madhu> * "Scott McKay" <·······················@rwcrnsc52.ops.asp.att.net> :

 Madhu> | The Symbolics Lisp Machine, of all the computers I have ever
 Madhu> | used, had the best implementation of large VM I have ever used.

 Madhu> This technology wouldnt carry over if say each lisp "process" was
 Madhu> given its own address space (like UNIX processes are), would it?

 Madhu> Was its performance on large VM the consequence of its shared
 Madhu> address space? (Other than a consequence of it being designed to
 Madhu> be bit efficient)

"This technology" consists of algorithms implemented by programs.
I don't know what it would mean to completely change around a 
(not clearly identified) piece of software in some completely
unspecified way in some vague context, and have it "carry over".

But if you're asking if a good virtual memory system and process
scheduler could be written for a different architecture,
incorporating the ideas from Genera, the answer is: yes.

What are you asking, and why are you asking?
From: Madhu
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <m3addb3l8i.fsf@robolove.meer.net>
Helu

* ······@dtpq.com (Christopher C. Stacy) <·············@dtpq.com> :
| >>>>> On Sat, 24 May 2003 12:18:30 GMT, Madhu  ("Madhu") writes:
|  Madhu> * "Scott McKay" <·······················@rwcrnsc52.ops.asp.att.net> :
|  Madhu> | The Symbolics Lisp Machine, of all the computers I have ever
|  Madhu> | used, had the best implementation of large VM I have ever used.
| 
|  Madhu> This technology wouldnt carry over if say each lisp "process" was
|  Madhu> given its own address space (like UNIX processes are), would it?
| 
|  Madhu> Was its performance on large VM the consequence of its shared
|  Madhu> address space? (Other than a consequence of it being designed to
|  Madhu> be bit efficient)
|
| "This technology" consists of algorithms implemented by programs.
| I don't know what it would mean to completely change around a 
| (not clearly identified) piece of software in some completely
| unspecified way in some vague context, and have it "carry over".

| But if you're asking if a good virtual memory system and process
| scheduler could be written for a different architecture,
| incorporating the ideas from Genera, the answer is: yes.
|
| What are you asking, and why are you asking?

In an earlier article, which I quote below:

| From: "Scott McKay" <···@attbi.com>
| Message-ID: <·····················@sccrnsc03>
| Date: Fri, 18 Apr 2003 02:18:22 GMT
...
| One of the things we killed ourselves on in the Lisp Machine OS
| was getting trap-handling fast and making the paging system work
| well.  I can't get any of my "modern" computers to work well if
| there is more than about twice as much virtual memory as there
| is physical memory.  On 3600s and Ivories, there would typically
| be around 2 to 4 megawords (10 to 20 megabytes), but the paging
| area would be 80 to 100 megabytes; this configuration performed
| quite well indeed, even in the presence of garbage collection on
| that virtual address space.
|
| I fear all this knowledge has been lost to the world.

It is mentioned that modern machines do not have good performance
in the face of large VMs while lisp machines did.

Now, modern computers tend not to have a shared address space,
while Lisp machines did.  Because of their multiple address
spaces, modern systems have separate page tables for each
process -- quickly leading to thrashing on larger VMs and poor VM
performance.  This has been the case despite huge investments in
cache research and incredible cache optimizations: TLBs still
thrash when you switch processes.

My question as stated above was whether the superior performance
of the Lisp machines on huge VMs was a direct consequence of avoiding
multiple address spaces.  I suspect the answer is yes.

I thought this multiple-address-space design was a crippling
factor: even if individual ideas can be applied from Genera to
modern systems, they wouldn't make them quite as good.  My interest
in asking the above question was just to understand and verify
the claim.  (If you were expecting me to clamour for moving
the Genera VM technologies to UNIX, I'm sorry to disappoint you
:> )


Regards
Madhu

--
From: Christopher C. Stacy
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <uaddbkaa7.fsf@dtpq.com>
>>>>> On Sun, 25 May 2003 00:57:49 GMT, Madhu  ("Madhu") writes:

 Madhu> My question as stated above was whether the superior performance
 Madhu> of the lisp machines on Huge VMs a direct consequence of avoiding
 Madhu> multiple address spaces? I suspect the answer is yes.

The Lisp Machine was a timesharing system used in the same way that
most Unix/Windows machines are today.  The reason that it is described
as a "single user" machine is that there was only provision for one user
to access the window system and the keyboard.  (That's the same as on
Microsoft Windows.)  But it was possible to have multiple interactive
users on the machine by coming in over the network, for example;
that was neither intended nor commonly practiced, because the point was
that you had the entire machine to "yourself".  (But you had all the
various network servers and background daemons on your workstation.)
Lisp Machines were commonly used as (file, mail, DNS, routing, etc.)
servers, and a major use for them was as dedicated application servers.  
Also, when the machines were first available, it was fantastic and
unheard of to have such a powerful computer for a single user.

So the question is: Why might the VM on Genera perform so much better
(as vaguely reported) than on today's workstations and servers,
when they are similar kinds of preemptive multitasking systems?

I don't think it was due to having a single address space, because
although it _maybe_ had one less mapping table to indirect through
to service a page fault, the Lisp Machine still had to perform other 
kinds of fairly expensive operations when it did a context switch.

I suspect the answer might be more along the lines of how the 
garbage collector dynamically improved locality of reference.

Or it might also simply be due to the Genera implementors having paid
a lot of attention to the area where others lately have not bothered.
Many of the key Lisp Machine hackers had extensive previous experience
developing timeshared VM operating systems (notably, ITS and Multics).
From: Gorbag
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <BAF5866C.4BD3%gorbagNOSPAM@NOSPAMmac.com>
On 5/24/03 7:52 PM, in article ·············@dtpq.com, "Christopher C.
Stacy" <······@dtpq.com> wrote:

> The reason that it is described
> as a "single user" machine is that there was only provision for one user
> to access the window system and the keyboard.

Umm. Well, I think another reason might be the total lack of any protection
between those users. If you open a stream with a password, other users can
get access to that stream. If you redefine CONS, it isn't just your programs
that are affected. Generally "multiple user" machines try to make each user
have a virtual single user machine; the LispM instead allowed multiple users
to share resources (and not always politely :-), not get a virtual private
machine.
From: Christopher C. Stacy
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <ud6i7cm87.fsf@dtpq.com>
>>>>> On Sat, 24 May 2003 20:28:28 -0700, Gorbag  ("Gorbag") writes:

 Gorbag> On 5/24/03 7:52 PM, in article ·············@dtpq.com, "Christopher C.
 Gorbag> Stacy" <······@dtpq.com> wrote:

 >> The reason that it is described
 >> as a "single user" machine is that there was only provision for one user
 >> to access the window system and the keyboard.

 Gorbag> Umm. Well, I think another reason might be the total
 Gorbag> lack of any protection between those users.

 Gorbag> If you open a stream with a password, other users can get
 Gorbag> access to that stream. If you redefine CONS, it isn't just
 Gorbag> your programs that are affected. Generally "multiple user"
 Gorbag> machines try to make each user have a virtual single user
 Gorbag> machine; the LispM instead allowed multiple users to share
 Gorbag> resources (and not always politely :-), not get a virtual
 Gorbag> private machine.

Umm, well, gee, Gorbag, thanks for informing me about what 
a multiple user system is.  I never developed any operating 
systems for them before, and thanks for clarifying for me about 
what EVAL does and how the Lisp Machine works; somehow all
these decades I haven't ever quite understood all that before.

As a multiple-user network-based server, the Lisp Machine was
more secure than Unix or Windows.  For example, the file system
had very fancy security, and when combined with the superior
file protocols (by which I am not referring to NFS, although
we had that) and the inherent protections in the system (e.g.
no buffer overflow attacks), it was a very secure file server.

That fact alone should suggest that your premises are wrong.
But let's talk about multiple hostile applications, which
I think is more along the lines of what you're referring to.

The Lisp Machine was intended as a single-user system, 
but it did not have "total lack of protection".
This has been explained on this newsgroup in recent memory
about ten times.  Moreover, I don't think it has much very 
to do with the paging performance of the virtual memory system,
and the things you're sayinh are misleading in some other ways.

You don't need multiple virtual "memory address" spaces to isolate
users from each other, unless your computer's operation is based on
using "memory addresses".  The Lisp Machine is not such a system.

The reason you would need the multiple address spaces is not
some wholesale "lack of protection" issue that you might think.
This comes up often when people ask about the system, usually as
a question presuming that the operating system must have been
fragile, and how could it have run multiple "processes" without
protecting them from each other?

Much has been said here about the fact that the Lisp Machine was
microcoded, and people assume that the main issue was the speed of
execution of Lisp subroutines.  There were microcoded instructions for
things like CAR, CDR, ASSOC, and MEMBER.  (If you Google around,
you should be able to find some detailed messages I wrote about this.)
But the really important thing about the microcode was that it
implemented the memory model.

The Lisp Machine's memory model was not the same as conventional
computers. Normal programs on the machine were restricted (by the
microcode) in how they accessed memory -- they could not use any
arbitrary memory address.  Instead, memory is composed of objects,
and you make references to objects, not to memory locations.
You do not [see below] make a random access to memory -- you go
through an object reference pointer, not an address pointer.
The microcode implements the virtual memory and object system.

Different programs were "protected" from and did not interfere
with each other because they were only making references to
objects, not to random addresses.  The Lisp Machine would not
crash (excluding rare bugs in the low-level part of the system) 
or be subverted unless object references were handed out and
misused inappropriately.  An object reference can only be
obtained by asking a cooperating program that already has it 
to give it to you.  When you do a Lisp function call, 
a value is returned -- that value is an object reference.

But in practice, the operating system written for the Lisp
Machine was intended for a single-user, so it was deliberately
easy to obtain object references.  But it could have been
changed around to support multiple users.

The feature that you are looking for in your multi-user system 
is called "security".  Security is essentially about preventing
communication (especially that which was not invited).
It's a matter of semantics: the Lisp Machine had "protection", 
but the operating software did not choose to implement security.
The system didn't do anything to stop one program from communicating
with another.  In fact, it went out of its way to allow programs to
communicate freely with each other.

Most object references are obtained by symbolic reference.  
You give a symbol name, and EVAL calls the named function object 
or gets the variable value.  The way in which programs are protected
from colliding with each other over symbol names is the Lisp "package"
system (which was invented for this purpose on the LispM).

An example of a lack of security is that you could intern symbols 
in any package that you wanted, and there were functions such as
DO-PACKAGE that would allow you to explore the symbolic space.

To have a conflict, you have to have a communication -- you have to
pass an object reference.  The most common way of doing this across
programs is (ultimately) through a symbol referencing a global variable.

Some programs used global variables.  These would need to be rewritten
to use multiple objects so that one user would not bash another when
they were running the same shared code.  Some programs were already
written with that in mind, and since the code was necessarily pure
(function objects), and since you have your own stack and specials,
you could run such programs as many simultaneous times as you liked.

One way to implement security on the Lisp Machine without multiple
address spaces would be to implement a reference monitor through 
which all symbolic references would be vetted.

So while everything lived in one address space, you could still 
have multiple "users" (which it did have), and they would be
protected from each other (they were), and it would not have
been difficult to even add security. One reason that it was a
"single-user" machine was because it did not have security,
but you don't need multiple address spaces to have security.

The operating system that we had previous to the LispM was called ITS
(Incompatible Timesharing System), which multiplexed the PDP-10.
ITS had separate protected user address spaces, but no security!  
It was like (but older than, and in various ways better than) the
systems that you are familiar with, such as Unix (or TOPS-20 or VMS).
All the familiar mechanisms for preventing communication were 
there, but the monitor security policy could be summarized as,
"Did you really mean that?  Well, okay then!"

Multiple versus single Virtual address space is orthogonal
to being a multiple-user system; it's just one technique
used in implementing protection and security.

By the way, a number of ideas and experiences from ITS were
generalized in the design of the Lisp Machine.  A good example 
is the multiprocess scheduling primitive that passes a user's
arbitrary function to the scheduler for the "runnable" test.

You mentioned the problem of someone trashing the CONS function.
Function objects are not mutable, so you can't do that.
Again, you don't need a separate address space: just give each
user his own separate COMMON-LISP package.  There was support
in the system to do that, and also hierarchical packages.
In fact, even with just a single user, there were already 
a bunch of different packages for supporting all the 
different dialects of Lisp that were on the machine.

You could also give the user his very own CONS function
object, not just his own symbol for it, if needed.
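
For what it's worth, the per-user-package idea is expressible in completely
ordinary Common Lisp (the package names here are invented):

  ;; Each user gets a package; USER-A shadows CONS, so "redefining" it
  ;; only changes USER-A::CONS, never the CL:CONS everyone else shares.
  (defpackage :user-a (:use :common-lisp) (:shadow #:cons))
  (defpackage :user-b (:use :common-lisp))

  (in-package :user-a)
  (defun cons (x y)                      ; USER-A::CONS
    (common-lisp:cons y x))              ; deliberately backwards

  (in-package :user-b)
  (cons 1 2)                             ; => (1 . 2), still CL:CONS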

The other piece that you will want in your unrestricted multiple 
user LispM is more resource controls in the process scheduler.
Again, this isn't about multiple virtual address spaces.  
But you want to prevent a runaway process from impacting the 
other users.  This would not be a big change to the scheduler, 
which already has a sophisticated priority model and everything.

There is also little or no security already written into the
parts of the higher level operating system.  You would need
to change the network API, for example, to prevent users from
taking over the Ethernet port.  Calls to the API need to go
through the reference monitor, and most users would be denied
access to the symbol NETWORK-INTERNALS:82586-TRANSMIT-EPACKET.
The code for those kinds of API changes would be trivial,
correct, bug-free, and inherently not susceptible to attacks,
unlike the software on today's "secure" systems like Unix
or MS Windows and their irrelevant virtual memory systems.

Now, one detail that I omitted above about the LispM's memory
protection system was how object references are created.
Normally you never see them -- you never see pointers in Lisp.  
But they're in there, and they really are virtual or physical
memory addresses.  In order to completely implement Lisp itself, 
and certain parts of the Lisp Machine operating system (hardware
drivers), you need to be able to construct object references from
whole cloth.  You need to do storage allocation, write arbitrary
bits to construct object headers, fiddle with type tags, etc.
To do things like make an array object correspond to a hardware
buffer, you need to be able to specify the physical addresses.
Very very little of the system needed to use these functions.
This kind of thing was done on the Lisp Machine through Lisp functions
called "subprimitives" (and by calling certain microcoded instructions,
which is approximately the same thing).  To make the system secure,
those functions would also need to go through the reference monitor.  

Chris

PS. In my earlier message, I suggested that perhaps the Genera VM
    performed very well because people put a lot of work into it.  
    So I went and asked one of those people about the issues
    raised here, and whether single address space was a factor. 
    He said I could pass along his comments below:

>>>>> P T Withington ("ptw") writes:
ptw>  
ptw>  My cynical 2p:
ptw> 
ptw> 1. A _lot_ of thought went into optimizing paging traffic
ptw>    in the Lisp VM: things like sorting and coalescing the
ptw>    paging requests to minimize head seeks.  I think you see
ptw>    similar levels of optimization in BSD-based systems.
ptw>    I don't know about Linux or Windows, but my impression is
ptw>    they are simpler.
ptw> 
ptw> 2. The ratio of disk to memory (to CPU) speed is much
ptw>    higher today than it was in LispM days, this makes any
ptw>    paging seem much worse.  There are so many levels of
ptw>    indirection between the OS and the disk head, it is hard
ptw>    to make optimizations like the above.
ptw> 
ptw> 3. Shared libraries never really get shared any more,
ptw>    because every program is built on a different version of
ptw>    the library and the only solution to the versioning
ptw>    problem so far is for every program to run with its
ptw>    private copy of the library.
ptw> 
ptw> 4. Nobody worries about paging these days because RAM is
ptw>    cheap.  You just tell the user how much RAM to buy on
ptw>    your box.  And nobody has the luxury of optimizing their
ptw>    programs on .COM time.
ptw> 
ptw> These combine to make paging on modern systems a very
ptw> painful experience.  About the only thing it seems to be
ptw> good for is hiding memory leaks.
ptw> 
ptw> "Here's a nickel kid.  Go get yourself a real computer."


PPS. God no fucking wonder Erik got fed up.
From: Gorbag
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <BAF63802.4C7B%gorbagNOSPAM@NOSPAMmac.com>
On 5/25/03 4:14 AM, in article ·············@dtpq.com, "Christopher C.
Stacy" <······@dtpq.com> wrote:

>>>>>> On Sat, 24 May 2003 20:28:28 -0700, Gorbag  ("Gorbag") writes:
> 
> Gorbag> On 5/24/03 7:52 PM, in article ·············@dtpq.com, "Christopher C.
> Gorbag> Stacy" <······@dtpq.com> wrote:
> 
>>> The reason that it is described
>>> as a "single user" machine is that there was only provision for one user
>>> to access the window system and the keyboard.
> 
> Gorbag> Umm. Well, I think another reason might be the total
> Gorbag> lack of any protection between those users.
> 
> Gorbag> If you open a stream with a password, other users can get
> Gorbag> access to that stream. If you redefine CONS, it isn't just
> Gorbag> your programs that are affected. Generally "multiple user"
> Gorbag> machines try to make each user have a virtual single user
> Gorbag> machine; the LispM instead allowed multiple users to share
> Gorbag> resources (and not always politely :-), not get a virtual
> Gorbag> private machine.
> 
> Umm, well, gee, Gorbag, thanks for informing me about what
> a multiple user system is.  I never developed any operating
> systems for them before, and thanks for clarifying for me about
> what EVAL does and how the Lisp Machine works; somehow all
> these decades I haven't ever quite understood all that before.

[...]

> That fact alone should suggest that your premesis are wrong.
> But let's talk about multiple hostile applications, which
> I think is more along the lines of what you're referring to.

Well, in some sense, yes. Here are the specific examples I had in mind (from
personal experience):

1) Two users running eval servers on the machine (or one logged into the
console, another on supdup, etc.) could not load incompatible versions of
the same system. That's the kind of problem I'm alluding to above. Yes, you
do go on to explain how this *could* be handled, but in fact, at least
through Genera 8, it was NOT so handled. That is, I guess "hostile
applications" but certainly not at the design level, just two folks trying
to work on two different development releases with insufficient resources to
each "own" their own lispm.

2) (This could be considered a feature) It was relatively easy to supdup in
and fix a stuck machine even though as supdup user I didn't own any of the
processes or loaded systems. I could even recover zmacs buffers for the
stuck luser. Thanks for addressing how additional engineering effort on the
vendor's part might have "solved" this lack of security.

So, as you say, the LispM *could* have implemented various kinds of security
mechanisms (which, BTW, are a type of protection), as well as considerably
beefed up the notion of what is associated with a user (like loaded
systems). In fact I vaguely recall suggesting such a thing at the SLUG-88
meeting at the prompting of David Moon, to general laughter all around.

Nevertheless, without these things, I don't think the Lispm could be
honestly marketed as a "multi-user" system. The features demanded by the
market for such systems were not there, regardless of the spin you might
want to put on it. That they were capable of being multi-user is not in
doubt; but that's not what was being sold. I heard at an AI conference
recently that AI folks should stop claiming they invented everything, I
think the same needs to go for the Lispm companies. The Lisp machines were
wonderful and productive environments, but they didn't have everything. SMOP
is not the same as DID.
From: Christopher C. Stacy
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <uu1biyh09.fsf@dtpq.com>
Look, Gorbag, you have a good time explaining the Lisp
Machine to people.  I'm going on newsgroup vacation.
From: Harri Haataja
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <pan.2003.05.25.23.35.59.608775@cs.helsinki.fi>
Christopher C. Stacy wrote:
>>>>>> On Sat, 24 May 2003 20:28:28 -0700, Gorbag  ("Gorbag") writes:
>  Gorbag> On 5/24/03 7:52 PM, in article ·············@dtpq.com,
>  "Christopher C. Gorbag> Stacy" <······@dtpq.com> wrote:
> 
>  >> The reason that it is described
>  >> as a "single user" machine is that there was only provision for one
>  >> user to access the window system and the keyboard.
> 
>  Gorbag> Umm. Well, I think another reason might be the total Gorbag>
>  lack of any protection between those users.

> The Lisp Machine was intended as a single-user system, but it did not
> have "total lack of protection". This has been explained on this
> newsgroup in recent memory about ten times.

And not once have I seen a message that wasn't overly vague or that made any
sense.

But this was a very nice post to see.  It doesn't explain things, of course;
you'd have to know the system pretty well to judge it.  But IMHO here a
plausible claim was made about something that works and is not a "single-user
system" in anything other than some naïve sense -- one that ignores that
people could actually work on it and it could be worth something.

Much appreciated by me at least.
From: Barry Margolin
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <tRLAa.6$Y07.201@paloalto-snr1.gtei.net>
In article <·············@dtpq.com>,
Christopher C. Stacy <······@dtpq.com> wrote:
>The Lisp Machine was intended as a single-user system, 
>but it did not have "total lack of protection".

Your message describes lots of things that *could have* been done to
implement protection between users on a Lisp Machine, but the simple fact
is that none of them *were* done.

You mentioned that the Lisp memory model provides protection, but you
apparently forgot all about sub-primitives, which allow direct access to
raw memory (they're used by things like the GC and the VM implementation
itself).  In a conventional OS, access to operations that bypass OS
safeguards would be restricted to privileged mode (i.e. the kernel), but
the Lisp Machine had no notion of privileged mode AFAIK.

The Lisp memory model certainly reduces the types of errors that plagued
other single-user, unprotected systems like MacOS (pre-OSX) and early
Windows, where an error in one program could easily trash the entire
system.  As on systems with per-process protection, most errors would just
take down that one application.  But malicious applications, as well as
troubleshooting applications like (DDT), still have the full run of the
system.  Also, it wasn't unheard of for an application to get deadlocked
inside a WITHOUT-INTERRUPTS, which would lock up the entire system.

-- 
Barry Margolin, ··············@level3.com
Level(3), Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Madhu
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <m31xynrcva.fsf@robolove.meer.net>
Helu

* ······@dtpq.com (Christopher C. Stacy) <·············@dtpq.com> :
| >>>>> On Sat, 24 May 2003 12:18:30 GMT, Madhu  ("Madhu") writes:
|Madhu> * "Scott McKay" <·······················@rwcrnsc52.ops.asp.att.net> :
|Madhu> | The Symbolics Lisp Machine, of all the computers I have ever
|Madhu> | used, had the best implementation of large VM I have ever used.
|Madhu> This technology wouldnt carry over if say each lisp "process" was
|Madhu> given its own address space (like UNIX processes are), would it?
|Madhu> Was its performance on large VM the consequence of its shared
|Madhu> address space? (Other than a consequence of it being designed to
|Madhu> be bit efficient)
|
| "This technology" consists of algorithms implemented by programs.
| I don't know what it would mean to completely change around a 
| (not clearly identified) piece of software in some completely
| unspecified way in some vague context, and have it "carry over".

In an earlier article, which I quote below:

| From: "Scott McKay" <···@attbi.com>
| Message-ID: <·····················@sccrnsc03>
| Date: Fri, 18 Apr 2003 02:18:22 GMT
|...
| One of the things we killed ourselves on in the Lisp Machine OS
| was getting trap-handling fast and making the paging system work
| well.  I can't get any of my "modern" computers to work well if
| there is more than about twice as much virtual memory as there
| is physical memory.  On 3600s and Ivories, there would typically
| be around 2 to 4 megawords (10 to 20 megabytes), but the paging
| area would be 80 to 100 megabytes; this configuration performed
| quite well indeed, even in the presence of garbage collection on
| that virtual address space.
| I fear all this knowledge has been lost to the world.

It is mentioned that modern machines do not have good performance
in the face of large VMs while lisp machines did.

Now, modern computers tend not to have a shared address space,
while Lisp machines did.  Because of their multiple address
spaces, modern systems have separate page tables for each process
-- quickly leading to thrashing when using a huge amount of VM and
poor VM performance.  This has been the case despite huge
investments in cache research and incredible cache optimizations:
TLBs still thrash when you switch processes.

My question, as stated above: is the superior performance
of the Lisp Machines on huge VMs a direct consequence of avoiding
multiple address spaces? I suspect the answer is yes.

| But if you're asking if a good virtual memory system and process
| scheduler could be written for a different architecture,
| incorporating the ideas from Genera, the answer is: yes.
|
| What are you asking, and why are you asking?

I hope my question is clearer now. I felt that this multiple
address space design was a crippling factor, and that even if individual
ideas from Genera can be applied to modern systems, they wouldn't
make them quite as good. My interest is merely to ascertain whether
this is indeed the case.


Regards
Madhu

(cancelled, re-edited & re-posted)
--
From: Barry Margolin
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <kQ4za.3$Kf1.61@paloalto-snr1.gtei.net>
In article <············@ccs.neu.edu>, Joe Marshall  <···@ccs.neu.edu> wrote:
>While I appreciate your enthusiasm for the LispM, the virtual memory
>system wasn't that innovative.  The GC was aware of the VM, but the VM
>didn't really need to know much about the GC.  (The VM arranged for
>the GC to have a peek at a page as it was going out to disk, but I
>think that's about it.)  

I think there was also some interaction to support the ephemeral GC (but
maybe that's what you're referring to).

But I agree that the basic operation of the VM was not much different from
how systems have been doing it for decades.  It may have been somewhat
unusual at the time for a personal workstation to make use of VM, but
considering the typical memory needs of a Lisp application and the cost of
RAM in the 80's, it was hardly surprising that it needed it.

-- 
Barry Margolin, ··············@level3.com
Level(3), Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Joe Marshall
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <ptmbcaw1.fsf@ccs.neu.edu>
Barry Margolin <··············@level3.com> writes:

> In article <············@ccs.neu.edu>, Joe Marshall  <···@ccs.neu.edu> wrote:
> >While I appreciate your enthusiasm for the LispM, the virtual memory
> >system wasn't that innovative.  The GC was aware of the VM, but the VM
> >didn't really need to know much about the GC.  (The VM arranged for
> >the GC to have a peek at a page as it was going out to disk, but I
> >think that's about it.)  
> 
> I think there was also some interaction to support the ephemeral GC (but
> maybe that's what you're referring to).

Yes.

The idea was to ensure that a page likely to be scavenged (and
therefore paged in by the GC) wasn't going out to the disk, or if it
was, that it had the lowest ephemeral count possible.  The Lambda and
the 3600 differed here in that the Lambda used a flat table and the
3600 used a B-Tree to record the data.  Additionally, because the 3600
had tags even on `unboxed' storage, it could easily parse the heap
from an arbitrary page boundary.  The Lambda had an index indicating
the first valid object on each page.  Since the 3600 had a much bigger
address space than the Lambda, it couldn't use the table approach.
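
Roughly, the bookkeeping amounts to something like the following sketch (the
hash table and function names are invented; they stand in for the Lambda's
flat table or the 3600's B-tree):

  ;; Record, per page, the youngest ephemeral level referenced from that page,
  ;; so the pager can avoid evicting pages the scavenger is about to want.
  (defvar *page-ephemeral-level* (make-hash-table))  ; page number -> level, 0 = youngest

  (defun note-ephemeral-reference (page level)
    (setf (gethash page *page-ephemeral-level*)
          (min level (gethash page *page-ephemeral-level* most-positive-fixnum))))

  (defun keep-resident-p (page)
    ;; Consulted as a page is about to go out to disk.
    (let ((level (gethash page *page-ephemeral-level*)))
      (and level (< level 2))))    ; references very young objects => try to keep it in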
From: Thien-Thi Nguyen
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <7gfzn7z278.fsf@gnufans.net>
"Franz Kafka" writes:

> In an ideal Lisp Machine the compiler would have
> been implemented in hardware--(the GC, type checking,
> support for Lists, support for CLOS classes), I don't
> know how ideal the actual processors were.

well, what you don't know is a good place to start hacking.
why not look at "compilation" in its small pieces (e.g.,
lexing, parsing, high-level optimizations, allocation (but to
what? ;-), low-level optimizations, runtime hooks, etc), and
see if you can map some of these down to RTL?

some people use "pre-compiled headers" to get around one of
the more painful bottlenecks in the compilation process, can
you sidestep that w/ a "parallel-read" approach?  (nb: this is
made much easier by the regular syntax of the input!)

thi
From: Barry Margolin
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <Sa9za.8$Kf1.308@paloalto-snr1.gtei.net>
In article <··············@gnufans.net>,
Thien-Thi Nguyen  <···@glug.org> wrote:
>"Franz Kafka" writes:
>
>> In an ideal Lisp Machine the compiler would have
>> been implemented in hardware--(the GC, type checking,
>> support for Lists, support for CLOS classes), I don't
>> know how ideal the actual processors were.
>
>well, what you don't know is a good place to start hacking.
>why not look at "compilation" in its small pieces (e.g.,
>lexing, parsing, high-level optimizations, allocation (but to
>what? ;-), low-level optimizations, runtime hooks, etc), and
>see if you can map some of these down to RTL?

You don't want to do *too* much in hardware, because hardware is much
harder to change than software.  For instance, before CLOS, Lisp Machines
had Flavors.  If too much of the support for Flavors had been embedded in
the hardware, it would have made it harder for them to convert to CLOS.

What you generally want to do in hardware is handle low-level features that
occur very frequently and have significant performance impact.  Since GC is
often a bottleneck in Lisp, it's a prime candidate for hardware support.
Arithmetic is a simple, low-level facility, but Lisp's polymorphism often
results in performance problems compared to other languages (unless you
make heavy use of declarations), so it was another good place for hardware
acceleration.

CLOS, on the other hand, is a relatively high level facility.  While the
performance of method dispatch certainly isn't negligible, it's usually
possible to optimize it well enough in software.  Most methods are long
enough that the dispatch only amounts to a small percentage of the overall
time.  Compare that with arithmetic, where it might take several
instructions on a conventional CPU to dispatch to a single ADD instruction.
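
You can see the arithmetic point even in portable Lisp (an illustration only,
not LispM code): without declarations the compiler has to dispatch on the
operand types at run time; with declarations it can emit the bare add that
tagged hardware gave you for free.

  (defun add-generic (a b)
    ;; A and B could be fixnums, bignums, ratios, floats, complexes... so +
    ;; has to check types before it can do the actual addition.
    (+ a b))

  (defun add-fixnums (a b)
    (declare (fixnum a b)
             (optimize (speed 3) (safety 0)))
    ;; With the types pinned down, a good compiler emits (roughly) a single
    ;; ADD instruction.  (Overflow is ignored here for the sake of the example.)
    (the fixnum (+ a b)))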

-- 
Barry Margolin, ··············@level3.com
Level(3), Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Thien-Thi Nguyen
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <7g3cj64t67.fsf@gnufans.net>
Barry Margolin <··············@level3.com> writes:

> You don't want to do *too* much in hardware, because hardware is
> much harder to change than software. [...]

true, but the exercise of designing hardware may be challenging
enough for the foolhardy to really learn something in the process
(e.g., how to condense "can be could be should be" postings on
usenet into constructive action... ;-)

ok, now to take my own medicine...

thi
From: Tim Bradshaw
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <ey3u1bnuthe.fsf@cley.com>
* Franz Kafka wrote:

> In an ideal Lisp Machine the compiler would have
> been implemented in hardware--(the GC, type checking,
> support for Lists, support for CLOS classes), I don't
> know how ideal the actual processors were.

> The cool thing about them is that you could rewrite the
> OS if you knew Lisp. Linux is written in C. However,
> since Lisp is interactive--you could interactivly rewrite, or
> debug parts of the OS.

But, presumably, in your `ideal' world, if there was a bug in the
compiler you'd have to change the hardware - produce a new version of
the processor, and swap out all the ones in the field.

The really disturbing thing is that you can read apparently serious
papers written by people in the 70s and 80s where they suggest
more-or-less this.  Hmmm.

--tim
From: Franz Kafka
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <aL3za.8738$d33.3091@news01.roc.ny.frontiernet.net>
"Tim Bradshaw" <···@cley.com> wrote in message
····················@cley.com...
> * Franz Kafka wrote:
>
> > In an ideal Lisp Machine the compiler would have
> > been implemented in hardware--(the GC, type checking,
> > support for Lists, support for CLOS classes), I don't
> > know how ideal the actual processors were.
>
> > The cool thing about them is that you could rewrite the
> > OS if you knew Lisp. Linux is written in C. However,
> > since Lisp is interactive--you could interactivly rewrite, or
> > debug parts of the OS.
>
> But, presumably, in your `ideal' world, if there was a bug in the
> compiler you'd have to change the hardware - produce a new version of
> the processor, and swap out all the ones in the field.
>
> The really disturbing thing is that you can read apparently serious
> papers written by people in the 70s and 80s where they suggest
> more-or-less this.  Hmmm.
>
> --tim

The processor would have to be reconfigurable--so that we would
not have to swap processors to change the chip. If it were reprogrammable
using a HLL like VHDL or better yet something
in Lisp it would be a very powerful world building tool.

Only primitives & maybe call/cc (because that can build all of
the control structures) should be emulated in hardware. But the
chip should allow run-time rewiring so that new ideas about
Lisp/AI could be tested.

It definitely would be a single-user machine, and definitely would not allow
people to change the chip over the net. But on the local machine,
maybe with a key-lock interface or a jumper-switch setting, the
chip could be reprogrammed using Lisp.

Boy wouldn't that be cool. :-)
From: Tim Bradshaw
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <ey3brxvunyl.fsf@cley.com>
* Franz Kafka wrote:
> The processor would have to be reconfigurable--so that we would
> not have to swap processors to change the chip. If it were reprogrammable
> using a HLL like VHDL or better yet something
> in Lisp it would be a very powerful world building tool.

This is called `user microcode', with HW support.  These machines
existed, of course.  I've used many such, and seen all of them
scrapped, after the RISC machines comprehensively ate their lunch.

--tim
From: Franz Kafka
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <DO6za.799$df.381@news02.roc.ny.frontiernet.net>
"Tim Bradshaw" <···@cley.com> wrote in message
····················@cley.com...
> * Franz Kafka wrote:
> > The processor would have to be reconfigurable--so that we would
> > not have to swap processors to change the chip. If it were reprogrammable
> > using a HLL like VHDL or better yet something
> > in Lisp it would be a very powerful world building tool.
>
> This is called `user microcode', with HW support.  These machines
> existed, of course.  I've used many such, and seen all of them
> scrapped, after the RISC machines comprehensively ate their lunch.
>
> --tim

It would be used like a prototyping board--to test out new architectures,
and maybe to play around with the latest and greatest.

If it was done right you could turn it into any machine you wanted.


Logic Designer <--> Microcode Designer <--> OS Designer
        ^
        |
        v
   VM Designer


Recodable hardware
Microcoded Lisp-1 or Lisp-2
OS kernel built in Lisp.
Tools built on the OS kernel, which is a Lisp program.

Benefits:
You can modify the OS, the hardware, and the compiler knowing only
Lisp.
No need to learn a machine language; if it is done right, Lisp
would be the machine language, the assembly language, and
the microcode language (the primitives would be on a reprogrammable
chip--and thus modifiable using Lisp).

Costs:
Making the microcode functions available in Lisp.
Needing to learn Lisp.

This would be perfect for research in high-level architecture and
OS design.
From: David Steuber
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <87fzn6ob5t.fsf@verizon.net>
"Franz Kafka" <Symbolics _ XL1201 _ Sebek _ Budo _ Kafka @ hotmail . com> writes:

> "Tim Bradshaw" <···@cley.com> wrote in message
> ····················@cley.com...
> >
> > This is called `user microcode', with HW support.  These machines
> > existed, of course.  I've used many such, and seen all of them
> > scrapped, after the RISC machines comprehensively ate their lunch.
> 
> It would be used like a prototyping board--to test out new architectures,
> and maybe to play around with the latest and greatest.
> 
> If it was done right you could turn it into any machine you wanted.

IMHO, CPUs should have the simplest possible design to do the job they
need to do (like handling VM and such as well as executing
instructions).  By simple, I mean the fewest possible logic gates.  A
good number of registers is nice for performance.

Intel has worked very hard at increasing transistor density and
switching speed.  While this works, I think switching speed is more
important.  I would think (I don't know) that a reduced number of
transistors could allow high speed switching without the cost of heat
dissipation.  I really hate the fan noise on my dual Athlon box.  My
PowerBook G4 is very quiet (although it gets quite hot).

Anyway, I like the idea of keeping the hardware as simple as possible
(but not simpler) and putting all the complexity in the software.  Then
again, I am not a chip designer.

-- 
(describe 'describe)
From: Burton Samograd
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <87ptmbdqo9.fsf@kruhft.vc.shawcable.net>
Tim Bradshaw <···@cley.com> writes:
> But, presumably, in your `ideal' world, if there was a bug in the
> compiler you'd have to change the hardware - produce a new version of
> the processor, and swap out all the ones in the field.
> 
> The really disturbing thing is that you can read apparently serious
> papers written by people in the 70s and 80s where they suggest
> more-or-less this.  Hmmm.

Weren't the processor instructions all written in microcode?  If that
was the case, updating the processor would be as simple as re-flashing
the store, no case disassembly required.  I've read some current
research that has been going a similar way, with auto-reconfiguring
FPGA systems that create custom "processors" for specific tasks as
required by the system.  I don't think they replace general-purpose
processors, but the idea of having a programmable add-on card on a
high-bandwidth bus would be a great thing for PCs (by allowing the
creation of things like audio/video processors, and, well... alternate
language execution architectures).

-- 
burton samograd
······@kruhft.dyndns.org
http://kruhftwerk.dyndns.org
From: Espen Vestre
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <kwd6ibuslv.fsf@merced.netfonds.no>
"Franz Kafka" <Symbolics _ XL1201 _ Sebek _ Budo _ Kafka @ hotmail . com> writes:

> & all of the tools to use Lisp were written in Lisp. Disk Access, Network
> Access, Graphics, I/O, File Access, Compiler Tools, Editors, Debugers. And
> they all ran at once.
> 
> So you could edit/debug/run Lisp code in the same environment.

That wasn't always as much fun as it sounds. I remember when my lisp
image on a Xerox 1186 got FUBAR and I had to copy a fresh image onto
the Slowest Disk Ever Known To Mankind. Ouch.

> It would stare those myths in their face and say, byte me you myth Lisp can
> do a lot--look at me, I'm an OS & I'm written in Lisp.

A serious suggestion: Instead of dreaming of lisp machines, you'd
have more fun programming lisp in one of the modern lisp environments.
(I assume you haven't done much real programming in lisp yet?)

Maybe you'd even come up with a really cool application that could
help getting rid of those myths.
-- 
  (espen)
From: Björn Lindberg
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <hcs3cj7np4d.fsf@knatte.nada.kth.se>
"Franz Kafka" <Symbolics _ XL1201 _ Sebek _ Budo _ Kafka @ hotmail . com> writes:

> The basics is this.
> 
> The OS of a Lisp Machine is written in Lisp.
> 
> In an ideal Lisp Machine the compiler would have
> been implemented in hardware--(the GC, type checking,
> support for Lists, support for CLOS classes), I don't
> know how ideal the actual processors were.
> 
> But, with VLSI tools some one could make that ideal CPU,
> or embed it in a board for Mac/Win/Linux.
> 
> The cool thing about them is that you could rewrite the
> OS if you knew Lisp. Linux is written in C. However,
> since Lisp is interactive--you could interactivly rewrite, or
> debug parts of the OS.
> 
> & all of the tools to use Lisp were written in Lisp. Disk Access, Network
> Access, Graphics, I/O, File Access, Compiler Tools, Editors, Debugers. And
> they all ran at once.
> 
> So you could edit/debug/run Lisp code in the same environment.

Did all these things along with user programs reside in the same Lisp
image? If so, how about name collisions?

I gather that the Lisp machines were not multi-user machines?


Björn
From: Joe Marshall
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <fzn7dsov.fsf@ccs.neu.edu>
·······@nada.kth.se (Björn Lindberg) writes:

> I gather that the Lisp machines were not multi-user machines?

They were single-user workstations.
From: Björn Lindberg
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <hcsy90zm6l7.fsf@knatte.nada.kth.se>
Joe Marshall <···@ccs.neu.edu> writes:

> ·······@nada.kth.se (Björn Lindberg) writes:
> 
> > I gather that the Lisp machines were not multi-user machines?
> 
> They were single-user workstations.

So, just as a thought experiment, to have a multi-user Lisp machine, I
guess one would have to implement some kind of access restrictions
between packages. "Kernel space" would be the most access restricted
packages, and each user's programs would live in packages with access
permissions for that user. Do those of you with Lisp machine
experience think that this could actually have been possible?


Björn
From: Joe Marshall
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <k7cjcatg.fsf@ccs.neu.edu>
·······@nada.kth.se (Björn Lindberg) writes:

> Joe Marshall <···@ccs.neu.edu> writes:
> 
> > ·······@nada.kth.se (Björn Lindberg) writes:
> > 
> > > I gather that the Lisp machines were not multi-user machines?
> > 
> > They were single-user workstations.
> 
> So, just as a thought experiment, to have a multi-user Lisp machine, I
> guess one would have to implement some kind of access restrictions
> between packages. "Kernel space" would be the most access restricted
> packages, and each user's programs would live in packages with access
> permissions for that user. Do those of you with Lisp machine
> experience think that this could actually have been possible?

I think one would have to go further than that.  Suppose both you and
I wanted to run Macsyma, and you wanted the floating point precision
set to 32 digits, but I wanted the print base set to 8.  We really
couldn't even share the package.
From: Franz Kafka
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <iT6za.800$fg.161@news02.roc.ny.frontiernet.net>
"Joe Marshall" <···@ccs.neu.edu> wrote in message
·················@ccs.neu.edu...
> ·······@nada.kth.se (Björn Lindberg) writes:
>
> > Joe Marshall <···@ccs.neu.edu> writes:
> >
> > > ·······@nada.kth.se (Björn Lindberg) writes:
> > >
> > > > I gather that the Lisp machines were not multi-user machines?
> > >
> > > They were single-user workstations.
> >
> > So, just as a thought experiment, to have a multi-user Lisp machine, I
> > guess one would have to implement some kind of access restrictions
> > between packages. "Kernel space" would be the most access restricted
> > packages, and each user's programs would live in packages with access
> > permissions for that user. Do those of you with Lisp machine
> > experience think that this could actually have been possible?
>
> I think one would have to go further than that.  Suppose both you and
> I wanted to run Macsyma, and you wanted the floating point precision
> set to 32 digits, but I wanted the print base set to 8.  We really
> couldn't even share the package.

unless the OS was truly OO and each user was an instance that
could modify the OS to their needs, while the base OS object
remained unchanged--and could be re-instantiated if the
need ever arose.

OS Kernel Object
--* user1 instance (user1 prefs)
    --* user1 new instance (new methods for OS functions)
--* user2 instance (user2 prefs)
--* user3 instance (user3 adds an experimental VM system)
    --* user1 makes an instance of user3's VM to use the new VM

OS Kernel Object + Standard OS object = Lisp OS with support
for many users, and the ability to be extended via OO means.

Lisp & CLOS provide the needed language to describe this. :)
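
Something like this, in toy CLOS (all the names are made up, just to show the
shape of the idea):

  (defclass os-kernel () ())             ; the base OS object, shared by everyone

  (defclass user-session (os-kernel)     ; each user is an instance
    ((user  :initarg :user  :reader session-user)
     (prefs :initarg :prefs :initform '() :accessor session-prefs)))

  (defgeneric handle-page-fault (session address))

  (defmethod handle-page-fault ((session os-kernel) address)
    (declare (ignore address))
    :default-paging)                     ; base behaviour, unchanged for everyone

  ;; user3 plugs in an experimental VM just by subclassing:
  (defclass experimental-vm-session (user-session) ())

  (defmethod handle-page-fault ((session experimental-vm-session) address)
    (declare (ignore address))
    :experimental-paging)

  ;; and user1 can make an instance of user3's VM to try it out:
  ;; (make-instance 'experimental-vm-session :user "user1")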
From: David Steuber
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <87addeoaog.fsf@verizon.net>
"Franz Kafka" <Symbolics _ XL1201 _ Sebek _ Budo _ Kafka @ hotmail . com> writes:

> unless the OS was truly OO and each user was an instance that
> could modify the OS to their needs, while the base OS object
> remained unchanged--and could be re-instantiated if the
> need ever arose.
> 
> OS Kernel Object
> --* user1 instance (user1 prefs)
>     --* user1 new instance (new methods for OS functions)
> --* user2 instance (user2 prefs)
> --* user3 instance (user3 adds an experimental VM system)
>     --* user1 makes an instance of user3's VM to use the new VM
> 
> OS Kernel Object + Standard OS object = Lisp OS with support
> for many users, and the ability to be extended via OO means.
> 
> Lisp & CLOS provide the needed language to describe this. :)

If you successfully implement this concept on current hardware, then I
think people may come knocking at your door.

I've done my share of hand waving in the past.  In my experience, code
talks, everything else goes to /dev/null.

-- 
(describe 'describe)
From: Frode Vatvedt Fjeld
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <2hhe7njb8w.fsf@vserver.cs.uit.no>
Joe Marshall <···@ccs.neu.edu> writes:

> I think one would have to go further than that.  Suppose both you
> and I wanted to run Macsyma, and you wanted the floating point
> precision set to 32 digits, but I wanted the print base set to 8.
> We really couldn't even share the package.

Presumably you'd have some kind of threads and thread-local bindings
that would deal with the problem of different users/threads having
their own values for *print-base* etc?
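
Something like this, I mean (RUN-AS-USER is just a made-up wrapper, and the
thread API is left out since the standard doesn't define one; the point is
that a dynamic binding is per-thread/per-stack-group):

  (defun run-as-user (thunk &key (print-base 10))
    ;; Rebind the per-user settings dynamically around the user's work; the
    ;; binding is local to this thread, so no other user ever sees it.
    (let ((*print-base* print-base))
      (funcall thunk)))

  ;; (run-as-user (lambda () (print 255)) :print-base 8)  ; prints 377
  ;; another user running concurrently still prints in base 10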

-- 
Frode Vatvedt Fjeld
From: Joe Marshall
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <el2qdcd0.fsf@ccs.neu.edu>
Frode Vatvedt Fjeld <······@cs.uit.no> writes:

> Joe Marshall <···@ccs.neu.edu> writes:
> 
> > I think one would have to go further than that.  Suppose both you
> > and I wanted to run Macsyma, and you wanted the floating point
> > precision set to 32 digits, but I wanted the print base set to 8.
> > We really couldn't even share the package.
> 
> Presumably you'd have some kind of threads and thread-local bindings
> that would deal with the problem of different users/threads having
> their own values for *print-base* etc?

Yes, but you'd have to go even further.  We would have to be isolated
from modifications to any shared structure.
From: Frode Vatvedt Fjeld
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <2h3cj6k8na.fsf@vserver.cs.uit.no>
Joe Marshall <···@ccs.neu.edu> writes:

> Frode Vatvedt Fjeld <······@cs.uit.no> writes:
>
>> Presumably you'd have some kind of threads and thread-local
>> bindings that would deal with the problem of different
>> users/threads having their own values for *print-base* etc?
>
> Yes, but you'd have to go even further.  We would have to be
> isolated from modifications to any shared structure.

Do you see this as an unavoidable problem, or just a problem with
legacy software that makes undue use of global state? I mean, my hunch
is that if you follow some not too intrusive guidelines for
maintaining your application's state ("you" being the application
programmer, and the guidelines being something like "reference all
global state through some small number of thread-local bindings"),
this could work well. I'm not quite sure how feasible this approach is
in terms of CL programming techniques you have to abandon, though. For
example, I suppose some uses of symbol-plists couldn't be used.

On the other hand, if you want/need to give each user the illusion of
having their own lisp world (and probably if you want to run multiple
instances of legacy applications you'd need this), you'd need much
more drastic measures.

-- 
Frode Vatvedt Fjeld
From: Joe Marshall
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <3cj5ddl2.fsf@ccs.neu.edu>
Frode Vatvedt Fjeld <······@cs.uit.no> writes:

> Joe Marshall <···@ccs.neu.edu> writes:
> 
> > Frode Vatvedt Fjeld <······@cs.uit.no> writes:
> >
> >> Presumably you'd have some kind of threads and thread-local
> >> bindings that would deal with the problem of different
> >> users/threads having their own values for *print-base* etc?
> >
> > Yes, but you'd have to go even further.  We would have to be
> > isolated from modifications to any shared structure.
> 
> Do you see this as an unavoidable problem, or just a problem with
> legacy software that makes undue use of global state? 

No, it's not unavoidable.

You originally suggested some sort of access restrictions between
packages.  I think that the package system is the wrong place to put
the kind of security that you need.

Let's start with the notion that you have your lisp machine and I have
mine and that whatever I do on mine doesn't affect yours.  This is the
ideal sort of multi-user system.  Now we want to start sharing
things.  We could virtualize the hardware and run simultaneously and
that gets us a step closer.  We could probably share a large number of
the packages and bindings provided that we either had immutable
symbols or `copy on write' symbols.  I could see sharing the CL
package, for example, but it would have to be designed to be shared in
this way.

Now things get kind of tricky.  For instance, if I modify CLOS via the
MOP, you don't want to see that.  If I create a SETF expansion on some
shared symbol, you don't want to see that, either.  There's a bunch of
`hidden global state' implied in Common Lisp that you need to make
explicit.  The package system is just one part of this.
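
For example (the names here are made up), a SETF expansion hangs off the
shared symbol itself:

  (defun head (x) (car x))

  ;; If I decide that (SETF (HEAD x) v) should mean RPLACA...
  (defsetf head (x) (v)
    `(progn (rplaca ,x ,v) ,v))

  ;; ...then every program sharing the symbol HEAD now sees that expansion;
  ;; there is no per-user copy to hide it behind.
  ;; (defvar *cell* (list 1 2 3))
  ;; (setf (head *cell*) 99)   ; *cell* is now (99 2 3)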

> I mean, my hunch is that if you follow some not too intrusive
> guidelines for maintaining your application's state ("you" being the
> application programmer, and the guidelines being something like
> "reference all global state through some small number of
> thread-local bindings"), this could work well.  I'm not quite sure
> how feasible this approach is in terms of CL programming techniques
> you have to abandon, though.  For example, I suppose some uses of
> symbol-plists couldn't be used.

This does work pretty well, actually.  Since the Lisp Machine had one
global address space, applications would end up tromping on each other
unless they were `well behaved', and most Lisp Machine programmers
learned how to avoid some of the less-friendly features.
From: Barry Margolin
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <OCKBa.2$Wb3.154@paloalto-snr1.gtei.net>
In article <············@ccs.neu.edu>, Joe Marshall  <···@ccs.neu.edu> wrote:
>Frode Vatvedt Fjeld <······@cs.uit.no> writes:
>> I mean, my hunch is that if you follow some not too intrusive
>> guidelines for maintaining your application's state ("you" being the
>> application programmer, and the guidelines being something like
>> "reference all global state through some small number of
>> thread-local bindings"), this could work well.  I'm not quite sure
>> how feasible this approach is in terms of CL programming techniques
>> you have to abandon, though.  For example, I suppose some uses of
>> symbol-plists couldn't be used.
>
>This does work pretty well, actually.  Since the Lisp Machine had one
>global address space, applications would end up tromping on each other
>unless they were `well behaved', and most Lisp Machine programmers
>learned how to avoid some of the less-friendly features.

Object-Oriented Programming is one of the reasons it works well.  In the
last few revisions of Genera, Symbolics introduced a very nice "Application
Framework" facility.  It included mechanisms to encapsulate state, as well
as a GUI builder, all based on CLOS, and they started converting some
applications into this model.

It's relatively easy to encapsulate application state this way -- it just
requires following a disciplined programming style, made easier with good
tools.  Encapsulating the entire programming environment, so that changes
to class definitions, packages, or function bindings by one user won't
affect another user, would have required much deeper redesign of the OS.
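
Schematically, the style looks like this (class and method names are invented;
this is not the actual framework API, just the flavor of it):

  ;; Application state lives in slots of an application instance instead of
  ;; in global special variables, so two users running two instances never
  ;; touch each other's state.
  (defclass calculator ()
    ((print-base :initform 10  :accessor calc-print-base)
     (history    :initform '() :accessor calc-history)))

  (defmethod show-result ((app calculator) value)
    (push value (calc-history app))
    (let ((*print-base* (calc-print-base app)))
      (print value)))

  ;; (show-result (make-instance 'calculator) 255)  ; each user has their own instance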

-- 
Barry Margolin, ··············@level3.com
Level(3), Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Tim Bradshaw
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <ey3r86q4nel.fsf@cley.com>
* Joe Marshall wrote:

> Yes, but you'd have to go even further.  We would have to be isolated
> from modifications to any shared structure.

I think this is copy-on-write pages, isn't it?

--tim
From: Joe Marshall
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <wughbyzf.fsf@ccs.neu.edu>
Tim Bradshaw <···@cley.com> writes:

> * Joe Marshall wrote:
> 
> > Yes, but you'd have to go even further.  We would have to be isolated
> > from modifications to any shared structure.
> 
> I think this is copy-on-write pages, isn't it?

Yes, but remember the garbage collector.
 
From: Tim Bradshaw
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <ey3he7nupjy.fsf@cley.com>
* Björn Lindberg wrote:

> Did all these things along with user programs reside in the same Lisp
> image? If so, how about name collisions?

Packages.
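
E.g. (package and function names invented):

  ;; Two programs can both define FROB in the same image without colliding,
  ;; because the names live in different packages.
  (defpackage :editor (:use :cl))
  (defpackage :mail   (:use :cl))

  (defun editor::frob (x) (* x 2))
  (defun mail::frob   (x) (+ x 1))

  ;; EDITOR::FROB and MAIL::FROB are distinct symbols, hence distinct functions.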

--tim
From: Burton Samograd
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <87of1wf6d5.fsf@kruhft.vc.shawcable.net>
David Steuber <·············@verizon.net> writes:

> I would be interested in knowing what is so special about these
> machines.  There seem to be many fond memories of them.  I would be
> particularly interested in screen shots and photos of the hardware.

Check out the Symbolics Lisp Machine Museum:

http://kogs-www.informatik.uni-hamburg.de/~moeller/symbolics-info/symbolics.html

-- 
burton samograd
······@kruhft.dyndns.org
http://kruhftwerk.dyndns.org
From: David Steuber
Subject: Re: Tell Me About Lisp Machines
Date: 
Message-ID: <87ptmcdmvh.fsf@verizon.net>
Burton Samograd <······@hotmail.com> writes:

> Check out the Symbolics Lisp Machine Museum:
> 
> http://kogs-www.informatik.uni-hamburg.de/~moeller/symbolics-info/symbolics.html

Looks neat!  Thanks.

-- 
(describe 'describe)