From: Andreas Davour
Subject: LimpM, why the umbilical and the storage arrangement with the PDP10?
Date: 
Message-ID: <cs93b2cepjg.fsf@Psilocybe.Update.UU.SE>
Hi.

I know there are some people around here who know a lot about the
making of the various lisp machines. I have done some research and found
a few confusing concepts.

Why was the MIT machine designed with an umbilical to a PDP? Was that
just an early prototype which was thus bootstrapped, or were they only
sold to customers who already had a PDP around?

A related question concerns the storage model. I've read Richard
Greenblatt's paper about the features of the CONS and he mentions
storage on another machine as a design feature. I'm not sure I
follow. Why is that good? Is there some excellent insight to that design
that I'm missing? I only know lisp, and no electrical and digital
engineering and could possibly miss a lot.

Thanks for any answers you can provide!

/Andreas

-- 
A: Because it fouls the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?

From: Kent M Pitman
Subject: Re: LimpM, why the umbilical and the storage arrangement with the PDP10?
Date: 
Message-ID: <uk5vozj2w.fsf@nhplace.com>
Andreas Davour <····@update.uu.se> writes:

> I know there are some people around here who know a lot about the
> making of the various lisp machines. I have done some research and found
> a few confusing concepts.
> 
> Why was the MIT machine designed with an umbilical to a PDP? Was that
> just an early prototype which was thus bootstrapped, or were they only
> sold to customers who already had a PDP around?

I wasn't involved in this at the time, but my sense of it was that
since Lisp Machine compiled itself, it had to be compiled on a
previous machine.  So they wrote a bootstrap system.  It was not sold
with PDP machines around, except insofar as such machines were
accidentally common and possible to talk to over the lisp machine's
transparently networked file system.

Subsequent revs of the LispM software were done by compiling on an
extant machine into a little pod of code that had enough stuff that it
could be kickstarted via something called a "breath of life" kick, not
unlike how animals are birthed live.  Once alive, the BOL would
compile and load the rest of the Lisp Machine system code, so that to
the extent possible, a given version had compiled itself.  Then when it
had all the code in place, it would be dumped into a bootable world.
I hope I've gotten that right.  I wasn't centrally involved in this 
process and only got into it occasionally to make a small patch.
But basically, this technique (modulo some specific exceptions to make 
compilation order work) meant you had the full language available to 
write the language, since the initial small world was compiled by
a full running world, and the little world that compiled the rest had
substantially all of lisp available from the outset.  

(I think the Midas assembler for the PDP10 did this, and had to have a
previous version of itself around in order to compile.)

> A related question concerns the storage model. I've read Richard
> Greenblatt's paper about the features of the CONS and he mentions
> storage on another machine as a design feature. I'm not sure I
> follow. Why is that good? Is there some excellent insight to that design
> that I'm missing? I only know lisp, and no electrical and digital
> engineering and could possibly miss a lot.

I haven't read the paper.  But I think it was routine, especially in
the early days, to run a patch cable between two machines and there
was a way you could run a debugger on a working machine to look at
memory on one that was broken.  That was a lot better than running the
debugger on the broken machine. :) I don't know if that's what you're
referring to or not.

After a while, the lisp machine debugger and operating system were
robust enough that it was rare for the machine to be unable to debug
itself.  There were occasional lapses, like when Dynamic
Windows (a new window system, similar to CLIM) went in and the window
system kept crashing while they were debugging it, where an external
debugger would have helped.  But usually the debuggers were good
enough to debug even the running operating system.  There were several
levels of such debuggers, and most were way more featureful than most
other debuggers I've used... owing mostly to tagged data.  Standard
debuggers have the problem that when wrongly typed data gets passed from
badly linked programs, you can't tell what that data was, so it's hard
to know where it came from.  If tags are primitive to the entire
architecture, it's easy for a debugger to show you WHAT you got and it's
much easier to figure out WHY if you know WHAT....
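
(To make that concrete with a rough modern analogy--this is portable
Common Lisp, not LispM internals, and the function name is made up for
illustration: because every object still carries its type at runtime,
even an ordinary error handler can report WHAT it was handed.  A
minimal sketch:)

  ;; Because the datum is tagged, the condition can report what it is.
  (defun expect-a-number (x)
    (handler-case
        (check-type x number)   ; signals a TYPE-ERROR if X isn't a number
      (type-error (c)
        (format t "Got ~S (a ~S); expected ~S~%"
                (type-error-datum c)
                (type-of (type-error-datum c))
                (type-error-expected-type c)))))

  ;; (expect-a-number "foo") prints something like:
  ;;   Got "foo" (a (SIMPLE-ARRAY CHARACTER (3))); expected NUMBER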

> Thanks for any answers you can provide!

I hope this helps.  I'm not the best person to answer these questions 
since I was not in the first generation of people designing or using
Lisp Machines.  I started using CADRs around 1981, and 3600's a couple
years after that.  But my experience was mostly as a user, not an 
implementor, until later in the 1980's... and even then I tended toward
the application side, not the low-levels.  Nevertheless, I am recording
these thoughts for you mostly so that you have at least something to read
if no one else replies.  Someone with more accurate knowledge should
feel free to correct my impressions.
From: Andreas Davour
Subject: Re: LimpM, why the umbilical and the storage arrangement with the PDP10?
Date: 
Message-ID: <cs9vef8csm3.fsf@Psilocybe.Update.UU.SE>
Kent M Pitman <······@nhplace.com> writes:

> Andreas Davour <····@update.uu.se> writes:
>
>> I know there are some people around here who know a lot about the
>> making of the various lisp machines. I have done some research and found
>> a few confusing concepts.
>> 
>> Why was the MIT machine designed with an umbilical to a PDP? Was that
>> just an early prototype which was thus bootstrapped, or were they only
>> sold to customers who already had a PDP around?
>
> I wasn't involved in this at the time, but my sense of it was that
> since Lisp Machine compiled itself, it had to be compiled on a
> previous machine.  So they wrote a bootstrap system.  It was not sold
> with PDP machines around, except insofar as such machines were
> accidentally common and possible to talk to over the lisp machine's
> transparently networked file system.

Makes sense. I think that network file system is at the core of my
second item.

>> A related question concerns the storage model. I've read Richard
>> Greenblatt's paper about the features of the CONS and he mentions
>> storage on another machine as a design feature. I'm not sure I
>> follow. Why is that good? Is there some excellent insight to that design
>> that I'm missing? I only know lisp, and no electrical and digital
>> engineering and could possibly miss a lot.
>
> I haven't read the paper.  But I think it was routine, especially in
> the early days, to run a patch cable between two machines and there
> was a way you could run a debugger on a working machine to look at
> memory on one that was broken.  That was a lot better than running the
> debugger on the broken machine. :) I don't know if that's what you're
> referring to or not.

I don't think so. The networked file system is what I was referring
to. At least on LMI machines (I've only used a Symbolics 3600 myself)
I've gotten the impression that you didn't store anything locally on
disk except the world and OS sources and everything else went on the PDP
via the networked filesystem. It might have been a wrong impression on
my part, or something that was just present on the early machines, even
though I think the CADR did use it.

>> Thanks for any answers you can provide!
>
> I hope this helps.  I'm not the best person to answer these questions 
> since I was not in the first generation of people designing or using
> Lisp Machines.  I started using CADRs around 1981, and 3600's a couple
> years after that.  But my experience was mostly as a user, not an 
> implementor, until later in the 1980's... and even then I tended toward
> the application side, not the low-levels.  Nevertheless, I am recording
> these thoughts for you mostly so that you have at least something to read
> if no one else replies.  Someone with more accurate knowledge should
> feel free to correct my impressions.

You sure know a lot for someone who's not the best person to ask, so
I'm very glad you posted! 

/andreas

-- 
A: Because it fouls the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
From: Kent M Pitman
Subject: Re: LimpM, why the umbilical and the storage arrangement with the PDP10?
Date: 
Message-ID: <ulkg49sls.fsf@nhplace.com>
Andreas Davour <····@update.uu.se> writes:

> >> A related question concerns the storage model. I've read Richard
> >> Greenblatt's paper about the features of the CONS and he mentions
> >> storage on another machine as a design feature. I'm not sure I
> >> follow. Why is that good? Is there some excellent insight to that design
> >> that I'm missing? I only know lisp, and no electrical and digital
> >> engineering and could possibly miss a lot.
> >
> > I haven't read the paper.  But I think it was routine, especially in
> > the early days, to run a patch cable between two machines and there
> > was a way you could run a debugger on a working machine to look at
> > memory on one that was broken.  That was a lot better than running the
> > debugger on the broken machine. :) I don't know if that's what you're
> > referring to or not.
> 
> I don't think so. The networked file system is what I was referring
> to. At least on LMI machines (I've only used a Symbolics 3600 myself)
> I've gotten the impression that you didn't store anything locally on
> disk except the world and OS sources and everything else went on the PDP
> via the networked filesystem. It might have been a wrong impression on
> my part, or something that was just present on the early machines, even
> though I think the CADR did use it.

The Lisp Machine had a transparent networked file system, by which I
mean the open function (you'd say "the open system call" on other
operating systems) understood primitively how to open a file on a machine
anywhere on the network.  

So one reason you stored things on other machines is "because you
could".  People had a home machine, and they kept their files there.
And then they grabbed any old Lisp Machine to do their work, and it
could get at their files because the files were not local to a disk
they couldn't use from another machine.

But the other subtlety that is probably not occurring to you, and that
is an artifact of the modern world and the way progress reduces
choice, is that the Lisp Machine operating systems didn't presuppose a
file system; those are orthogonal choices.  You sort of see this in
the issue of FAT32 vs NTFS on Windows NT, but the choice was
considerably more active on the Lisp Machine.  There were some LispMs
with 3 different file systems, each with their own filename syntax,
theory of locking, backup, file properties, etc... big differences,
not little ones.  They were competing in a kind of market capitalism
not seen in the modern world, in part because the magic of pathnames
allowed them to... most programs didn't mind the variation because you
could abstract away from them.  Users could decide they were going to
use a particular file system and name their files with strings, but
programs would manipulate the strings by parsing them and doing
merging and whatnot in a way that was abstract.
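
(For a flavor of what I mean by "parsing and merging", here is a
minimal sketch in portable Common Lisp, whose pathname functions
descend from the LispM ones; the directory and file names are invented
for the example:)

  ;; Programs build and transform filenames structurally, not by string
  ;; surgery, so the same code works whatever filename syntax the user's
  ;; chosen file system happens to use.
  (let* ((defaults (make-pathname :directory '(:absolute "proj" "src")
                                  :type "lisp"))
         ;; Fill in whatever the user left out of the name they typed.
         (source   (merge-pathnames "parser" defaults))
         ;; Derive the compiled-file name by changing one component.
         (compiled (make-pathname :type "fasl" :defaults source)))
    (list source compiled))
  ;; => (#P"/proj/src/parser.lisp" #P"/proj/src/parser.fasl") on a
  ;;    Unix-syntax host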
From: Rob Warnock
Subject: Re: LimpM, why the umbilical and the storage arrangement with the PDP10?
Date: 
Message-ID: <vLqdnS_UkLynn6HbnZ2dnUVZ_viunZ2d@speakeasy.net>
Kent M Pitman  <······@nhplace.com> wrote:
+---------------
| But the other subtlety that is probably not occurring to you, and that
| is an artifact of the modern world and the way progress reduces
| choice, is that the Lisp Machine operating systems didn't presuppose a
| file system; those are orthogonal choices.  You sort of see this in
| the issue of FAT32 vs NTFS on Windows NT, but the choice was
| considerably more active on the Lisp Machine.  There were some LispMs
| with 3 different file systems, each with their own filename syntax,
| theory of locking, backup, file properties, etc... big differences,
| not little ones.  They were competing in a kind of market capitalism
| not seen in the modern world...
+---------------

Actually, one still sees quite a bit of that even today; consider
UFS, EFS, EXT3, XFS, ZFS, ReiserFS-3 & -4, etc. I've seen machines
in the last week that had XFS, EXT2, EXT3, NFS, CIFS, FAT16, & ISO9660
filesystems all mounted at once.


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Kent M Pitman
Subject: Re: LimpM, why the umbilical and the storage arrangement with the PDP10?
Date: 
Message-ID: <u64776gow.fsf@nhplace.com>
····@rpw3.org (Rob Warnock) writes:

> Kent M Pitman  <······@nhplace.com> wrote:
> +---------------
> | But the other subtlety that is probably not occurring to you, and that
> | is an artifact of the modern world and the way progress reduces
> | choice, is that the Lisp Machine operating systems didn't presuppose a
> | file system; those are orthogonal choices.  You sort of see this in
> | the issue of FAT32 vs NTFS on Windows NT, but the choice was
> | considerably more active on the Lisp Machine.  There were some LispMs
> | with 3 different file systems, each with their own filename syntax,
> | theory of locking, backup, file properties, etc... big differences,
> | not little ones.  They were competing in a kind of market capitalism
> | not seen in the modern world...
> +---------------
> 
> Actually, one still sees quite a bit of that even today; consider
> UFS, EFS, EXT3, XFS, ZFS, ReiserFS-3 & -4, etc. I've seen machines
> in the last week that had XFS, EXT2, EXT3, NFS, CIFS, FAT16, & ISO9660
> filesystems all mounted at once.

Yeah, I didn't really mean there weren't kinds of file systems.  Maybe
I didn't put the right emphasis.  I would make the analogy to the
notion of selling a computer with a barebones configuration that
comes, in some cases, without an operating system.... it's rare, but
it's done--that is, the notion that hardware should be purchasable
without the decision made for you.  Or, as in the case of Windows, the
notion that a system should come without a browser, and you should
make your own choice rather than assume that because it's Windows you
use IE... it happens more now, but there's still a bias.  In that
sense, in those times, the operating system came without a file
system, and you loaded in your choice of file system--it wasn't given.
Yes, today, you can get an initial file system and add others, but
there isn't the sense of getting an operating system without a file
system and then adding one of your choice.

Well, depending on who you got it from, there was often a preferred
file system type... if you had one.  But it was, for a while, an option.
And many didn't use it.  It was in that sense ... unusual ... 
Or maybe you don't think it was, which is ok.  I'm just communicating
an emotional impression of what was unusual, and as with all such things,
others' mileage can vary.
From: Rainer Joswig
Subject: Re: LimpM, why the umbilical and the storage arrangement with the PDP10?
Date: 
Message-ID: <2007050511003816807-joswig@lispde>
On 2007-05-05 06:24:31 +0200, Kent M Pitman <······@nhplace.com> said:

> ····@rpw3.org (Rob Warnock) writes:
> 
>> Kent M Pitman  <······@nhplace.com> wrote:
>> +---------------
>> | But the other subtlety that is probably not occurring to you, and that
>> | is an artifact of the modern world and the way progress reduces
>> | choice, is that the Lisp Machine operating systems didn't presuppose a
>> | file system; those are orthogonal choices.  You sort of see this in
>> | the issue of FAT32 vs NTFS on Windows NT, but the choice was
>> | considerably more active on the Lisp Machine.  There were some LispMs
>> | with 3 different file systems, each with their own filename syntax,
>> | theory of locking, backup, file properties, etc... big differences,
>> | not little ones.  They were competing in a kind of market capitalism
>> | not seen in the modern world...
>> +---------------
>> 
>> Actually, one still sees quite a bit of that even today; consider
>> UFS, EFS, EXT3, XFS, ZFS, ReiserFS-3 & -4, etc. I've seen machines
>> in the last week that had XFS, EXT2, EXT3, NFS, CIFS, FAT16, & ISO9660
>> filesystems all mounted at once.
> 
> Yeah, I didn't really mean there weren't kinds of file systems.  Maybe
> I didn't put the right emphasis.  I would make the analogy to the
> notion of selling a computer with a barebones configuration that
> comes, in some cases, without an operating system.... it's rare, but
> it's done--that is, the notion that hardware should be purchasable
> without the decision made for you.  Or, as in the case of Windows, the
> notion that a system should come without a browser, and you should
> make your own choice rather than assume that because it's Windows you
> use IE... it happens more now, but there's still a bias.  In that
> sense, in those times, the operating system came without a file
> system, and you loaded in your choice of file system--it wasn't given.
> Yes, today, you can get an initial file system and add others, but
> there isn't the sense of getting an operating system without a file
> system and then adding one of your choice.
> 
> Well, depending on who you got it from, there was often a preferred
> file system type... if you had one.  But it was, for a while, an option.
> And many didn't use it.  It was in that sense ... unusual ...
> Or maybe you don't think it was, which is ok.  I'm just communicating
> an emotional impression of what was unusual, and as with all such things,
> others' mileage can vary.

I think the feeling was slightly different from today, because the
situations in which the machines were used were slightly different.

The Lisp Machines were often sold to (research) labs
with local area networks and some early connection to the outside.
It was quite typical that there was a user base with five to
ten developers working on one or more projects
(which was quite an expensive setup btw.). Many
labs had network administrators that helped with the
infrastructure. The Lisp Machine was not only for
developing code, but it was for editing documents, reading/writing
mail, they were terminals to other machines and so on.
Often they used one or more servers to share resources.
The machines shared resources on the LAN like fonts,
documentation databases, compiled application code,
OS patches, Lisp 'worlds', databases, administration
databases, etc.
You used to bring a new machine onto the network by giving
it, either locally or remotely, a Lisp world to boot and
then pointing it at the resources in the network. Then the
machine 'knew' about networks, gateways, databases,
users, filesystems and so on.

It was also not unusual for the networked file system to be faster
than the local one.
From: Tim Bradshaw
Subject: Re: LimpM, why the umbilical and the storage arrangement with the PDP10?
Date: 
Message-ID: <1178390308.814549.195320@n59g2000hsh.googlegroups.com>
On May 5, 5:24 am, Kent M Pitman <······@nhplace.com> wrote:
> In that
> sense, in those times, the operating system came without a file
> system, and you loaded in your choice of file system--it wasn't given.
> Yes, today, you can get an initial file system and add others, but
> there isn't the sense of getting an operating system without a file
> system and then adding one of your choice.

I agree with this, but just to play the devil's advocate a bit, it is
quite literally the case that some modern OS's don't have a filesystem
when they start, and you have to make some kind of choice.  For
instance, when Solaris boots the kernel initially doesn't know about
the filesystem and has to load various modules to be able to talk to
it at all.  Of course there's a bootstrap issue here - where does it
get the modules from?  The answer to that is (this is true for SPARC,
x86 differs slightly) the bootstrap program which loaded the kernel
*does* know how to talk to the filesystem (that's how it loaded the
kernel), and the kernel calls back into the bootstrap program to load
the bits it needs.

Currently there are, I think, only two filesystems supported for
booting Solaris - NFS and UFS - but there will soon be at least one
more: ZFS.

A modern Solaris system, when running, could easily be talking to 5
different filesystems: UFS, NFS (in several versions), ZFS, FAT, and
whatever CDs have.  And that excludes tmpfs and the various odd
special purpose filesystems (there are probably at least another 5 of
those).

One very important difference is that Unix systems typically dress all
these systems up as a standard Unix filesystem - they all have the
same pathname syntax (possibly with limits) etc.  So they all look
like the same thing.  That has tended to crush the life out of
filesystems with really alternative views of the world such as
versions etc.  Things like clearcase implement a versioned filesystem,
but to Unix the version is just part of the file name.  That's a shame
in a way, although I think it's probably a reasonable decision for an
OS to make.

--tim
From: Andreas Davour
Subject: Re: LimpM, why the umbilical and the storage arrangement with the PDP10?
Date: 
Message-ID: <cs9irb7mum1.fsf@Psilocybe.Update.UU.SE>
Kent M Pitman <······@nhplace.com> writes:

> Andreas Davour <····@update.uu.se> writes:
>
>> >> A related question concerns the storage model. I've read Richard
>> >> Greenblatt's paper about the features of the CONS and he mentions
>> >> storage on another machine as a design feature. I'm not sure I
>> >> follow. Why is that good? Is there some excellent insight to that design
>> >> that I'm missing? I only know lisp, and no electrical and digital
>> >> engineering and could possibly miss a lot.
>> >
>> > I haven't read the paper.  But I think it was routine, especially in
>> > the early days, to run a patch cable between two machines and there
>> > was a way you could run a debugger on a working machine to look at
>> > memory on one that was broken.  That was a lot better than running the
>> > debugger on the broken machine. :) I don't know if that's what you're
>> > referring to or not.
>> 
>> I don't think so. The networked file system is what I was referring
>> to. At least on LMI machines (I've only used a Symbolics 3600 myself)
>> I've gotten the impression that you didn't store anything locally on
>> disk except the world and OS sources and everything else went on the PDP
>> via the networked filesystem. It might have been a wrong impression on
>> my part, or something that was just present on the early machines, even
>> though I think the CADR did use it.
>
> The Lisp Machine had a transparent networked file system, by which I
> mean the open function (you'd say "the open system call" on other
> operating systems) understood primitively how to open a file on a machine
> anywhere on the network.  
>
> So one reason you stored things on other machines is "because you
> could".  People had a home machine, and they kept their files there.
> And then they grabbed any old Lisp Machine to do their work, and it
> could get at their files because the files were not local to a disk
> they couldn't use from another machine.
>
> But the other subtlety that is probably not occurring to you, and that
> is an artifact of the modern world and the way progress reduces
> choice, is that the Lisp Machine operating systems didn't presuppose a
> file system; those are orthogonal choices.

It all makes sense, but really hints at another world, another time and
other choices. I think I'm getting it.

/andreas

-- 
A: Because it fouls the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
From: Kent M Pitman
Subject: Re: LimpM, why the umbilical and the storage arrangement with the PDP10?
Date: 
Message-ID: <u8xc373sr.fsf@nhplace.com>
Andreas Davour <····@update.uu.se> writes:

> It all makes sense, but really hints at another world, another time and
> other choices. I think I'm getting it.

This is the underlying theme of a number of my posts here.  Not that
people necessarily want to return to that world.  I don't think one
could.  Sometimes I think when I talk about the way we used to do
things, people think I'm saying "those ways were better".  What I'm
really saying, though, is that on many occasions I'm just overwhelmed
by how arbitrary the current world is, and how much "choice" is
available if only exercised.

People take the State of the World as if it had been a process of
"convergence" on permanent and obvious truth--or sometimes they'll
even go so far as to christen that truth as Good (probably because it
resulted from processes they didn't understand and yet trust)--rather
than seeing the process as a more chaotic one that has merely paused
at the current state.  It awaits either the unlikely realization that
choice is all around, or else, more likely, an accidental kick in
another direction because something else has become the high order
bit; the seemingly permanent thing is then the hapless victim of what
will seem, as a result of its failure to take the opportunity for
control, like randomness.
From: Andreas Davour
Subject: Re: LimpM, why the umbilical and the storage arrangement with the PDP10?
Date: 
Message-ID: <cs9zm4jgsgx.fsf@Psilocybe.Update.UU.SE>
Kent M Pitman <······@nhplace.com> writes:

> Andreas Davour <····@update.uu.se> writes:
>
>> It all makes sense, but really hints at another world, another time and
>> another choices. I think I'm getting it.
>
> This is the underlying theme of a number of my posts here.  Not that
> people necessarily want to return to that world.  I don't think one
> could.  Sometimes I think when I talk about the way we used to do
> things, people think I'm saying "those ways were better".  What I'm
> really saying, though, is that on many occasions I'm just overwhelmed
> by how arbitrary the current world is, and how much "choice" is
> available if only exercised.

Sometimes it's someone like me, who wasn't around when "those ways" were
invented, that calls for a return to the "good old days". The world is
indeed arbitrary in its randomness. :)

BTW I just noticed my horrible misspelling in the subject. I just hate
this laptop keyboard...

/Andreas

-- 
A: Because it fouls the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
From: ············@gmail.com
Subject: Re: LimpM, why the umbilical and the storage arrangement with the PDP10?
Date: 
Message-ID: <1178523564.226013.188730@h2g2000hsg.googlegroups.com>
On May 4, 9:10 pm, Andreas Davour <····@update.uu.se> wrote:
> I don't think so. The networked file system is what I was referring
> to. At least on LMI machines (I've only used a Symbolics 3600 myself)
> I've gotten the impression that you didn't store anything locally on
> disk except the world and OS sources and everything else went on the PDP
> via the networked filesystem. It might have been a wrong impression on
> my part, or something that was just present on the early machines, even
> though I think the CADR did use it.

On LMI CADRs and Lambdas you certainly had paging, "world loads",
microcode and file systems locally. But only the first three types
were necessary, as far as I recall.

-- Bjorn
From: Christopher C. Stacy
Subject: Re: LimpM, why the umbilical and the storage arrangement with the PDP10?
Date: 
Message-ID: <yzld51g7ffl.fsf@news.dtpq.com>
Andreas Davour <····@update.uu.se> writes:
> Why was the MIT machine designed with an umbilical to a PDP? Was that
> just an early prototype which was thus bootstrapped, or were they only
> sold to customers who already had a PDP around?

Just the prototype, with a program on  the PDP-10 serving as a debug console.

> A related question concerns the storage model. I've read Richard
> Greenblatt's paper about the features of the CONS and he mentions
> storage on another machine as a design feature. I'm not sure I
> follow. Why is that good? Is there some excellent insight to that design
> that I'm missing? I only know lisp, and no electrical and digital
> engineering and could possibly miss a lot.

The same reason that people use network storage appliances now.
Also, most of the files that people wanted to access were
already on some other computer (a PDP-10) on the network.
From: Andreas Davour
Subject: Re: LimpM, why the umbilical and the storage arrangement with the PDP10?
Date: 
Message-ID: <cs9r6pwcsda.fsf@Psilocybe.Update.UU.SE>
······@news.dtpq.com (Christopher C. Stacy) writes:

> Andreas Davour <····@update.uu.se> writes:
>> Why was the MIT machine designed with an umbilical to a PDP? Was that
>> just an early prototype which was thus bootstrapped, or were they only
>> sold to customers who already had a PDP around?
>
> Just the prototype, with a program on  the PDP-10 serving as a debug console.

OK, then Kent's impressions were correct. Since I know you have
extensive ITS experience I guess you were in the middle of stuff as it
happened?

>> A related question concerns the storage model. I've read Richard
>> Greenblatt's paper about the features of the CONS and he mentions
>> storage on another machine as a design feature. I'm not sure I
>> follow. Why is that good? Is there some excellent insight to that design
>> that I'm missing? I only know lisp, and no electrical and digital
>> engineering and could possibly miss a lot.
>
> The same reason that people use network storage appliances now.
> Also, most of the files that people wanted to access were
> already on some other computer (a PDP-10) on the network.

The last I can understand, that it was a nice feature. But, I don't
understand why people use network storage appliances now!

Maybe you know to what extent this networked filesystem was used? Was it
like the ITS filesystem just for utility or was the local disc used for
storage of anything but the world image and the system sources?

I guess a few of these answers I'm seeking would be found if I could get
that CADR emulator running and just look around...

/andreas

-- 
A: Because it fouls the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
From: Barry Margolin
Subject: Re: LimpM, why the umbilical and the storage arrangement with the PDP10?
Date: 
Message-ID: <barmar-8E0C26.20444704052007@comcast.dca.giganews.com>
In article <···············@Psilocybe.Update.UU.SE>,
 Andreas Davour <····@update.uu.se> wrote:

> ······@news.dtpq.com (Christopher C. Stacy) writes:
> 
> > Andreas Davour <····@update.uu.se> writes:
> >> A related question concerns the storage model. I've read Richard
> >> Greenblatt's paper about the features of the CONS and he mentions
> >> storage on another machine as a design feature. I'm not sure I
> >> follow. Why is that good? Is there some excellent insight to that design
> >> that I'm missing? I only know lisp, and no electrical and digital
> >> engineering and could possibly miss a lot.
> >
> > The same reason that people use network storage appliances now.
> > Also, most of the files that people wanted to access were
> > already on some other computer (a PDP-10) on the network.
> 
> The last I can understand, that it was a nice feature. But, I don't
> understand why people use network storage appliances now!
> 
> Maybe you know to what extent this networked filesystem was used? Was it
> like the ITS filesystem just for utility or was the local disc used for
> storage of anything but the world image and the system sources?

There were a couple of reasons why networked filesystems were used:

1) LispM's were too expensive for every engineer to have their own.  
When I was at MIT (early 80's), I think there were about 4 in the EE 
department lab, and another half dozen or so in the AI lab.  Most files 
were kept on fileservers so you could use any available machine in the 
lab, rather than having to wait for the one with your files on it to 
become free.

2) Disk storage was really expensive in those days.  So it would have 
been prohibitively expensive to put much disk space on each machine.  
There were economies of scale available by putting large disks on 
central minicomputers and mainframes.

3) Even in environments where everyone has their own personal machine, 
it's common to work on group projects.  Or you write a program on one 
machine and then want someone else to be able to try it on theirs.  
Shared filesystems foster this type of collaboration.

4) It simplified the design of the OS by leaving out a fancy file 
system.  The network servers already had existing file systems, it was 
easy to take advantage of them.  Early Lispms just had a simple 
partitioning scheme -- one partition for the firmware, another for the 
Lisp image, another for paging, that's it.  Eventually native file 
systems were implemented, but there was no need for this in the early 
days when they could count on the existence of mainframes.

-- 
Barry Margolin, ······@alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
*** PLEASE don't copy me on replies, I'll read them in the group ***
From: Kent M Pitman
Subject: Re: LimpM, why the umbilical and the storage arrangement with the PDP10?
Date: 
Message-ID: <uslacnf0q.fsf@nhplace.com>
Barry Margolin <······@alum.mit.edu> writes:

> In article <···············@Psilocybe.Update.UU.SE>,
>  Andreas Davour <····@update.uu.se> wrote:
> 
> > ······@news.dtpq.com (Christopher C. Stacy) writes:
> > 
> > > Andreas Davour <····@update.uu.se> writes:
> > >> A related question concerns the storage model. I've read Richard
> > >> Greenblatt's paper about the features of the CONS and he mentions
> > >> storage on another machine as a design feature. I'm not sure I
> > >> follow. Why is that good? Is there some excellent insight to that design
> > >> that I'm missing? I only know lisp, and no electrical and digital
> > >> engineering and could possibly miss a lot.
> > >
> > > The same reason that people use network storage appliances now.
> > > Also, most of the files that people wanted to access were
> > > already on some other computer (a PDP-10) on the network.
> > 
> > The last I can understand, that it was a nice feature. But, I don't
> > understand why people use network storage appliances now!
> > 
> > Maybe you know to what extent this networked filesystem was used? Was it
> > like the ITS filesystem just for utility or was the local disc used for
> > storage of anything but the world image and the system sources?
> 
> There were a couple of reasons why networked filesystems were used:

I totally agree with Barry's observations here and wanted to add a few things
he reminded me of.
 
> 1) LispM's were too expensive for every engineer to have their own.  
> When I was at MIT (early 80's), I think there were about 4 in the EE 
> department lab, and another half dozen or so in the AI lab.  Most files 
> were kept on fileservers so you could use any available machine in the 
> lab, rather than having to wait for the one with your files on it to 
> become free.

The importance of this cannot be overstated.  To say this more
plainly, most people had on their desk a VT52 (a 24x80 crt screen,
connected to a mainframe) that they did most of their work on.  We
often shared access to Lisp Machines.  A few people had them
dedicated, but the rest of us (and as an undergrad, I was *definitely*
one) had to move into others' offices at night after they'd gone home
and borrow them until daylight.  The machines got nearly 24 hour use
that way, but one didn't always know which machine one would use
next.  This culture of physical openness was enhanced because,
valuable as these machines were (and they were extraordinarily so),
there was no place to steal them to.  No one could have carried them
out the door without being noticed, and no one would have wanted them
other than the people who already had access.  But still, people left
their doors open and expected others to borrow their offices in a way
that is simply not done any longer most places I've seen in the modern
world.  So Barry's point about using any available machine was really
a central issue.
 
> 2) Disk storage was really expensive in those days.  So it would have 
> been prohibitively expensive to put much disk space on each machine.  
> There were economies of scale available by putting large disks on 
> central minicomputers and mainframes.

Building again on this, one issue is that these disks on the LispM were
INCREDIBLY large by some accounts.  A 450MB disk was not uncommon, though
often it was less ... but no matter how you did it, in some ways, it was
luxurious, since there were about a half dozen mainframes at the Lab for
Computer Science and AI Lab combined, with less than a gigabyte per machine,
if I recall correctly (seems like the MIT-MC machine, which had a lot, had
two 270 MB disks and maybe a 450 MB disk ... and later maybe a couple more
disks of a size I don't recall, but comparable), shared between probably
50 or so people who were "real" users and maybe a few hundred or a thousand
users who were "tourist" users... so not much disk space at all for all 
projects combined.  And then along comes the Lisp Machine, where the address
space was way larger (suddenly 29 bits instead of 18 bits), and though the
disk seemed huge, it all went to "world loads".  The software took up a
gigantic amount of that disk.  So you might have 250MB on your own machine,
which seemed large, but then when a "world load" (a saved image of the entire
operating system with your code loaded) took up 50 or 100MB, you realized
that suddenly you could have maybe a distribution load and maybe one or two
other world loads and that was it.  So storing a "file system" partition
on the same machine was a way to make you NOT have enough room for a world,
and to risk that someone needing to dump a world would delete your file system
to make room.  Not a pretty picture.  (These machines had no file security,
so anyone could delete anything.  The network file systems had file security
only if the remote system had security. MIT ITS had none. TOPS-20 did. Unix
claimed to, but it was originally implemented by assuming that the remote
machine would ask for the password, so the lispm had a variable called
*check-passwords-for-show* that would prompt you for a password to pretend that
your unix had local file security, when in fact the client was responsible
for passwording the user and the lispm client was willing not to... So file
security was just managed differently than now, and the fear that your 
stuff would go away was very tangible.)

> 3) Even in environments where everyone has their own personal machine, 
> it's common to work on group projects.  Or you write a program on one 
> machine and then want someone else to be able to try it on theirs.  
> Shared filesystems foster this type of collaboration.

Indeed.  The right model of this would be like URL sharing.  It seems
like such a modern idea, unless you realize that in the old days, in
those environments, we could already just pass filenames around, so we
didn't need URLs.  In 1985, when the Macintosh was just coming on the
market and PC meant DOS, the Lisp Machine had a command shell with
command completion, where you could type a "copy file" command and it
would prompt for filenames and give you filename completion for any
file on the arpanet, in any filename
syntax. (Admittedly, it had a list of all servers on the arpanet and
knew which filename syntax each used, and it had to go out over FTP to
get the filenames to complete over, but it did so invisibly and didn't
tell you that was what it was doing.)  So networked files were as
casual as URLs.  "Where did you put the data?"  "Oh, it's on MIT-MC at
MY;FILE >" or "Oh, it's on PARCFTP.XEROX.COM at /foo/bar".  People
sometimes bought Lisp Machines for the single purpose of "connectivity
server" because they could talk CHAOS, TCP, DNA, SNA, and other
network protocols and link together machines that otherwise didn't
talk to one another.

> 4) It simplified the design of the OS by leaving out a fancy file 
> system.  The network servers already had existing file systems, it was 
> easy to take advantage of them.  Early Lispms just had a simple 
> partitioning scheme -- one partition for the firmware, another for the 
> Lisp image, another for paging, that's it.  Eventually native file 
> systems were implemented, but there was no need for this in the early 
> days when they could count on the existence of mainframes.

Oh, and importantly, related to this, backup media was scarce and
expensive, and backup software not sophisticated.  So one could have
centralized servers that were backed up.  One might have a single
"cart tape" ("cartridge tape", kind of like a huge cassette tape) that
they occasionally backed up a set of files onto, but ordinarily files
that were going to get backed up wanted to be on a server.  I found
cart tapes too much pain in a lot of cases and was glad the mainframes
were backed up.
From: Andreas Davour
Subject: Re: LimpM, why the umbilical and the storage arrangement with the PDP10?
Date: 
Message-ID: <cs9abwjmu82.fsf@Psilocybe.Update.UU.SE>
Kent M Pitman <······@nhplace.com> writes:

> Barry Margolin <······@alum.mit.edu> writes:

>> There were a couple of reasons why networked filesystems were used:
>
> I totally agree with Barry's observations here and wanted to add a few things
> he reminded me of.
  
>> 2) Disk storage was really expensive in those days.  So it would have 
>> been prohibitively expensive to put much disk space on each machine.  
>> There were economies of scale available by putting large disks on 
>> central minicomputers and mainframes.
>
> Building again on this, one issue is that these disks on the LispM were
> INCREDIBLY large by some accounts.  A 450MB disk was not uncommon, though
> often it was less ... but no matter how you did it, in some ways, it was
> luxurious, since there were about a half dozen mainframes at the Lab for
> Computer Science and AI Lab combined, with less than a gigabyte per machine,
> if I recall correctly (seems like the MIT-MC machine, which had a lot, had
> two 270 MB disks and maybe a 450 MB disk ... and later maybe a couple more
> disks of a size I don't recall, but comparable), shared between probably
> 50 or so people who were "real" users and maybe a few hundred or a thousand
> users who were "tourist" users... so not much disk space at all for all 
> projects combined.  And then along comes the Lisp Machine, where the address
> space was way larger (suddenly 29 bits instead of 18 bits), and though the
> disk seemed huge, it all went to "world loads".  The software took up a
> gigantic amount of that disk.  So you might have 250MB on your own machine,
> which seemed large, but then when a "world load" (a saved image of the entire
> operating system with your code loaded) took up 50 or 100MB, you realized
> that suddenly you could have maybe a distribution load and maybe one or two
> other world loads and that was it.  So storing a "file system" partition
> on the same machine was a way to make you NOT have enough room for a world,
> and to risk that someone needing to dump a world would delete your file system
> to make room.  Not a pretty picture.  (These machines had no file security,
> so anyone could delete anything.  The network file systems had file security
> only if the remote system had security. MIT ITS had none. TOPS-20 did. Unix
> claimed to, but it was originally implemented by assuming that the remote
> machine would ask for the password, so the lispm had a variable called
> *check-passwords-for-show* that would prompt you for a password to pretend that
> your unix had local file security, when in fact the client was responsible
> for passwording the user and the lispm client was willing not to... So file
> security was just managed differently than now, and the fear that your 
> stuff would go away was very tangible.)

The *check-passwords-for-show* really made me chuckle. :)

Thanks for your glimpse into a lost world Kent (and Barry). I think I've
gotten more than the answers I asked for.

/andreas

-- 
A: Because it fouls the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
From: Andreas Davour
Subject: Re: LimpM, why the umbilical and the storage arrangement with the PDP10?
Date: 
Message-ID: <cs9ejlvmud8.fsf@Psilocybe.Update.UU.SE>
Barry Margolin <······@alum.mit.edu> writes:

> In article <···············@Psilocybe.Update.UU.SE>,
>  Andreas Davour <····@update.uu.se> wrote:
>
>> ······@news.dtpq.com (Christopher C. Stacy) writes:
>> 
>> > Andreas Davour <····@update.uu.se> writes:
>> >> A related question concerns the storage model. I've read Richard
>> >> Greenblatt's paper about the features of the CONS and he mentions
>> >> storage on another machine as a design feature. I'm not sure I
>> >> follow. Why is that good? Is there some excellent insight to that design
>> >> that I'm missing? I only know lisp, and no electrical and digital
>> >> engineering and could possibly miss a lot.
>> >
>> > The same reason that people use network storage appliances now.
>> > Also, most of the files that people wanted to access were
>> > already on some other computer (a PDP-10) on the network.
>> 
>> The last I can understand, that it was a nice feature. But, I don't
>> understand why people use network storage appliances now!
>> 
>> Maybe you know to what extent this networked filesystem was used? Was it
>> like the ITS filesystem just for utility or was the local disc used for
>> storage of anything but the world image and the system sources?
>
> There were a couple of reasons why networked filesystems were used:
[A lot of good reasons mentioned by Barry]

Those are all very good reasons. I think that the culture today is so
very different that I just didn't think about a few of those uses. I'm
poking around an ITS system almost daily, enjoying the trip back in time,
and some of the strangeness just doesn't hit home in its fullness,
apparently. The openness of the system and the culture in which it was
created is staggering.

Damn how I wish I were older, sometimes. 

/andreas

-- 
A: Because it fouls the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
From: Tim Bradshaw
Subject: Re: LimpM, why the umbilical and the storage arrangement with the PDP10?
Date: 
Message-ID: <1178383000.914644.39240@n76g2000hsh.googlegroups.com>
On May 4, 8:15 pm, Andreas Davour <····@update.uu.se> wrote:

> The last I can understand, that it was a nice feature. But, I don't
> understand why people use network storage appliances now!

Because they want to have a pool of storage shared between machines,
and accessible, potentially, from all of them.  The utility of that,
financially and administratively, should be clear.

There are three approaches to doing this:

1. Make machines which have significant storage into file servers for
other machines, in addition to whatever else they do.  This is OK but
it tends to be administratively complex and has reliability and
availability issues - if one of the fileserver machines needs to be
rebooted or something, all the clients lose access to its storage for
a while for instance.

2. Have some kind of dedicated block-level storage network.  This is
quite popular at present, but it's not really a great solution: these
things are expensive, and because the network is at the block level,
you can't actually move a bit of storage between machines unless they
share the same filesystem implementation.  Sharing storage between
machines, even when they do share a filesystem implementation, is even
more problematic.  Finally the implementation technology (fibre
channel) isn't keeping up with ethernet.

3. Have dedicated fileservers which provide access to their storage
using a network filesystem protocol running over the normal network
stack.  This is a pretty good solution.  Until recently it's been
limited because the dedicated storage networks could provide more
bandwidth, but this is no longer realistically true.  Because the
access is at the filesystem level (as it is with (1)) you can easily
share storage between machines, simultaneously.  There are also
protocols (iSCSI etc) for exporting block-level access.