From: Peter Seibel
Subject: History of lexical scoping in Scheme and other Lisps?
Date: 
Message-ID: <m3k7738k2u.fsf@javamonkey.com>
I've read many places (including the History section of the Common
Lisp standard) that Scheme introduced lexical scoping and lexical
closures. Does that mean that all Lisp dialects prior to Scheme used
only dynamically scoped variables? Elsewhere I had gathered the
impression that there was a period when Lisp implementations tended to
use dynamic scoping when interpreted and lexical scoping when
compiled, leading to various amounts of chaos that were sorted out in
Common Lisp by requiring the same behavior in both interpreted and
compiled code. Was this period all post-Scheme?

Also, did Scheme introduce lexical scoping to the Lisp world or to the
programming world generally? (I.e., were there non-Lisp languages with
lexical scoping prior to Scheme?) The HyperSpec mentions that Scheme's
"design brought to Lisp some of the ideas from programming language
semantics developed in the 1960's". Was lexical scoping one of those
ideas? If so, had it actually been incorporated into other languages?
Algol60?

-Peter

-- 
Peter Seibel                                      ·····@javamonkey.com

         Lisp is the red pill. -- John Fraser, comp.lang.lisp

From: Pascal Costanza
Subject: Re: History of lexical scoping in Scheme and other Lisps?
Date: 
Message-ID: <bmq3tq$8q$1@newsreader2.netcologne.de>
Peter Seibel wrote:

> I've read many places (including the History section of the Common
> Lisp standard) that Scheme introduced lexical scoping and lexical
> closures. Does that mean that all Lisp dialects prior to Scheme used
> only dynamically scoped variables? Elsewhere I had gathered the
> impression that there was a period when Lisp implementations tended to
> use dynamic scoping when interpreted and lexical scoping when
> compiled, leading to various amounts of chaos that were sorted out in
> Common Lisp by requiring the same behavior in both interpreted and
> compiled code. Was this period all post-Scheme?

No, this was pre-Scheme. Scheme sorted these things out, and Scheme 
was the most important influence on Common Lisp in this regard. See the 
"Evolution of Lisp" paper by Gabriel and Steele at 
http://www.dreamsongs.com/Essays.html

> Also, did Scheme introduce lexical scoping to the Lisp world or to the
> programming world generally? (I.e., were there non-Lisp languages with
> lexical scoping prior to Scheme?) The HyperSpec mentions that Scheme's
> "design brought to Lisp some of the ideas from programming language
> semantics developed in the 1960's". Was lexical scoping one of those
> ideas? If so, had it actually been incorporated into other languages?
> Algol60?

Algol actually had lexical scoping, and the original papers about Scheme 
mention Algol as one important influence on the clarification of 
lexical vs. dynamic scoping.

You should probably read the so-called "lambda papers", referenced at 
http://library.readscheme.org/page1.html - they provide lots of 
information about the history of lexical scoping in Lisp dialects. (Including 
some of the lambda papers whose titles don't suggest that they are about this 
topic.)


Pascal
From: james anderson
Subject: Re: History of lexical scoping in Scheme and other Lisps?
Date: 
Message-ID: <3F9093EF.38F1354E@setf.de>
Peter Seibel wrote:
> 
> I've read many places (including the History section of the Common
> Lisp standard) that Scheme introduced lexical scoping and lexical
> closures. ...
> 
> Also, did Scheme introduce lexical scoping to the Lisp world or to the
> programming world generally? (I.e., were there non-Lisp languages with
> lexical scoping prior to Scheme?) The HyperSpec mentions that Scheme's
> "design brought to Lisp some of the ideas from programming language
> semantics developed in the 1960's". Was lexical scoping one of those
> ideas? If so, had it actually been incorporated into other languages?
> Algol60?

the ai lab has all of its technical reports and memos back to #6 online.[0]
the first rrs is memo number 452. there is no real discussion, but what you
want is on page 3. if you follow the reference to the moses memo, you will
find more discussion and an explanation of the motivation. if you look in 
steele&gabriel's paper on the evolution of lisp, it will help with context.

[0]http://www.ai.mit.edu/research/publications/

...
From: Peter Seibel
Subject: Re: History of lexical scoping in Scheme and other Lisps?
Date: 
Message-ID: <m3brsf8gnb.fsf@javamonkey.com>
james anderson <··············@setf.de> writes:

> Peter Seibel wrote:
> > 
> > I've read many places (including the History section of the Common
> > Lisp standard) that Scheme introduced lexical scoping and lexical
> > closures. ...
> > 
> > Also, did Scheme introduce lexical scoping to the Lisp world or to the
> > programming world generally? (I.e., were there non-Lisp languages with
> > lexical scoping prior to Scheme?) The HyperSpec mentions that Scheme's
> > "design brought to Lisp some of the ideas from programming language
> > semantics developed in the 1960's". Was lexical scoping one of those
> > ideas? If so, had it actually been incorporated into other languages?
> > Algol60?
> 
> the ai lab has all of its technical reports and memos back to #6 online.[0]
> the first rrs is memo number 452. there is no real discussion, but what you
> want is on page 3. if you follow the reference to the moses memo, you will
> find more discussion and an explanation of the motivation. if you look in 
> steele&gabriel's paper on the evolution of lisp, it will help with context.
> 
> [0]http://www.ai.mit.edu/research/publications/

Cool, thanks.

-Peter


-- 
Peter Seibel                                      ·····@javamonkey.com

         Lisp is the red pill. -- John Fraser, comp.lang.lisp
From: Anton van Straaten
Subject: Re: History of lexical scoping in Scheme and other Lisps?
Date: 
Message-ID: <TL3kb.2015$W16.966@newsread2.news.atl.earthlink.net>
Peter Seibel wrote:
> I've read many places (including the History section of the Common
> Lisp standard) that Scheme introduced lexical scoping and lexical
> closures. Does that mean that all Lisp dialects prior to Scheme used
> only dynamically scoped variables? Elsewhere I had gathered the
> impression that there was a period when Lisp implementations tended to
> use dynamic scoping when interpreted and lexical scoping when
> compiled, leading to various amounts of chaos that were sorted out in
> Common Lisp by requiring the same behavior in both interpreted and
> compiled code. Was this period all post-Scheme?
>
> Also, did Scheme introduce lexical scoping to the Lisp world or to the
> programming world generally? (I.e., were there non-Lisp languages with
> lexical scoping prior to Scheme?) The HyperSpec mentions that Scheme's
> "design brought to Lisp some of the ideas from programming language
> semantics developed in the 1960's". Was lexical scoping one of those
> ideas? If so, had it actually been incorporated into other languages?
> Algol60?

The first example of lexical scoping and lexical closures can be found in
the lambda calculus, circa 1936-1941.  It's all laid out quite clearly in
the first few pages of Church's 1941 book, "The Calculi of Lambda
Conversion"[1], including definitions of free and bound variables, anonymous
functions, and higher order functions (the latter two by different names).

Earlier mathematical formalisms had notions of free variables in
expressions, but the lambda calculus combined this with anonymous functions
and reduction rules which resulted in semantics that are identical to what
we today call lexical scope, lexical closure, and anonymous functions.
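
To make that concrete in Lisp notation (an illustration only - Church wrote
lambda terms, not s-expressions):

  ;; In the calculus, ((lambda x. lambda y. x) A) reduces to (lambda y. A):
  ;; the x inside the inner lambda refers to the textually enclosing binding,
  ;; i.e. lexically.  The same term transcribed into Lisp:
  (funcall (funcall #'(lambda (x) #'(lambda (y) (declare (ignore y)) x))
                    'a)
           'b)
  ;; => A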

Lexical scoping was in Algol 60, circa 1960.  Algol 60 did not have
first-class lexical closures.

Peter Landin's theoretical language family, ISWIM, introduced in the 1966
paper, "The Next 700 Programming Languages"[2], was explicitly based on
lambda calculus, and had lexical scope and lexical closures.  Landin had
previously written a paper on the "Correspondence between Algol 60 and
Church's lambda notation".

Reynolds' GEDANKEN was another language in the theoretical language space
which addressed these issues, but I know little about it.

Algol 68 apparently had first-class procedures, but I don't know the extent
to which they supported lexical closure.  There was apparently an extension
called Algol 68RS which supported closures.

The Lisp 1.5 FUNCTION construct apparently created a closure.  See the Lisp
1.5 Programmer's Manual [3], section 3.1 and Appendix B.  You would use
FUNCTION instead of QUOTE to quote a LAMBDA expression, and the effect was
to turn the lambda into a closure.  I think it was Dan Friedman who
mentioned this at ILC 2003.

Algol 60 was an influence on Steele and Sussman in the development of
Scheme - they saw lexical scoping as necessary to implement the actor
concept they were exploring.  Scheme introduced the concept of a lambda
expression as evaluating to a procedure, rather than *being* a procedure,
which allowed lambda expressions on their own to be closures, i.e. the
closure was created at the time of evaluation, when the enclosing
environment was available.
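
A tiny Common Lisp illustration of that "closure created at evaluation time"
idea (Common Lisp inherited the behavior; MAKE-ADDER is just a made-up name):

  ;; Each call evaluates the same LAMBDA expression in a fresh environment,
  ;; producing a distinct closure over that call's binding of N.
  (defun make-adder (n)
    #'(lambda (x) (+ x n)))

  (funcall (make-adder 1) 10)   ; => 11
  (funcall (make-adder 5) 10)   ; => 15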

Anton

[1] http://search.barnesandnoble.com/booksearch/isbnInquiry.asp?isbn=0691083940
[2] http://www.cs.utah.edu/~wilson/compilers/old/papers/p157-landin.pdf
[3] http://green.iis.nsk.su/~vp/doc/lisp1.5/mccarthy.html and
    http://search.barnesandnoble.com/textbooks/booksearch/isbnInquiry.asp?isbn=0262130114
From: Anton van Straaten
Subject: Re: History of lexical scoping in Scheme and other Lisps?
Date: 
Message-ID: <ZD4kb.2042$W16.642@newsread2.news.atl.earthlink.net>
An additional note to the references I gave:

John Reynolds' 1972 paper, "Definitional Interpreters for Higher-Order
Programming Languages"[1], was referenced by the original paper on Scheme,
"Scheme - An Interpreter for Extended Lambda Calculus"[2].  Reynolds' paper
mentions various functional languages based on lambda calculus, some of
which I mentioned, such as ISWIM and GEDANKEN, as well as systems like
Landin's SECD machine.

So there's an indirect connection there between the development of Scheme
and these somewhat more theoretical languages, although as I understand it,
the main source for Scheme's lexical scope was Algol 60.

Another relevant reference found in [2] is Joel Moses' paper "The Function
of FUNCTION in LISP".

Anton

[1] ftp://ftp.cs.cmu.edu/user/jcr/defint.ps.gz or
[2] ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-349.pdf
From: james anderson
Subject: Re: History of lexical scoping in Scheme and other Lisps?
Date: 
Message-ID: <3F91007F.302CA967@setf.de>
Anton van Straaten wrote:
> 
> An additional note to the references I gave:
> 
> John Reynolds' 1972 paper, "Definitional Interpreters for Higher-Order
> Programming Languages"[1], was referenced by the original paper on Scheme,
> "Scheme - An Interpreter for Extended Lambda Calculus"[2].  Reynolds' paper
> mentions various functional languages based on lambda calculus, some of
> which I mentioned, such as ISWIM and GEDANKEN, as well as systems like
> Landin's SECD machine.
> 
> So there's an indirect connection there between the development of Scheme
> and these somewhat more theoretical languages, although as I understand it,
> the main source for Scheme's lexical scope was Algol 60.
> 

i suppose one would have to ask them, but, given moses' exposition and that of
the first scheme-related memos, it reads as if the source was landin's work
and algol 60 was a convenient foil.

> Another relevant reference found in [2] is Joel Moses' paper "The Function
> of FUNCTION in LISP".
> 
> Anton
> 
> [1] ftp://ftp.cs.cmu.edu/user/jcr/defint.ps.gz or
> [2] ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-349.pdf
From: Christopher C. Stacy
Subject: Re: History of lexical scoping in Scheme and other Lisps?
Date: 
Message-ID: <u1xtajujs.fsf@dtpq.com>
MDL ("muddle") was a A Lisp dialect that Sussman had some involvement
in with before Scheme.  It was dynamically scoped.  However, variables
in MDL could have seperate thread ("local") and shared ("global")
value cells, All atoms in MDL evaluated to themselves (like keywords
do in Common Lisp).  If you wanted the local or global value of a
variable, you had to call the LVAL or GBAL function on it (usually
written using the reader macro characters "." and "," respectively).

They realized that lexical scoping was desirable, but the closest
they got was facilities for "lexical blocking", where you fiddle 
around with the obarray.

MDL is also where Lisp got most of the familiar lambda list 
features and keywords (such as optional, aux, rest); the keywords 
also controlled whether or not args were evaluated.  There were also
environments, closures, and activation records (continuations).

MDL also had vectors, hash tables, and sophisticated 
user-defined data types and structures where you could
control the storage layout if you wanted.

Not everything is written down in the memos and working papers.
From: Anton van Straaten
Subject: Re: History of lexical scoping in Scheme and other Lisps?
Date: 
Message-ID: <Whgkb.2501$W16.439@newsread2.news.atl.earthlink.net>
james anderson wrote:
>
>
> Anton van Straaten wrote:
> >
> > An additional note to the references I gave:
> >
> > John Reynolds' 1972 paper, "Definitional Interpreters for Higher-Order
> > Programming Languages"[1], was referenced by the original paper on Scheme,
> > "Scheme - An Interpreter for Extended Lambda Calculus"[2].  Reynolds' paper
> > mentions various functional languages based on lambda calculus, some of
> > which I mentioned, such as ISWIM and GEDANKEN, as well as systems like
> > Landin's SECD machine.
> >
> > So there's an indirect connection there between the development of Scheme
> > and these somewhat more theoretical languages, although as I understand it,
> > the main source for Scheme's lexical scope was Algol 60.
> >
>
> i suppose one would have to ask them, but, given moses' exposition and that of
> the first scheme-related memos, it reads as if the source was landin's work
> and algol 60 was a convenient foil.

My point is that in the earliest Scheme memo, AI-349, December 1975, there's
no direct mention of Landin's work, even in the bibliography.  There's
mention of Lisp, Actors, Algol, PLASMA, and lambda calculus.  Particularly,
the Acknowledgements describe the "experimental and highly empirical
approach to bootstrap our knowledge" that was used, inspired by Actors,
PLASMA, and lambda calculus.

My conclusion, based also on other material I've seen (emails, etc.), is that
in the initial development of Scheme, Sussman and Steele weren't aware of
how closely their use of lambda calculus matched Landin's work - otherwise,
Landin would have been an incredibly obvious reference for the first paper.

Multiple direct mentions of Landin occur in "Lambda: The Ultimate
Imperative", from March 1976, three months later.  By that time, they had
discovered the connection, now that they were exploring the space which
Scheme had propelled them into.  In the Conclusions section, they point out
how Landin and Reynolds' work "contained much more machinery than what we
have used in SCHEME".

Anton
From: james anderson
Subject: Re: History of lexical scoping in Scheme and other Lisps?
Date: 
Message-ID: <3F919E2C.81054A6B@setf.de>
Anton van Straaten wrote:
> 
> james anderson wrote:
> >
> >
> > ...
> >
> > i suppose one would have to ask them, but, given moses' exposition and that of
> > the first scheme-related memos, it reads as if the source was landin's work
> > and algol 60 was a convenient foil.
> 
> My point is that in the earliest Scheme memo, AI-349, December 1975, there's
> no direct mention of Landin's work, even in the bibliography.  There's
> mention of Lisp, Actors, Algol, PLASMA, and lambda calculus.  Particularly,
> the Acknowledgements describe the "experimental and highly empirical
> approach to bootstrap our knowledge" that was used, inspired by Actors,
> PLASMA, and lambda calculus.

despite all of which, i still wonder how that could have come to pass, when
moses in 1970, in aim-199, identifies landin as a clear source for the
approach to the same problem which sussman/steele aim to address; moses
appears in the acknowledgements to sussman's thesis in '73 - so they were
certainly talking to each other, and aim-349 does reference moses (in the form
of aim-199) in its explanation of the relation between closures and the upward
funarg problem, so i would have thought that they knew of its content.

> 
> My conclusion, based also on other material I've seen (emails, etc.) is that
> in the initial development of Scheme, Sussmann and Steele weren't aware of
> how closely their use of lambda calculus matched Landin's work - otherwise,
> Landin would have been an incredibly obvious reference for the first paper.

hmm.

> 
> Multiple direct mentions of Landin occur in "Lambda: The Ultimate
> Imperative", from March 1976, three months later.  By that time, they had
> discovered the connection, now that they were exploring the space which
> Scheme had propelled them into.  In the Conclusions section, they point out
> how Landin and Reynolds' work "contained much more machinery than what we
> have used in SCHEME".

another curious thing is that c.hewitt appears in _everybody's_
acknowledgements and i would be surprised to hear that he was either unaware
of the cross-relevance, or reluctant to let one know about it.

i can only wonder.
...
From: Anton van Straaten
Subject: Re: History of lexical scoping in Scheme and other Lisps?
Date: 
Message-ID: <_5jkb.2886$W16.1230@newsread2.news.atl.earthlink.net>
james anderson wrote:
>
>
> Anton van Straaten wrote:
> >
> > james anderson wrote:
> > >
> > >
> > > ...
> > >
> > > i suppose one would have to ask them, but, given moses' exposition and that of
> > > the first scheme-related memos, it reads as if the source was landin's work
> > > and algol 60 was a convenient foil.
> >
> > My point is that in the earliest Scheme memo, AI-349, December 1975, there's
> > no direct mention of Landin's work, even in the bibliography.  There's
> > mention of Lisp, Actors, Algol, PLASMA, and lambda calculus.  Particularly,
> > the Acknowledgements describe the "experimental and highly empirical
> > approach to bootstrap our knowledge" that was used, inspired by Actors,
> > PLASMA, and lambda calculus.
>
> despite all of which, i still wonder how that could have come to pass, when
> moses in 1970, in aim-199, identifies landin as a clear source for the
> approach to the same problem which sussman/steele aim to address

Which problem are you thinking they aimed to address, though?  It wasn't the
problem of lexical scope or closure.  According to the Acknowledgements in
AIM-349 (and I've seen other similar descriptions written by Steele), they
were simply trying to understand actors.  Actors needed something like
lexical scope, which I've explicitly seen mentioned that they adopted from
Algol 60 (I'd have to dig up a reference).  They then "discovered that the
'actors' and the lambda expressions were identical in implementation [...]
and it was only natural to begin thinking about actors in terms of lambda
calculus."

> moses appears in the acknowledgements to sussman's thesis in '73 - so
> they were certainly talking to each other, and aim-349 does reference
> moses (in the form of aim-199) in its explanation of the relation between
> closures and the upward funarg problem, so i would have thought that
> they knew of its content.

Even if they knew of the content of AIM-199, they may not have had
first-hand knowledge of Landin's work.  Since their main focus wasn't the
upward funarg problem, they may not have explored past Moses and into
Landin.

> > My conclusion, based also on other material I've seen (emails, etc.), is that
> > in the initial development of Scheme, Sussman and Steele weren't aware of
> > how closely their use of lambda calculus matched Landin's work - otherwise,
> > Landin would have been an incredibly obvious reference for the first paper.
>
> hmm.

"hmm" you're skeptical, or "hmm" good point?  :)

> > Multiple direct mentions of Landin occur in "Lambda: The Ultimate
> > Imperative", from March 1976, three months later.  By that time, they had
> > discovered the connection, now that they were exploring the space which
> > Scheme had propelled them into.  In the Conclusions section, they point out
> > how Landin and Reynolds' work "contained much more machinery than what we
> > have used in SCHEME".
>
> another curious thing is that c.hewitt appears in _everybody's_
> acknowledgements and i would be surprised to hear that he was either unaware
> of the cross-relevance, or reluctant to let one know about it.
>
> i can only wonder.
> ...

But presumably, Hewitt wasn't really aware of the close connection between
actors and lambda calculus, since that's one of the things Sussman and
Steele seem to have discovered.  I think a big aspect of puzzling this out
now is that we know about all these connections already, but they were far
from obvious originally.

For example, in a casual reading of Landin's "The Next 700 Programming
Languages", which defined the ISWIM family, lexical scope and closures don't
stand out particularly as being one of the things that are being addressed.
Even if you're paying close attention, at best ISWIM seems to assume things
about scope, and you'd be left asking "how does it do that?"  You'd have to
look at the SECD machine quite closely to really get it.

In fact, I think this is exactly why it took a while for insights from
lambda calculus to be fully recognized: lexical scope and closure are simply
inherent in lambda calculus, something that emerges from its definition and
the reduction rules, and aren't particularly remarkable; whereas in a
programming language with an implementation that doesn't use reduction, you
have to deal with how to implement these features, e.g. with environments
that aren't just naively stack-allocated.  The "experimental and highly
empirical approach to bootstrap our knowledge" which Sussman and Steele
refer to was necessary because they were coming at things from the
implementation side, rather than starting with a mathematical formalism as
Landin seems to have done.  Only after the Scheme implementation was done
did some of those connections become obvious.

Well, that's my theory and I'm sticking to it...  :)

Anton
From: ozan s yigit
Subject: Re: History of lexical scoping in Scheme and other Lisps?
Date: 
Message-ID: <vi4znfx3jc3.fsf@blue.cs.yorku.ca>
"Anton van Straaten" <·····@appsolutions.com> writes:

> Well, that's my theory and I'm sticking to it...  :)

hmm, i think most of the usual suspects are alive, so if you want
to be sure of the details of that figment of history, why not ask?
email ··········@sun.com, for example... ;)

i'm curious: why does it really matter how we got here?
remember what heraclitus once said...

oz
---
if you do not work on important problems
how can you expect to do important work? -- richard w. hamming
From: Anton van Straaten
Subject: Re: History of lexical scoping in Scheme and other Lisps?
Date: 
Message-ID: <hYJkb.5034$W16.2251@newsread2.news.atl.earthlink.net>
ozan s yigit wrote:
> "Anton van Straaten" <·····@appsolutions.com> writes:
>
> > Well, that's my theory and I'm sticking to it...  :)
>
> hmm, i think most of the usual suspects are alive, so if you want
> to be sure of the details of that figment of history, why not ask?
> email ··········@sun.com, for example... ;)

I've asked such questions in the past - as recently as last week, at ILC03 -
and I imagine I'll do so again.  But in this case, I'm not the one
questioning my theory...  :)

> i'm curious: why does it really matter how we got here?

In this case at least, I think that the convergence between theory and
practice is of interest - see Tom Lord's exposition on the matter (factor
out poetic license to taste ;)   Besides, if we decide that we want to go
somewhere else from "here", knowing where we've already been helps - lest we
be doomed to recreate history and all that.

> remember what heraclitus once said...

"Of this Word's being forever do men prove to be uncomprehending, both
before they hear and once they have heard it"?  Or perhaps I need a more
specific reference... :)

Anton
From: ozan s yigit
Subject: Re: History of lexical scoping in Scheme and other Lisps?
Date: 
Message-ID: <vi4znfvb38e.fsf@blue.cs.yorku.ca>
Anton:

>                       ...  Besides, if we decide that we want to go
> somewhere else from "here", knowing where we've already been helps - lest we
> be doomed to recreate history and all that.

does it? it seems to me that there is nothing in the details being sought
that will in any way help in going forward, because they fundamentally give
us no *new* information that is not present in the current implementations.
[try this: if what you are after is *not* present, then you can *only*
speculate as to its significance - what *might have been* - but you will not
know more than that.]

going forward requires a careful look at these languages *now*, which is
what they [sussman, steele, et al] most certainly did in their time.

> > remember what heraclitus once said...
> 
> "Of this Word's being forever do men prove to be uncomprehending, both
> before they hear and once they have heard it"?  Or perhaps I need a more
> specific reference... :)

you cannot bathe in the same river twice.

oz
From: james anderson
Subject: Re: History of lexical scoping in Scheme and other Lisps?
Date: 
Message-ID: <3F925147.A177E362@setf.de>
Anton van Straaten wrote:
> 
> james anderson wrote:
> >
> > ...
> >
> > hmm.
> 
> "hmm" you're skeptical, or "hmm" good point?  :)
> 

hmm, as in sometimes even the best of arguments is not sufficient to remove
all skepticism.

i won't recount them, but note, in general, that this is not the first time i
am seeing moses' paper (aim-199). i would be hard pressed to produce the
reading list, but my recollection is that it was among the readings for the
undergraduate introductory ai course at least as early as 1973. if they gave it
to us to read, i would have thought that they would have read it as well.

aim-199 does leave this reader with the impression that landin's work should
have been known to their research group, due to his presence there - albeit
several years earlier.

in addition, aim-349, in its section on "some implementation details", is
thorough enough that this reader would, but for your allusions to other
documents to the contrary, come to the conclusion that the authors did think
they were addressing problems of lexical scope and closures and did understand
that the material in aim-199 was relevant to their investigation. it is
certainly possible that they chose to use the same terminology - that is,
the distinction between upwards and downwards funargs - as an aid to explicate
their problem space, and did make explicit reference to the source, but did
not understand its implications.

again, i can think only "hmm...", and concur that, if one really wants to
know, one will have to ask them.

...
From: Anton van Straaten
Subject: Re: History of lexical scoping in Scheme and other Lisps?
Date: 
Message-ID: <NjKkb.5047$W16.4425@newsread2.news.atl.earthlink.net>
james anderson wrote:
>
>
> Anton van Straaten wrote:
> >
> > james anderson wrote:
> > >
> > > ...
> > >
> > > hmm.
> >
> > "hmm" you're skeptical, or "hmm" good point?  :)
> >
>
> hmm, as in sometimes even the best of arguments is not sufficient to remove
> all skepticism.
>
> i won't recount them, but note, in general, that this is not the first time i
> am seeing moses' paper (aim-199). i would be hard pressed to produce the
> reading list, but my recollection is that it was among the readings for the
> undergraduate introductory ai course at least as early as 1973. if they gave it
> to us to read, i would have thought that they would have read it as well.
>
> aim-199 does leave this reader with the impression that landin's work should
> have been known to their research group, due to his presence there - albeit
> several years earlier.

Known in how much detail, though?  As I said, the ISWIM paper and even the
Algol/lambda calculus correspondence paper wouldn't necessarily make the
connections obvious - you'd need an understanding of the SECD machine to
really "get" the implementation side.

> in addition, aim-349, in its section on "some implementation details", is
> thorough enough that this reader would, but for your allusions to other
> documents to the contrary, come to the conclusion that the authors did think
> they were addressing problems of lexical scope and closures and did understand
> that the material in aim-199 was relevant to their investigation.

I agree that they were clearly addressing the issue of closures, but almost
in passing, because they needed them to implement actors.  AIM-349 did
reference AIM-199, so it's possible they simply implemented what Moses
suggested.  Everything I've read indicates that investigating actors was
their primary concern, and that some of the other implications were only
discovered once the first interpreter was implemented (see referenced
message below).

> it is certainly possible that they chose to use the same terminology - that is,
> the distinction between upwards and downwards funargs - as an aid to explicate
> their problem space, and did make explicit reference to the source, but did
> not understand its implications.

I'm not saying they weren't familiar with other work on the issues of
closure, or up/down funargs.  I'm saying they didn't realize specifically
that Landin's work related so directly to what they had done with Scheme,
until after the first paper - and that their primary influences, aside from
Lisp of course, were Actors, Algol, PLASMA, and as someone else pointed out,
MDL.

> again, i can think only "hmm...", and concur that, if one really wants to
> know, one will have to ask them.

Google might be the next best thing: here's a note from Guy Steele on the
rrrs-authors list, from '98:
http://zurich.ai.mit.edu/pipermail/rrrs-authors/1998-January/002234.html

This contains the quote: "Gerry had been teaching about Algol 60, and he
suggested that our interpreter should use Algol-style lexical scoping, which
would presumably support acquaintances [of an actor] properly."

Anton
From: james anderson
Subject: Re: History of lexical scoping in Scheme and other Lisps?
Date: 
Message-ID: <3F939686.603DF911@setf.de>
Anton van Straaten wrote:
> 
> james anderson wrote:
> >
> >...
> 
> I'm not saying they weren't familiar with other work on the issues of
> closure, or up/down funargs.  I'm saying they didn't realize specifically
> that Landin's work related so directly to what they had done with Scheme,
> until after the first paper - and that their primary influences, aside from
> Lisp of course, were Actors, Algol, PLASMA, and as someone else pointed out,
> MDL.
> 
> > again, i can think only "hmm...", and concur that, if one really wants to
> > know, one will have to ask them.
> 
> Google might be the next best thing: here's a note from Guy Steele on the
> rrrs-authors list, from '98:
> http://zurich.ai.mit.edu/pipermail/rrrs-authors/1998-January/002234.html
> 
> This contains the quote: "Gerry had been teaching about Algol 60, and he
> suggested that our interpreter should use Algol-style lexical scoping, which
> would presumably support acquaintances [of an actor] properly."
> 

so much for retrospective quantification.

...
From: Barry Margolin
Subject: Re: History of lexical scoping in Scheme and other Lisps?
Date: 
Message-ID: <P7Tkb.103$lK3.46@news.level3.com>
In article <··············@javamonkey.com>,
Peter Seibel  <·····@javamonkey.com> wrote:
>
>I've read many places (including the History section of the Common
>Lisp standard) that Scheme introduced lexical scoping and lexical
>closures. Does that mean that all Lisp dialects prior to Scheme used
>only dynamically scoped variables? Elsewhere I had gathered the
>impression that there was a period when Lisp implementations tended to
>use dynamic scoping when interpreted and lexical scoping when
>compiled, leading to various amounts of chaos that were sorted out in
>Common Lisp by requiring the same behavior in both interpreted and
>compiled code. Was this period all post-Scheme?

Maclisp had the dual nature you describe.  However, its lexical scoping was
more like C than Scheme or Common Lisp.  You could not return a lexical
closure from the scope in which it was created and have it remember the
variable bindings.  It was more of a compiler optimization than real
lexical scoping.

This is the reason why dynamic variables are called "special" in Maclisp
and Common Lisp.  This declaration warned the compiler that it would have
to compile bindings of the variable differently from normal variables.
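
In Common Lisp terms, a small sketch of the dynamic behavior that the special
declaration marks (the names here are just for illustration):

  (defvar *limit* 100)           ; DEFVAR proclaims *LIMIT* special, i.e. dynamic

  (defun check (n) (< n *limit*))

  (defun with-small-limit (n)
    (let ((*limit* 10))          ; dynamic rebinding, visible inside CHECK
      (check n)))

  (with-small-limit 50)          ; => NIL, CHECK sees the rebound value 10
  (check 50)                     ; => T, outside the LET *LIMIT* is 100 again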

-- 
Barry Margolin, ··············@level3.com
Level(3), Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Kent M Pitman
Subject: Re: History of lexical scoping in Scheme and other Lisps?
Date: 
Message-ID: <wkd6c78eat.fsf@nospam.nhplace.com>
Peter Seibel <·····@javamonkey.com> writes:

> I've read many places (including the History section of the Common
> Lisp standard) that Scheme introduced lexical scoping and lexical
> closures. Does that mean that all Lisp dialects prior to Scheme used
> only dynamically scoped variables? Elsewhere I had gathered the
> impression that there was a period when Lisp implementations tended to
> use dynamic scoping when interpreted and lexical scoping when
> compiled, leading to various amounts of chaos that were sorted out in
> Common Lisp by requiring the same behavior in both interpreted and
> compiled code. Was this period all post-Scheme?
> 
> Also, did Scheme introduce lexical scoping to the Lisp world or to the
> programming world generally? (I.e., were there non-Lisp languages with
> lexical scoping prior to Scheme?) The HyperSpec mentions that Scheme's
> "design brought to Lisp some of the ideas from programming language
> semantics developed in the 1960's". Was lexical scoping one of those
> ideas? If so, had it actually been incorporated into other languages?
> Algol60?

Algol 60 was, as mentioned in other replies to this, the origin of the
lexical scoping in Scheme.

Earlier versions of Lisp (pre-Scheme) didn't exactly use lexical scoping
in the modern sense.  Rather, they simply "compiled away" the name so that
it wasn't accessible at all.  You still didn't get closures ... mostly.
At least in Maclisp, which I'm most familiar with, 
 (defun foo (x) (bar))
made x available to bar in interpreted code but not compiled code.  But
 (defun foo (x) #'(lambda () x))
just made a mess because there were no serious closures.  There was
a *function operator that attempted by using copying semantics to sort
of compensate, but it worked poorly.  We were all just fishing around
for serious semantics.  Lisp had become so malleable that it was more
popular to just bludgeon things into working than to worry about what
the language would promise, especially since the language semantics was
dynamically changing every day or week anyway...  
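
For contrast, that second definition in present-day Common Lisp (or Scheme),
where the lexical binding of X does survive FOO's return:

 (defun foo (x) #'(lambda () x))
 (funcall (foo 42))   ; => 42 -- a real upward closure over X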

Scheme demonstrated that the Algol 60 idea was general, and the
Sussman/Steele papers related to Scheme showed there were tons of cool
things you could do with closures both expressionally and during compilation,
which drove the desirability of using them.  (There was still a lot of fear
that lexicality would lead to unforeseen problems, but that fear was largely
just fear of the unknown and not of anything real.)

Note, too, that this is all mixed up with the FEXPR problem.  Consider that
a great many Lisp operators were implemented as FEXPRs (functions that
didn't evaluate their arguments) and many of these just did EVAL 
on the body forms, and got confused if the variables referred to by those
body forms had gone away.  It required special care to get this right,
or, put another way: it was semantically bankrupt in a lot of ways.
Bootstrapping your way out of a jungle is not pretty until really very
close to the end of the process...
From: Anton van Straaten
Subject: Re: History of lexical scoping in Scheme and other Lisps?
Date: 
Message-ID: <gk7qb.10133$9M3.231@newsread2.news.atl.earthlink.net>
Kent M Pitman wrote:
> Algol 60 was, as mentioned in other replies to this, the origin of the
> lexical scoping in Scheme.
>
> Earlier versions of Lisp (pre-Scheme) didn't exactly use lexical scoping
> in the modern sense.  Rather, they simply "compiled away" the name so that
> it wasn't accessible at all.  You still didn't get closures ... mostly.

Just a clarification, from what I've gathered, Algol 60 did lexical scoping,
but did not have first-class closures (afaict).

> At least in Maclisp, which I'm most familiar with,
>  (defun foo (x) (bar))
> made x available to bar in interpreted code but not compiled code.  But
>  (defun foo (x) #'(lambda () x))
> just made a mess because there were no serious closures.  There was
> a *function operator that attempted by using copying semantics to sort
> of compensate, but it worked poorly.

Someone notable at ILC2003 - perhaps Dan Friedman, but I'm not sure -
mentioned that the definition of FUNCTION in Lisp 1.5 got closures correct,
i.e. using (FUNCTION (LAMBDA (...) ...)) worked and handled closure
correctly, by a modern definition.  Does anyone know anything more about
that?  True, false, caveats?

> We were all just fishing around for serious semantics.  Lisp had become
> so malleable that it was more popular to just bludgeon things into working

Must... resist.. taking... cheap.. shot... at.... CL...

> Note, too, that this is all mixed up with the FEXPR problem.  Consider that
> a great many Lisp operators were implemented as FEXPRs (functions that
> didn't evaluate their arguments) and many of these just did EVAL
> on the body forms, and got confused if the variables referred to by those
> body forms had gone away.  It required special care to get this right,
> or, put another way: it was semantically bankrupt in a lot of ways.

Although, its semantic bankruptcy was mostly related to the lack of proper
closures.  An FEXPR-style feature implemented with closures - i.e. where an
unevaluated expression passed as an argument is evaluated in the environment
in which it was defined - is perfectly sound semantically, although it can
have practical consequences for e.g. compilers.
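
Here's a minimal Common Lisp sketch of that closure-based idea (CAPTURE and
EVALUATE-LATER are made-up names, and CAPTURE is of course a macro, not an
FEXPR):

  ;; CAPTURE wraps the unevaluated form in a closure, so whoever forces it
  ;; later evaluates the form in the lexical environment where it appeared,
  ;; not in the caller's bindings.
  (defmacro capture (form)
    `#'(lambda () ,form))

  (defun evaluate-later (thunk)
    (funcall thunk))

  (let ((x 10))
    (evaluate-later (capture (* x x))))   ; => 100, X comes from the LET above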

Anton
From: Kent M Pitman
Subject: Re: History of lexical scoping in Scheme and other Lisps?
Date: 
Message-ID: <wk65hyj157.fsf@nhplace.com>
"Anton van Straaten" <·····@appsolutions.com> writes:

> Someone notable at ILC2003 - perhaps Dan Friedman, but I'm not sure -
> mentioned that the definition of FUNCTION in Lisp 1.5 got closures correct,
> i.e. using (FUNCTION (LAMBDA (...) ...)) worked and handled closure
> correctly, by a modern definition.  Does anyone know anything more about
> that?  True, false, caveats?

I don't know if I believe this.  It's possible, but...  Maclisp had 
FUNCTION and it worked for downward (i.e., dynamic extent) closures
(like you give to MAPCAR), but not upward closures.  Sorry if I didn't
make that very clear.  It was late when I was posting before...

> > Note, too, that this is all mixed up with the FEXPR problem.  Consider that
> > a great many Lisp operators were implemented as FEXPRs (functions that
> > didn't evaluate their arguments) and many of these just did EVAL
> > on the body forms, and got confused if the variables referred to by those
> > body forms had gone away.  It required special care to get this right,
> > or, put another way: it was semantically bankrupt in a lot of ways.
> 
> Although, its semantic bankruptcy was mostly related to the lack of proper
> closures.  An FEXPR-style feature implemented with closures - i.e. where an
> unevaluated expression passed as an argument is evaluated in the environment
> in which it was defined - is perfectly sound semantically, although it can
> have practical consequences for e.g. compilers.

Sort of.  What you say is not untrue, really, but the problem is that
FEXPRs were also for things that simply didn't evaluate their
arguments.  e.g., QUOTE itself was a FEXPR.  As a practical matter,
you'd have to have split the feature in half since there were
efficiency (speed and space) concerns in compiling something that only
"might" be used with such an eval AND there was a semantic concern if
the thing that was being quoted was not a form and never would be, so
that you didn't try to compile nonsensical quoted structure that was
not intended for evaluation.  Maybe this is what you meant by FEXPR-style,
but there are really two bundled concepts here that needed to be teased
apart in order to get any meaningful discussion.  It is, sort of, the case
that you could do as you say, and I vaguely recall that a FEXPR could be
a two-argument function and the second argument would indeed be a stack
pointer that could be given to EVAL.  I don't remember the details of that,
and I doubt it's documented anywhere...  but if you get a pdp10 emulator
and play with maclisp, maybe you can figure it out. (They are available.)
From: Anton van Straaten
Subject: Re: History of lexical scoping in Scheme and other Lisps?
Date: 
Message-ID: <q69qb.10451$9M3.4804@newsread2.news.atl.earthlink.net>
Kent M Pitman wrote:
> "Anton van Straaten" <·····@appsolutions.com> writes:
>
> > Someone notable at ILC2003 - perhaps Dan Friedman, but I'm not sure -
> > mentioned that the definition of FUNCTION in Lisp 1.5 got closures correct,
> > i.e. using (FUNCTION (LAMBDA (...) ...)) worked and handled closure
> > correctly, by a modern definition.  Does anyone know anything more about
> > that?  True, false, caveats?
>
> I don't know if I believe this.  It's possible, but...  Maclisp had
> FUNCTION and it worked for downward (i.e., dynamic extent) closures
> (like you give to MAPCAR), but not upward closures.

OK, who has a running Lisp 1.5 we can check this against?  :)  I agree it
seems likely that the upward case wasn't handled, from everything I've
heard.

> > Although, its semantic bankruptcy was mostly related to the lack of proper
> > closures.  An FEXPR-style feature implemented with closures - i.e. where an
> > unevaluated expression passed as an argument is evaluated in the environment
> > in which it was defined - is perfectly sound semantically, although it can
> > have practical consequences for e.g. compilers.
>
> Sort of.  What you say is not untrue, really, but the problem is that
> FEXPRs were also for things that simply didn't evaluate their
> arguments.  e.g., QUOTE itself was a FEXPR.  As a practical matter,
> you'd have to have split the feature in half since there were
> efficiency (speed and space) concerns in compiling something that only
> "might" be used with such an eval AND there was a semantic concern if
> the thing that was being quoted was not a form and never would be, so
> that you didn't try to compile nonsensical quoted structure that was
> not intended for evaluation.  Maybe this is what you meant by FEXPR-style,
> but there are really two bundled concepts here that needed to be teased
> apart in order to get any meaningful discussion.

Good point, I wasn't thinking of such "abuses" of FEXPRs.  But it sounds as
though all that would be needed would be to define QUOTE as a primitive, and
then use FEXPRs for the other "half" of the feature, which was really the
only half I was thinking of.

Anton
From: Thomas F. Burdick
Subject: Re: History of lexical scoping in Scheme and other Lisps?
Date: 
Message-ID: <xcvk76e1ytb.fsf@famine.OCF.Berkeley.EDU>
"Anton van Straaten" <·····@appsolutions.com> writes:

> Kent M Pitman wrote:
> > "Anton van Straaten" <·····@appsolutions.com> writes:
> >
> > > Someone notable at ILC2003 - perhaps Dan Friedman, but I'm not sure -
> > > mentioned that the definition of FUNCTION in Lisp 1.5 got closures correct,
> > > i.e. using (FUNCTION (LAMBDA (...) ...)) worked and handled closure
> > > correctly, by a modern definition.  Does anyone know anything more about
> > > that?  True, false, caveats?
> >
> > I don't know if I believe this.  It's possible, but...  Maclisp had
> > FUNCTION and it worked for downward (i.e., dynamic extent) closures
> > (like you give to MAPCAR), but not upward closures.
> 
> OK, who has a running Lisp 1.5 we can check this against?  :)  I agree it
> seems likely that the upward case wasn't handled, from everything I've
> heard.

I think I remember hearing the same thing, that Lisp 1.5 had upward
and downward closures, but by the time MACLISP came around they didn't
work anymore.

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: William D Clinger
Subject: Re: History of lexical scoping in Scheme and other Lisps?
Date: 
Message-ID: <fb74251e.0311051234.12e824ad@posting.google.com>
Anton van Straaten wrote:
> Just a clarification, from what I've gathered, Algol 60 did lexical scoping,
> but did not have first-class closures (afaict).

By default, Algol 60 passed arguments by name, not by value.
By-name semantics implies lexical scope, and it was usually
implemented via thunks, which were closures that take no
arguments.  These closures were not first-class, however,
because they could only be (implicitly) called; they could
not be stored in other variables or data structures, etc.

IIRC, Algol 60 also allowed procedures to be passed as
arguments.  Once again, these procedures were represented
as closures, but they were downward-only, and hence not
first-class.
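
In Lisp terms, a thunk is just a zero-argument closure that the callee calls
each time the by-name parameter is used; a rough sketch (DOUBLE-BY-NAME is a
made-up name):

  ;; By-name argument passing simulated with a thunk: the argument expression
  ;; is re-evaluated on every use inside the callee.
  (defun double-by-name (x-thunk)
    (+ (funcall x-thunk) (funcall x-thunk)))

  (let ((i 0))
    (double-by-name #'(lambda () (incf i))))   ; => 3, the thunk runs twice (1, then 2)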

Will
From: Peter Seibel
Subject: Re: History of lexical scoping in Scheme and other Lisps?
Date: 
Message-ID: <m3islybtj1.fsf@javamonkey.com>
Kent M Pitman <······@nospam.nhplace.com> writes:

> Scheme demonstrated that the Algol 60 idea was general

How did Scheme generalize from Algol 60? (I know how Scheme works but
I had a hard time finding a "Dummies Guide to Algol 60" or "Learn
Algol 60 in 6 Days".)

-Peter

-- 
Peter Seibel                                      ·····@javamonkey.com

         Lisp is the red pill. -- John Fraser, comp.lang.lisp
From: Kent M Pitman
Subject: Re: History of lexical scoping in Scheme and other Lisps?
Date: 
Message-ID: <wk1xsmmyyz.fsf@nhplace.com>
Peter Seibel <·····@javamonkey.com> writes:

> Kent M Pitman <······@nospam.nhplace.com> writes:
> 
> > Scheme demonstrated that the Algol 60 idea was general
> 
> How did Scheme generalize from Algol 60? (I know how Scheme works but
> I had a hard time finding a "Dummies Guide to Algol 60" or "Learn
> Algol 60 in 6 Days".)

It loses some of the classic typography, but someone has webbed the
Algol 60 report:

  http://www.masswerk.at/algol60/report.htm
From: Jens Axel Søgaard
Subject: Re: History of lexical scoping in Scheme and other Lisps?
Date: 
Message-ID: <3fa95644$0$69999$edfadb0f@dread12.news.tele.dk>
Peter Seibel wrote:
> Kent M Pitman <······@nospam.nhplace.com> writes:
>>Scheme demonstrated that the Algol 60 idea was general

> How did Scheme generalize from Algol 60? (I know how Scheme works but
> I had a hard time finding a "Dummies Guide to Algol 60" or "Learn
> Algol 60 in 6 Days".)

To make the circle complete, there is an Algol60 implemented in
Scheme:

     <http://www.cs.utah.edu/plt/develop/>

-- 
Jens Axel Søgaard