From: Ralph Richard Cook
Subject: Smug scheme weenies?
Date: 
Message-ID: <41dca148.1124677@newsgroups.bellsouth.net>
I'm just starting "Essentials of Programming Languages, Second
Edition" and for the nth time I'm reading that Scheme "combines the
uniform syntax and data abstraction capabilities of Lisp with the
lexical scoping and block structure of Algol". That's nice, but they
act like Scheme is the _only_ Lisp that does this. Isn't Common Lisp
also capable of this, with lexical scoping for variables and labels
and flet for functions?

From: Cameron MacKinnon
Subject: Re: Smug scheme weenies?
Date: 
Message-ID: <xY-dncn48djbKkHcRVn-og@golden.net>
Ralph Richard Cook wrote:
> I'm just starting "Essentials of Programming Languages, Second
> Edition" and for the nth time I'm reading that Scheme "combines the
> uniform syntax and data abstraction capabilities of Lisp with the
> lexical scoping and block structure of Algol". That's nice, but they
> act like Scheme is the _only_ Lisp that does this. Isn't Common Lisp
> also capable of this, with lexical scoping for variables and labels
> and flet for functions?

Scheme WAS the innovator. Before Scheme provided a counterexample, Lisp 
implementers believed that lexical addressing couldn't be implemented 
efficiently. "The primary influences on Common Lisp were Lisp Machine 
Lisp, MacLisp, NIL, S-1 Lisp, Spice Lisp, and Scheme." -- CLHS 1.1.2
From: Ralph Richard Cook
Subject: Re: Smug scheme weenies?
Date: 
Message-ID: <41e0b019.4918392@newsgroups.bellsouth.net>
Cameron MacKinnon <··········@clearspot.net> wrote:

>Ralph Richard Cook wrote:
>> I'm just starting "Essentials of Programming Languages, Second
>> Edition" and for the nth time I'm reading that Scheme "combines the
>> uniform syntax and data abstraction capabilities of Lisp with the
>> lexical scoping and block structure of Algol". That's nice, but they
>> act like Scheme is the _only_ Lisp that does this. Isn't Common Lisp
>> also capable of this, with lexical scoping for variables and labels
>> and flet for functions?
>
>Scheme WAS the innovator. Before Scheme provided a counterexample, Lisp 
>implementers believed that lexical addressing couldn't be implemented 
>efficiently. "The primary influences on Common Lisp were Lisp Machine 
>Lisp, MacLisp, NIL, S-1 Lisp, Spice Lisp, and Scheme." -- CLHS 1.1.2

Oh, I know that Scheme did it first, but that was back in 1975, and
hasn't Common Lisp been around for at least 10 years? My point is that
the Scheme people use the "lexical scope, block structure" spiel like
it's _still_ the only Lisp that does it.
From: Cameron MacKinnon
Subject: Re: Smug scheme weenies?
Date: 
Message-ID: <Z_6dnSuDgdFhVkHcRVn-qA@golden.net>
Ralph Richard Cook wrote:
> Oh, I know that Scheme did it first, but that was back in 1975, and
> hasn't Common Lisp been around for at least 10 years? My point is that
> the Scheme people use the "lexical scope, block structure" spiel like
> it's _still_ the only Lisp that does it.

...and compare their language to Algol, which hasn't had mindshare in 
most of that text's readers' lifetimes. Maybe it's just a stale quote.

In Scheme, variables are ALWAYS lexically scoped, whereas in Common 
Lisp, they can be declared special. For the people for whom this 
distinction matters, it matters a lot.
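
To make the distinction concrete (a minimal CL sketch):

(defvar *x* 1)          ; DEFVAR proclaims *X* special (dynamic)

(defun show-x () *x*)   ; a free reference, resolved dynamically

(let ((*x* 2))          ; rebinds the dynamic *X* for the extent
  (show-x))             ; of the LET => 2

(let ((y 2))            ; Y is lexical: the closure captures this
  (lambda () y))        ; binding; no dynamic lookup is involved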
From: Ray Dillinger
Subject: Scoping rules
Date: 
Message-ID: <gV_Dd.801$m31.9518@typhoon.sonic.net>
Cameron MacKinnon wrote:

> In Scheme, variables are ALWAYS lexically scoped, whereas in Common 
> Lisp, they can be declared special. For the people for whom this 
> distinction matters, it matters a lot.

I've been looking at something else...

Instead of dividing it up by variable, I'm thinking it might be
the right thing to divide it up by function.  Thus, you'd have
some functions ("lambda functions") defined with lexical scope,
and some ("mu functions") defined with dynamic scope.

This would move the dynamic/lexical distinction to a question of
how a particular function's interface works, rather than keeping
separate sets of variables.

It seems to work, at least in theory and in toy programs (I've
built a homebrew LISP-1 that has both kinds of functions).  I'm
using a partial with-gensyms form to avoid unintentional dynamic
capture though, and that seems like a wart.

				Bear
From: Ron Garret
Subject: Re: Scoping rules
Date: 
Message-ID: <rNOSPAMon-5F2B9C.22151608012005@news.gha.chartermi.net>
In article <··················@typhoon.sonic.net>,
 Ray Dillinger <····@sonic.net> wrote:

> Cameron MacKinnon wrote:
> 
> > In Scheme, variables are ALWAYS lexically scoped, whereas in Common 
> > Lisp, they can be declared special. For the people for whom this 
> > distinction matters, it matters a lot.
> 
> I've been looking at something else...
> 
> Instead of dividing it up by variable, I'm thinking it might be
> the right thing to divide it up by function.  Thus, you'd have
> some functions ("lambda functions") defined with lexical scope,
> and some ("mu functions") defined with dynamic scope.

The problem isn't the binding construct, it's references.  You need a 
way to decide whether you're accessing the lexical or dynamic binding 
when both are in scope at the same time, e.g.:

(lambda (x)
  (mu (x)
    (list x (symbol-value 'x))))

Since you need that anyway, you don't really need mu: you can use lambda 
to bind dynamic variables whose names are whatever you need to write to 
access a dynamic binding, e.g. lists whose car is the symbol 
SYMBOL-VALUE.  You can define a reader macro (personally I like #\$, 
though #\* would be more in keeping with CL tradition) that expands into 
SYMBOL-VALUE.  The above example could then be rendered as something 
like:

(lambda (x)
  (lambda ($x)  ; or (lambda (*x)  or (lambda (*x*)
     (list x $x)))
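
The reader macro half is easy to sketch in standard CL (the binding
half is the part that needs language support, so this is just the
access side):

(set-macro-character #\$
  (lambda (stream char)
    (declare (ignore char))
    ;; $FOO reads as (SYMBOL-VALUE 'FOO)
    (list 'symbol-value (list 'quote (read stream t nil t)))))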

Personally I have always thought this is the Right Answer.  In CL a 
typographical convention for dynamic variables is ubiquitous.  Any 
convention that is useful enough to be ubiquitous is useful enough to be 
built into the language.  (Tomorrow I go try to convince the Python 
folks to make SELF a keyword.)

IMHO of course.

rg
From: Ray Dillinger
Subject: Re: Scoping rules
Date: 
Message-ID: <cZDEd.1279$m31.13415@typhoon.sonic.net>
Ron Garret wrote:
> In article <··················@typhoon.sonic.net>,
>  Ray Dillinger <····@sonic.net> wrote:
> 
> 
>>Cameron MacKinnon wrote:
>>
>>
>>>In Scheme, variables are ALWAYS lexically scoped, whereas in Common 
>>>Lisp, they can be declared special. For the people for whom this 
>>>distinction matters, it matters a lot.
>>
>>I've been looking at something else...
>>
>>Instead of dividing it up by variable, I'm thinking it might be
>>the right thing to divide it up by function.  Thus, you'd have
>>some functions ("lambda functions") defined with lexical scope,
>>and some ("mu functions") defined with dynamic scope.
> 
> 
> The problem isn't the binding construct, it's references.  You need a 
> way to decide whether you're accessing the lexical or dynamic binding 
> when both are in scope at the same time, e.g.:
> 
> (lambda (x)
>   (mu (x)
>     (list x (symbol-value 'x))))


Wait....  I think maybe I didn't explain correctly how this
works. There's no reference here to a free variable that
would need to be resolved, so the discipline for resolving
free variables doesn't come into it.

Where you have something like

(define foo
    (lambda (x)

     (define bar
        (mu (y)
          (+ x y))) ;<== which x is this?

     (+ x (let ((x 22))
            (bar 1)))))


(foo 10) => 33

x is a free variable in the scope of function bar.
Because bar is a mu function, x is resolved dynamically
(referring to the x in the let expression at the call site)
rather than lexically (referring to x, the argument of foo).

There's no need to distinguish lexical and dynamic bindings,
because at any point in the program, only one set of bindings
is visible; the "scope parent" of a given lexical area is
its lexical parent if the area is defined by a lambda binding,
and its dynamic parent (call site) if it's defined by a mu
binding. But there's only one chain of parent, grandparent,
great-grandparent, etc., to check for bindings when a free
variable is encountered.

Or, putting it another way, if you have mu, you don't need
to have a way to distinguish dynamic and lexical variables.
But you may need a partial with-gensyms guard in most mu
functions to specify hygienic renaming of all but a few
selected variables.

IMHO, of course; this is still mostly an experiment.

It's interesting, but this is just as much the mathematical
lambda function as the one we call lambda; the pure lambda-
calculus lambda function is silent about any method of
resolving free variables in its scope.

			Bear
From: ······@earthlink.net
Subject: Re: Smug scheme weenies?
Date: 
Message-ID: <1105217131.129602.28190@c13g2000cwb.googlegroups.com>
Cameron MacKinnon wrote:
> Scheme WAS the innovator. Before Scheme provided a counterexample,
> Lisp implementers believed that lexical addressing couldn't be
> implemented efficiently. "The primary influences on Common Lisp were
> Lisp Machine Lisp, MacLisp, NIL, S-1 Lisp, Spice Lisp, and Scheme."
> -- CLHS 1.1.2

The quoted sentence doesn't support the "believed ... couldn't be
implemented" claim, a claim which I'm pretty sure is false.

Compiled MacLisp had lexical variables by default.  I wouldn't be
surprised if one of {NIL, S-1, Spice} also had lexical variables.  And,
since many other languages at the time had efficiently implemented
lexical variables, it's hard to believe that Lisp implementors at the
time thought that they couldn't do likewise.
From: Peter Seibel
Subject: Re: Smug scheme weenies?
Date: 
Message-ID: <m3fz1blebw.fsf@javamonkey.com>
······@earthlink.net writes:

> Cameron MacKinnon wrote:
>> Scheme WAS the innovator. Before Scheme provided a counterexample,
>> Lisp implementers believed that lexical addressing couldn't be
>> implemented efficiently. "The primary influences on Common Lisp were
>> Lisp Machine Lisp, MacLisp, NIL, S-1 Lisp, Spice Lisp, and Scheme."
>> -- CLHS 1.1.2
>
> The quoted sentence doesn't support the "believed ... couldn't be
> implemented" claim, a claim which I'm pretty sure is false.
>
> Compiled MacLisp had lexical variables by default. I wouldn't be
> surprised if one of {NIL, S-1, Spice} also had lexical variables.
> And, since many other languages at the time had efficiently
> implemented lexical variables, it's hard to believe that Lisp
> implementors at the time thought that they couldn't do likewise.

I think--though I may be wrong about this--the belief was that you
couldn't implement them efficiently *in an interpreter*. One of the
things that Common Lisp cleaned up compared to many earlier Lisps was
that it made whether a variable was lexically or dynamically scoped
dependent on how it was defined or declared, not on whether or not the
code was compiled. But there are folks on this list who can, no
doubt, correct me if I've got that wrong.

-Peter

-- 
Peter Seibel                                      ·····@javamonkey.com

         Lisp is the red pill. -- John Fraser, comp.lang.lisp
From: ······@earthlink.net
Subject: Re: Smug scheme weenies?
Date: 
Message-ID: <1105254454.254811.14880@c13g2000cwb.googlegroups.com>
> I think--though I may be wrong about this--the belief was that you
> couldn't implement them efficiently *in an interpreter*.

I didn't know all of the folks involved, but I can't imagine the
ones I did know believing that.

The difference between lexical and dynamic lookup is just how the list
of enclosing scopes is defined.  It's no harder to maintain a list of
static scopes and no harder to search it.  (Some of the tricks to
eliminate the "if not found, search enclosing scope" iteration for
dynamic scope don't work directly, but there are static scope tricks
too.)
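
To make that concrete: in a toy interpreter both disciplines can share
the very same lookup, and only the environment handed to the body
differs.  (A sketch; names like EVAL-TOY are mine, purely illustrative.)

(defstruct closure param body env)   ; ENV used only by lexical apply

(defun lookup (var env)
  ;; the identical assoc walk, under either discipline
  (cdr (or (assoc var env) (error "unbound: ~S" var))))

(defun eval-toy (form env)
  (cond ((symbolp form) (lookup form env))
        ((atom form) form)
        (t (error "sketch handles only variables and literals"))))

(defun apply-lexical (c arg)
  ;; body sees the environment captured when the closure was made
  (eval-toy (closure-body c)
            (acons (closure-param c) arg (closure-env c))))

(defun apply-dynamic (c arg caller-env)
  ;; body sees the environment of the call site
  (eval-toy (closure-body c)
            (acons (closure-param c) arg caller-env)))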

Yes, many early lisps didn't make it possible to say "use lexical
scope" and switching the default between compiled and interpreted code
(as in MacLisp) was "not nice", but those are very different from "hard
to implement".
From: Peter Seibel
Subject: Re: Smug scheme weenies?
Date: 
Message-ID: <m33bxbkqlo.fsf@javamonkey.com>
······@earthlink.net writes:

>> I think--though I may be wrong about this--the belief was that you
>> couldn't implement them efficiently *in an interpreter*.
>
> I didn't know all of the folks involved, but I can't imagine the
> ones I did know believing that.

Well, here's one of the sources from which I picked up this idea:

  <http://www.ai.mit.edu/~gregs/ll1-discuss-archive-html/msg00650.html>

That's Olin Shivers talking about the history of T (the Lisp dialect),
here's a relevant passage:

  Some context: Common Lisp did not exist (the effort was just getting
  underway). MIT Scheme did not exist. Scheme was a couple of AI Lab
  tech reports and a master's thesis. We're talking the tiniest seed
  crystal imaginable, here. There was immense experience in the lisp
  community on optimising compiled implementations of
  dynamically-scoped languages -- this, to such an extent, that it was
  a widely held opinion at the time that "lexical scope is
  interesting, *theoretically*, but it's inefficient to implement;
  dynamic scope is the fast choice." I'm not kidding. To name two
  examples, I heard this, on different occasions, from Richard
  Stallman (designer & implementor of emacs lisp) and Richard Fateman
  (prof. at Berkeley, and the principal force behind franz lisp,
  undoubtedly the most important lisp implementation built in the
  early Vax era -- important because it was delivered and it worked).
  I asked RMS when he was implementing emacs lisp why it was
  dynamically scoped and his exact reply was that lexical scope was
  too inefficient. So my point here is that even to people who were
  experts in the area of lisp implementation, in 1982 (and for years
  afterward, actually), Scheme was a radical, not-at-all-accepted
  notion.

In the next message in the thread[1], Dan Weinreb commented on this
passage, saying:

  Having been there then, I can assure you that what they were
  thinking was as follows. In dynamic scoping, you can implement "get
  the value of symbol A" by "read the contents of A's 'value cell'".
  But in lexical scoping, you have to implement it as "do a linear
  search up the A-list, e.g. using the assoc function". This is all
  based on the general body of knowledge at the time of how to do a
  Lisp interpreter.

-Peter

[1] <http://www.ai.mit.edu/~gregs/ll1-discuss-archive-html/msg00676.html>


-- 
Peter Seibel                                      ·····@javamonkey.com

         Lisp is the red pill. -- John Fraser, comp.lang.lisp
From: Carl Shapiro
Subject: Re: Smug scheme weenies?
Date: 
Message-ID: <ouyfz1afoqq.fsf@panix3.panix.com>
Peter Seibel <·····@javamonkey.com> writes:

> That's Olin Shivers talking about the history of T (the Lisp dialect),
> here's a relevant passage:

...

>   I asked RMS when he was implementing emacs lisp why it was
>   dynamically scoped and his exact reply was that lexical scope was
>   too inefficient. 
                     
I asked Richard Stallman the same question a few years ago and
received a slightly different answer.  His explanation to me was that
a lexically-scoped interpreter would run too slowly on the /small
microcomputers/ that GNU Emacs was to run on.  The PDP-10, the sole
host to Maclisp, is not such a machine.  The use of dynamic scope in
Emacs was not a reflection on the Lisp state-of-the-art.

>   So my point here is that even to people who were
>   experts in the area of lisp implementation, in 1982 (and for years
>   afterward, actually), Scheme was a radical, not-at-all-accepted
>   notion.

Making comparisons to Maclisp in this regard is quite silly.  Maclisp
was not a very self-consistent system and its development was shaped
by an incredible amount of environmental pressure.  Its intellectual
successors such as VAX NIL and Zetalisp, which had the privilege of
being complete rewrites, used lexical scope in the interpreter and in
the compiler.  If you believe this (rather, uhm... romantic) tale,
both of these systems were in use before the T project existed.

When Lisp family trees are drawn with the obligatory edge connecting
Scheme to Common Lisp, I sincerely doubt that this is done to reflect
the contribution of lexical scope (it depicts the origins of lexical
closures).  If we believe John McCarthy's "History of Lisp" (1979), it
becomes clear that lexical scope was always desired and the presence
of dynamic scoping was considered a bug.

http://www-formal.stanford.edu/jmc/history/lisp/node4.html#SECTION00040000000000000000
From: Christopher C. Stacy
Subject: Re: Smug scheme weenies?
Date: 
Message-ID: <u8y729e9g.fsf@news.dtpq.com>
>>>>> On 09 Jan 2005 13:20:45 -0500, Carl Shapiro ("Carl") writes:
 Carl> The PDP-10, the sole host to Maclisp, is not such a machine.

MACLISP also ran on Multics, but of course that was
also a large (larger, even, than a PDP-10) machine.
From: Carl Shapiro
Subject: Re: Smug scheme weenies?
Date: 
Message-ID: <ouyllb2q88l.fsf@panix3.panix.com>
······@news.dtpq.com (Christopher C. Stacy) writes:

> >>>>> On 09 Jan 2005 13:20:45 -0500, Carl Shapiro ("Carl") writes:
>  Carl> The PDP-10, the sole host to Maclisp, is not such a machine.
> 
> MACLISP also ran on Multics, but of course that was
> also a large (larger, even, than a PDP-10) machine.

Nice catch!  That slipped my mind.
From: Cameron MacKinnon
Subject: Re: Smug scheme weenies?
Date: 
Message-ID: <DaadnT9EN99H1n_cRVn-ig@golden.net>
Carl Shapiro wrote:
> When Lisp family trees are drawn with the obligatory edge connecting
> Scheme to Common Lisp, I sincerely doubt that this is done to reflect
> the contribution of lexical scope (it depicts the origins of lexical
> closures).  If we believe John McCarthy's "History of Lisp" (1979), it
> becomes clear that lexical scope was always desired and the presence
> of dynamic scoping was considered a bug.

"The major contributions of Scheme were lexical scoping, lexical 
closures, first-class continuations, and simplified syntax (no 
separation of value cells and function cells). Some of these 
contributions made a large impact on the design of Common Lisp."
	-- CLHS 1.1.2

By elimination, the spec says that CL's lexical scoping came from 
Scheme. Which is not to say that you're wrong and the committee was 
right; I only know what I've read from the historical record.
From: ······@earthlink.net
Subject: Re: Smug scheme weenies?
Date: 
Message-ID: <1105456584.312777.40540@f14g2000cwb.googlegroups.com>
> "The major contributions of Scheme were lexical scoping, lexical
closures, first-class continuations, and simplified syntax (no
separation of value cells and function cells). Some of these
contributions made a large impact on the design of Common Lisp."
-- CLHS 1.1.2

> By elimination, the spec says that CL's lexical scoping came from
Scheme. Which is not to say that you're wrong and the committee was
right; I only know what I've read from the historical record.

That's not the "historical record".  That's a rationale section of a
language standards doc.

Moreover, you're not interpreting it accurately.

It doesn't say that Scheme was the first lisp to offer lexical scoping.
(It wasn't - the standard mentions at least one other Lisp that did.)
And it doesn't say that no one in the lisp community knew how to
efficiently implement lexical scoping before Scheme.  (The folks who
wrote the MacLisp compiler, among others, knew how to efficiently
implement lexical scoping.)

Note that "rationale" sections aren't likely to be accurate history.
It doesn't matter if they're correct and there are bigger fish to fry
and palms to grease.  (What is the purpose of such a section?)
Scheme's lexical scoping was much cleaner and influential, especially
at the time, so it's reasonable to give it some credit.  That's all
that sentence does.
From: Ron Garret
Subject: Re: Smug scheme weenies?
Date: 
Message-ID: <rNOSPAMon-A42E0E.08114509012005@news.gha.chartermi.net>
In article <·······················@c13g2000cwb.googlegroups.com>,
 ······@earthlink.net wrote:

> The difference between lexical and dynamic lookup is just how the list
> of enclosing scopes is defined.

There's one other important difference: lexical lookup can be done at 
compile time.  Dynamic lookup must be done at run time.  (And for anyone 
tempted to respond that dynamic lookup is a simple matter of getting the 
value in a symbol's value cell the response is: you've overlooked 
threads.)

rg
From: Carl Shapiro
Subject: Re: Smug scheme weenies?
Date: 
Message-ID: <ouybrbyfof2.fsf@panix3.panix.com>
Ron Garret <·········@flownet.com> writes:

> There's one other important difference: lexical lookup can be done at
> compile time.  Dynamic lookup must be done at run time.

This assertion is false.

> (And for anyone
> tempted to respond that dynamic lookup is a simple matter of getting the
> value in a symbol's value cell the response is: you've overlooked
> threads.)

This all depends on where you put the symbol value cells.  If symbol
value cells always exist, for example, at known offsets into a thread
local data structure, you can compile-away dynamic variable lookups in
a half-dozen instructions.
From: Ron Garret
Subject: Re: Smug scheme weenies?
Date: 
Message-ID: <rNOSPAMon-A29590.15585309012005@news.gha.chartermi.net>
In article <···············@panix3.panix.com>,
 Carl Shapiro <·············@panix.com> wrote:

> Ron Garret <·········@flownet.com> writes:
> 
> > There's one other important difference: lexical lookup can be done at
> > compile time.  Dynamic lookup must be done at run time.
>
> This assertion is false.

Which one?  I made two assertions.  But presuming you mean the second 
one, it's true by definition.  You cannot know which variables are 
dynamically bound until run time.  That's what dynamic binding means.

> > (And for anyone
> > tempted to respond that dynamic lookup is a simple matter of getting the
> > value in a symbol's value cell the response is: you've overlooked
> > threads.)
> 
> This all depends on where you put the symbol value cells.  If symbol
> value cells always exist, for example, at known offsets into a thread
> local data structure, you can compile-away dynamic variable lookups in
> a half-dozen instructions.

That would work if there were no such thing as global bindings.  But 
there is, so it doesn't.

One point that might be causing confusion: when I say dynamic lookup 
must be done at runtime I do not mean that you have to call assoc on a 
binding list, only that you have to determine at run time whether a free 
reference refers to a dynamic binding or a global binding.

rg
From: Carl Shapiro
Subject: Re: Smug scheme weenies?
Date: 
Message-ID: <ouypt0eq8ji.fsf@panix3.panix.com>
Ron Garret <·········@flownet.com> writes:

> In article <···············@panix3.panix.com>,
>  Carl Shapiro <·············@panix.com> wrote:
> 
> > Ron Garret <·········@flownet.com> writes:
> > 
> > > There's one other important difference: lexical lookup can be done at
> > > compile time.  Dynamic lookup must be done at run time.
> >
> > This assertion is false.
> 
> Which one?  I made two assertions.  But presuming you mean the second 
> one, it's true by definition.  You cannot know which variables are 
> dynamically bound until run time.  That's what dynamic binding means.

Your one important difference.

> > > (And for anyone
> > > tempted to respond that dynamic lookup is a simple matter of getting the
> > > value in a symbol's value cell the response is: you've overlooked
> > > threads.)
> > 
> > This all depends on where you put the symbol value cells.  If symbol
> > value cells always exist, for example, at known offsets into a thread
> > local data structure, you can compile-away dynamic variable lookups in
> > a half-dozen instructions.
> 
> That would work if there were no such thing as global bindings.  But 
> there is, so it doesn't.

As a matter of fact, it does work.  This is merely an elaboration of
shallow binding.  A variant also exists in Warren's Abstract Machine.

> One point that might be causing confusion: when I say dynamic lookup 
> must be done at runtime I do not mean that you have to call assoc on a 
> binding list, only that you have to determine at run time whether a free 
> reference refers to a dynamic binding or a global binding.

Fortunately, this doesn't make a difference.  If, during a variable
access, a determination is made at runtime as to whether the value of
a variable is stored in the dynamic or the global environment, that is
an implementation detail.  The decision does not have to be made, and an
implementation can (and several do) compile-away a dynamic or global
access in a style similar to a lexical variable access.  This does not
break the variable semantics users expect.
From: Ron Garret
Subject: Re: Smug scheme weenies?
Date: 
Message-ID: <rNOSPAMon-7198AC.21174909012005@news.gha.chartermi.net>
In article <···············@panix3.panix.com>,
 Carl Shapiro <·············@panix.com> wrote:

> This does not break the variable semantics users expect.

It does if users expect semantics that conform to the standard in a 
multithreaded implementation.  Whether users really expect this or not I 
cannot say.

rg
From: Carl Shapiro
Subject: Re: Smug scheme weenies?
Date: 
Message-ID: <ouyekgtg4bt.fsf@panix3.panix.com>
Ron Garret <·········@flownet.com> writes:

> In article <···············@panix3.panix.com>,
>  Carl Shapiro <·············@panix.com> wrote:
> 
> > This does not break the variable semantics users expect.
> 
> It does if users expect semantics that conform to the standard in a 
> multithreaded implementation.  Whether users really expect this or not I 
> cannot say.

Users expect the behavior described in section 3.1.1.2, all of which
is possible--including the behavior described in the last paragraph
which you are particularly interested in--using the strategy I have
described.  I am sorry you think this is so difficult to achieve!
From: Ron Garret
Subject: Re: Smug scheme weenies?
Date: 
Message-ID: <rNOSPAMon-5764AC.09355810012005@news.gha.chartermi.net>
In article <···············@panix3.panix.com>,
 Carl Shapiro <·············@panix.com> wrote:

> Ron Garret <·········@flownet.com> writes:
> 
> > In article <···············@panix3.panix.com>,
> >  Carl Shapiro <·············@panix.com> wrote:
> > 
> > > This does not break the variable semantics users expect.
> > 
> > It does if users expect semantics that conform to the standard in a 
> > multithreaded implementation.  Whether users really expect this or not I 
> > cannot say.
> 
> Users expect the behavior described in section 3.1.1.2, all of which
> is possible--including the behavior described in the last paragraph
> which you are particularly interested in--using the strategy I have
> described.  I am sorry you think this is so difficult to achieve!

I never said it was difficult to achieve.  What I said was:

> Dynamic lookup must be done at run time.  (And for anyone 
> tempted to respond that dynamic lookup is a simple matter of getting the 
> value in a symbol's value cell the response is: you've overlooked 
> threads.)

And I stand by that.  If you wish to dispute it would you please 
describe how you would compile (lambda (y) (setq x y)) in a way that 
works correctly in a multithreaded environment and involves no runtime 
decision-making?

BTW, I'm not entirely certain that we're actually disagreeing here.  You 
wrote:

> If symbol
> value cells always exist, for example, at known offsets into a thread
> local data structure, you can compile-away dynamic variable lookups in
> a half-dozen instructions.

Which strikes me as plausible (though rather resource-intensive, as you 
would have to pre-allocate a binding for every symbol in every thread, 
or at least every symbol that is potentially dynamically bound).  But 
you still have to determine at run time whether a variable is 
dynamically bound in order to tell whether to look up its value in the 
thread-local bindings table or the global bindings.  Yes, this could be 
done in half a dozen instructions (perhaps even less) but it still has 
to be done at run time.

rg
From: Carl Shapiro
Subject: Re: Smug scheme weenies?
Date: 
Message-ID: <ouyfz18jwbo.fsf@panix3.panix.com>
Ron Garret <·········@flownet.com> writes:

> And I stand by that.  If you wish to dispute it would you please 
> describe how you would compile (lambda (y) (setq x y)) in a way that 
> works correctly in a multithreaded environment and involves no runtime 
> decision-making?

Each thread maintains a vector containing the addresses of variables
which are known to occur "free".  By default, each slot in this vector
points to the corresponding global value of its variable.  When
variables are dynamically bound, this address is saved and replaced
with the address of the dynamic binding.  No runtime decision making
occurs because the correct value, dynamic or global, is always
accessed through the same location.  In your example, the SETQ would
store the value of Y to the address found at X's offset into this
environment structure.  It may update a dynamic value if X has been
rebound; otherwise, the global value is updated.
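
In portable CL the idea can be modeled with one-element vectors
standing in for addresses (a sketch only, with names of my own; a real
implementation works with raw addresses and indexed slots):

(defvar *global-cells* (make-hash-table))      ; symbol -> global cell

(defun global-cell (sym)
  (or (gethash sym *global-cells*)
      (setf (gethash sym *global-cells*) (vector nil))))

(defstruct thread-env
  (cells (make-hash-table)))                   ; symbol -> current cell

(defun cell (env sym)
  ;; by default the slot points at the global cell
  (or (gethash sym (thread-env-cells env))
      (setf (gethash sym (thread-env-cells env)) (global-cell sym))))

(defun dyn-ref (env sym) (svref (cell env sym) 0))

(defun dyn-set (env sym val)
  ;; always one indirection: hits the dynamic cell if rebound,
  ;; the global cell otherwise -- no runtime decision
  (setf (svref (cell env sym) 0) val))

(defmacro with-dyn-binding ((env sym val) &body body)
  ;; save the current cell, install a fresh one, restore on exit
  (let ((e (gensym)) (s (gensym)) (old (gensym)))
    `(let* ((,e ,env) (,s ,sym) (,old (cell ,e ,s)))
       (setf (gethash ,s (thread-env-cells ,e)) (vector ,val))
       (unwind-protect (progn ,@body)
         (setf (gethash ,s (thread-env-cells ,e)) ,old)))))

;; e.g. (let ((env (make-thread-env)))
;;        (dyn-set env 'x 1)             ; writes the global cell
;;        (with-dyn-binding (env 'x 2)
;;          (dyn-ref env 'x)))           ; => 2, same DYN-REF either way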
From: Ron Garret
Subject: Re: Smug scheme weenies?
Date: 
Message-ID: <rNOSPAMon-BD9998.08565811012005@news.gha.chartermi.net>
In article <···············@panix3.panix.com>,
 Carl Shapiro <·············@panix.com> wrote:

> Ron Garret <·········@flownet.com> writes:
> 
> > And I stand by that.  If you wish to dispute it would you please 
> > describe how you would compile (lambda (y) (setq x y)) in a way that 
> > works correctly in a multithreaded environment and involves no runtime 
> > decision-making?
> 
> Each thread maintains a vector containing the addresses of variables
> which are known to occur "free".  By default, each slot in this vector
> points to the corresponding global value of its variable.  When
> variables are dynamically bound, this address is saved and replaced
> with the address of the dynamic binding.  No runtime decision making
> occurs because the correct value, dynamic or global, is always
> accessed through the same location.  In your example, the SETQ would
> store the value of Y to the address found at X's offset into this
> environment structure.  It may update a dynamic value if X has been
> rebound; otherwise, the global value is updated.

Well, that's devilishly clever.  It's worth noting that this solution 
would probably be slower in practice than one which involved run-time 
decision making (because you always have to touch memory twice), and it 
also suffers from the same problem as that solution, namely, that you 
have to do a global analysis, which is awkward in CL.  Nonetheless, I 
concede the point.

rg
From: Rob Warnock
Subject: Re: Smug scheme weenies?
Date: 
Message-ID: <eIOdnf8W0qxyOXncRVn-hg@speakeasy.net>
Ron Garret  <·········@flownet.com> wrote:
+---------------
| > Each thread maintains a vector containing the addresses of variables
| > which are known to occur "free".  By default, each slot in this vector
| > points to the corresponding global value of its variable.  When
| > variables are dynamically bound, this address is saved and replaced
| > with the address of the dynamic binding.  No runtime decision making
| > occurs because the correct value, dynamic or global, is always
| > accessed through the same location.
+---------------

IIRC hallway conversations at ILC 2003, both Allegro and Corman Lisp
use exactly this technique (or minor variations thereof).

+---------------
| > store the value of Y to the address found at X's offset into this
| > environment structure.  It may update a dynamic value if X has been
| > rebound; otherwise, the global value is updated.
| 
| Well, that's devilishly clever.  It's worth noting that this solution 
| would probably be slower in practice than one which involved run-time 
| decision making (because you always have to touch memory twice)...
+---------------

Yes, but the "extra" memory you touch [one pointer redirection]
is almost always cached if you've touched that variable recently.

And if you're accessing the variable multiple times in the
same thread-safe region of code [e.g., for read-modify-write, say],
you only pay the "extra" accesses once. The generated code can locally
cache the final address for all the accesses.

Note that this makes thread-switching *very* fast, since all you have
to do is change a global pointer to the current-thread global/special-
variable-vector to point to the new thread's vector.

+---------------
| ...and it also suffers from the same problem as that solution,
| namely, that you have to do a global analysis...
+---------------

Not really. Whether a variable is being used as a global or a special
is lexically apparent, so the usual lexical-versus-global compilation
choice [which you have to make anyway during compilation/evaluation] can
be made in isolation when compiling [or preprocessing for interpretation]
the access. And you can assign global/special-variable vector slots
lazily as you find that you need them during compilation/evaluation,
caching the slot number in the symbol's internal data.
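
The lazy assignment can be as simple as (a sketch; the plist stands
in for "the symbol's internal data"):

(defvar *next-dynvar-slot* 0)

(defun dynvar-slot (sym)
  ;; first encounter allocates the next index and caches it
  (or (get sym 'dynvar-slot)
      (setf (get sym 'dynvar-slot)
            (prog1 *next-dynvar-slot* (incf *next-dynvar-slot*)))))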

The main inefficiency that I see [and it's probably minor in most
applications] is that if a global/special is bound [and notice
that I say "bound", rather than "defined" -- it matters!!] in *any*
thread, then a slot must be allocated for it in *all* threads
[but only allocated, not set].


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Ron Garret
Subject: Re: Smug scheme weenies?
Date: 
Message-ID: <rNOSPAMon-ED511C.21393711012005@news.gha.chartermi.net>
In article <······················@speakeasy.net>,
 ····@rpw3.org (Rob Warnock) wrote:

> Ron Garret  <·········@flownet.com> wrote:
> +---------------
> | > Each thread maintains a vector containing the addresses of variables
> | > which are known to occur "free".  By default, each slot in this vector
> | > points to the corresponding global value of its variable.  When
> | > variables are dynamically bound, this address is saved and replaced
> | > with the address of the dynamic binding.  No runtime decision making
> | > occurs because the correct value, dynamic or global, is always
> | > accessed through the same location.
> +---------------
> 
> IIRC hallway conversations at ILC 2003, both Allegro and Corman Lisp
> use exactly this technique (or minor variations thereof).

Alas, I was not able to attend ILC2003.  :-(

If I do a DEFVAR while there are already threads running, what happens 
to the dynvar table in the existing threads?

> +---------------
> | > store the value of Y to the address found at X's offset into this
> | > environment structure.  It may update a dynamic value if X has been
> | > rebound; otherwise, the global value is updated.
> | 
> | Well, that's devilishly clever.  It's worth noting that this solution 
> | would probably be slower in practice than one which involved run-time 
> | decision making (because you always have to touch memory twice)...
> +---------------
> 
> Yes, but the "extra" memory you touch [one pointer redirection]
> is almost always cached if you've touched that variable recently.

Not quite.  It's cached if you've touched that variable IN THAT THREAD 
recently.

> And if you're accessing the variable multiple times in the
> same thread-safe region of code [e.g., for read-modify-write, say],
> you only pay the "extra" accesses once. The generated code can locally
> cache the final address for all the accesses.

Good point.  It seems to me that it doesn't even have to be thread-safe, 
just known that the variable can't be rebound in that thread (e.g. in a 
local loop where all calls are known).  But are any compilers out there 
actually that clever?

> Note that this makes thread-switching *very* fast, since all you have
> to do is change a global pointer to the current-thread global/special-
> variable-vector to point to the new thread's vector.

Not quite.  That's all you have to do if you never touch a dynamic 
variable (but if you never touch a dynamic variable this is all moot 
anyway).  If you do touch a dynamic variable then you have to pay to get 
(at least part of) that thread's dynamic variable indirection vector 
into cache.  How much this really costs you in practice is, of course, 
an empirical question.

Another hidden cost of this method, it seems to me, is that it can 
consume a lot of memory if you have a lot of dynamic variables and a lot 
of threads even if you don't actually bind those variables very often.

> +---------------
> | ...and it also suffers from the same problem as that solution,
> | namely, that you have to do a global analysis...
> +---------------
> 
> Not really.
...
> The main inefficiency that I see [and it's probably minor in most
> applications] is that if a global/special is bound [and notice
> that I say "bound", rather than "defined" -- it matters!!] in *any*
> thread, then a slot must be allocated for it in *all* threads
> [but only allocated, not set].

That's what I meant.  The only way I can think of to determine this 
is to do a global analysis.  (Of course, I've got a pretty bad track 
record in this regard of late.)

rg
From: Rob Warnock
Subject: Re: Smug scheme weenies?
Date: 
Message-ID: <SMmdndprBblDJnncRVn-3g@speakeasy.net>
Ron Garret  <·········@flownet.com> wrote:
+---------------
| ····@rpw3.org (Rob Warnock) wrote:
| > IIRC hallway conversations at ILC 2003, both Allegro and Corman Lisp
| > use exactly this technique (or minor variations thereof).
| 
| Alas, I was not able to attend ILC2003.  :-(
+---------------

Reminder: ILC 2005 is at Stanford (SF Bay Area) this year.  ;-}  ;-}

+---------------
| If I do a DEFVAR while there are already threads running, what
| happens  to the dynvar table in the existing threads?
+---------------

I dunno, "the right thing"?  ;-}

Seriously, the glovar/dynvar table has to be extended: immediately for
the thread that does the DEFVAR, and at least lazily for any threads
that run after that one does the DEFVAR.

+---------------
| > Yes, but the "extra" memory you touch [one pointer redirection]
| > is almost always cached if you've touched that variable recently.
| 
| Not quite.  It's cached if you've touched that variable IN THAT THREAD 
| recently.
+---------------

Yup. That's what I meant.

+---------------
| > And if you're accessing the variable multiple times in the
| > same thread-safe region of code [e.g., for read-modify-write, say],
| > you only pay the "extra" accesses once. The generated code can locally
| > cache the final address for all the accesses.
| 
| Good point.  It seems to me that it doesn't even have to be thread-safe, 
| just known that the variable can't be rebound in that thread (e.g. in a 
| local loop where all calls are known).
+---------------

Sorry for the imprecision, that's really what I meant by "thread-safe",
that *nobody* [in the same or another thread] could rebind it while
the cached address was being used.

Of course, there's GC-induced relocation to worry about, too...  :-(

+---------------
| But are any compilers out there actually that clever?
+---------------

Dunno. [Duane? Roger?]

+---------------
| Another hidden cost of this method, it seems to me, is that it can 
| consume a lot of memory if you have a lot of dynamic variables and a lot 
| of threads even if you don't actually bind those variables very often.
+---------------

Yup.

+---------------
| > +---------------
| > | ...and it also suffers from the same problem as that solution,
| > | namely, that you have to do a global analysis...
| > +---------------
| > 
| > Not really.
| ...
| > The main inefficiency that I see [and it's probably minor in most
| > applications] is that if a global/special is bound [and notice
| > that I say "bound", rather than "defined" -- it matters!!] in *any*
| > thread, then a slot must be allocated for it in *all* threads
| > [but only allocated, not set].
| 
| That's what I meant.  The only way I can think of to determine this 
| is to do a global analysis.  (Of course, I've got a pretty bad track 
| record in this regard of late.)
+---------------

But note that the "global analysis" you're talking about here is not
required to generate correct code, only to do performance profiling
[or equiv.] of an implementation.

The actual compilation using the glovar/dynvar table approach can be
done with no more analysis than is already required for distinguishing
lexical variables from globals/specials. And the slot allocation can
be done lazily as need arises, *without* prior "global analysis".
[Hint: When you switch threads, if the new thread's glovar/dynvar
table is smaller than the one we're switching from, then you need
to extend it before the new thread can run.]


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Ron Garret
Subject: Re: Smug scheme weenies?
Date: 
Message-ID: <rNOSPAMon-AEEC7C.00150612012005@news.gha.chartermi.net>
In article <······················@speakeasy.net>,
 ····@rpw3.org (Rob Warnock) wrote:

> +---------------
> | > And if you're accessing the variable multiple times in the
> | > same thread-safe region of code [e.g., for read-modify-write, say],
> | > you only pay the "extra" accesses once. The generated code can locally
> | > cache the final address for all the accesses.
> | 
> | Good point.  It seems to me that it doesn't even have to be thread-safe, 
> | just known that the variable can't be rebound in that thread (e.g. in a 
> | local loop where all calls are known).
> +---------------
> 
> Sorry for the imprecision, that's really what I meant by "thread-safe",
> that *nobody* [in the same or another thread] could rebind it while
> the cached address was being used.

I understood what you meant, but I don't see why such a stringent
requirement is needed.  It seems to me that one thread rebinding a
variable should have no effect on the cached value in a different thread.

rg
From: Rob Warnock
Subject: Re: Smug scheme weenies?
Date: 
Message-ID: <s-OdnSsBNJ830nfcRVn-sQ@speakeasy.net>
Ron Garret  <·········@flownet.com> wrote:
+---------------
|  ····@rpw3.org (Rob Warnock) wrote:
| > Sorry for the imprecision, that's really what I meant by "thread-safe",
| > that *nobody* [in the same or another thread] could rebind it while
| > the cached address was being used.
| 
| I understood what you meant, but I don't see why such a stringent 
| requirement is needed.  It seems to me that one thread rebinding a 
| variable should have no effect on the cached value in a different thread.
+---------------

Sorry, you're right. What I said was sufficient for safety, but not
minimal (strictly necessary).

Back to our common point: Even given adequate attention to safety,
a special variable may be accessed many times with only one double-
indirection, so that the per-thread glovar/dynvar table approach
doesn't have to be expensive. 


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Carl Shapiro
Subject: Re: Smug scheme weenies?
Date: 
Message-ID: <ouymzvfnho8.fsf@panix3.panix.com>
····@rpw3.org (Rob Warnock) writes:

> Ron Garret  <·········@flownet.com> wrote:

...

> +---------------
> | > And if you're accessing the variable multiple times in the
> | > same thread-safe region of code [e.g., for read-modify-write, say],
> | > you only pay the "extra" accesses once. The generated code can locally
> | > cache the final address for all the accesses.
> | 
> | Good point.  It seems to me that it doesn't even have to be thread-safe, 
> | just known that the variable can't be rebound in that thread (e.g. in a 
> | local loop where all calls are known).
> +---------------
> 
> Sorry for the imprecision, that's really what I meant by "thread-safe",
> that *nobody* [in the same or another thread] could rebind it while
> the cached address was being used.
> 
> Of course, there's GC-induced relocation to worry about, too...  :-(

Interned symbols are generally considered to be permanent objects and
are allocated in a non-moving region of memory.  Since your bindings
point either into permanent objects or to the stack, you never have to
worry about updating these references.

> +---------------
> | But are any compilers out there actually that clever?
> +---------------
> 
> Dunno. [Duane? Roger?]

This optimization is commonly found in deep-bound Lisps.  On entry to
a function, stack space is reserved to cache the value slots of
referenced variables.  At an auspicious moment (usually just far
enough before its use) variables are resolved and cached on the stack.
This ensures that specials can be accessed in constant time for as
long as they are determined to be live.  Interlisp implemented this
trick over twenty-five years ago.  It was further refined in S-1 Lisp.
Furthermore, you don't need a clever compiler to cache value slots if
the caching procedure is part of the standard dynamic variable access
routines.
From: Duane Rettig
Subject: Re: Smug scheme weenies?
Date: 
Message-ID: <4mzve8pvd.fsf@franz.com>
Carl Shapiro <·············@panix.com> writes:

> ····@rpw3.org (Rob Warnock) writes:
> 
> > Ron Garret  <·········@flownet.com> wrote:
> 
> ...
> 
> > +---------------
> > | > And if you're accessing the variable multiple times in the
> > | > same thread-safe region of code [e.g., for read-modify-write, say],
> > | > you only pay the "extra" accesses once. The generated code can locally
> > | > cache the final address for all the accesses.
> > | 
> > | Good point.  It seems to me that it doesn't even have to be thread-safe, 
> > | just known that the variable can't be rebound in that thread (e.g. in a 
> > | local loop where all calls are known).
> > +---------------
> > 
> > Sorry for the imprecision, that's really what I meant by "thread-safe",
> > that *nobody* [in the same or another thread] could rebind it while
> > the cached address was being used.
> > 
> > Of course, there's GC-induced relocation to worry about, too...  :-(
> 
> Interned symbols are generally considered to be permanent objects and
> are allocated in a non-moving region of memory.  Since your bindings
> point either into permanent objects or to the stack, you never have to
> worry about updating these references.

This is not necessarily true.  Many years ago, we experimented with
a version of Allegro CL that kept each symbol at the same address for
its lifetime, once it was "tenured" (i.e. moved to oldspace).  The
benefits were potentially great but the costs were prohibitive -
raw code vectors could not be reused as often, since they contained
specific addresses, so the population of code vectors exploded in
critical situations.

> > +---------------
> > | But are any compilers out there actually that clever?
> > +---------------
> > 
> > Dunno. [Duane? Roger?]

Sorry, Ron and Rob, I hadn't noticed this until now.  We are still
considering placing a similar optimization into our wide-binding
approach.  The potential for optimization in our case is wherever
the special is accessed within the lexical contour that it has been
bound, or where it has been accessed more than once, with the exception
that the optimization can't be made when it crosses either a lambda or
a progv boundary.

> This optimization is commonly found in deep-bound Lisps.  On entry to
> a function, stack space is reserved to cache the value slots of
> referenced variables.  At an auspicious moment (usually just far
> enough before its use) variables are resolved and cached on the stack.
> This ensures that specials can be accessed in constant time for as
> long as they are determined to be live.  Interlisp implemented this
> trick over twenty-five years ago.  It was further refined in S-1 Lisp.
> Furthermore, you don't need a clever compiler to cache value slots if
> the caching procedure is part of the standard dynamic variable access
> routines.

Clearly, deep binding lisps need this kind of caching for constant-time
access.  The thread-local style of trick you've been describing is
precisely what Corman Lisp uses, and similar to what we use.  However,
these lisps are still shallow-binding lisps, and any caching technique
will not change the dynamics of the access time, but will only speed
it up linearly.

If you consider the thread-local technique to be a row-major matrix of
thread-vs-symbol, then you can consider our wide-binding technique
to be a column-major version of the same matrix.  Instead of keeping,
per thread, an array of slots for the symbols that are to be bound, we
keep, per symbol, an array of thread/value slots, called a
symbol-value-vector.  The zeroth slot is always the global slot, and
is always present.  In 7.0, the "bindstack-index" and thread-id have
been merged, so the Nth slot in a symbol-value-vector represents the
locative of the value for the (N-1)th thread-id.  If the
symbol-value-vector is not long enough for the thread-id, then that
means it was never dynamically-bound in that thread, and the global
value is used (this is done by using the symbol-value-vector itself
as the locative).

Locatives are always the same "shape"; they are either a
symbol-value-vector or a (usually) stack-allocated vector of size
1, and the value is always accessed via (aref loc 0).
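
Modeled in portable CL, the access path looks roughly like this
(locatives modeled as vectors; whether an unused slot holds NIL is a
detail I'm glossing over):

(defun symbol-locative (svv thread-id)
  ;; SVV is the symbol-value-vector: slot 0 is the global slot,
  ;; slot N+1 the locative for thread-id N
  (let ((i (1+ thread-id)))
    (or (and (< i (length svv)) (aref svv i))
        ;; never bound in this thread: the vector itself is the
        ;; locative, so slot 0 yields the global value
        svv)))

(defun symbol-dynamic-value (svv thread-id)
  (aref (symbol-locative svv thread-id) 0))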

Note that if the locative is stack-allocated, then it is indeed
immobile for its lifetime.  If this locative can be predetermined,
then the symbol-value access can actually be _faster_ than the
traditional approach - to do (symbol-value '*foo*), the address
of symbol '*foo* has to first be gotten (either directly by knowing
its address, or via a global or local vector of constants, usually
associated with the function object), and then the value slot
accessed from that address (usually this involves two back-to-back
memory references).  But if the stack address of the locative of the
most recent binding of '*foo* is known at compile-time, then the
compiler can generate a single memory access, roughly of the form
(aref #.(current-symbol-locative '*foo*) 0).

It might also be possible to incorporate the deep-binding technique
of predetermining the locative, if there is one; this could be
done by taking the symbol-value operation halfway.  However, it
would only be useful if it can be determined at compile-time that
there is more than one access to the same variable in the same
vicinity (i.e. not crossing progv or lambda boundaries).  Also,
for that kind of optimization to work, the compiler has to be
able to determine which access will occur first, which might
be a problem in a complex looping situation.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: Smug scheme weenies?
Date: 
Message-ID: <87fz16jw96.fsf@qrnik.zagroda>
Duane Rettig <·····@franz.com> writes:

> In 7.0, the "bindstack-index" and thread-id have been merged, so the
> Nth slot in a symbol-value-vector represents the locative of the
> value for the (N-1)th thread-id.

This works only as long as the number of threads is limited. The limit
in Allegro CL is actually ridiculously low: up to 350 threads at any
given time. I don't know the reason for such a low limit.

-- 
   __("<         Marcin Kowalczyk
   \__/       ······@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
From: Duane Rettig
Subject: Re: Smug scheme weenies?
Date: 
Message-ID: <48y6y8j4i.fsf@franz.com>
Marcin 'Qrczak' Kowalczyk <······@knm.org.pl> writes:

> Duane Rettig <·····@franz.com> writes:
> 
> > In 7.0, the "bindstack-index" and thread-id have been merged, so the
> > Nth slot in a symbol-value-vector represents the locative of the
> > value for the (N-1)th thread-id.
> 
> This works only as long as the number of threads is limited. The limit
> in Allegro CL is actually ridiculously low: up to 350 threads at any
> given time. I don't know the reason for such a low limit.

Neither do I.  Thanks for the bug report, but it would have been
appropriate to have sent it to ·······@franz.com

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Carl Shapiro
Subject: Re: Smug scheme weenies?
Date: 
Message-ID: <ouyekgprfy9.fsf@panix3.panix.com>
Duane Rettig <·····@franz.com> writes:

> This is not necessarily true.  Many years ago, we experimented with
> a version of Allegro CL that held each symbol to the same address in
> its lifetime, once it was "tenured" (i.e. moved to oldspace).  The
> benefits were potentially great but the costs were prohibitive -
> raw code vectors could not be reused as often, since they contained
> specific addresses, so the population of code vectors exploded in
> critical situations.

It sounds like you are describing two changes.  The first is the
storage of permanent symbols in non-moving storage.  The second is
storing the addresses of symbols as immediate values in instruction
vectors.  You can definitely have one without the other.  Interned
symbols can be stored in permanent storage and they can continue to be
referenced through indexed addressing into the constant vector.  This
should not stifle code vector sharing while preserving the benefits of
having symbols in non-moving storage.
From: Thomas F. Burdick
Subject: Re: Smug scheme weenies?
Date: 
Message-ID: <xcvhdlmikm0.fsf@conquest.OCF.Berkeley.EDU>
Ron Garret <·········@flownet.com> writes:

> In article <······················@speakeasy.net>,
>  ····@rpw3.org (Rob Warnock) wrote:
> 
> > Ron Garret  <·········@flownet.com> wrote:
> > +---------------
> > | > Each thread maintains a vector containing the addresses of variables
> > | > which are known to occur "free".  By default, each slot in this vector
> > | > points to the corresponding global value of its variable.  When
> > | > variables are dynamically bound, this address is saved and replaced
> > | > with the address of the dynamic binding.  No runtime decision making
> > | > occurs because the correct value, dynamic or global, is always
> > | > accessed through the same location.
> > +---------------
> > 
> > IIRC hallway conversations at ILC 2003, both Allegro and Corman Lisp
> > use exactly this technique (or minor variations thereof).
> 
> Alas, I was not able to attend ILC2003.  :-(

??? One or both of you must be talking about ILC2002, 2003 being the
most recent conference.
From: Wade Humeniuk
Subject: Re: Smug scheme weenies?
Date: 
Message-ID: <czdDd.34205$nN6.21723@edtnps84>
Ralph Richard Cook wrote:

> I'm just starting "Essentials of Programming Languages, Second
> Edition" and for the nth time I'm reading that Scheme "combines the
> uniform syntax and data abstraction capabilities of Lisp with the
> lexical scoping and block structure of Algol". That's nice, but they
> act like Scheme is the _only_ Lisp that does this. Isn't Common Lisp
> also capable of this, with lexical scoping for variables and labels
> and flet for functions?
> 

They can act even more weenie-ish by stating that Scheme is the only Lisp,
the logic being that Scheme is the minimal set of operations and all other
functionality is "just a library".  I learned and used Scheme first, but
as I went along I found that I wanted more core functionality, thus I
started using CL.

As for lexical variables I find it more likely that the creators of Scheme
created and developed versions of Lisp with lexical vars before developing
Scheme.  It is really doubtful that Scheme was "just born from the head
of Zeus".  The real progenitor or lexical vars was the long line of Lisps
created from the initial Lisp.  Lisp influenced Algol, Algol influenced Lisp,
Frank influenced John, and on, and on, and on....

It's nice to believe in a hierarchical story about how things happen.  Perhaps
it makes people feel safer and more secure.

Wade
From: M Jared Finder
Subject: Re: Smug scheme weenies?
Date: 
Message-ID: <346geuF48ng51U1@individual.net>
Wade Humeniuk wrote:
> 
> As for lexical variables I find it more likely that the creators of Scheme
> created and developed versions of Lisp with lexical vars before developing
> Scheme.  It is really doubtful that Scheme was "just born from the head
> of Zeus".  The real progenitor or lexical vars was the long line of Lisps
> created from the initial Lisp.  Lisp influenced Algol, Algol influenced 
> Lisp,
> Frank influenced John, and on, and on, and on....

Of course Scheme didn't come out of nowhere, but there has to be some 
Lisp that can claim to be the first Lisp with lexical variables.  Why 
couldn't that Lisp be Scheme?

   -- MJF
From: Rob Warnock
Subject: Re: Smug scheme weenies?
Date: 
Message-ID: <ne2dnYQV3PCJq0PcRVn-gw@speakeasy.net>
M Jared Finder <·······@digipen.edu> wrote:
+---------------
| Wade Humeniuk wrote:
| > As for lexical variables I find it more likely that the creators of Scheme
| > created and developed versions of Lisp with lexical vars before developing
| > Scheme.  It is really doubtful that Scheme was "just born from the head
| > of Zeus".  The real progenitor or lexical vars was the long line of Lisps
| > created from the initial Lisp.  Lisp influenced Algol, Algol influenced 
| > Lisp, Frank influenced John, and on, and on, and on....
| 
| Of course Scheme didn't come out of nowhere, but there has to be some 
| Lisp that can claim to be the first Lisp with lexical variables.  Why 
| couldn't that Lisp be Scheme?
+---------------

Why speculate? You can just go to the R5RS itself
<http://www.schemers.org/Documents/Standards/R5RS/>
and see what they say:

    Introduction
    ...
    Scheme was the first major dialect of Lisp ... to use a single
    lexical environment for all variables...

Or better, the Gabriel and Steele paper, "The Evolution of Lisp"
<http://dreamsongs.com/NewFiles/Hopl2.pdf> [77 pages], says:

    The dialect of Lisp known as Scheme was originally an attempt by
    Gerald Jay Sussman and Guy Steele during Autumn 1975 to explicate
for themselves some aspects of Carl Hewitt's theory of actors as
    a model of computation.
    ...
    Using MacLisp as a working environment, they decided to construct
    a tiny Lisp interpreter and then add the necessary mechanisms for
    creating actors and sending messages. The toy Lisp would provide
    the necessary primitives for implementing the internal behavior of
    primitive actors.

    Because Sussman had just been studying Algol [Naur, 1963], he
    suggested starting with a lexically scoped dialect of Lisp.
    It appeared that such a mechanism would be needed anyway for
    keeping track of acquaintances for actors.

But it also notes that:

    Lisp370 supported both special binding and lexical binding, as well
    as closures over both lexical and special variables, using a technique
    similar to spaghetti stacks. Jon L White spent calendar year 1977 at
    Yorktown Heights working on Lisp370 and then returned to MIT; his
experience at IBM had some influence on subsequent MacLisp development
    and on the NIL dialect.

And from the "uncut" version <http://dreamsongs.com/NewFiles/HOPL2-Uncut.pdf>
(a draft of a longer version of the HoPL paper that was never published,
see <http://dreamsongs.com/Essays.html>):

    Scheme was one of the first languages to have taken seriously
    the implications of lexical scoping (as opposed to local scoping,
    which had been in use in Lisp compilers for almost a decade) and
first-class functions. Namely, Scheme correctly treated closures:
a closure is a function along with the local environment within
which it was defined. Furthermore, it embraced the Actor model of
    computation in the sense that function calling could be treated
    as a jump to a procedure rather than as a call to a procedure
    that would return.

Really, "The Evolution of Lisp" [either/both version(s)] is probably
what you want to read if you care about this. It contains a *lot* more
about the history of lexical scoping in Lisp than mentioned above...


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Wade Humeniuk
Subject: Re: Smug scheme weenies?
Date: 
Message-ID: <l6pDd.53502$dv1.25040@edtnps89>
M Jared Finder wrote:

> 
> 
> Of course Scheme didn't come out of nowhere, but there has to be some 
> Lisp that can claim to be the first Lisp with lexical variables.  Why 
> couldn't that Lisp be Scheme?
> 
>   -- MJF

Reading the Scheme spec, it does not claim to be the first Lisp to have
ALGOL-like lexical vars.  From:

http://www.swiss.ai.mit.edu/~jaffer/r5rs_2.html#SEC2


"Scheme was the first major dialect of Lisp to distinguish procedures from lambda
expressions and symbols, to use a single lexical environment for all variables,
and to evaluate the operator position of a procedure call in the same way as an operand
position."

The first version of the Scheme specification was from 1975.  I have a 1974
version of the Interlisp manual, and it documents that PROG "allows a user
to write an ALGOL-like program containing INTERLISP expressions (forms) to
be executed.  The first argument, args, is a list of local variables ..."
(Page 60).  Interlisp acknowledges its history from BBN-LISP, which is even
older.

PROG feels much older than 1974 and may have come out of the ALGOL 68
effort, which appeared in 1969.  I find it reasonable to assume that Lisp
developers would have picked up the lexical idea just after, or even
earlier, during ALGOL development.  So I am thinking sometime in the 60's.
It was a much smaller world then.

http://user.it.uu.se/~richardc/txt/ALGOL68.txt


Wade
From: Wade Humeniuk
Subject: Re: Smug scheme weenies?
Date: 
Message-ID: <3vpDd.59014$KO5.30448@clgrps13>
The roots seem to be even earlier (Whoa!) See:

http://www-formal.stanford.edu/jmc/history/lisp/node4.html

Section d. Free Variables.

FUNARG

Wade