From: Marty Kent
Subject: What's the value of lexical scoping?
Date: 
Message-ID: <24508@ucbvax.BERKELEY.EDU>
I've been wondering lately why it is that "modern" lisps like Common Lisp
and Scheme are committed to lexical scoping.  To me, the only *obvious*
effect of lexical scoping is that it makes it very much more difficult to
write reasonable debugging tools (so the system writers don't bother with
it). Actually I have in mind the lisps for the Mac II, which are Allegro
Common Lisp and MacScheme.  (Since it lacked a compiler last I heard, I
haven't taken XLisp seriously. Perhaps there are other "serious" lisp
systems available for the Mac or Mac II; if there are, I'd love to
hear about them...)

(To return to my main stream...) With dynamic scoping, you can actually
implement a break loop by just reading, eval'ing and printing.  With
Common Lisp's way of evaluating calls to EVAL in a null lexical
environment, it seems to me that in order to set up a decent break package
one has to know about the implementation of the runtime stack, the
structure of stack frames etc. (NOTE: by "a decent break package" I mean
one in which you can *at the very least* examine the values of locals on
the stack at break-time.) In fact, with Allegro Common Lisp the situation
is even worse, because the compiler doesn't save the names of locals in
the stack frames, which makes it pretty much impossible to scan at runtime
to resolve a name-based reference. 
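
(Concretely, under dynamic scoping the whole break package can be
something like the sketch below -- hypothetical code of mine, assuming
an EVAL that resolves free variables in the dynamic environment in
effect at the break:)

(defun break-loop ()
  ;; In a dynamically scoped Lisp, EVAL sees the bindings of the
  ;; function that was running when BREAK-LOOP was entered, so typing
  ;; the name of a local prints its current value.  Under Common
  ;; Lisp's null-lexical-environment EVAL it would not.
  (loop
    (princ "break> ")
    (print (eval (read)))))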

It seems to me the Common Lisp specification missed out by not mandating
certain runtime stack manipulation primitives, a la (for instance)
Interlisp.  

I realize that discarding variable names from compiled code makes for
faster and smaller object modules, but it seems to me this kind of
"optimization" should be dependent on something like the setting of
"speed" in an optimize declaration.

Well, I don't really mean to just sit here and bitch; what I'm really
hoping is that someone will tell me either:
1) actually it's easy to set up a decent runtime debugger using stock
Common Lisp functions, you simply have to ...
or
2) while it's true that Common Lisp's scoping makes it difficult to write
debuggers, lexical scoping is still a good trade-off because it buys
you...

I'd be glad to hear about either of these alternatives, or some new way of
looking at the situation...


Marty Kent  	Sixth Sense Research and Development
		415/642 0288	415/548 9129
		·····@dewey.soe.berkeley.edu
		{uwvax, decvax, ihnp4}!ucbvax!mkent%dewey.soe.berkeley.edu
Kent's heuristic: Look for it first where you'd most like to find it.

From: Patrick Arnold
Subject: Re: What's the value of lexical scoping?
Date: 
Message-ID: <1350015@otter.hple.hp.com>
I think there are two issues at stake here: decent debuggers and binding
rules.

The first issue should be addressed by the language implementors. There is
no reason why compiled code should not retain enough information to produce
comparable debugging information. From what I remember of Scheme, it uses
lexical scoping and has a very good debugger (though it sometimes has no
information because of continuations, but that's a different story).

The biggest pain with dynamic binding is that it suffers from the
downward (or upward) funarg problem.  This refers to the potential for a
procedure (function?)  to capture variables from the environment in
which it is being used.  This may not always be desirable because it
violates the "black box" notion of a procedure, namely that a procedure
behaves the same in any context. Lexical binding does not have this
problem.

The justification for dynamic binding is that it makes some forms of
abstraction easier to handle (this is important for programming in the
large).  Suppose we had two procedures which share a common
sub-procedure.  Further suppose we want to use implicit parameter
passing (i.e. the parameters are not passed explicitly); then in a
statically scoped language you would be forced to repeat the definition of
the shared procedure in order to be able to capture the implicit parameters, whereas
in a dynamically bound language you would be able to share a single
definition amongst many procedures (carefully).
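
For instance (a sketch of my own in Common Lisp, using a special
variable for the implicit parameter; all the names here are made up):

(defvar *indent* 0)   ; the implicit parameter, dynamically bound

(defun emit (text)
  ;; The shared sub-procedure: *INDENT* is not in its parameter list;
  ;; it is picked up from whichever caller bound it.
  (dotimes (i *indent*) (write-char #\Space))
  (princ text)
  (terpri))

(defun emit-header (text) (let ((*indent* 0)) (emit text)))
(defun emit-body   (text) (let ((*indent* 4)) (emit text)))

EMIT-HEADER and EMIT-BODY share the single definition of EMIT; in a
purely lexical language EMIT would need *INDENT* as an explicit
argument, or a separate definition per context.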

So ideally a language should enable both styles of binding, with a set of
pragmatic guidelines about how they should and shouldn't be used
in programming. The two types of binding enable two of the most important
aspects of a structured approach to computer software, namely abstraction
and information hiding.

There is a basic (but expositional) discussion of this in Structure and
Interpretation of Computer Programs by Abelson and Sussman on pages 321 to
323.

Hope this helps.

			Patrick.
From: Tom "Hey Man" Hausmann
Subject: Re: What's the value of lexical scoping?
Date: 
Message-ID: <4279@medusa.cs.purdue.edu>
In article <·······@otter.hple.hp.com>, ···@otter.hple.hp.com (Patrick Arnold) writes:
> The first issue should be addressed by the language implementors. There is
> no reason why compiled code should not retain enough information to produce
> comparable debugging information. 

    Optimizations (e.g. code motion) can make debugging the original source
    difficult unless the debugger is a very good one and maintains a great
    deal of information about the original source.

    -Tom
From: ····@zaphod.UUCP
Subject: Re: What's the value of lexical scoping
Date: 
Message-ID: <26500002@zaphod>
First - I hope this actually makes it out.  Most of my postings have stayed
local.

The reasons for lexical scoping don't end with efficiency.  It's true that
compiled code with lexical references is much more efficient than dynamically
scoped code but that isn't the whole story. 

The second good reason for lexical scoping is to solve the fexpr problem.  
This is what happens when you get a collision between a parameter and 
a global variable in dynamically scoped LISP.  So:

> (setq x 50)     ;; Change the global variable x
50
> (defun bar (x)  ;; Creates a new dynamic variable x
    (print x)
    (foo x)
    x)
BAR
> (defun foo (y)
    (print x)  
    (setq x 20) ;; An attempt to change the Global x
    (print x)
    y)
FOO
> (bar 100)
100             ;; Parameter X in BAR
100             ;; Parameter X referenced from FOO by dynamic scope
20              ;; Parameter X changed in FOO
20              ;; Returned from BAR
> (print x)
50              ;; Hasn't changed because of dynamic shadowing

This can cause exceedingly subtle bugs in code that was never meant to exploit
this kind of scoping rule.  The collision can also happen between parameters;
it needn't involve a global variable.  Consider that it also causes problems
for the writers of the language itself: if a system function like MAPCAR
uses a parameter name that a user also uses, and that user is APPLYing his
own function that modifies a dynamically scoped variable xyzzy, he will
change the wrong one.

A third argument,  although weak,  is that most programmers,
especially those who are transplants from 'normal' languages like FORTRAN,
PL/1, C and PASCAL, will expect lexically scoped behavior.  Making LISP
lexically scoped then makes it consistent with expected behavior.

A last argument is that many LISPs have been implemented and used where the 
interpreter is dynamic but the compiled code is lexical.  This is even
nastier.

So Common LISP preserves the ability to screw yourself for the hardy 
adventurer types (you can always do a (declare (special ..))) but saves
the rest of us mere mortals from our own folly.

Douglas Rand
 
  Internet:  ····@zaphod.prime.com
  Usenet:    primerd!doug 
  Phone:     (617) - 879 - 2960
  Mail:      Prime Computer, 500 Old Conn Path, MS10C-17, Framingham, Ma 01701

->  The above opinions are probably mine.  
From: John R. Levine
Subject: Re: What's the value of lexical scoping
Date: 
Message-ID: <1046@ima.ISC.COM>
In article <········@zaphod> ····@zaphod.prime.com writes:
>
>First - I hope this actually makes it out.  Most of my postings have stayed
>local.
>
>The reasons for lexical scoping don't end with efficiency.  It's true that
>compiled code with lexical references is much more efficient than dynamically
>scoped code but that isn't the whole story. 
>
>The second good reason for lexical scoping is to solve the fexpr problem.  
>This is what happens when you get a collision between a parameter and 
>a global variable in dynamically scoped LISP.  ...

That's really a better reason.  Every time I have used a dynamically scoped
language (Lisp, APL, Snobol4) I have been bitten quite painfully by strange
bugs due to unintended aliasing of names.  In APL, many people treat the
dynamic scoping effectively as a bug and create strange local variable names
that are carefully intended not to collide with globals or names in other
routines.

I always had the impression that Lisp's dynamic scoping was an accidental
effect of the a-list that Lisp 1.5 on the 7094 used to keep argument bindings.
Then, like most accidental effects, people started taking advantage of it and
it became enshrined in conventional use. The 1962 Lisp 1.5 manual notes that
compiled functions have lexically bound variables (although like everything
else in the book, it is explained in a way that makes it almost
incomprehensible) and explains how to make your function's variables special
or common, so it seems reasonable to assume that from the first the semantics
of the binding strategy were muddled.
-- 
John R. Levine, IECC, PO Box 349, Cambridge MA 02238-0349, +1 617 492 3869
{ ihnp4 | decvax | cbosgd | harvard | yale }!ima!johnl, ······@YALE.something
Rome fell, Babylon fell, Scarsdale will have its turn.  -G. B. Shaw
From: Charles A. Cox
Subject: Re: What's the value of lexical scoping?
Date: 
Message-ID: <24531@ucbvax.BERKELEY.EDU>
In article <·····@ucbvax.BERKELEY.EDU> ·····@dewey.soe.berkeley.edu (Marty Kent) writes:
>  [...] In fact, with Allegro Common Lisp the situation
>is even worse, because the compiler doesn't save the names of locals in
>the stack frames, which makes it pretty much impossible to scan at runtime
>to resolve a name-based reference. 

I am more familiar with the Allegro Common Lisp that runs on Unix
machines, but I am told that with the MAC-OS Allegro, beginning with
version 1.2, setting the *SAVE-DEFINITIONS* compiler flag will cause
the parameter names and values to be printed in a backtrace.  This
will aid in debugging.

In the UNIX version of Allegro Common Lisp, there is a variable called
COMP:SAVE-LOCAL-NAMES-SWITCH which is bound to a function.  When this
user-redefinable function returns T, the compiler will save the
names of all the local variables.  These variables are then accessible
by name using the `:LOCAL' top level command.
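
(Presumably the incantations are something like the following -- my
guess rather than anything out of the manuals:)

;; MAC-OS Allegro, version 1.2 and later:
(setq *save-definitions* t)

;; Unix Allegro: install a switch function that always answers T:
(setq comp:save-local-names-switch
      #'(lambda (&rest args) (declare (ignore args)) t))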

Hope this helps.

	Charley Cox
	···@renoir.Berkeley.EDU
From: Flash Sheridan
Subject: Other Lisps for the Mac (was: What's the value of lexical scoping?)
Date: 
Message-ID: <499@sequent.cs.qmc.ac.uk>
> Perhaps there are other "serious" lisp
>systems available for the Mac or Mac II,

Don't think so.  There's ExperCommon[sic]Lisp in the more expensive version
for the II.  Can't afford it, and the version we have won't run on a II.
But from experience with ECL on a Plus, avoid it.

Speaking of XLisp, has anybody patched it so it can cut&paste?

From: ·····@ee.qmc.ac.uk (Flash Sheridan)
Reply-To: ········@nss.cs.ucl.ac.uk
or_perhaps_Reply_to: ·····@cs.qmc.ac.uk
From: Patrick Arnold
Subject: Re: Other Lisps for the Mac (was: What's the value of lexical scoping?)
Date: 
Message-ID: <1350016@otter.hple.hp.com>
I would like to make it clear that I said that dynamic binding enables
*some* forms of abstraction. Simon Brooke is wrong to say that to have
abstraction you must have dynamic binding. There is a significant body of
software engineering people who think that the black box notion of a
procedure (function? I don't believe lisp has functions) is the only bona
fide form of procedural abstraction.

In my experience (a large Common Lisp program > 20k forms) we have used
exclusively lexical binding and found it to be no problem. 

When I posted the original note I got involved in a fierce argument with
some of the other guys here (from the aforementioned body) who actually
thought I'd posted an error.

The outcome of this discussion was that the use of dynamic binding as an
abstraction technique was probably only required in programs in which there
is a great deal of local context (e.g. windowing systems).

I personally think that lexical scoping is easier to use in most cases.
Just because the black box notion of a procedure didn't originate in Lisp
doesn't mean we shouldn't allow Lisps to incorporate these VERY IMPORTANT
SOFTWARE ENGINEERING PRINCIPLES. Indeed I think the case for using Lisp
(actually I prefer pop) for serious software engineering becomes much
stronger if we do (how many other languages have real first class
procedures? Pascal definitely not, nor Modula2, Ada, or C).

Again I must emphasise that both types of binding are useful for certain
tasks. It is important that neither of them is ABUSED, because abuse results
in trashy, difficult-to-maintain software, and everyone knows there is far
too much of that about. The problem is to educate the religious fanatics
who are irrevocably attached to one particular form of binding away from
blind assertions about which is best and towards discovering the relative
strengths and weaknesses of either mechanism.

			Patrick.
From: Ralph J. Marshall
Subject: Functions vs. Procedures in Lisp
Date: 
Message-ID: <34296@linus.UUCP>
In article <·······@otter.hple.hp.com> ···@otter.hple.hp.com (Patrick Arnold) writes:
>(function? I don't believe lisp has functions) 
>
>Just because the black box notion of a procedure didn't originate in Lisp
>doesn't mean we shouldn't allow Lisps to incorporate these VERY IMPORTANT
>SOFTWARE ENGINEERING PRINCIPLES. Indeed I think the case for using Lisp
>(actually I prefer pop) for serious software engineering becomes much
>stronger if we do (how many other languages have real first class
>procedures? Pascal definitely not, nor Modula2, Ada, or C).
>
>			Patrick.
	What is this all about ??? Why don't you think Lisp has functions,
and what do you mean by first-class procedures ?  I'm willing to believe that
"first-class procedure" means something special about which I am ignorant,
but where I come from a "function" is a subroutine call that returns a value
to the caller (possibly without any global side-effects, depending on how
you interpret the term.)  I think that this is the _MAIN_ type of subroutine
call in LISP, since you have to go out of your way to return nothing, and
global variables have to be declared or the compiler gets all uptight.

	If you have some definitions for these terms that back up your
assertions above, I'd love to hear them, and see some references.

	(BTW, I think the Common Lisp approach to lexical scoping by default
is a much more practical idea, especially since the compiler and interpreter
actually have to work the same way (what a concept !)).


---------------------------------------------------------------------------
Ralph Marshall (·····@mitre-bedford.arpa)

Disclaimer: Often wrong but never in doubt...  All of these concepts
are mine, so don't gripe to my employer if you don't like them.
---------------------------------------------------------------------------


From: John Gateley
Subject: Re: Functions vs. Procedures in Lisp
Date: 
Message-ID: <51597@ti-csl.CSNET>
In article <·····@linus.UUCP> ·····@mbunix (Ralph Marshall) writes:
>In article <·······@otter.hple.hp.com> ···@otter.hple.hp.com (Patrick Arnold) writes:
>>(function? I don't believe lisp has functions) 
>	What is this all about ??? Why don't you think Lisp has functions,
>and what do you mean by first-class procedures ?  I'm willing to believe that

A first class object is one that can be passed to a procedure as an argument,
and returned as a value by procedures. I don't have the references, but the
term is in common use (especially among Schemers). I don't wish to speak for
Patrick, but "function" has (at least) two different meanings: the definition
you gave, and the definition mathematicians use.
From: Vincent Manis
Subject: Re: Functions vs. Procedures in Lisp
Date: 
Message-ID: <3205@ubc-cs.UUCP>
I first heard the term ``function'' in high-school algebra, to describe a 
particular type of relationship between quantities. In university mathematics, 
we spent a lot of time talking about mappings, bijective and otherwise. None
of these ideas have anything to do with the programming concept of a 
``function'', which returns a value after perhaps assigning some variables or 
doing some I/O. 

Languages with state change operators (such as assignment, data structure 
mutation [rplaca/set-car!], or I/O) do not have functions in the mathematical
sense. We've all been sloppy in the past, and talked as if they did. The
authors of the Scheme report did us all a service in eschewing ``function'',
and using ``procedure'' instead. 

Procedures are ``first-class citizens'' of a language if they may be passed
as parameters, returned as procedure values, and stored in data structures. 
CL, Scheme and most purely functional languages extend these rights; C and
Modula-2 extend all these rights with limits (they don't allow such procedures
to encapsulate anything except the static global environment).
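
For example (my sketch, in Common Lisp):

(defun make-adder (n)
  ;; The returned procedure encapsulates N from its creating
  ;; environment -- precisely what C and Modula-2 procedures cannot do.
  #'(lambda (x) (+ x n)))

(funcall (make-adder 5) 3)                          ; => 8
(setq adders (list (make-adder 1) (make-adder 10))) ; stored in a list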

Vincent Manis                    | ·····@cs.ubc.ca
The Invisible City of Kitezh     | ·····@cs.ubc.cdn
Department of Computer Science   | ·····@ubc.csnet
University of British Columbia   | uunet!ubc-cs!manis
<<NOTE NEW ADDRESS>>             |      
From: Patrick Arnold
Subject: Re: Functions vs. Procedures in Lisp
Date: 
Message-ID: <1350017@otter.hple.hp.com>
John Gateley is quite right to point out that "function" has several meanings
depending on the context of the conversation. I'm afraid it's one of my
"religious beliefs" that functions are expressions which denote values and
that calling a function with the same actual parameters will always produce
the same result and will have no insidious effects on other parts of the
system (i.e. side effects). This is the mathematical concept of a function.
I am quite happy with the other use provided the meaning is agreed at the
outset. 

I tend to use the term procedure for a parameterised piece of code which
may return a value (or even many values) and may have side effects. This is
sometimes a useful distinction to make. LISP and indeed most other
languages don't support the real notion of a function because there is
always the ability to access (and change destructively) other values or
parameters.

I hope this explanation gives you some hints as to why I don't think LISP
has functions in the true sense (although you can write inefficient Lisp
programs that are functional).

First class procedures are a very powerful programming tool enabling the
use of generic operations over data structures and object oriented style
programming using procedures with local state (closures) as objects. There
is quite heavy use of them in Abelson and Sussman (a must for any serious
Lisp programmer) and the book illustrates their use in a number of
different programming paradigms including an object oriented style.
(Interesting that what Lisp like languages have been able to do for years
is now in fashion in a different guise!!).
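
A tiny example of the style (my transcription into Common Lisp;
Abelson and Sussman write it in Scheme):

(defun make-counter ()
  ;; COUNT is local state captured by the closure: an "object" whose
  ;; single method is "increment and report".
  (let ((count 0))
    #'(lambda ()
        (setq count (+ count 1))
        count)))

(setq c (make-counter))
(funcall c)   ; => 1
(funcall c)   ; => 2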

By the way I would just like to say that I don't get anything for plugging
Structure and Interpretation of Computer Programs, but it is quite the best
book on Lisp (Scheme actually) in particular and some programming techniques
in general that I have come across.

				Patrick.
From: Mark William Hopkins
Subject: Re: Functions vs. Procedures in Lisp
Date: 
Message-ID: <6024@uwmcsd1.UUCP>
In article <·······@otter.hple.hp.com> ···@otter.hple.hp.com (Patrick Arnold) writes:
>John Gateley is quite right to point out that "function" has several meanings
>depending on the context of the conversation. I'm afraid it's one of my
>"religious beliefs" that functions are expressions which denote values and
>that calling a function with the same actual parameters will always produce
>the same result and will have no insidious effects on other parts of the
>system (i.e. side effects).

After having seen countless examples of this kind of thing in Math texts:

	      "Let g(x) = f(x, a)"
or
	      "We will suppress the subscripts in the following ..."
or
	      "A XXX space is a tuple <U, V, W>, but we will denote
	       such a space by U unless confusion precludes our doing so."

I am all the more reluctant to believe that mathematical functions do not
have anything that corresponds to insidious side-effects.

Remember, a side-effect is just a function parameter (or returned value) that
has not been explicitly parametrized in the definition of the function.  This
applies to I/O as well ... except that in most languages it would be impossible
to make the parameters explicit in a function with I/O side-effects.

Conclusion: Mathematical functions and programming language functions are MUCH
more closely related than anybody has realised up to now.

Sorta like: "I just found out yesterday that the Prince of Cambodia 
	     is my brother."
--------------------------------------------------------------------------------
Disclaimer: the poster of this article bears no relation to Prince Sihanouk
	    ... to the best of his knowledge.
From: John Gateley
Subject: Re: Functions vs. Procedures in Lisp
Date: 
Message-ID: <51742@ti-csl.CSNET>
In article <····@uwmcsd1.UUCP> ·····@csd4.milw.wisc.edu (Mark William Hopkins) writes:
>[Examples of side-effecting mathematics deleted]
>Conclusion: Mathematical functions and programming language functions are MUCH
>more closely related than anybody has realised up to now.

I don't follow this, but if what you are saying is true, you should be able
to write a mathematical function with side effects. Show me how to do this.
I would like to see, for example, two functions f and g where g always returns
the last argument passed to f.

John Gateley
From: Mark William Hopkins
Subject: Re: Functions vs. Procedures in Lisp
Date: 
Message-ID: <6035@uwmcsd1.UUCP>
In article <·····@ti-csl.CSNET> ·······@mips.UUCP (John Gateley) writes:
>In article <····@uwmcsd1.UUCP> ·····@csd4.milw.wisc.edu (Mark William Hopkins) writes:
>>[Examples of side-effecting mathematics deleted]
>>Conclusion: Mathematical functions and programming language functions are MUCH
>>more closely related than anybody has realised up to now.
>
>I don't follow this, but if what you are saying is true, you should be able
>to write a mathematical function with side effects. Show me how to do this.
>I would like to see, for example, two functions f and g where g always returns
>the last argument passed to f.
>
>John Gateley

This is a tall order, but here goes:

   Given a function f, let f' be the corresponding function defined on finite 
sequences:

	     f'({x1, x2, ... , xn}) = {f(x1), f(x2), ... , f(xn)}

Let F be an ordered pair (f', g) with f defined as above and g defined for 
sequences as follows:

	                g({x1, x2, ... , xn}) = xn

(Here's where the mathematical correlates of side-effects come in)

"For brevity we will suppress the prime on f' unless confusion otherwise
dictates, and we will refer to g as the 'last-argument' function of f."

... or something like that.
From: William J. Bouma
Subject: Re: Functions vs. Procedures in Lisp
Date: 
Message-ID: <4369@medusa.cs.purdue.edu>
In article <·······@otter.hple.hp.com> ···@otter.hple.hp.com (Patrick Arnold) writes:
>John Gateley is quite right to point out that "function" has several meanings
>depending on the context of the conversation. I'm afraid it's one of my
>"religious beliefs" that functions are expressions which denote values and
>that calling a function with the same actual parameters will always produce
>the same result and will have no insidious effects on other parts of the
>system (i.e. side effects). This is the mathematical concept of a function.
>I am quite happy with the other use provided the meaning is agreed at the
>outset. 
>
...
>
>I hope this explanation gives you some hints as to why I don't think LISP
>has functions in the true sense (although you can write inefficient Lisp
>programs that are functional).

    Whether a language has functions or not seems to me a very different thing
from whether a language is FUNCTIONAL (i.e. everything is a function). I believe
one can write functionally in LISP and have the result be as efficient as the
non-functional equivalent. For one thing some compilers will optimize out tail
recursion into an iterative loop. For the other part it depends on what you are
programming and what algorithms you use.
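
For example (my sketch; whether the optimization actually happens
depends on your compiler):

(defun sum-list (lst acc)
  ;; The recursive call is in tail position -- nothing remains to be
  ;; done after it returns -- so a tail-recursion-eliminating compiler
  ;; can reuse the stack frame, producing an iterative loop.
  (if (null lst)
      acc
      (sum-list (cdr lst) (+ acc (car lst)))))

(sum-list '(1 2 3 4) 0)   ; => 10, in constant stack space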

    I don't know what version of LISP you are talking about, but I don't
remember any language since BASIC that FORCED me to write a FUNCTION (or
whatever the crap you want to call it) that had side effects. THUS LISP
DOES have functions in the mathematical sense. Isn't + a FUNCTION? How about
(lambda (x) x)? Aren't these TRUE enough functions for you???

    Personally I am sick of this whole discussion. What is the point? Who cares
if what one person calls a FUNCTION, another calls a "Procedure with returned
value"? Yes, in the context of a specific conversation it would be important
for the parties to have equivalent definitions, but in general NO. Who cares
what mathematics' definition of function is? Mathematics is NOT programming!
Any mathematicians reading this USE MACSYMA.

    ASIDE: I cast my vote FOR Common Lisp. I have never had much problem 
writing programs in it whether I used dynamic binding or not. There is one
little thing that annoys me about it, but it is hardly enough to get all
excited and write a 100+ line article about. Well, maybe tomorrow...
-------------------------------------------------------------------------------
·····@medusa.cs.purdue.edu   | Just spending my days,
...!purdue!bouma             |   Soaking up those cathode rays.
From: Simon Brooke
Subject: Re: What's the value of lexical scoping?
Date: 
Message-ID: <515@dcl-csvax.comp.lancs.ac.uk>
In article <·····@ucbvax.BERKELEY.EDU> ·····@dewey.soe.berkeley.edu (Marty Kent) writes:
>I've been wondering lately why it is that "modern" lisps like Common Lisp
>and Scheme are committed to lexical scoping.  

Good! Somebody else prepared to stand up and say Common LISP is a mess. If
you share this opinion, read the end of this posting even if you skip the
middle... it is important.

[I'm just commenting here on bits from Marty's posting - serious stuff
later]
>
>I realize that discarding variable names from compiled code makes for
>faster and smaller object modules, but it seems to me this kind of
>"optimization" should be dependent on something like the setting of
>"speed" in an optimize declaration.
>
This sort of 'optimisation' is pointless anyway, now that we work in
32-bit address spaces and memory is cheap. It must, surely, always be
better to keep your local names with your code.

>Well, I don't really mean  to just sit here and bitch,  what I'm really
>hoping is that someone will tell me either:
>1) actually it's easy to set up a decent runtime debugger using stock
>Common Lisp functions, you simply have to ...

Throw away that cruddy fortran-with-brackets and buy yourself a real LISP.
I don't know if Metacomco have yet ported Cambridge LISP onto the Mac, but
they easily could, and probably would if they felt there was a demand;
this wouldn't solve your problem, as it static binds when compiled (ugh)
but it is otherwise a nice lisp. More seriously, LeLisp has certainly been
ported onto the Mac, and - I haven't played with it - it is reported to be
a really nice LISP. I understand that the manuals are still only available
in French, though. Finally, if you (or your employer) have a wallet as
deep as the Marianas trench, there's the much-heralded micro-explorer.
That *ought* to give a decent LISP environment, but again I haven't seen
one.

>or
>2) while it's true that Common Lisp's scoping makes it difficult to write
>debuggers, lexical scoping is still a good trade-off because it buys
>you...
>
We had a long discussion about this on the uk.lisp newsgroup. I still have
much of this on file and could post it if people are interested (I can't
easily mail to the States). Advocates of lexical scoping offered a number
of extremely tricky programming examples which couldn't be done with
anything else. These were very impressive *as tricks*, but I couldn't ever
imagine using any of them in a serious programming situation. In short, I
wasn't convinced - but I should add that I didn't convince anyone else
either.
>
>
*** If you don't like Common LISP, the future is hopeful - but you should
*** do something about it now!

As you *ought* to know, an ISO working group is currently preparing a new
LISP standard, to be known as ISLISP. They hope to have this ready for the
end of 1989, so the time to influence it is *as soon as possible*.
Regrettably, this group is working from Common LISP as a basis; however,
the good news is that it appears that dynamic binding a la EuLisp will be
incorporated, and there will be no packages. The character set is being
looked after by the Japanese, which has to be good news, because it
guarantees that we will get an extended character set (how the CL
committee were ever allowed to get away with upper case only - and, for
G*d's sake, why they wanted to - is far beyond me). 

Obviously, I have my ideas about what a good LISP looks like (all right,
as a minimum it has dynamic binding, both LAMBDA and NLAMBDA forms, at
least the option of non-intrusive garbage collection; although it allows
macros, there is nothing you can't do with a function; and it does not
have packages, PROG, GO, stupid tokens in parameter lists, SETF....) -
everybody else out there has their own list. If you *care* about your
working language, the best way to make sure that this committee does not
produce another ugly camel is to identify your nearest working group
member and lobby as hard as you can. *DO IT NOW*.


** Simon Brooke *********************************************************
*  e-mail : ·····@uk.ac.lancs.comp                                      * 
*  surface: Dept of Computing, University of Lancaster,  LA 1 4 YW, UK. *
*                                                                       *
*  Thought for today: isn't it time you learned the Language            * 
********************* International Superieur de Programmation? *********
From: Jerry Jackson
Subject: Re: What's the value of lexical scoping?
Date: 
Message-ID: <199@esosun.UUCP>
>>Obviously, I have my ideas about what a good LISP looks like (all right,
>>as a minimum it has dynamic binding, both LAMBDA and NLAMBDA forms, at
>>least the option of non-intrusive garbage collection; although it allows
>>macros, there is nothing you can't do with a function; and it does not
>>have packages, PROG, GO, stupid tokens in parameter lists, SETF....) -
>>everybody else out there has their own list. If you *care* about your
>>working language, the best way to make sure that this committee does not
>>produce another ugly camel is to identify your nearest working group
>>member and lobby as hard as you can. *DO IT NOW*.


FLAME ON

This is really incredible.... I've heard people flame about CommonLisp
many times.. (I have even done it myself on a few occasions..), but
I've never heard anyone attack some of these features -- 

*ahem* -- First of all, CL supports dynamic binding for those cases where
it is useful (I admit they definitely exist), although dynamic binding
is quite clearly a *BUG* (the names you give to local variables should not
matter...)

NLAMBDA -- cannot be made efficient (unless you consider a run-time call
to EVAL efficient)

packages -- Ok, I agree with this one, however a case may be made for 
an environment oriented package system (requiring lexical scoping)

PROG,GO -- For people who never have to write powerful tools I would
agree that these are not necessary, but if you had ever tried to compile
a special purpose language to lisp and make it reasonably efficient, you
would appreciate the value of having things like PROG and GO as 
compilation targets
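
(Roughly the kind of code such a compiler might emit -- a hypothetical
sketch of mine, not output from any particular system:)

(defun count-to (n)
  ;; Object code for a WHILE loop in some embedded language: the label
  ;; and the GO come straight from the source language's control graph.
  (prog ((i 0))
   loop
    (when (>= i n) (return i))
    (setq i (+ i 1))
    (go loop)))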

tokens in parameter lists -- Isn't it really obvious that something
like member with a few options is better than the excessive proliferation
of look-alike functions (a la memq, memql, memqual ...)

SETF -- I can't believe my eyes... This is one of the BEST things about
CL... I don't know what to say.  Anyone who has actually USED CL with setf
for a while knows what I'm talking about.

HAVE YOU EVER USED LISP????? (I'm quite sure you have never used CL --
no one who had could have said the things you said.)

FLAME OFF

+-----------------------------------------------------------------------------+
|   Jerry Jackson                       UUCP:  seismo!esosun!jackson          |
|   Geophysics Division, MS/22          ARPA:  ··············@seismo.css.gov  |
|   SAIC                                SOUND: (619)458-4924                  |
|   10210 Campus Point Drive                                                  |
|   San Diego, CA  92121                                                      |
+-----------------------------------------------------------------------------+
From: Simon Brooke
Subject: Re: What's the value of lexical scoping?
Date: 
Message-ID: <525@dcl-csvax.comp.lancs.ac.uk>
In article <···@esosun.UUCP> ·······@esosun.UUCP (Jerry Jackson) disagrees
with some of the things which I see as valuable in LISP. I'd like to
advance some defence of them, point by point. Firstly:

>CL supports dynamic binding for those cases where
>it is useful (I admit they definitely exist), although dynamic binding
>is quite clearly a *BUG* (the names you give to local variables should not
>matter...)
>
If you wish to gain information from your environment, then clearly, the
names of the symbols you use do matter. If you bind your locals either
in an arg list or in a let statement, then they don't matter. If you
*don't* do this, then you are using globals, which will get you into equal
trouble no matter what binding scheme you use. So this argument is simply
not tenable. I agree that we can debate (and disagree) about which binding
scheme is preferable, but it makes no sense to describe those you don't
like as bugs.

>NLAMBDA -- cannot be made efficient (unless you consider a run-time call
>to EVAL efficient)
>
No, I agree that it cannot. I use LISP for its expressiveness, not its
efficiency; and while I appreciate that generally you can do with a macro
all that you can do with an NLAMBDA, few people can read a macro of more
than moderate complexity. We use LISP to convey information, not only to a
machine but also to other people. Writing code they can't read doesn't
achieve this object.

>PROG,GO -- For people who never have to write powerful tools I would
>agree that these are not necessary, but if you had ever tried to compile
>a special purpose language to lisp and make it reasonably efficient, you
>would appreciate the value of having things like PROG and GO as 
>compilation targets
>
Whilst we still programme largely for von Neumann architectures, there is
need for an iterative construct in LISP; however, there are many more
elegant iterative structures than PROG available to the designers of
modern LISPs. If you are using PROG for any purpose other than iteration,
then (if I were advising you - and of course, you might not accept my
advice) I would suggest that you probably need a clearer analysis of your
problem. Myself, I would never use GO or GOTO in any language.

>tokens in parameter lists -- Isn't it really obvious that something
>like member with a few options is better than the excessive proliferation
>of look-alike functions (ala memq memql memqual ...)
>
Obviously it is, but it isn't at all obvious to me that sticking tokens in
the parameter list even helps with this.

>SETF -- I can't believe my eyes... This is one of the BEST things about
>CL... I don't know what to say.  Anyone who has actually USED CL with setf
>for a while knows what I'm talking about.
>
So you actually like overwriting cons cells without knowing what else is
pointing to them !? Either you aren't serious, or you haven't looked at
what SETF does. We all *know* REPLACs are dangerous; we all use them with
care (I hope). But SETF allows us to overwrite a cons cell without even
getting hold of it to identify it first! That is *terrifying*! and you are
going to put that horror into the hands of the innocent?

>HAVE YOU EVER USED LISP????? 

Yes. Why do you think I care about it so much?


** Simon Brooke *********************************************************
*  e-mail : ·····@uk.ac.lancs.comp                                      * 
*  surface: Dept of Computing, University of Lancaster,  LA 1 4 YW, UK. *
*                                                                       *
* Thought for today: The task of a compiler is to take programs ... and *
******************** mutilate them beyond recognition [Elson] ***********
From: Bruce Krulwich
Subject: Re: What's the value of lexical scoping?
Date: 
Message-ID: <31720@yale-celray.yale.UUCP>
In article <···@dcl-csvax.comp.lancs.ac.uk> ·····@comp.lancs.ac.uk (Simon
Brooke) writes:
>If you wish to gain information from your environment, then clearly, the
>names of the symbols you use do matter. If you bind your locals either
>in an arg list or in a let statement, then they don't matter. If you
>*don't* do this, then you are using globals, which will get you into equal
>trouble no matter what binding scheme you use.

This is simply not true, especially when using programming techniques
encouraged in lexically scoped LISPs.  Suppose you pass around a function.
In a lexically scoped LISP such a function can reference variables from the
function that created it.  In a dynamically scoped LISP these variable
references can be blocked by other variables in the system.  This is
something you may not have done, having not worked with lexically scoped
LISPs, but it is incredibly powerful.  (See, for example, the book AI
PROGRAMMING, by Charniak et al.)

>>HAVE YOU EVER USED LISP????? 
>Yes. Why do you think I care about it so much?

There is a big difference between the capabilities available (and thus the
techniques used) in modern LISPs as opposed to older LISPs.  I really
suggest looking at AI PROGRAMMING or a similar book before claiming that
such capabilities are not needed.


Bruce Krulwich

Net-mail: ········@{yale.arpa, cs.yale.edu, yalecs.bitnet, yale.UUCP}

	Goal in life: to sit on a quiet beach solving math problems for a
		      quarter and soaking in the rays.   
From: Jerry Jackson
Subject: Re: What's the value of lexical scoping?
Date: 
Message-ID: <209@esosun.UUCP>
I must admit that after reading the measured response of Simon Brooke
to my *inflammatory* posting I felt somewhat abashed, but I would still
like to respond to some of his points... I think we are converging on
the good and bad points of both sides..


>>CL supports dynamic binding for those cases where
>>it is useful (I admit they definitely exist), although dynamic binding
>>is quite clearly a *BUG* (the names you give to local variables should not
>>matter...)
>>
>If you wish to gain information from your environment, then clearly, the
>names of the symbols you use do matter. If you bind your locals either
>in an arg list or in a let statement, then they don't matter. If you
>*don't* do this, then you are using globals, which will get you into equal
>trouble no matter what binding scheme you use. So this argument is simply
>not tenable. I agree that we can debate (and disagree) about which binding
>scheme is preferable, but it makes no sense to describe those you don't
>like as bugs.

I would like to elaborate on why I called this a *bug*.  It is not that
I just don't like it.  Here is an example of what I was talking about --

(defun foo (l)
  (my-mapcar #'(lambda (z)
		 (eql z l))
	     '(1 2 3 4)))

(defun my-mapcar (f l)
  (if (null l)
      nil
    (cons (funcall f (car l))
	  (my-mapcar f (cdr l)))))

With lexical scoping, the result of: (foo 2) => (nil t nil nil)
With dynamic scoping, the result of: (foo 2) => (nil nil nil nil)

With dynamic scoping, it is impossible to write a general procedure
which takes functional arguments that doesn't have this problem.  This is
why I said it's a bug -- it violates the notion that the names you pick
for *locals* shouldn't matter -- (notice that "l" in this case was
bound in the arglist)


>>PROG,GO -- For people who never have to write powerful tools I would
>>agree that these are not necessary, but if you had ever tried to compile
>>a special purpose language to lisp and make it reasonably efficient, you
>>would appreciate the value of having things like PROG and GO as 
>>compilation targets
>>
>Whilst we still programme largely for von Neumann architectures, there is
>need for an iterative construct in LISP; however, there are many more
>elegant iterative structures than PROG available to the designers of
>modern LISPs. If you are using PROG for any purpose other than iteration,
>then (if I were advising you - and of course, you might not accept my
>advice) I would suggest that you probably need a clearer analysis of your
>problem. Myself, I would never use GO or GOTO in any language.

As I said in my original statement, I am not advocating the use of the
abominable "go"-man in user code.  What I am saying, is that "go" is
a useful target for compilers for embedded languages -- (I have recently
written a compiler for a lisp-based prolog that compiles to lisp which
takes advantage of this...)

In fact, personally I don't much like iteration at all... That's why I
want implementors to be able to produce tail-recursive control structures
(even for embedded languages)


>>SETF -- I can't believe my eyes... This is one of the BEST things about
>>CL... I don't know what to say.  Anyone who has actually USED CL with setf
>>for a while knows what I'm talking about.
>>
>So you actually like overwriting cons cells without knowing what else is
>pointing to them !? Either you aren't serious, or you haven't looked at
>what SETF does. We all *know* REPLACs are dangerous; we all use them with
>care (I hope). But SETF allows us to overwrite a cons cell without even
>getting hold of it to identify it first! That is *terrifying*! and you are
>going to put that horror into the hands of the innocent?

On the contrary, I think that the benefits of SETF are most apparent when
you *do* know your target -- (I'm not really sure it is even possible to
do the opposite -- SETF is pretty dumb.. you have to tell it where the 
cell you want changed is and it has to know at compile time where that is)
Yes, RPLAC's are bad.  SETF is basically the same as the assignment 
mechanism in a more typical language like 'C':

a[i].wow = 5;  =>  (setf (wow (elt a i)) 5)

Is this bad?


BTW: There are things *I* don't like about CL -- 

1) packages -- The package system of CL is based on the wrong idea..
A programmer doesn't care if someone else uses the same symbol as *data*;
he only cares if it is a variable name or a function name, etc.  Since
what is important is the set of *bindings* for a symbol, an environment
system would be more appropriate.  

2) #' -- By distinguishing function bindings from variable bindings,
CL makes many uses of lexical scoping awkward and nearly opaque
(as well as requiring extra special forms) -- see the sketch after
this list.

3) the equality predicates -- I admit that I don't have a good answer
to this problem, but I think equalp was not well thought out (couldn't
we at least have a function just like equalp except that it is case-sensitive
for strings; or an option to equalp? -- I know, I know, everyone has his
own set)

4) A nit-pick -- has anyone ever found a use for the top level form: '-' ? 
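
(The sketch promised under point 2 -- my own example:)

;; Common Lisp, where function and variable bindings are separate:
(defun compose (f g)
  #'(lambda (x) (funcall f (funcall g x))))

;; In a single-namespace Lisp such as Scheme this would read simply:
;; (define (compose f g) (lambda (x) (f (g x))))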

However, if you consider the magnitude of the task of designing this
language, they did pretty well. (I never thought I'd say this -- I used
to be an Interlisp-D hacker..)

+-----------------------------------------------------------------------------+
|   Jerry Jackson                       UUCP:  seismo!esosun!jackson          |
|   Geophysics Division, MS/22          ARPA:  ··············@seismo.css.gov  |
|   SAIC                                SOUND: (619)458-4924                  |
|   10210 Campus Point Drive                                                  |
|   San Diego, CA  92121                                                      |
+-----------------------------------------------------------------------------+
From: Barry Margolin
Subject: Re: What's the value of lexical scoping?
Date: 
Message-ID: <22214@think.UUCP>
In article <···@esosun.UUCP> ·······@esosun.UUCP (Jerry Jackson) writes:
>1) packages -- The package system of CL is based on the wrong idea..
>A programmer doesn't care if someone else uses the same symbol as *data*;
>He only cares if it is a variable name or a function name, etc.  Since
>what is important is the set of *bindings* for a symbol, an environment
>system would be more appropriate.  

One program's function name is another program's data.  Macros, for
instance, are programs whose data will later be interpreted as a
program.

And what about symbols used in property lists?  Both a quantum physics
program and an auto inventory program might use the COLOR property of
symbols.

>4) A nit-pick -- has anyone ever found a use for the top level form: '-' ? 

Not for anything serious.  It's just a holdover from MacLisp.  It's
trivial to implement, and I guess the CL designers saw no reason to
drop it.

In MacLisp, which didn't have the LABELS construct, it could be used
to do recursion without actually defining a new function.  For
example, factorial(10) could be done with:

((lambda (n)
   (if (< n 2) 1
       (* n (funcall (car -) (1- n)))))
 10)

Barry Margolin
Thinking Machines Corp.

······@think.com
{uunet,harvard}!think!barmar
From: Barry Margolin
Subject: Re: What's the value of lexical scoping?
Date: 
Message-ID: <22212@think.UUCP>
In article <···@dcl-csvax.comp.lancs.ac.uk> ·····@comp.lancs.ac.uk (Simon Brooke) writes:
>>SETF
>So you actually like overwriting cons cells without knowing what else is
>pointing to them !? Either you aren't serious, or you haven't looked at
>what SETF does. We all *know* REPLACs are dangerous; we all use them with
>care (I hope). But SETF allows us to overwrite a cons cell without even
>getting hold of it to identify it first! That is *terrifying*! and you are
>going to put that horror into the hands of the innocent?

I don't understand this point at all.  How does SETF allow you to
overwrite something without requiring you to know what you're
overwriting?

Maybe the problem you are referring to is the difference in behavior
of SETF depending upon whether it is operating on a structured object
or not.  If it is modifying a structured object, it modifies the
object, so all references to that object see the change.  On the other
hand, if it is given a character or a number, it modifies only the
referent it is given.  Examples:

Structured:
	(setq x (cons 1 2))
	(setq y x)
	(eql x y) => T
	(setf (car x) 3)
	x => (3 . 2)
	y => (3 . 2)
	(eql x y) => T

Non-structured:
	(setq x #\a)
	(setq y x)
	(eql x y) => T
	(setf (char-bit x :meta) t)
	x => #\meta-a
	y => #\a
	(eql x y) => NIL

However, these same inconsistencies would exist if you were forced to
use the pre-SETF equivalents:
	(setq x (rplaca x 3))
	(setq x (set-char-bit x :meta t))

(excuse the anachronism).  The inconsistency isn't in SETF, but in the
fact that the language allows side-effects to some data types but not
others.  If side effects on conses were not permitted, e.g.

(defun rplaca (cons new-car)
  (cons new-car (cdr cons)))

the two SETFs would be consistent regarding side effects.

Barry Margolin
Thinking Machines Corp.

······@think.com
{uunet,harvard}!think!barmar
From: Jeff Dalton
Subject: Re: What's the value of lexical scoping?
Date: 
Message-ID: <481@aiva.ed.ac.uk>
In article <···@dcl-csvax.comp.lancs.ac.uk> ·····@comp.lancs.ac.uk (Simon Brooke) writes:
>If you wish to gain information from your environment, then clearly, the
>names of the symbols you use do matter. If you bind your locals either
>in an arg list or in a let statement, then they don't matter. 

But if the Lisp provides only dynamic scope, the names of variables
bound in arg lists and LETs do matter even though you often don't
want them to.

] NLAMBDA -- cannot be made efficient (unless you consider a run-time call
] to EVAL efficient)

>No, I agree that it cannot. I use LISP for its expressiveness, not its
>efficiency; and while I appreciate that generally you can do with a macro
>all that you can do with an NLAMBDA, few people can read a macro of more
>than moderate complexity.

You can easily write NLAMBDAs in Common Lisp by using a function
together with a macro that adds quotes to the arguments.  Whether
it is desirable to do so is another matter.  The problems are not
just of efficiency but also of understanding.
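
For instance (a hypothetical sketch of the construction just described;
DEFINE-NLAMBDA is my name for it, not a Common Lisp operator):

(defmacro define-nlambda (name args &body body)
  ;; NAME's arguments arrive unevaluated: a hidden function does the
  ;; work, and a macro wraps each argument form in QUOTE before
  ;; calling it.
  (let ((fn (gensym)))
    `(progn
       (defun ,fn ,args ,@body)
       (defmacro ,name (&rest forms)
         (cons ',fn (mapcar #'(lambda (f) (list 'quote f)) forms))))))

(define-nlambda show (x) (print x))
(show (+ 1 2))   ; prints the list (+ 1 2), not 3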

] SETF -- I can't believe my eyes... This is one of the BEST things about
] CL... I don't know what to say.  Anyone who has actually USED CL with setf
] for a while knows what I'm talking about.

>So you actually like overwriting cons cells without knowing what else is
>pointing to them !? Either you aren't serious, or you haven't looked at
>what SETF does. We all *know* REPLACs are dangerous; we all use them with
>care (I hope). But SETF allows us to overwrite a cons cell without even
>getting hold of it to identify it first! That is *terrifying*! and you are
>going to put that horror into the hands of the innocent?

SETF of CAR and RPLACA are the same thing as far as what you've said
is concerned.  You have not given a reason why SETF is more terrifying
than RPLACA, for it does not let you modify cons cells without getting
hold of them first any more than RPLACA does.

Jeff Dalton,                      JANET: ········@uk.ac.ed             
AI Applications Institute,        ARPA:  ·················@nss.cs.ucl.ac.uk
Edinburgh University.             UUCP:  ...!ukc!ed.ac.uk!J.Dalton
From: Jeff Dalton
Subject: Re: What's the value of lexical scoping?
Date: 
Message-ID: <452@aiva.ed.ac.uk>
In article <···@dcl-csvax.comp.lancs.ac.uk> ·····@comp.lancs.ac.uk (Simon Brooke) writes:
>In article <·····@ucbvax.BERKELEY.EDU> ·····@dewey.soe.berkeley.edu (Marty Kent) writes:
>>I've been wondering lately why it is that "modern" lisps like Common Lisp
>>and Scheme are committed to lexical scoping.  
>
>Good! somebody else prepared to stand up and say Common LISP is a mess. If
>you share this opinion, read the end of this posting even if you skip the
>middle... it is important.

I will respond to this, because I have talked with Simon before on
Uk.lisp, and because I am familiar with the various standardization
efforts mentioned in his message.  Nothing I say has any official
standing of course.

For one thing, it is significant that Marty Kent mentioned Scheme as
well as Common Lisp.  He did not say Common Lisp is a mess, nor did
he mention any problem with Common Lisp other than the effects of
its use of lexical scope on debugging.  I therefore think it a bit
unfair to enlist him in the anti-Common Lisp cause just yet.

This exchange is a sign of a general problem faced by the Lisp
community, namely that we are trying to standardize Lisp at a
time when our conception of the language is changing.  One aspect
of this change is the move towards lexical scoping.

In addition, there is a problem of "uneven development": some
people have gone further than others or in a different direction.
I do not mean to imply that those who have gone further are right.
Nonetheless, it may be difficult to explain "modern" Lisp without
retracing a lot of history.

It may even be that the only way to understand is to try out the
other point of view and see what it's like.

My own thinking on these matters was informed in part by the
first Scheme papers by Steele and Sussman starting in 1975,
by Stoy's book on Denotational Semantics (see especially his
comments on 1st-class functions), and by two papers in the
Conference Record of the 1980 Lisp Conference:

     Kent M. Pitman. Special Forms in Lisp. Pages 179-187.
     [An argument that macros are better than fexprs]

     Steven S. Muchnick and Uwe F. Pleban.  A Semantic
     Comparison of Lisp and Scheme.  Pages 56-64.
     [In part a reconstruction of Lisp development, as is
     Steele and Sussman's "The Art of the Interpreter".]

     [For other references, see the R*RS Scheme reports.]

Scheme-like ways of thinking were not immediately convincing, but
experience with Scheme showed that such an approach had virtues
despite seeming to impose a number of restrictions.  For one thing,
the lexical resolution of the differences between interpreter and
compiler semantics (many Lisps have partial lexical scoping in
compiled code but only dynamic scoping in interpreted) began to
seem better overall than the dynamic resolution (have compiled
code use dynamic scoping too), as did the lexical form of
functional values.

These ideas eventually became strong enough to influence Common Lisp:
  -  Common Lisp has lexical scoping with indefinite extent.
  -  Common Lisp does not support user-defined special forms.
  -  Common Lisp uses the same rules of scope and extent
     for both interpreted and compiled code.

It is important to note that for many these are all good things.
In particular, it is likely that all three will be true of any
Lisp standard, whether developed by x3j13 or ISO's WG-16.  They
are also true of the suggestions made by the EuLisp committee.
Indeed, one of the key questions now is how much further in the
Scheme direction the standard should move.

However, to others these things are all bad things and represent 
capitulation to the forces of Pascal, or something of that sort.
In addition, some, and I think Simon is among them, find Common
Lisp a confusing mixture.  They prefer both Scheme and "dynamic
Lisp" to Common Lisp, but still prefer dynamic Lisp to Scheme.
Such a position is not simply a complaint that Common Lisp has
lexical scoping; it is more complex.  If they would like a standard
that improves on Common Lisp in some respects, it may be possible
to achieve one; but I do not think the Lisp community has the
resources or the inclination to build a standard for dynamic Lisp.

This seems enough for one message; I will respond further in the
next.

Jeff Dalton,                      JANET: ········@uk.ac.ed             
AI Applications Institute,        ARPA:  ·················@nss.cs.ucl.ac.uk
Edinburgh University.             UUCP:  ...!ukc!ed.ac.uk!J.Dalton
From: Simon Brooke
Subject: Re: What's the value of lexical scoping?
Date: 
Message-ID: <519@dcl-csvax.comp.lancs.ac.uk>
Before I start: thanks to Jeff Dalton for his piece, and I accept his
correction. I was clearly wrong to claim that Marty Kent necessarily
shared my opinion of CL. Also, Jeff is perfectly right to suggest that I
would prefer Scheme - which appears a clean, elegant, well designed
language - to CL. But for the rest....

Things are bubbling! good. Let's look at some of the arguments that have
been advanced. Firstly, the objection to dynamic binding that's most
commonly advanced: namely, that you fall down holes when you confuse
locals with globals. As John Levine writes:

]	Every time I have used a dynamically scoped language (Lisp, APL,
]	Snobol4) I have been bitten quite painfully by strange bugs due
]	to unintended aliasing of names.

If, in a lexically scoped lisp, you refer to a global variable
thinking it's a local, or vice-versa, you'll still fall down a hole.
Lexical scoping does not obviate the need for good software engineering
practice - namely, in this case a *naming scheme*.

The Xerox community has got used to a scheme under which, among other
things, all globals are marked out with asterisks:
	*thisIsAGlobal*
and locals are not:
	thisIsALocal
Even common lisp people, despite the fact that they use a language which
throws away 50% of all the information in its input stream (that really
is unbelievable!), could adopt a simple convention like this. Then you
won't fall down *that* kind of hole.
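
To make the convention concrete, here is a minimal sketch in Common
Lisp (the names are invented for illustration):

    (defvar *verbose* t)              ; global: the asterisks say so

    (defun greet (name)               ; NAME is a local, unmarked
      (when *verbose*
        (format t "Hello, ~A!~%" name)))

One glance at a reference then tells you whether it reaches outside
the function.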

Secondly, the argument that Lisp should become more like conventional
languages so that people switching to it will find it easier to learn.
Douglas Rand expressed it thus:

]	... most programmers, especially those who are transplants from 
]	'normal' languages like FORTRAN, PL/1, C, and PASCAL will expect
]	lexically scoped behaviour.

I don't buy this. The reason these people are switching to LISP is because
they are *dissatisfied* with the *expressiveness* of their current
language. They know that LISP is significantly different; they know they
are going to have to re-learn. What they don't want is to find, when
they've put in that effort, that we have castrated LISP to the extent that
it gives them no advantage over what they've left.

The argument advanced by Patrick Arnold:

]	the potential for a procedure (function) to capture variables from
]	the environment ... violates the "black box" notion of a
]	procedure 

seems to me to be the same thing in different clothes. The '"black box"
notion of a procedure' cannot come from LISP, because LISP has no
procedures. It comes, in fact, from conventional programming in the ALGOL
tradition. Part of the power and expressiveness of LISP is that we can,
when we want to, and when we know what we're doing, write functions which
are sensitive to changes in their environment. If you don't like this, you
will find that there are plenty of other *very good* languages (Pascal, 
Modula, Ada - even Scheme) which cater for your needs. Don't come and mess
up the one language which has the expressiveness to do this.
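
What such an environment-sensitive function looks like in practice --
a deliberately tame sketch, written with a standard Common Lisp
special variable:

    (defun render (n)
      (write-to-string n))    ; behaviour depends on the caller's
                              ; dynamic environment, via *PRINT-BASE*

    ;; (render 255)                            => "255"
    ;; (let ((*print-base* 16)) (render 255))  => "FF"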

The key point I want to make is one which Patrick made admirably:

]	...dynamic binding... makes some forms of abstraction easier to
]	handle (this is important for programming in the large).

Precisely so. And it is precisely for its ability to handle abstraction
that we choose LISP as a language. If we reduce its power to do so, we
reduce its value to 'just another programming language' - and one,
furthermore, which is greedy of machine resources and doesn't integrate
well with others.

One last point, quickly. Douglas Rand says:

]	... Common Lisp preserves the ability to screw yourself for the
]	hearty adventurer types (you can always do a (declare (special ..)))
]	but saves the rest of us mere mortals from our folly.

This is, in my opinion (not, I admit, widely shared as yet) one of the
worst of the Common LISP messes. It is the nature of LISP that code is
built up incrementally. You build your system on the top of my system. Let
us say that you are a mortal and I am a hearty adventurer. How are you to
know which tokens I have declared special? Well, I *ought* to have
documented them; or you could always read the source file; or, as a last
gasp, you could always, *every single time you use a new variable* ask the
system whether I've already declared it special. But are you *really*
going to do these things? No. Mixing your binding schemes is asking for
trouble - and trouble of a particularly nasty sort.

Perhaps what is being said in all this is that what we actually need is
two standard languages: say ISO Scheme and ISO Dynamic LISP...? I do not
reject the value of standardisation. To be able to transfer programs -
and, perhaps even more important, programming experience - from one
computing environment to another will be of steadily increasing importance
as LISP becomes accepted as a tool for some types of commercial
programming.

Happy Lisping!

** Simon Brooke *********************************************************
*  e-mail : ·····@uk.ac.lancs.comp                                      * 
*  surface: Dept of Computing, University of Lancaster,  LA 1 4 YW, UK. *
*                                                                       *
*  Neural Nets: "It doesn't matter if you don't know how your program   *
***************  works, so long as it's parallel" - R. O'Keefe **********
From: Barry Margolin
Subject: Re: What's the value of lexical scoping?
Date: 
Message-ID: <21847@think.UUCP>
In article <···@dcl-csvax.comp.lancs.ac.uk> ·····@comp.lancs.ac.uk (Simon Brooke) writes:
>The Xerox community has got used to a scheme under which, among other
>things, all globals are marked out with asterisks:
>	*thisIsAGlobal*
>and locals are not:
>	thisIsALocal
>Even common lisp people, despite the fact that they use a language which
>throws away 50% of all the information in its input stream (that really
>is unbelievable!), could adopt a simple convention like this. Then you
>won't fall down *that* kind of hole.

The above is the same convention as is used by the Common Lisp
community.  All the special variables defined in CLtL use this naming
scheme.  I don't know whether the convention was developed at Xerox or
not, but it has been used at MIT since about 1981 (unfortunately, a
large part of the MIT Lisp Machine OS was written before the
convention caught on, and many of the old non-starred names still
exist in the Symbolics system).

>Secondly, the argument that Lisp should become more like conventional
>languages so that people switching to it will find it easier to learn.
>Douglas Rand expressed it thus:
>]	... most programmers, especially those who are transplants from 
>]	'normal' languages like FORTRAN, PL/1, C, and PASCAL will expect
>]	lexically scoped behaviour.
>I don't buy this. The reason these people are switching to LISP is because
>they are *dissatisfied* with the *expressiveness* of their current
>language. They know that LISP is significantly different; they know they
>are going to have to re-learn. What they don't want is to find, when
>they've put in that effort, that we have castrated LISP to the extent that
>it gives them no advantage over what they've left.

First of all, the original text said "especially", not "only", meaning
that even people who aren't switching are likely to expect
lexically-scoped behavior.  Lexical scoping simplifies understanding
of a program, because one can look at a function call and the function
definition and determine the behavior, without having to know the
entire history of the call tree.

Second, just because some feature is used in the traditional languages
does not mean that it should automatically be excluded from Lisp.
Lexical scoping is a good thing, and we should not be prejudiced just
because it was used in algebraic languages first.

>The argument advanced by Patrick Arnold:
>
>]	the potential for a procedure (function) to capture variables from
>]	the environment ... violates the "black box" notion of a
>]	procedure 
>
>seems to me to be the same thing in different clothes. The '"black box"
>notion of a procedure' cannot come from LISP, because LISP has no
>procedures. 

Huh?  I think you are using too restricted a definition of
"procedure".  In the above context, I think "procedure", "function",
and "subroutine" should all be considered synonymous.

>	     It comes, in fact, from conventional programming in the ALGOL
>tradition. Part of the power and expressiveness of LISP is that we can,
>when we want to, and when we know what we're doing, write functions which
>are sensitive to changes in their environment. If you don't like this, you
>will find that there are plenty of other *very good* languages (Pascal, 
>Modula, Ada - even Scheme) which cater for your needs. Don't come and mess
>up the one language which has the expressiveness to do this.

This distinction implies that only Lisp allows one to write functions
that are dependent upon the environment.  All the other languages
mentioned allow functions and procedures to refer to global variables.
The only unique feature of Lisp is that it does not force formal
parameter variables to be lexically-scoped local variables.  I don't
see this as a major feature, and I doubt computer science would have
been held back had Lisp required programmers to write:

(defun do-something (new-read-base)
  (let ((*read-base* new-read-base))
    ...))

or even

(defun do-something (new-read-base)
  (fluid-let ((*read-base* new-read-base))
    ...))

instead of

(defun do-something (*read-base*)
  ...)

>One last point, quickly. Douglas Rand says:
>
>]	... Common Lisp preserves the ability to screw yourself for the
>]	hearty adventurer types (you can always do a (declare (special ..)))
>]	but saves the rest of us mere mortals from our folly.
>
>This is, in my opinion (not, I admit, widely shared as yet) one of the
>worst of the Common LISP messes. It is the nature of LISP that code is
>built up incrementally. You build your system on the top of my system. Let
>us say that you are a mortal and I am a hearty adventurer. How are you to
>know which tokens I have declared special? Well, I *ought* to have
>documented them; or you could always read the source file; or, as a last
>gasp, you could always, *every single time you use a new variable* ask the
>system whether I've already declared it special. But are you *really*
>going to do these things? No. Mixing your binding schemes is asking for
>trouble - and trouble of a particularly nasty sort.

Packages, while they are not perfect, are the solution to the above
problem.  You can make sure that your variables don't collide with
mine by using a different package from me.  Yes, if you make use of
inherited packages, you run into the above problem.  One solution is
to not use inherited packages when you are not intimately familiar
with the system whose package you are inheriting from; another is to
make use of the *naming scheme* mentioned above (unfortunately, if the
provider of the system doesn't follow this convention, you lose).
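
For instance (the package names are invented; DEFPACKAGE used for
brevity):

    ;; Two systems can use the "same" variable name without collision,
    ;; because each package interns its own symbol.
    (defpackage "ADVENTURER" (:use "COMMON-LISP"))
    (in-package "ADVENTURER")
    (defvar *state* 'reckless)        ; this is ADVENTURER::*STATE*

    (defpackage "MORTAL" (:use "COMMON-LISP"))
    (in-package "MORTAL")
    (defvar *state* 'careful)         ; MORTAL::*STATE*, a distinct symbol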

>Perhaps what is being said in all this is that what we actually need is
>two standard languages: say ISO Scheme and ISO Dynamic LISP...?

A dynamic-only Lisp would be a bad idea.  We had one -- Maclisp -- and
we've abandoned it.  Most of the Common Lisp designers are former
Maclisp developers and programmers, and they consciously chose to
switch to lexical scoping by default.

Barry Margolin
Thinking Machines Corp.

······@think.com
uunet!think!barmar
From: Olin Shivers
Subject: Lexical scoping
Date: 
Message-ID: <2005@pt.cs.cmu.edu>
   If, in a lexically scoped lisp, you refer to a global variable
   thinking it's a local, or vice-versa, you'll still fall down a hole.
   Lexical scoping does not obviate the need for good software engineering
   practice - namely, in this case a *naming scheme*.

This is an uninformed statement. You can always tell a local variable when you
see one in a lexically scoped program -- its definition (or binding site) is
found at some syntactically related place in the program. That's why it's
called a "lexical" variable: the variable's declaration/definition is
*lexically* (or textually) apparent.

For example, suppose we have the following code written in a dynamically
scoped lisp, like Maclisp:
    (defun foo1 (n) (bar))
    (defun foo2 (n) (bar))
    (defun bar () n)

Now, what variable binding is the reference to N in BAR referring to? It could
be the one in FOO1, the one in FOO2, or some other one sitting in some other
file that's going to be loaded into our Lisp at run time. That's why
dynamic variable binding is *not* lexical: you can't textually go from a
variable reference to its binding. In a lexically scoped lisp, you can.
So it's easy to distinguish globals from locals.
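
For contrast, here is roughly what becomes of the same three
definitions in a lexically scoped lisp (Common Lisp, say):

    (defun foo1 (n) (bar))   ; this N is a lexical local of FOO1
    (defun foo2 (n) (bar))   ; and this one of FOO2
    (defun bar () n)         ; this N is a free reference -- not
                             ; FOO1's or FOO2's N

    ;; (foo1 5) does not quietly return 5: the N in BAR has no
    ;; lexically apparent binding, so most compilers warn about it,
    ;; and the reference is unbound at run time.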

Not only does lexical scoping make life easier for the poor programmer, who
never has to ask, "Where is this mystery variable defined?", but it also makes
life easier for the compiler, which can generate better code for the same
reason.

These issues are covered at length in the Steele/Sussman Lambda papers,
of which perhaps the best is "The Art of The Interpreter." The Abelson/Sussman
book, *Structure and Interpretation of Computer Programs* also treats this
sort of stuff.
	-Olin
From: Jeff Dalton
Subject: Re: What's the value of lexical scoping?
Date: 
Message-ID: <478@aiva.ed.ac.uk>
In article <···@dcl-csvax.comp.lancs.ac.uk> ·····@comp.lancs.ac.uk (Simon Brooke) writes:
>Things are bubbling! good. Let's look at some of the arguments that have
>been advanced. Firstly, the objection to dynamic binding that's most
>commonly advanced: namely, that you fall down holes when you confuse
>locals with globals. [...]  If, in a lexically scoped lisp, you refer
>to a global variable thinking it's a local, or vice-versa, you'll still
>fall down a hole.  Lexical scoping does not obviate the need for good
>software engineering practice - namely, in this case a *naming scheme*.

There are many cases where both lexical and dynamic binding produce
the same result.  If you stick to these cases, problems with dynamic
binding will, of course, not appear.

The problem with dynamic binding is not so much that (incorrect)
references to (supposedly) local variables might refer to a global
instead -- that is clearly a problem in any language that has global
variables -- but that there is no way to have a local variable
whose value is not visible everywhere.  One cannot determine by
local inspection what references to a variable exist: any function
called might refer to it.  *All* variables are globally visible,
not just the ones meant to be global.

A naming scheme can handle this problem, but bugs are much harder
to localize when it breaks down.  Suppose F has a local M, G has
a local N, F calls G, and the author of G mistakenly typed M in one
place instead of N.  Neither N nor M was meant to be global, so a
naming scheme for globals would not have helped.  A naming scheme that
forbade local variable names such as N and M would not be acceptable.
And by "local" here I should really say something like "dynamic
variables meant to have only local (i.e. lexically valid) references".
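
To make the scenario concrete, here is a sketch in Common Lisp, with
SPECIAL declarations standing in for an all-dynamic Lisp:

    (defun f ()
      (let ((m 1))
        (declare (special m))   ; dynamically bound, as everything
        (g)))                   ; would be in a dynamic-only Lisp

    (defun g ()
      (let ((n 2))
        (declare (special n))
        (+ m n)))               ; the typo: M where N was meant

    ;; (f) => 3.  Under dynamic scoping the stray M quietly finds F's
    ;; binding; under lexical scoping the free M is at least flagged
    ;; by the compiler as an undefined variable.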

In most code, dynamic scope is needed in a minority of cases and
all other cases that turn out to refer to the dynamic binding of
a variable will be bugs.  This suggests that some explicit step be
required to get dynamic scope and that lexical scope be the default.
And so it is a Good Thing that Common Lisp (and the varieties of
Scheme that provide dynamic scope) require such explicit steps and
a Bad Thing when a Lisp provides only dynamic variables.

Jeff Dalton,                      JANET: ········@uk.ac.ed             
AI Applications Institute,        ARPA:  ·················@nss.cs.ucl.ac.uk
Edinburgh University.             UUCP:  ...!ukc!ed.ac.uk!J.Dalton
From: Michael Rys
Subject: What's the value of lexical scoping?
Date: 
Message-ID: <512@ethz.UUCP>
In article <···@aiva.ed.ac.uk> ····@uk.ac.ed.aiva (Jeff Dalton) writes:
>In article <···@dcl-csvax.comp.lancs.ac.uk> ·····@comp.lancs.ac.uk (Simon Brooke) writes:
>>Things are bubbling! good. Let's look at some of the arguments that have
>>been advanced. Firstly, the objection to dynamic binding that's most
>>commonly advanced: namely, that you fall down holes when you confuse
>>locals with globals. [...]  If, in a lexically scoped lisp, you refer
>>to a global variable thinking it's a local, or vice-versa, you'll still
>>fall down a hole.  Lexical scoping does not obviate the need for good
>>software engineering practice - namely, in this case a *naming scheme*.
>
>There are many cases where both lexical and dynamic binding produce
>the same result.  If you stick to these cases, problems with dynamic
>binding will, of course, not appear.
>
>...
>A naming scheme can handle this problem, but bugs are much harder

In APL there exists only dynamic scoping. A possible way to get
the same result as with static scoping (a.k.a. lexical scoping) is to
introduce three new scope classes. For a detailed description see
the paper by Seeds, Arpin, and LaBarre in APL Quote Quad, 1978, or
my paper 'Scope and access classes in APL' in the APL88 Conference
Proceedings (available from ACM). Of course this new scheme would require
new ideas for the symbol table...

Michael Rys

IPSANet : ····@ipsaint
UUCP    : ····@ethz.uucp
From: Jeff Dalton
Subject: Re: What's the value of lexical scoping?
Date: 
Message-ID: <479@aiva.ed.ac.uk>
In article <···@dcl-csvax.comp.lancs.ac.uk> ·····@comp.lancs.ac.uk (Simon Brooke) writes:
>The key point I want to make is one which Patrick made admirably:

]	...dynamic binding... makes some forms of abstraction easier to
]	handle (this is important for programming in the large).

>Precisely so. And it is precisely for its ability to handle abstraction
>that we choose LISP as a language. If we reduce its power to do so, we
>reduce its value to 'just another programming language[...]

The key point is that dynamic binding makes *some* forms of abstraction
easier to handle.  Lexical scoping also has this property, though for
different abstractions.  If you are going to have only one or the other,
the question is which abstraction forms are more important.  A change
from dynamic scoping to lexical is not necessarily a reduction in
power (particularly since dynamic scoping is usually implemented
without a way to get closures over the dynamic environment).

The designers of Common Lisp decided to avoid both reductions in
power by providing both forms of scoping.
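
Both forms side by side, in a minimal sketch (names invented):

    (defvar *base* 10)                 ; dynamic: a special variable

    (defun make-adder (k)              ; K is lexical and closed over --
      #'(lambda (x) (+ x k)))          ; indefinite extent

    (defun shifted (x) (+ x *base*))   ; sees the current dynamic binding

    ;; (funcall (make-adder 5) 1)       => 6    lexical capture
    ;; (let ((*base* 100)) (shifted 1)) => 101  dynamic rebinding
    ;; (shifted 1)                      => 11   outer binding restored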

Jeff Dalton,                      JANET: ········@uk.ac.ed             
AI Applications Institute,        ARPA:  ·················@nss.cs.ucl.ac.uk
Edinburgh University.             UUCP:  ...!ukc!ed.ac.uk!J.Dalton
From: Jeff Dalton
Subject: Re: What's the value of lexical scoping?
Date: 
Message-ID: <480@aiva.ed.ac.uk>
In article <···@dcl-csvax.comp.lancs.ac.uk> ·····@comp.lancs.ac.uk (Simon Brooke) writes:
>This is, in my opinion (not, I admit, widely shared as yet) one of the
>worst of the Common LISP messes. It is the nature of LISP that code is
>built up incrementally. You build your system on the top of my system. Let
>us say that you are a mortal and I am a hearty adventurer. How are you to
>know which tokens I have declared special? Well, I *ought* to have
>documented them; or you could always read the source file; or, as a last
>gasp, you could always, *every single time you use a new variable* ask the
>system whether I've already declared it special. But are you *really*
>going to do these things? No. Mixing your binding schemes is asking for
>trouble - and trouble of a particularly nasty sort.

Actually, you are not the only one who makes this argument.  The reasons
I do not find it convincing are:

1. The same naming convention you suggest earlier in your message -- 
   that the names of dynamic variables begin and end with "*" -- can
   be (and is) used in Common Lisp.  So I don't have to, as a last
   gasp, every single time, etc.

2. While it is true that I might not know that someone whose code I
   use has declared X special (i.e., dynamicly scoped), I also may
   not know that s/he has defined a function F.  Name conflicts of 
   this sort are not introduced by having both lexical and dynamic
   scope.  Indeed, they are *less* likely than in Lisps where every
   variable is dynamic.

The Common Lisp mixture of lexical and dynamic scoping is not perfect,
but the problems for the most part involve technical details, not the
mere fact that both lexical and dynamic scope are available.

For example, there is no way in Common Lisp to guarantee that a
variable is not special.  In (LET ((A 10)) ...), A might have been
proclaimed special somewhere, and there's no way to turn that off.
But this is because special proclamations affect all bindings as
well as all references, not because special variables exist at all.
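
Sketched concretely (the function names are invented):

    (proclaim '(special a))   ; someone, somewhere, proclaims A special

    (defun show-a () a)       ; a free -- hence dynamic -- reference to A

    (defun caller ()
      (let ((a 10))           ; looks lexical, but is a dynamic rebinding
        (show-a)))            ; => 10, and no declaration at the LET can
                              ; restore a purely lexical binding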

Jeff Dalton,                      JANET: ········@uk.ac.ed             
AI Applications Institute,        ARPA:  ·················@nss.cs.ucl.ac.uk
Edinburgh University.             UUCP:  ...!ukc!ed.ac.uk!J.Dalton
From: Marc P. Rinfret
Subject: Re: What's the value of lexical scoping?
Date: 
Message-ID: <330@mosart.UUCP>
In article <···@dcl-csvax.comp.lancs.ac.uk> ·····@comp.lancs.ac.uk (Simon Brooke) writes:
>In article <·····@ucbvax.BERKELEY.EDU> ·····@dewey.soe.berkeley.edu (Marty Kent) writes:
>>I've been wondering lately why it is that "modern" lisps like Common Lisp
>>and Scheme are committed to lexical scoping.  
>
>Good! somebody else prepared to stand up and say Common LISP is a mess.

I think you are jumping the gun.  Wondering about a feature of a language
and saying it "is a mess" are two different things.

>...
>Finally, if you (or your employer) have a wallet as
>deep as the Marianas trench, there's the much-heralded micro-explorer.
>That *ought* to give a decent LISP environment, but again I haven't seen
>one.
	I haven't seen one, but I understand it gives you pretty much the same
environment as the one you get on the Explorer (or the Lambda).  This is a
really good system, and yes, it does support lexical scoping.  And yes, it also
keeps the names of the local variables around, and yes, it has a good debugger.


>>2) while it's true that Common Lisp's scoping makes it difficult to write
>>debuggers, lexical scoping is still a good trade-off because it buys
>>you...
>>
Ok, tell me: how many of us will have the pleasure of WRITING a debugger?  You
cannot assess the value of a language by how easy or hard it is to write
a debugger.  A language is good if you can easily develop code in it; a good
debugger is good to have, but it is a feature of the implementation.
If the system you have doesn't include one, that's bad -- go get a good one now.
If you are developing your own system, a debugger is worth the additional
effort, but then you don't design the debugger on top of the implementation:
you make it a part of it.

>We had a long discussion about this on the uk.lisp newsgroup. I still have
>much of this on file and could post it if people are interested (I can't
>easily mail to the States). 

Sure, post it.

>Advocates of lexical scoping offered a number
>of extremely tricky programming examples which couldn't be done with
>anything else. These were very impressive *as tricks*, but I couldn't ever
>imagine using any of them in a serious programming situation. In short, I
>wasn't convinced - but I should add that I didn't convince anyone else
>either.

The question here is not which feature enables you to pull the best
tricks, but which language plays the fewest tricks on you.  I believe
that accidental dynamic capture of identifiers is a dirty trick: it
makes for hard-to-find problems.

>...
>the good news is that it appears that dynamic binding a la EuLisp will be
>incorporated

Common Lisp as it is currently defined DOES incorporate dynamic
bindings; you simply have to ask for them with (DEFVAR ...).  I think it is
nice to have the choice of both.  One of the nice things about having
lexical binding be the default is (on top of preventing accidental
shadowing) that it enables the compiler to warn you when you use an
undeclared variable (pretty often a typo that, with fully dynamic scoping,
would only be detected at runtime).  Unless you don't like declaring
variables at all -- in which case you may as well use F......!
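
For instance, this invented typo is caught at compile time precisely
because RADIS has no lexically apparent binding:

    (defun circle-area (radius)
      (* pi radis radius))    ; "radis": the compiler warns about an
                              ; undefined free variable instead of
                              ; quietly resolving it at run time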

> and there will be no packages. 

The Package system of Common Lisp is something that no one likes, but you do
need something like it if you are to develop any decent-sized system.
I don't know the kind of projects you have been involved with, but when
you work on a large one (> 25 man-years, with people coming and going), you
need something to manage your namespace.  Saying "no packages" is not good
enough; do you have any ideas how to replace it?

>Obviously, I have my ideas about what a good LISP looks like (all right,
>as a minimum it has dynamic binding, both LAMBDA and NLAMBDA forms, at
>least the option of non-intrusive garbage collection; although it allows
>macros, there is nothing you can't do with a function; and it does not
>have packages, PROG, GO, stupid tokens in parameter lists, SETF....) -

So what's your problem with CL?  It has all you want; the features you
don't like you simply don't use.  If LISP is to be more than an academic
toy or a philosophical statement, it must include some "impure" features.
If you want to stay pure, keep away from these.  I have never personally
used PROG and GO, but I've seen cases where they were well used.

SETF is something really nice, and it is based on an easy-to-grasp concept.
It reduces namespace clutter (nice when you don't like packages!):
you don't have to look around to figure out the name of the modifier function,
provided you know the accessor.  Again, this is something you may better
appreciate when you're working with large systems.
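
For example (the variable name is invented):

    (defvar *config* (list :depth 3))

    (getf *config* :depth)              ; the accessor  => 3
    (setf (getf *config* :depth) 5)     ; the same name names the place
                                        ; to update -- no separate
                                        ; modifier function to remember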