From: ···········@gmail.com
Subject: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <1149023839.682144.169540@j73g2000cwa.googlegroups.com>
What are the relative merits of a Lisp-1 (one namespace for both
functions and variables) versus a Lisp-2 (one namespace for functions,
and one for variables)? I can think of the following:

For Lisp-1:
 * No #'
 * It's always clear what a name refers to; can't do potentially
confusing things like (foo foo)
 * Somewhat simplified language description, somewhat simplified
implementation

For Lisp-2:
 * Can't accidentally shadow function names (especially important for
safe non-hygienic macros)
 * The size of the namespace is doubled, because you can reuse each
name twice

Looking at the above lists, I would tentatively choose Lisp-2, since
the arguments for Lisp-1 are mostly aesthetic, and the arguments for
Lisp-2 are quite practical. But, what do you think? Are there any items
you would add to the lists?

From: Ron Garret
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <rNOSPAMon-03C5C6.19172130052006@news.gha.chartermi.net>
In article <························@j73g2000cwa.googlegroups.com>,
 ···········@gmail.com wrote:

> For Lisp-2:
>  * Can't accidentally shadow function names (especially important for
> safe non-hygienic macros)

To be strictly correct: you can't shadow function names with 
lambda-binding.  You can still shadow them with FLET (but because FLET 
is explicit, accidental shadowing is very rare).
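
A small illustration (mine, not from the post) of the asymmetry in
Common Lisp:

```lisp
(defun f () 1)

;; LET binds F only in the variable namespace, so (f) still calls
;; the global function F:
(let ((f 42))
  (f))            ; => 1

;; FLET binds F in the function namespace and does shadow it -- but
;; only because the shadowing was requested explicitly:
(flet ((f () 2))
  (f))            ; => 2
```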

rg
From: Pascal Costanza
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <4e3spmF1d0fknU1@individual.net>
···········@gmail.com wrote:
> What are the relative merits of a Lisp-1 (one namespace for both
> functions and variables) versus a Lisp-2 (one namespace for functions,
> and one for variables)? I can think of the following:
> 
> For Lisp-1:
>  * No #'
>  * It's always clear what a name refers to; can't do potentially
> confusing things like (foo foo)
>  * Somewhat simplified language description, somewhat simplified
> implementation
> 
> For Lisp-2:
>  * Can't accidentally shadow function names (especially important for
> safe non-hygienic macros)
>  * The size of the namespace is doubled, because you can reuse each
> name twice
> 
> Looking at the above lists, I would tentatively choose Lisp-2, since
> the arguments for Lisp-1 are mostly aesthetic, and the arguments for
> Lisp-2 are quite practical. But, what do you think? Are there any items
> you would add to the lists?

Googling for Lisp-1, Lisp-2 and Lisp-n in comp.lang.lisp gives you quite 
a lot of postings on this question.

A standard reference is 
http://www.nhplace.com/kent/Papers/Technical-Issues.html

I mostly agree with your assessment. But note that a functional 
programming style is somewhat more convenient with a Lisp-1. It's easier 
to express things like (((f x) y) z) whereas in a Lisp-2 you would have 
to say (funcall (funcall (f x) y) z). So if you have a strong preference 
for functional programming (or would like to explore it more deeply) you 
might want to try out a Lisp-1 (e.g., Scheme).
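
To make the comparison concrete, a small curried example (mine, not
from the thread):

```lisp
(defun curried-add (x)
  (lambda (y)
    (lambda (z)
      (+ x y z))))

;; In a Lisp-2 such as Common Lisp:
(funcall (funcall (curried-add 1) 2) 3)   ; => 6

;; In a Lisp-1 such as Scheme this would be (((curried-add 1) 2) 3).
```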

A very real use of reusing a name for different purposes is CLOS. In a 
class definition, it is quite common to give an accessor the same name 
as the slot it accesses, like this:

(defclass person ()
   ((name :accessor name)))

In a method for person, you can then easily distinguish between 
accessing a slot directly and accessing it via its accessor, like this:

(defmethod m ((p person))
   (with-slots (name) p
     ... (name p) ...       ; the accessor
     ... name ...))         ; the slot


The fact that a Lisp-2 seriously lessens the need for hygienic macros is 
indeed also important IMHO.


Pascal

-- 
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
From: ···········@gmail.com
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <1149029292.840915.78660@y43g2000cwc.googlegroups.com>
> I mostly agree with your assessment. But note that a functional
> programming style is somewhat more convenient with a Lisp-1. It's easier
> to express things like (((f x) y) z) whereas in a Lisp-2 you would have
> to say (funcall (funcall (f x) y) z). So if you have a strong preference
> for functional programming (or would like to explore it more deeply) you
> might want to try out a Lisp-1 (i.e., Scheme).

How about using a simple read-macro for this? Something along the lines
of:

(defun square-brace-reader (stream char)
  (declare (ignore char))
  `(funcall ,@(read-delimited-list #\] stream t)))

(set-macro-character #\[ #'square-brace-reader)
;; Give #\] the same terminating syntax as #\) so that
;; READ-DELIMITED-LIST can actually stop at it:
(set-macro-character #\] (get-macro-character #\)))

That will expand [foo bar] to (funcall foo bar), or [[(f x) y] z] to
(funcall (funcall (f x) y) z).

On one project which involved a lot of functional programming, I used a
macro like this. The version I used was slightly fancier, in that it
would also expand [f x y . z] to (apply f x y z), and I had an
additional reader macro that would expand {x y : (f y x)} to #'(lambda
(x y) (f y x)). What do you think?

> A standard reference is
> http://www.nhplace.com/kent/Papers/Technical-Issues.html

Thanks! This is good stuff.
From: Marco Baringer
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <m2zmgznkcc.fsf@bese.it>
···········@gmail.com writes:

> How about using a simple read-macro for this? Something along the lines
> of:
>
> (defun square-brace-reader (stream char)
>   (declare (ignore char))
>   `(funcall ,@(read-delimited-list #\] stream t)))
>
> (set-macro-character #\[ #'square-brace-reader)
>
> That will expand [foo bar] to (funcall foo bar), or [[(f x) y] z] to
> (funcall (funcall (f x) y) z).

Hoan Ton-That has written a with-lisp1 macro which allows stuff like
this:

(with-lisp1
  (let ((a +))
    (a 2 3))) ==> 5

(with-lisp1
  (let ((a (lambda (x) 
             (lambda (y)
               (+ x y)))))
    ((a 2) 3))) ==> 5

see
http://groups.google.com/group/comp.lang.lisp/browse_thread/thread/82994055009163e9
and a recent version of arnesi
(http://common-lisp.net/project/bese/repos/arnesi_dev/) for the code.

-- 
-Marco
Ring the bells that still can ring.
Forget the perfect offering.
There is a crack in everything.
That's how the light gets in.
	-Leonard Cohen
From: Pascal Costanza
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <4e4spiF1ck764U2@individual.net>
···········@gmail.com wrote:
>> I mostly agree with your assessment. But note that a functional
>> programming style is somewhat more convenient with a Lisp-1. It's easier
>> to express things like (((f x) y) z) whereas in a Lisp-2 you would have
>> to say (funcall (funcall (f x) y) z). So if you have a strong preference
>> for functional programming (or would like to explore it more deeply) you
>> might want to try out a Lisp-1 (i.e., Scheme).
> 
> How about using a simple read-macro for this? Something along the lines
> of:
> 
> (defun square-brace-reader (stream char)
>   (declare (ignore char))
>   `(funcall ,@(read-delimited-list #\] stream t)))
> 
> (set-macro-character #\[ #'square-brace-reader)
> 
> That will expand [foo bar] to (funcall foo bar), or [[(f x) y] z] to
> (funcall (funcall (f x) y) z).

...because you still have the cognitive overhead of having to know when 
or when not to use brackets (which may or may not be relevant to you).


Pascal

-- 
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
From: Marcus Breiing
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <e5jqjt$7gn$1@chessie.cirr.com>
···········@gmail.com writes:

> How about using a simple read-macro ...  that will expand [foo bar]
> to (funcall foo bar), or [[(f x) y] z] to (funcall (funcall (f x) y)
> z)

I switched to this syntax a while ago. It cured me of any and all
residual Lisp-1 envy. (If anything, such code tends to be even *more*
readable than equivalent Lisp-1 would be.) Strongly recommended.


> I had an additional reader macro that would expand {x y : (f y x)}
> to #'(lambda (x y) (f y x)). What do you think?

If you find yourself using the same parameter names in most or all of
your "small" lambdas, a possible alternative to reader syntax is name
capture. For example, with:

  (defmacro /xy (&rest form)
    `(lambda (x y) (declare (ignorable x y)) ,form))
        
your example condenses down to (/xy f y x)


-- 
Marcus Breiing
From: Pascal Bourguignon
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <87bqtfywzl.fsf@thalassa.informatimago.com>
Pascal Costanza <··@p-cos.net> writes:
> I mostly agree with your assessment. But note that a functional
> programming style is somewhat more convenient with a Lisp-1. It's
> easier to express things like (((f x) y) z) whereas in a Lisp-2 you
> would have to say (funcall (funcall (f x) y) z). So if you have a
> strong preference for functional programming (or would like to explore
> it more deeply) you might want to try out a Lisp-1 (i.e., Scheme).

I would note that it's hard to know what's going on when you have
expressions that start with several parentheses.  How many?

Perhaps it's only that I'm used to lisp-2, but after that recent HtDP
exercise, I find (((f x) y) z) _less_ readable
than: (funcall (funcall (f x) y) z)


-- 
__Pascal Bourguignon__                     http://www.informatimago.com/

PLEASE NOTE: Some quantum physics theories suggest that when the
consumer is not directly observing this product, it may cease to
exist or will exist only in a vague and undetermined state.
From: Rob Thorpe
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <1149152017.943076.91030@g10g2000cwb.googlegroups.com>
Pascal Bourguignon wrote:
> Pascal Costanza <··@p-cos.net> writes:
> > I mostly agree with your assessment. But note that a functional
> > programming style is somewhat more convenient with a Lisp-1. It's
> > easier to express things like (((f x) y) z) whereas in a Lisp-2 you
> > would have to say (funcall (funcall (f x) y) z). So if you have a
> > strong preference for functional programming (or would like to explore
> > it more deeply) you might want to try out a Lisp-1 (i.e., Scheme).
>
> I would note that it's hard to know what's going on when you have
> expressions that start with several parentheses.  How many?
>
> Perhaps it's only that I'm used to lisp-2, but after that recent HtDP
> exercise, I find (((f x) y) z) _less_ readable
> than: (funcall (funcall (f x) y) z)

Regardless of the readability of the code, it does something quite
unusual, and that should be noted.
In either case I would write:

(((f x) y)  ;; Comment explaining why this is done and what is done
 z)

(funcall (funcall (f x) y)  ;; Comment explaining why this is done
                            ;; and what is done
 z)

I would even comment ((f x) y) since it is rare, though only to say why
it's done; what is done is clear.
From: Thomas F. Burdick
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <xcvverma7a3.fsf@conquest.OCF.Berkeley.EDU>
Pascal Costanza <··@p-cos.net> writes:

> I mostly agree with your assessment. But note that a functional
> programming style is somewhat more convenient with a Lisp-1. It's
> easier to express things like (((f x) y) z) whereas in a Lisp-2 you
> would have to say (funcall (funcall (f x) y) z). So if you have a
> strong preference for functional programming (or would like to explore
> it more deeply) you might want to try out a Lisp-1 (i.e., Scheme).

Allowing complex expressions for the first place in a form is
orthogonal to a language being a Lisp-1 or Lisp-2.  Common Lisp
doesn't support things like ((make-counter 10) 3), but there's no
reason that another Lisp-2 couldn't.  It still wouldn't be as
convenient for functional programming, though.
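
For reference, one possible MAKE-COUNTER behind that example
(hypothetical; the thread never defines it):

```lisp
(defun make-counter (n)
  ;; Return a closure that advances the counter by STEP and
  ;; returns the new value.
  (lambda (step) (incf n step)))

;; Standard Common Lisp requires FUNCALL here:
(funcall (make-counter 10) 3)   ; => 13

;; ((make-counter 10) 3) is the spelling that ANSI CL rejects.
```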
From: Marcin 'Qrczak' Kowalczyk
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <87hd34yjaj.fsf@qrnik.zagroda>
···@conquest.OCF.Berkeley.EDU (Thomas F. Burdick) writes:

> Allowing complex expressions for the first place in a form is
> orthogonal to a language being a Lisp-1 or Lisp-2.  Common Lisp
> doesn't support things like ((make-counter 10) 3), but there's no
> reason that another Lisp-2 couldn't.  It still wouldn't be as
> convenient for functional programming, though.

There is a reason: a symbol is an expression, so arbitrary expressions
couldn't be allowed without making it a Lisp-1.

-- 
   __("<         Marcin Kowalczyk
   \__/       ······@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
From: Ron Garret
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <rNOSPAMon-CF7E49.09334801062006@news.gha.chartermi.net>
In article <··············@qrnik.zagroda>,
 Marcin 'Qrczak' Kowalczyk <······@knm.org.pl> wrote:

> ···@conquest.OCF.Berkeley.EDU (Thomas F. Burdick) writes:
> 
> > Allowing complex expressions for the first place in a form is
> > orthogonal to a language being a Lisp-1 or Lisp-2.  Common Lisp
> > doesn't support things like ((make-counter 10) 3), but there's no
> > reason that another Lisp-2 couldn't.  It still wouldn't be as
> > convenient for functional programming, though.
> 
> There is a reason: a symbol is an expression, so arbitrary expressions
> couldn't be allowed without making it a Lisp-1.

Of course it could.  Symbols in the CAR position would simply be a 
special case.

In fact doing this is trivial.  In MCL the changes required amount to 
4-6 lines of code depending on how you count, and unless other Lisps do 
something really weird it can't be much harder than that.  Basically all 
you have to do is grep through the source code of your Lisp and find a 
call that looks something like:

(error "~S is not a symbol or lambda expression" expr)

and replace it with

(eval-or-compile (cons 'funcall expr))

or maybe:

(if *allow-combinations-in-car*
  (eval-or-compile (cons 'funcall expr))
  (error "~S is not a symbol or lambda expression" expr))

rg
From: Pascal Costanza
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <4e8mtrF1d8d9qU1@individual.net>
Ron Garret wrote:
> In article <··············@qrnik.zagroda>,
>  Marcin 'Qrczak' Kowalczyk <······@knm.org.pl> wrote:
> 
>> ···@conquest.OCF.Berkeley.EDU (Thomas F. Burdick) writes:
>>
>>> Allowing complex expressions for the first place in a form is
>>> orthogonal to a language being a Lisp-1 or Lisp-2.  Common Lisp
>>> doesn't support things like ((make-counter 10) 3), but there's no
>>> reason that another Lisp-2 couldn't.  It still wouldn't be as
>>> convenient for functional programming, though.
>> There is a reason: a symbol is an expression, so arbitrary expressions
>> couldn't be allowed without making it a Lisp-1.
> 
> Of course it could.  Symbols in the CAR position would simply be a 
> special case.

It's not so easy (and thanks to Marcin for mentioning this, because it 
reminded me of another case I thought of before).

Consider this:

(defmacro foo () 'bar)

(defun test ()
   (flet ((bar () (print 1)))
     (let ((bar (lambda () (print 2))))
       ((foo)))))

What does this print and why?

Also consider what you would have to do to make a code walker still work 
correctly.


Pascal

-- 
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
From: Ron Garret
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <rNOSPAMon-AE5F07.12234301062006@news.gha.chartermi.net>
In article <···············@individual.net>,
 Pascal Costanza <··@p-cos.net> wrote:

> Ron Garret wrote:
> > In article <··············@qrnik.zagroda>,
> >  Marcin 'Qrczak' Kowalczyk <······@knm.org.pl> wrote:
> > 
> >> ···@conquest.OCF.Berkeley.EDU (Thomas F. Burdick) writes:
> >>
> >>> Allowing complex expressions for the first place in a form is
> >>> orthogonal to a language being a Lisp-1 or Lisp-2.  Common Lisp
> >>> doesn't support things like ((make-counter 10) 3), but there's no
> >>> reason that another Lisp-2 couldn't.  It still wouldn't be as
> >>> convenient for functional programming, though.
> >> There is a reason: a symbol is an expression, so arbitrary expressions
> >> couldn't be allowed without making it a Lisp-1.
> > 
> > Of course it could.  Symbols in the CAR position would simply be a 
> > special case.
> 
> It's not so easy

Yes it is.  Remember, I have an existence proof.

> (and thanks to Marcin to mention this, because it 
> reminded me of another case I thought of before).
> 
> Consider this:
> 
> (defmacro foo () 'bar)
> 
> (defun test ()
>    (flet ((bar () (print 1)))
>      (let ((bar (lambda () (print 2))))
>        ((foo)))))
> 
> What does this print and why?

Well, it depends on what semantics you assign to ((...) ...) but the 
simplest thing to do is to specify that ((...) ...) is equivalent to 
(funcall (...) ...).  In that case the result is:

Welcome to Macintosh Common Lisp Version 5.1!
...
? (defmacro foo () 'bar)

FOO
? (defun test ()
   (flet ((bar () (print 1)))
     (let ((bar (lambda () (print 2))))
       ((foo)))))
TEST
? (test)

2 
2
? 

> Also consider what you would have to do to make a code walker still work 
> correctly.

You would have to add a line of code that looked something like:

    ((consp (car form)) (walk (cons 'funcall form)))

or perhaps

    ((consp (car form)) (walk (funcall *car-is-combination-hook* form)))

rg
From: Peter Seibel
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <m2mzcwiw47.fsf@gigamonkeys.com>
Pascal Costanza <··@p-cos.net> writes:

> Ron Garret wrote:
>> In article <··············@qrnik.zagroda>,
>>  Marcin 'Qrczak' Kowalczyk <······@knm.org.pl> wrote:
>> 
>>> ···@conquest.OCF.Berkeley.EDU (Thomas F. Burdick) writes:
>>>
>>>> Allowing complex expressions for the first place in a form is
>>>> orthogonal to a language being a Lisp-1 or Lisp-2.  Common Lisp
>>>> doesn't support things like ((make-counter 10) 3), but there's no
>>>> reason that another Lisp-2 couldn't.  It still wouldn't be as
>>>> convenient for functional programming, though.
>>> There is a reason: a symbol is an expression, so arbitrary expressions
>>> couldn't be allowed without making it a Lisp-1.
>> Of course it could.  Symbols in the CAR position would simply be a
>> special case.
>
> It's not so easy (and thanks to Marcin to mention this, because it
> reminded me of another case I thought of before).
>
> Consider this:
>
> (defmacro foo () 'bar)
>
> (defun test ()
>    (flet ((bar () (print 1)))
>      (let ((bar (lambda () (print 2))))
>        ((foo)))))
>
> What does this print and why?

Not that I actually think this language extension is worth the bother,
but it seems that if you added it, you could simply say: the basic
evaluation rule is applied after all macro expansions. Therefore the
above is equivalent (after expanding the FOO macro) to:

  (defun test ()
    (flet ((bar () (print 1)))
      (let ((bar (lambda () (print 2))))
        (bar))))

and thus prints 1, because the basic evaluation rule treats a symbol
in the CAR position of a cons expression as a function name, not a
variable name.

To have it return 2 you'd have to intertwingle macroexpansion with the
basic evaluation rule so that despite the macro having been expanded
some bit of information is kept around to tell the evaluator that the
expansion was originally a non-symbol form and therefore should be
evaluated according to the non-symbol form rule even though it is now
a symbol. And that just seems perverse. Or maybe I haven't thought
this all the way through.

-Peter

-- 
Peter Seibel           * ·····@gigamonkeys.com
Gigamonkeys Consulting * http://www.gigamonkeys.com/
Practical Common Lisp  * http://www.gigamonkeys.com/book/
From: Pascal Costanza
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <4e8t2nF1e227eU1@individual.net>
Peter Seibel wrote:
> Pascal Costanza <··@p-cos.net> writes:
> 
>> Ron Garret wrote:
>>> In article <··············@qrnik.zagroda>,
>>>  Marcin 'Qrczak' Kowalczyk <······@knm.org.pl> wrote:
>>>
>>>> ···@conquest.OCF.Berkeley.EDU (Thomas F. Burdick) writes:
>>>>
>>>>> Allowing complex expressions for the first place in a form is
>>>>> orthogonal to a language being a Lisp-1 or Lisp-2.  Common Lisp
>>>>> doesn't support things like ((make-counter 10) 3), but there's no
>>>>> reason that another Lisp-2 couldn't.  It still wouldn't be as
>>>>> convenient for functional programming, though.
>>>> There is a reason: a symbol is an expression, so arbitrary expressions
>>>> couldn't be allowed without making it a Lisp-1.
>>> Of course it could.  Symbols in the CAR position would simply be a
>>> special case.
>> It's not so easy (and thanks to Marcin to mention this, because it
>> reminded me of another case I thought of before).
>>
>> Consider this:
>>
>> (defmacro foo () 'bar)
>>
>> (defun test ()
>>    (flet ((bar () (print 1)))
>>      (let ((bar (lambda () (print 2))))
>>        ((foo)))))
>>
>> What does this print and why?
> 
> Not that I actually think this language extension is worth the bother,
> but it seems that if you added it, you could simply say: the basic
> evaluation rule is applied after all macro expansions. Therefore the
> above is equivalent (after expanding the FOO macro) to:
> 
>   (defun test ()
>     (flet ((bar () (print 1)))
>       (let ((bar (lambda () (print 2))))
>         (bar))))
> 
> and thus prints 1, because the basic evaluation rule treats a symbol
> in the CAR position of a cons expression as a function name, not a
> variable name.
> 
> To have it return 2 you'd have to intertwingle macroexpansion with the
> basic evaluation rule so that despite the macro having been expanded
> some bit of information is kept around to tell the evaluator that the
> expansion was originally a non-symbol form and therefore should be
> evaluated according to the non-symbol form rule even though it is now
> a symbol. And that just seems perverse. Or maybe I haven't thought
> this all the way through.

Ron's original suggestion was that whenever the car of a form is a cons, 
it gets automatically treated as if it were preceded by a funcall. So 
according to this rule, the above would be equivalent to this:

(defun test ()
   (flet ((bar () (print 1)))
     (let ((bar (lambda () (print 2))))
       (funcall bar))))

This is also why the non-equivalence between ((lambda ...) ...) and (let 
((x (lambda ...))) (x ...)) is relevant.
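
The non-equivalence can be demonstrated in standard Common Lisp
(example mine):

```lisp
;; A literal LAMBDA form in the car position is special-cased by ANSI CL:
((lambda (x) (* x x)) 5)            ; => 25

;; Binding the same function to a variable removes that option;
;; (let ((f (lambda (x) (* x x)))) (f 5)) signals an undefined-function
;; error, so FUNCALL is required:
(let ((f (lambda (x) (* x x))))
  (funcall f 5))                    ; => 25
```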

As I said, I think doing anything with conses in the car of a form in a 
Lisp-2 is more confusing than not. For that matter, I think that the 
special case of having a cons that starts with 'lambda that is already 
part of ANSI Common Lisp should be removed/deprecated as well (or, for 
that matter, explained as an exceptional case that one shouldn't worry 
about too much).


Pascal

-- 
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
From: Ron Garret
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <rNOSPAMon-30DCC9.12342901062006@news.gha.chartermi.net>
In article <···············@individual.net>,
 Pascal Costanza <··@p-cos.net> wrote:

> Peter Seibel wrote:
> > Pascal Costanza <··@p-cos.net> writes:
> > 
> >> Ron Garret wrote:
> >>> In article <··············@qrnik.zagroda>,
> >>>  Marcin 'Qrczak' Kowalczyk <······@knm.org.pl> wrote:
> >>>
> >>>> ···@conquest.OCF.Berkeley.EDU (Thomas F. Burdick) writes:
> >>>>
> >>>>> Allowing complex expressions for the first place in a form is
> >>>>> orthogonal to a language being a Lisp-1 or Lisp-2.  Common Lisp
> >>>>> doesn't support things like ((make-counter 10) 3), but there's no
> >>>>> reason that another Lisp-2 couldn't.  It still wouldn't be as
> >>>>> convenient for functional programming, though.
> >>>> There is a reason: a symbol is an expression, so arbitrary expressions
> >>>> couldn't be allowed without making it a Lisp-1.
> >>> Of course it could.  Symbols in the CAR position would simply be a
> >>> special case.
> >> It's not so easy (and thanks to Marcin to mention this, because it
> >> reminded me of another case I thought of before).
> >>
> >> Consider this:
> >>
> >> (defmacro foo () 'bar)
> >>
> >> (defun test ()
> >>    (flet ((bar () (print 1)))
> >>      (let ((bar (lambda () (print 2))))
> >>        ((foo)))))
> >>
> >> What does this print and why?
> > 
> > Not that I actually think this language extension is worth the bother,
> > but it seems that if you added it, you could simply say: the basic
> > evaluation rule is applied after all macro expansions. Therefore the
> > above is equivalent (after expanding the FOO macro) to:
> > 
> >   (defun test ()
> >     (flet ((bar () (print 1)))
> >       (let ((bar (lambda () (print 2))))
> >         (bar))))
> > 
> > and thus prints 1, because the basic evaluation rule treats a symbol
> > in the CAR position of a cons expression as a function name, not a
> > variable name.
> > 
> > To have it return 2 you'd have to intertwingle macroexpansion with the
> > basic evaluation rule so that despite the macro having been expanded
> > some bit of information is kept around to tell the evaluator that the
> > expansion was originally a non-symbol form and therefore should be
> > evaluated according to the non-symbol form rule even though it is now
> > a symbol. And that just seems perverse. Or maybe I haven't thought
> > this all the way through.
> 
> Ron's original suggestion was that whenever the car of a form is a cons, 
> it gets automatically treated as if it were preceded by a funcall. So 
> according to this rule, the above would be equivalent to this:
> 
> (defun test ()
>    (flet ((bar () (print 1)))
>      (let ((bar (lambda () (print 2))))
>        (funcall bar))))
> 
> This is also why the non-equivalence between ((lambda ...) ...) and (let 
> ((x (lambda ...))) (x ...)) is relevant.
> 
> As I said, I think doing anything with conses in the car of a form in a 
> Lisp-2 is more confusing than not.

And because YOU find it confusing no one else should be allowed to do it.

> For that matter, I think that the 
> special case of having a cons that starts with 'lambda that is already 
> part of ANSI Common Lisp should be removed/deprecated as well (or, for 
> that matter, explained as an exceptional case that one shouldn't worry 
> about too much).

Right.  Every feature that Pascal Costanza finds confusing or doesn't 
see a use for ought to be deprecated.

What dismays me most about all this is that these arguments all make the 
tacit assumption that all code is written (and read) by humans.  It is 
hard to imagine a more colossal irony than to see that assumption being 
made by advocates of Lisp.  Remind me, what exactly is the point of all 
those little irritating silly parentheses?

rg
From: Pascal Costanza
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <4e8upnF1cmnddU1@individual.net>
Ron Garret wrote:
> In article <···············@individual.net>,
>  Pascal Costanza <··@p-cos.net> wrote:
> 
>> Peter Seibel wrote:
>>> Pascal Costanza <··@p-cos.net> writes:
>>>
>>>> Consider this:
>>>>
>>>> (defmacro foo () 'bar)
>>>>
>>>> (defun test ()
>>>>    (flet ((bar () (print 1)))
>>>>      (let ((bar (lambda () (print 2))))
>>>>        ((foo)))))
>>>>
>>>> What does this print and why?
>>> Not that I actually think this language extension is worth the bother,
>>> but it seems that if you added it, you could simply say: the basic
>>> evaluation rule is applied after all macro expansions. Therefore the
>>> above is equivalent (after expanding the FOO macro) to:
>>>
>>>   (defun test ()
>>>     (flet ((bar () (print 1)))
>>>       (let ((bar (lambda () (print 2))))
>>>         (bar))))
>>>
>>> and thus prints 1, because the basic evaluation rule treats a symbol
>>> in the CAR position of a cons expression as a function name, not a
>>> variable name.
>>>
>>> To have it return 2 you'd have to intertwingle macroexpansion with the
>>> basic evaluation rule so that despite the macro having been expanded
>>> some bit of information is kept around to tell the evaluator that the
>>> expansion was originally a non-symbol form and therefore should be
>>> evaluated according to the non-symbol form rule even though it is now
>>> a symbol. And that just seems perverse. Or maybe I haven't thought
>>> this all the way through.
>> Ron's original suggestion was that whenever the car of a form is a cons, 
>> it gets automatically treated as if it were preceded by a funcall. So 
>> according to this rule, the above would be equivalent to this:
>>
>> (defun test ()
>>    (flet ((bar () (print 1)))
>>      (let ((bar (lambda () (print 2))))
>>        (funcall bar))))
>>
>> This is also why the non-equivalence between ((lambda ...) ...) and (let 
>> ((x (lambda ...))) (x ...)) is relevant.
>>
>> As I said, I think doing anything with conses in the car of a form in a 
>> Lisp-2 is more confusing than not.
> 
> And because YOU find it confusing no one else should be allowed to do it.

The fact that Peter came up with a different answer than you at least 
shows that it is not clear-cut what this should yield even amongst 
experienced Lispers.

That I am purportedly disallowing anyone to do anything is complete and 
utter nonsense. The fact that you have an existence proof is clear 
evidence that I cannot disallow you to do this.

>> For that matter, I think that the 
>> special case of having a cons that starts with 'lambda that is already 
>> part of ANSI Common Lisp should be removed/deprecated as well (or, for 
>> that matter, explained as an exceptional case that one shouldn't worry 
>> about too much).
> 
> Right.  Every feature that Pascal Costanza finds confusing or doesn't 
> see a use for ought to be deprecated.

No, it just means that I have an opinion about this specific topic.

> What dismays me most about all this is that these arguments all make the 
> tacit assumption that all code is written (and read) by humans.  It is 
> hard to imagine a more colossal irony than to see that assumption being 
> made by advocates of Lisp.  Remind me, what exactly is the point of all 
> those little irritating silly parentheses?

To a (non-human) code generator and/or consumer, it doesn't matter one 
single iota whether it can be ((foo x) y) or it has to be (funcall (foo 
x) y). It's a completely local, totally trivial transformation, without 
any gain in expressiveness. Any non-ambiguous rule would do the job, no 
matter how contorted it may be.

If I remember correctly, your motivation to enable treating conses in 
the car of a form in a meaningful way was to make a certain programming 
style more convenient. Correct me if I am wrong, but programming style 
is an issue that involves the ease with which humans can write, read and 
reason about code.


Pascal

-- 
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
From: Ron Garret
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <rNOSPAMon-927C2F.13213401062006@news.gha.chartermi.net>
In article <···············@individual.net>,
 Pascal Costanza <··@p-cos.net> wrote:

> Ron Garret wrote:
> > In article <···············@individual.net>,
> >  Pascal Costanza <··@p-cos.net> wrote:
> > 
> >> Peter Seibel wrote:
> >>> Pascal Costanza <··@p-cos.net> writes:
> >>>
> >>>> Consider this:
> >>>>
> >>>> (defmacro foo () 'bar)
> >>>>
> >>>> (defun test ()
> >>>>    (flet ((bar () (print 1)))
> >>>>      (let ((bar (lambda () (print 2))))
> >>>>        ((foo)))))
> >>>>
> >>>> What does this print and why?
> >>> Not that I actually think this language extension is worth the bother,
> >>> but it seems that if you added it, you could simply say: the basic
> >>> evaluation rule is applied after all macro expansions. Therefore the
> >>> above is equivalent (after expanding the FOO macro) to:
> >>>
> >>>   (defun test ()
> >>>     (flet ((bar () (print 1)))
> >>>       (let ((bar (lambda () (print 2))))
> >>>         (bar))))
> >>>
> >>> and thus prints 1 because the basic evaluation rule evaluates a symbol
> >>> in the CAR position of a cons expression is a function name not a
> >>> variable name.
> >>>
> >>> To have it return 2 you'd have to intertwingle macroexpansion with the
> >>> basic evaluation rule so that despite the macro having been expanded
> >>> some bit of information is kept around to tell the evaluator that the
> >>> expansion was originally a non-symbol form and therefore should be
> >>> evaluated according to the non-symbol form rule even though it is now
> >>> a symbol. And that just seems perverse. Or maybe I haven't thought
> >>> this all the way through.
> >> Ron's original suggestion was that whenever the car of a form is a cons, 
> >> it gets automatically treated as if it were preceded by a funcall. So 
> >> according to this rule, the above would be equivalent to this:
> >>
> >> (defun test ()
> >>    (flet ((bar () (print 1)))
> >>      (let ((bar (lambda () (print 2))))
> >>        (funcall bar))))
> >>
> >> This is also why the non-equivalence between ((lambda ...) ...) and (let 
> >> ((x (lambda ...))) (x ...)) is relevant.
> >>
> >> As I said, I think doing anything with conses in the car of a form in a 
> >> Lisp-2 is more confusing than not.
> > 
> > And because YOU find it confusing no one else should be allowed to do it.
> 
> The fact that Peter came up with a different answer than you at least 
> shows that it is not clear-cut what this should yield even amongst 
> experienced Lispers.

I would say that it shows that more than one set of semantics is 
possible.  That amounts to the same thing, but without the pejorative 
spin.

> That I am purportedly disallowing anyone to do anything is complete and 
> utter nonsense.

What is the point of deprecating a feature of the language if it is not 
your desire to disallow people from using that feature to the extent 
that it is possible to do so?

> The fact that you have an existence proof is clear 
> evidence that I cannot disallow you to do this.

But I am only able to do it by resorting to implementation-specific 
hacks (or a full code walker, which offends my sensibilities for 
something so trivial).  And that is a direct result of you and people 
like you who wish to impose gratuitous constraints on the language on 
the grounds that *they* find it confusing or useless or whatever.


> >> For that matter, I think that the 
> >> special case of having a cons that starts with 'lambda that is already 
> >> part of ANSI Common Lisp should be removed/deprecated as well (or, for 
> >> that matter, explained as an exceptional case that one shouldn't worry 
> >> about too much).
> > 
> > Right.  Every feature that Pascal Costanza finds confusing or doesn't 
> > see a use for ought to be deprecated.
> 
> No, it just means that I have an opinion about this specific topic.
> 
> > What dismays me most about all this is that these arguments all make the 
> > tacit assumption that all code is written (and read) by humans.  It is 
> > hard to imagine a more colossal irony than to see that assumption being 
> > made by advocates of Lisp.  Remind me, what exactly is the point of all 
> > those little irritating silly parentheses?
> 
> To a (non-human) code generator and/or consumer, it doesn't matter one 
> single iota whether it can be ((foo x) y) or it has to be (funcall (foo 
> x) y). It's a completely local, totally trivial transformation, without 
> any gain in expressiveness. Any non-ambiguous rule would do the job, no 
> matter how contorted it may be.
> 
> If I remember correctly, your motivation to enable treating conses in 
> the car of a form in a meaningful way was to make a certain programming 
> style more convenient.

That is one reason.  It is not the only reason.

> Correct me if I am wrong, but programming style 
> is an issue that involves the ease with which humans can write, read and 
> reason about code.

Why do you persist with these unwarranted tacit assumptions?  Just 
because I once cited a reason for wanting to do something, why do you 
assume that that is (or even was) my only reason?

Why do you think that everyone ought to live by the constraints imposed 
by your lack of imagination?

rg
From: Thomas A. Russ
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <ymik680fsxa.fsf@sevak.isi.edu>
Ron Garret <·········@flownet.com> writes:

> In article <···············@individual.net>,
>  Pascal Costanza <··@p-cos.net> wrote:
> >
> > That I am purportedly disallowing anyone to do anything is complete and 
> > utter nonsense.
> 
> What is the point of deprecating a feature of the language if it is not 
> your desire to disallow people from using that feature to the extent 
> that it is possible to do so?
> 
> > The fact that you have an existence proof is clear 
> > evidence that I cannot disallow you to do this.
> 
> But I am only able to do it by resorting to implementation-specific 
> hacks (or a full code walker, which offends my sensibilities for 
> something so trivial).  And that is a direct result of you and people 
> like you who wish to impose gratuitous constraints on the language on 
> the grounds that *they* find it confusing or useless or whatever.

And just how does this differ from people who wish to gratuitously
change an existing, standardized language on the grounds that they don't
like a deliberate design choice made in that language?  Because they
find it inconvenient to have to insert one extra symbol?

-- 
Thomas A. Russ,  USC/Information Sciences Institute
From: Ron Garret
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <rNOSPAMon-F25995.15592301062006@news.gha.chartermi.net>
In article <···············@sevak.isi.edu>,
 ···@sevak.isi.edu (Thomas A. Russ) wrote:

> Ron Garret <·········@flownet.com> writes:
> 
> > In article <···············@individual.net>,
> >  Pascal Costanza <··@p-cos.net> wrote:
> > >
> > > That I am purportedly disallowing anyone to do anything is complete and 
> > > utter nonsense.
> > 
> > What is the point of deprecating a feature of the language if it is not 
> > your desire to disallow people from using that feature to the extent 
> > that it is possible to do so?
> > 
> > > The fact that you have an existence proof is clear 
> > > evidence that I cannot disallow you to do this.
> > 
> > But I am only able to do it by resorting to implementation-specific 
> > hacks (or a full code walker, which offends my sensibilities for 
> > something so trivial).  And that is a direct result of you and people 
> > like you who wish to impose gratuitous constraints on the language on 
> > the grounds that *they* find it confusing or useless or whatever.
> 
> And just how does this differ from people who wish to gratuitously
> change an existing, standardized language on the grounds that they don't
> like a deliberate design choice made in that language?

It's different because 1) the change I propose is strictly 
backwards-compatible and therefore cannot possibly impact anyone who 
doesn't actually use it and 2) the current state of affairs is 
demonstrably harmful in that it forces one to do orders of magnitude 
more work in order to perform certain potentially useful tasks.

> Because they
> find it inconvenient to have to insert one extra symbol?

No.  And the fact that you have to resort to such straw-man arguments to 
make your point is further evidence that your position is untenable.

rg
From: Pascal Costanza
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <4e92c0F1dfmdbU1@individual.net>
Ron Garret wrote:

>>>> As I said, I think doing anything with conses in the car of a form in a 
>>>> Lisp-2 is more confusing than not.
>>> And because YOU find it confusing no one else should be allowed to do it.
>> The fact that Peter came up with a different answer than you at least 
>> shows that it is not clear-cut what this should yield even amongst 
>> experienced Lispers.
> 
> I would say that it shows that more than one set of semantics is 
> possible.  That amounts to the same thing, but without the pejorative 
> spin.

There is no pejorative spin in what I said. Look up "confuse" in a 
dictionary. Merriam Webster gives the following description, among 
others: "to fail to differentiate from an often similar or related 
other", and gives "<confuse money with comfort>" as an example.

Your suggestion to treat the car of a form in a meaningful way when it 
is a cons fails to differentiate clearly between function definitions 
and variable definitions, at least in corner cases. The fact that the 
same form gives rise to two reasonable, but completely different 
interpretations _is_ confusing.
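A concrete instance is the example quoted earlier in the thread; the two readings discussed there give different results:

```lisp
(defmacro foo () 'bar)

(defun test ()
  (flet ((bar () (print 1)))
    (let ((bar (lambda () (print 2))))
      ((foo)))))

;; Reading 1 (expand first, then apply the ordinary rule):
;;   ((foo)) becomes (bar), the FLET function, so (test) prints 1.
;; Reading 2 (a cons in the car implies FUNCALL):
;;   ((foo)) becomes (funcall (foo)), i.e. (funcall bar), the LET
;;   variable, so (test) prints 2.
```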

>> That I am purportedly disallowing anyone to do anything is complete and 
>> utter nonsense.
> 
> What is the point of deprecating a feature of the language if it is not 
> your desire to disallow people from using that feature to the extent 
> that it is possible to do so?

The desire is to discourage people from using (or enabling) that feature 
in Common Lisp. This desire is not unreasonable. I think (!) that the 
gain (which is purely aesthetic, it doesn't increase expressiveness of 
the language at all) is not worth the price to pay in terms of the 
overhead in understanding the exact semantics.

>> The fact that you have an existence proof is clear 
>> evidence that I cannot disallow you to do this.
> 
> But I am only able to do it by resorting to implementation-specific 
> hacks (or a full code walker, which offends my sensibilities for 
> something so trivial).  And that is a direct result of you and people 
> like you who wish to impose gratuitous constraints on the language on 
> the grounds that *they* find it confusing or useless or whatever.

I wasn't a member of the ANSI CL standardization committee.

Maybe you can mobilize enough people to make this change to the 
language, or design your own language with this feature and make it 
catch on, which would mean that in the long run you could prove me 
wrong. I am actually looking forward to this, because I appreciate 
substantial improvements in language design, even those which I am not 
able to appreciate immediately.

In the meantime, I worry about the things that I think will have more 
impact. I hope that in the long run it turns out that I have not made 
the wrong decision wrt what I am focusing on now.

>> If I remember correctly, your motivation to enable treating conses in 
>> the car of a form in a meaningful way was to make a certain programming 
>> style more convenient.
> 
> That is one reason.  It is not the only reason.

What are your other reasons?



Pascal

-- 
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
From: Ron Garret
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <rNOSPAMon-E063BF.14421101062006@news.gha.chartermi.net>
In article <···············@individual.net>,
 Pascal Costanza <··@p-cos.net> wrote:

> The fact that the 
> same form gives rise to two reasonable, but completely different 
> interpretations _is_ confusing.

On that view, many things in CL are already confusing.  Personally I 
find the idea of ((...) ...) == (funcall (...) ...) to be quite clear, 
unambiguous, and easy to understand.  It also has the nice property that 
it subsumes the ((lambda ...) ...) rule, which is also (you must at this 
point agree) quite confusing.
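For comparison, the existing special case and the proposed generalization side by side (a sketch):

```lisp
;; Legal in ANSI CL today: a LAMBDA form in the car position.
((lambda (x) (* x x)) 3)          ; => 9

;; The proposed rule ((...) ...) == (funcall (...) ...) subsumes it:
(funcall (lambda (x) (* x x)) 3)  ; => 9
```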

> >> That I am purportedly disallowing anyone to do anything is complete and 
> >> utter nonsense.
> > 
> > What is the point of deprecating a feature of the language if it is not 
> > your desire to disallow people from using that feature to the extent 
> > that it is possible to do so?
> 
> The desire is to discourage people from using (or enabling) that feature 
> in Common Lisp. This desire is not unreasonable. I think (!) that the 
> gain (which is purely aesthetic, it doesn't increase expressiveness of 
> the language at all) is not worth the price to pay in terms of the 
> overhead in understanding the exact semantics.

But the gain is not purely aesthetic.  The current semantics are 
actually quite limiting in at least one important way.  See below.

However, this is not the only issue.  Even if the gain were purely 
aesthetic, the cost is near zero.  Even an aesthetic gain is worth it.  
That coupled with even the *possibility* of additional gain makes it 
even more worthwhile.

> >> The fact that you have an existence proof is clear 
> >> evidence that I cannot disallow you to do this.
> > 
> > But I am only able to do it by resorting to implementation-specific 
> > hacks (or a full code walker, which offends my sensibilities for 
> > something so trivial).  And that is a direct result of you and people 
> > like you who wish to impose gratuitous constraints on the language on 
> > the grounds that *they* find it confusing or useless or whatever.
> 
> I wasn't a member of the ANSI CL standardization committee.

No, but you are a member of the faction that supports treating their 
work as immutable gospel.

> >> If I remember correctly, your motivation to enable treating conses in 
> >> the car of a form in a meaningful way was to make a certain programming 
> >> style more convenient.
> > 
> > That is one reason.  It is not the only reason.
> 
> What are your other reasons?

I am working on adding a system of first-class global lexical 
environments to CL (originally called locales after the terminology in 
T, now called lexicons to avoid confusion with I18N).  It turns out that 
adding these to the language has all sorts of happy consequences, 
including the seamless integration of Lisp-1 and Lisp-2 semantics 
without a code walker, in addition to a host of ancillary benefits which 
I will not detail here.  I'll just say that great swaths of complexity 
and controversy can be mowed down and discarded if you have first-class 
global lexical environments.

One of the additional benefits is (possibly) a new macro system which 
combines the simplicity of Lisp-1 semantics and the simplicity of Lisp-2 
macros while avoiding the risk of accidental name clashes, which some 
people might find an attractive alternative to what is currently 
available.  But to prototype the system I need to be able to convert 
(^foo ...) into something like (funcall (top-level-binding 'foo) ...).

If I have ((...) ...) == (funcall (...) ...) then this is trivial.  All 
I need is a reader macro for #\^, and everything works just fine.  But 
without that I have to resort to one of a number of horrible hacks, a 
code walker being among the least horrible of them.
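Such a reader macro might look like this (a sketch only: TOP-LEVEL-BINDING is the hypothetical lexicon accessor described above, and the result is only a callable form given the proposed ((...) ...) rule):

```lisp
;; Read ^foo as (top-level-binding 'foo), so that (^foo 1 2) reads
;; as ((top-level-binding 'foo) 1 2) -- which, under the proposed
;; rule, means (funcall (top-level-binding 'foo) 1 2).
(set-macro-character #\^
  (lambda (stream char)
    (declare (ignore char))
    (list 'top-level-binding
          (list 'quote (read stream t nil t)))))
```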

There are other useful things one can do as well, but that's the one 
that has my attention at the moment.

rg
From: Peter Seibel
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <m2d5dsiloo.fsf@gigamonkeys.com>
Ron Garret <·········@flownet.com> writes:

> In article <···············@individual.net>,
>  Pascal Costanza <··@p-cos.net> wrote:
>
>> What are your other reasons?
>
> I am working on adding a system of first-class global lexical
> environments to CL (originally called locales after the terminology
> in T, now called lexicons to avoid confusion with I18N). It turns
> out that adding these to the language has all sorts of happy
> consequences, including the seamless integration of Lisp-1 and
> Lisp-2 semantics without a code walker, in addition to a host of
> ancillary benefits which I will not detail here. I'll just say that
> great swaths of complexity and controversy can be mowed down and
> discarded if you have first-class global lexical environments.
>
> One of the additional benefits is (possibly) a new macro system
> which combines the simplicity of Lisp-1 semantics and the simplicity
> of Lisp-2 macros while avoiding the risk of accidental name clashes,
> which some people might find an attractive alternative to what is
> currently available. But to prototype the system I need to be able
> to convert (^foo ...) into something like (funcall
> (top-level-binding 'foo) ...).
>
> If I have ((...) ...) == (funcall (...) ...) then this is trivial.
> All I need is a reader macro for #\^, and everything works just
> fine. But without that I have to resort to one of a number of
> horrible hacks, a code walker being among the least horrible of
> them.

So all of this sounds like a big enough experiment in a new language
that it seems worthwhile and easier to implement it as a GarretLisp ->
CL compiler. Here's a start:

  (defun garret-lisp->cl (sexp)
    (typecase sexp
      (cons 
       (destructuring-bind (car &rest rest) sexp
         (typecase car
           (cons `(funcall ,car ,@(mapcar #'garret-lisp->cl rest)))
           (symbol `(,car ,@(mapcar #'garret-lisp->cl rest)))
           (t (error "Malformed expression: ~s" sexp)))))
      (t sexp)))

  (defun garret-lisp:eval (sexp) (eval (garret-lisp->cl sexp)))

  (defun garret-lisp:load (file)
    (with-open-file (in file)
      (loop for form = (read in nil in) until (eql form in)
         do (garret-lisp:eval form))))
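For example, the transformation the sketch is aiming at (assuming the recursive calls refer back to GARRET-LISP->CL; MAKE-ADDER is a placeholder):

```lisp
(garret-lisp->cl '((make-adder 1) 2))
;; intended result: (FUNCALL (MAKE-ADDER 1) 2)

(garret-lisp->cl '(+ 1 2))
;; intended result: (+ 1 2), unchanged
```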

-Peter

-- 
Peter Seibel           * ·····@gigamonkeys.com
Gigamonkeys Consulting * http://www.gigamonkeys.com/
Practical Common Lisp  * http://www.gigamonkeys.com/book/
From: Ron Garret
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <rNOSPAMon-12A020.15495401062006@news.gha.chartermi.net>
In article <··············@gigamonkeys.com>,
 Peter Seibel <·····@gigamonkeys.com> wrote:

> Ron Garret <·········@flownet.com> writes:
> 
> > In article <···············@individual.net>,
> >  Pascal Costanza <··@p-cos.net> wrote:
> >
> >> What are your other reasons?
> >
> > I am working on adding a system of first-class global lexical
> > environments to CL (originally called locales after the terminology
> > in T, now called lexicons to avoid confusion with I18N). It turns
> > out that adding these to the language has all sorts of happy
> > consequences, including the seamless integration of Lisp-1 and
> > Lisp-2 semantics without a code walker, in addition to a host of
> > ancillary benefits which I will not detail here. I'll just say that
> > great swaths of complexity and controversy can be mowed down and
> > discarded if you have first-class global lexical environments.
> >
> > One of the additional benefits is (possibly) a new macro system
> > which combines the simplicity of Lisp-1 semantics and the simplicity
> > of Lisp-2 macros while avoiding the risk of accidental name clashes,
> > which some people might find an attractive alternative to what is
> > currently available. But to prototype the system I need to be able
> > to convert (^foo ...) into something like (funcall
> > (top-level-binding 'foo) ...).
> >
> > If I have ((...) ...) == (funcall (...) ...) then this is trivial.
> > All I need is a reader macro for #\^, and everything works just
> > fine. But without that I have to resort to one of a number of
> > horrible hacks, a code walker being among the least horrible of
> > them.
> 
> So all of this sounds like a big enough experiment in a new language
> that it seems worthwhile and easier to implement it as a GarretLisp ->
> CL compiler.  Here's a start:

Your "start" is already twice as many lines of code as I had to write in 
order to make it work by hacking the implementation.  I don't see any 
sense in which your approach could be considered "easier".

> 
>   (defun garret-lisp->cl (sexp)
>     (typecase sexp
>       (cons 
>        (destructuring-bind (car &rest rest) sexp
>          (typecase car
>            (cons `(funcall ,car ,@(mapcar #'garret-lisp->cl rest)))
>            (symbol `(,car ,@(mapcar #'garret-lisp->cl rest)))
>            (t (error "Malformed expression: ~s" sexp)))))
>       (t sexp)))
> 
>   (defun garret-lisp:eval (sexp) (eval (garret-lisp->cl sexp)))
> 
>   (defun garret-lisp:load (file)
>     (with-open-file (in file)
>       (loop for form = (read in nil in) until (eql form in)
>          do (garret-lisp:eval form))))
> 
> -Peter

Are you the same Peter Seibel who wrote "Practical Common Lisp"?  Do you 
understand what a code walker is and why it is hard to write one for 
Common Lisp?  And if the answers to both of those questions are yes, why 
do you think that posting a few lines of a broken code walker is a 
constructive response to the statement that this could be done with a 
code walker but that it would be a lot of work?

rg
From: Peter Seibel
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <m28xogi6cq.fsf@gigamonkeys.com>
Ron Garret <·········@flownet.com> writes:

> Are you the same Peter Seibel who wrote "Practical Common Lisp"? Do
> you understand what a code walker is and why it is hard to write one
> for Common Lisp? And if the answers to both of those questions are
> yes, why do you think that posting a few lines of a broken code
> walker is a constructive response to the statement that this could
> be done with a code walker but that it would be a lot of work?

I'm not proposing writing a code walker--I'm proposing writing a
compiler for one language with a sexp syntax (GarretLisp) to another
(CommonLisp). This gives you complete control over the horizontal and
the vertical without having to wrestle at all with the underlying
design choices of Common Lisp while at the same time getting to take
advantage of the implementation work that has gone into existing
Common Lisp implementations. I.e. you don't have to write a native
code generator--you just have to generate reasonably efficient Common
Lisp and let the Common Lisp compiler take care of code generation for
you. And you don't have to write a garbage collector. You also can
likely use the Common Lisp reader unless you want to, say, change the
meaning of #\:.

Now the cost of this approach is that it's harder to then integrate
code written in GarretLisp with existing Common Lisp code--basically
you have to define a FFI in GarretLisp for connecting to Common Lisp.
But if you really find Common Lisp technically and culturally bankrupt
then that shouldn't matter--using Common Lisp underneath can just be a
bootstrapping strategy. Once folks see how much better GarretLisp is
than Common Lisp you'll probably be able to attract some help to get
rid of the Common Lisp intermediate step.

-Peter

-- 
Peter Seibel           * ·····@gigamonkeys.com
Gigamonkeys Consulting * http://www.gigamonkeys.com/
Practical Common Lisp  * http://www.gigamonkeys.com/book/
From: Ron Garret
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <rNOSPAMon-C13DA8.21092601062006@news.gha.chartermi.net>
In article <··············@gigamonkeys.com>,
 Peter Seibel <·····@gigamonkeys.com> wrote:

> Ron Garret <·········@flownet.com> writes:
> 
> > Are you the same Peter Seibel who wrote "Practical Common Lisp"? Do
> > you understand what a code walker is and why it is hard to write one
> > for Common Lisp? And if the answers to both of those questions are
> > yes, why do you think that posting a few lines of a broken code
> > walker is a constructive response to the statement that this could
> > be done with a code walker but that it would be a lot of work?
> 
> I'm not proposing writing a code walker--I'm proposing writing a
> compiler for one language with a sexp syntax (GarretLisp) to another
> (CommonLisp).

But GarretLisp is your invention.  I don't want GarretLisp, I want 
Common Lisp with one small modification.  To do that in the manner that 
you suggest requires a code walker for Common Lisp.

> I.e. you don't have to write a native code generator

Are you sure you understand what a code walker is?  This idea of writing 
a native code generator is a complete non-sequitur.

> But if you really find Common Lisp technically and culturally bankrupt

When have I ever said anything even remotely like that?  Why do you 
imagine I would put all this effort into CL if I thought it was 
technically and culturally bankrupt?  What are you hoping to accomplish 
by caricaturing my position in this way?

rg
From: Peter Seibel
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <m24pz3isew.fsf@gigamonkeys.com>
Ron Garret <·········@flownet.com> writes:

> In article <··············@gigamonkeys.com>,
>  Peter Seibel <·····@gigamonkeys.com> wrote:
>
>> Ron Garret <·········@flownet.com> writes:
>> 
>> > Are you the same Peter Seibel who wrote "Practical Common Lisp"?
>> > Do you understand what a code walker is and why it is hard to
>> > write one for Common Lisp? And if the answers to both of those
>> > questions are yes, why do you think that posting a few lines of a
>> > broken code walker is a constructive response to the statement
>> > that this could be done with a code walker but that it would be a
>> > lot of work?
>> 
>> I'm not proposing writing a code walker--I'm proposing writing a
>> compiler for one language with a sexp syntax (GarretLisp) to
>> another (CommonLisp).
>
> But GarretLisp is your invention.  I don't want GarretLisp, I want 
> Common Lisp with one small modification.  To do that in the manner that 
> you suggest requires a code walker for Common Lisp.

Okay. If you really just want that one small change then I guess your
only choices are to write a code walker or to convince implementors to
provide this change for you. I'd say writing the code walker--as bad
as it is--is less work. Or I suppose you could go further down the
path you've already started on and provide patches against each of the
major implementations so Lisp users can try out this feature--if it's
really as great as you say then perhaps the implementors will adopt
your patches and it will become a de facto standard extension to the
language.

>> I.e. you don't have to write a native code generator
>
> Are you sure you understand what a code walker is?  This idea of writing 
> a native code generator is a complete non-sequitur.

Not if the question is, how do I quickly produce a relatively
high-quality implementation of a new language so I can show the world
how great it is. If that's not what you want to do, then fine.

>> But if you really find Common Lisp technically and culturally bankrupt
>
> When have I ever said anything even remotely like that?

A couple weeks ago when we met in Oakland, supplemented with your
recent comments here. If I misunderstood your position I apologize for
mischaracterizing it here but that was my impression of how you feel
about it.

> Why do you imagine I would put all this effort into CL if I thought
> it was technically and culturally bankrupt?

Because it is still the best vehicle for where you want to go?

> What are you hoping to accomplish by caricaturing my position in
> this way?

Dunno. Probably to point out to anyone who's listening in that Common
Lisp is such a great language that even if you think--as you may or
may not--that it has fundamental design flaws that require inventing a
new language it's *still* the best tool to use to implement that
language.

-Peter

-- 
Peter Seibel           * ·····@gigamonkeys.com
Gigamonkeys Consulting * http://www.gigamonkeys.com/
Practical Common Lisp  * http://www.gigamonkeys.com/book/
From: ···@flownet.com
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <1149281585.658505.9120@y43g2000cwc.googlegroups.com>
Peter Seibel wrote:

> >> I.e. you don't have to write a native code generator
> >
> > Are you sure you understand what a code walker is?  This idea of writing
> > a native code generator is a complete non-sequitur.
>
> Not if the question is, how do I quickly produce a relatively
> high-quality implementation of a new language so I can show the world
> how great it is. If that's not what you want to do, then fine.

In this particular case it is not.

> >> But if you really find Common Lisp technically and culturally bankrupt
> >
> > When have I ever said anything even remotely like that?
>
> A couple weeks ago when we met in Oakland, supplemented with your
> recent comments here. If I misunderstood your position I apologize for
> mischaracterizing it here but that was my impression of how you feel
> about it.

Well, since you've chosen to air our private conversations in public,
let me give you my version of events: where we left things was that we
were going to pursue a number of different strategies, including my
providing financial support for continuing development of your
Lisp-in-a-Box project.  The ball was in your court to provide me with a
proposal outlining specific goals, schedules, and costs.  You haven't
done that.  Instead you've chosen to spend your time inventing and
publicising paranoid fantasies about me and my motives.  I can't help
but wonder what you were hoping to accomplish.

> > Why do you imagine I would put all this effort into CL if I thought
> > it was technically and culturally bankrupt?
>
> Because it's is still the best vehicle for where you want to go?

No.  (Good grief, I actually TOLD you the answer to that question when
we met!  Are you being intentionally obtuse here?)

> > What are you hoping to accomplish by caricaturing my position in
> > this way?
>
> Dunno. Probably to point out to anyone who's listening in that Common
> Lisp is such a great language that even if you think--as you may or
> may not--that it has fundamental design flaws that require inventing a
> new language it's *still* the best tool to use to implement that
> language.

Well, let's not get too carried away here.  The language I want to
implement in this particular case is identical to Common Lisp but for
one tiny semantic change, so the fact that Common Lisp might be the
best vehicle for implementing THAT language does not necessarily speak
volumes about its superiority for other tasks.  In fact, the reason I
am making this particular point is to show that CL is NOT as flexible
as its proponents think, and that it DOES lock you into certain ways of
doing things.  As a programmable programming language, Lisp has
limitations.

There is still a very big distance between that and technical and
cultural bankruptcy.

rg
From: Peter Seibel
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <m2fyings9w.fsf@gigamonkeys.com>
···@flownet.com writes:

> Well, since you've chosen to air our private conversations in
> public, let me give you my version of events: where we left things
> was that we were going to pursue a number of different strategies,
> including my providing financial support for continuing development
> of your Lisp-in-a-Box project. The ball was in your court to provide
> me with a proposal outlining specific goals, schedules, and costs.
> You haven't done that.

Oh. Well then I dropped the ball, or more accurately, never realized
the ball was in my court. I thought the participants of that get
together were still discussing the "what to do next" question more
generally via email and that *you* had gone silent on that front. If,
after this exchange, you're still interested in carrying on the
conversation, it can probably go back to email. In the meantime, who
dropped the ball is independent of a) the technical issues of how to
prototype interesting new language features b) the cultural issues
surrounding Common Lisp. I'm happy to hear that you're a bigger fan of
Common Lisp than I thought--my impression based both on our private
interactions and on your public comments is that you think Lisp qua
Lisp is a great thing and that Common Lisp has outlived its
usefulness. If that's not what you think, then I'm sorry to have
misunderstood you.

-Peter

-- 
Peter Seibel           * ·····@gigamonkeys.com
Gigamonkeys Consulting * http://www.gigamonkeys.com/
Practical Common Lisp  * http://www.gigamonkeys.com/book/
From: ···@flownet.com
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <1149355303.445180.201340@i40g2000cwc.googlegroups.com>
Peter Seibel wrote:
> ···@flownet.com writes:
>
> > Well, since you've chosen to air our private conversations in
> > public, let me give you my version of events: where we left things
> > was that we were going to pursue a number of different strategies,
> > including my providing financial support for continuing development
> > of your Lisp-in-a-Box project. The ball was in your court to provide
> > me with a proposal outlining specific goals, schedules, and costs.
> > You haven't done that.
>
> Oh. Well then I dropped the ball, or more accurately, never realized
> the ball was in my court.

I'm on the road at the moment and don't have access to my email archive
so I can't check what was and wasn't said on the record, but I have a
pretty clear recollection of *you* suggesting that you write up a
proposal and me saying that sounded like a fine plan.  But leaving
aside what may or may not have been said, how else did you imagine it
would work?

> I thought the participants of that get
> together were still discussing the "what to do next" question more
> generally via email

Yes, we were doing that too.  I often pursue multiple strategies in
parallel, especially when the best course of action is unclear.

> and that *you* had gone silent on that front.

Yes, I've been occupied with summer-of-code stuff (among other things).
 And I have other reasons which are best not discussed in public.

>  In the meantime, who
> dropped the ball is independent of a) the technical issues of how to
> prototype interesting new language features

You are the one who took this discussion off a technical track by
raising baseless questions about my motives and claiming private
communications as your source.  I would not have brought this up
otherwise.

> b) the cultural issues surrounding Common Lisp.

You brought up that topic, not me.  If you want to discuss the cultural
issues surrounding CL I would be happy to do that, but might I suggest
that a better opening would be "Hey, let's talk about the cultural
issues surrounding CL" rather than setting up a straw man to attack.

> I'm happy to hear that you're a bigger fan of
> Common Lisp than I thought--my impression based both on our private
> interactions and your public comments is that you think Lisp qua
> Lisp is a great thing and that Common Lisp has outlived its
> usefulness. If that's not what you think, then I'm sorry to have
> misunderstood you.

"Outliving its usefulness" is a far cry from technical and cultural
bankruptcy.  This is not a misunderstanding (unless you want to claim
ignorance of the meaning of the word "bankrupt".)  It is a deliberate
misrepresentation on your part.

rg
From: Ken Tilton
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <Fbkgg.5$Xq2.3@fe08.lga>
···@flownet.com wrote:
> Peter Seibel wrote:
> 
>>···@flownet.com writes:
>>
>>
>>>Well, since you've chosen to air our private conversations in
>>>public, let me give you my version of events: where we left things
>>>was that we were going to pursue a number of different strategies,
>>>including my providing financial support for continuing development
>>>of your Lisp-in-a-Box project. The ball was in your court to provide
>>>me with a proposal outlining specific goals, schedules, and costs.
>>>You haven't done that.
>>
>>Oh. Well then I dropped the ball, or more accurately, never realized
>>the ball was in my court.
> 
> 
> I'm on the road at the moment and don't have access to my email archive
> so I can't check what was and wasn't said on the record, but I have a
> pretty clear recollection of *you* suggesting that you write up a
> proposal and me saying that sounded like a fine plan.  But leaving
> aside what may or may not have been said, how else did you imagine it
> would work?
> 
> 
>>I thought the participants of that get
>>together were still discussing the "what to do next" question more
>>generally via email
> 
> 
> Yes, we were doing that too.  I often pursue multiple strategies in
> parallel, especially when the best course of action is unclear.
> 
> 
>>and that *you* had gone silent on that front.
> 
> 
> Yes, I've been occupied with summer-of-code stuff (among other things).
>  And I have other reasons which are best not discussed in public.
> 
> 
>> In the meantime, who
>>dropped the ball is independent of a) the technical issues of how to
>>prototype interesting new language features
> 
> 
> You are the one who took this discussion off a technical track by
> raising baseless questions about my motives and claiming private
> communications as your source.  I would not have brought this up
> otherwise.
> 
> 
>>b) the cultural issues surrounding Common Lisp.
> 
> 
> You brought up that topic, not me.  If you want to discuss the cultural
> issues surrounding CL I would be happy to do that, but might I suggest
> that a better opening would be "Hey, let's talk about the cultural
> issues surrounding CL" rather than setting up a straw man to attack.
> 
> 
>>I'm happy to hear that you're a bigger fan of
>>Common Lisp than I thought--my impression based both on our private
>>interactions and your public comments is that you think Lisp qua
>>Lisp is a great thing and that Common Lisp has outlived its
>>usefulness. If that's not what you think, then I'm sorry to have
>>misunderstood you.
> 
> 
> "Outliving its usefulness" is a far cry from technical and cultural
> bankruptcy.  This is not a misunderstanding (unless you want to claim
> ignorance of the meaning of the word "bankrupt".)  It is a deliberate
> misrepresentation on your part.
> 

You kids watching at home, now /that/ is how to accept an apology!

kenzo

-- 
Cells: http://common-lisp.net/project/cells/

"Have you ever been in a relationship?"
    Attorney for Mary Winkler, confessed killer of her
    minister husband, when asked if the couple had
    marital problems.
From: ···@flownet.com
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <1149367009.338380.320830@i39g2000cwa.googlegroups.com>
Ken Tilton wrote:
> ···@flownet.com wrote:
> > Peter Seibel wrote:
> >
> >>···@flownet.com writes:
> >>
> >>
> >>>Well, since you've chosen to air our private conversations in
> >>>public, let me give you my version of events: where we left things
> >>>was that we were going to pursue a number of different strategies,
> >>>including my providing financial support for continuing development
> >>>of your Lisp-in-a-Box project. The ball was in your court to provide
> >>>me with a proposal outlining specific goals, schedules, and costs.
> >>>You haven't done that.
> >>
> >>Oh. Well then I dropped the ball, or more accurately, never realized
> >>the ball was in my court.
> >
> >
> > I'm on the road at the moment and don't have access to my email archive
> > so I can't check what was and wasn't said on the record, but I have a
> > pretty clear recollection of *you* suggesting that you write up a
> > proposal and me saying that sounded like a fine plan.  But leaving
> > aside what may or may not have been said, how else did you imagine it
> > would work?
> >
> >
> >>I thought the participants of that get
> >>together were still discussing the "what to do next" question more
> >>generally via email
> >
> >
> > Yes, we were doing that too.  I often pursue multiple strategies in
> > parallel, especially when the best course of action is unclear.
> >
> >
> >>and that *you* had gone silent on that front.
> >
> >
> > Yes, I've been occupied with summer-of-code stuff (among other things).
> >  And I have other reasons which are best not discussed in public.
> >
> >
> >> In the meantime, who
> >>dropped the ball is independent of a) the technical issues of how to
> >>prototype interesting new language features
> >
> >
> > You are the one who took this discussion off a technical track by
> > raising baseless questions about my motives and claiming private
> > communications as your source.  I would not have brought this up
> > otherwise.
> >
> >
> >>b) the cultural issues surrounding Common Lisp.
> >
> >
> > You brought up that topic, not me.  If you want to discuss the cultural
> > issues surrounding CL I would be happy to do that, but might I suggest
> > that a better opening would be "Hey, let's talk about the cultural
> > issues surrounding CL" rather than setting up a straw man to attack.
> >
> >
> >>I'm happy to hear that you're a bigger fan of
> >>Common Lisp than I thought--my impression based both on our private
> >>interactions and your public comments is that you think Lisp qua
> >>Lisp is a great thing and that Common Lisp has outlived its
> >>usefulness. If that's not what you think, then I'm sorry to have
> >>misunderstood you.
> >
> >
> > "Outliving its usefulness" is a far cry from technical and cultural
> > bankruptcy.  This is not a misunderstanding (unless you want to claim
> > ignorance of the meaning of the word "bankrupt".)  It is a deliberate
> > misrepresentation on your part.
> >
>
> You kids watching at home, now /that/ is how to accept an apology!

Oh, is that what that was?  It sounded more like an excuse to me.  I'm
sorry if I misunderstood.

rg
From: Pascal Costanza
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <4ebn9iF1dlv9rU1@individual.net>
···@flownet.com wrote:

>>> What are you hoping to accomplish by caricaturing my position in
>>> this way?
>> Dunno. Probably to point out to anyone who's listening in that Common
>> Lisp is such a great language that even if you think--as you may or
>> may not--that it has fundamental design flaws that require inventing a
>> new language it's *still* the best tool to use to implement that
>> language.
> 
> Well, let's not get too carried away here.  The language I want to
> implement in this particular case is identical to Common Lisp but for
> one tiny semantic change, so the fact that Common Lisp might be the
> best vehicle for implementing THAT language does not necessarily speak
> volumes about its superiority for other tasks.  In fact, the reason I
> am making this particular point is to show that CL is NOT as flexible
> as its proponents think, and that it DOES lock you into certain ways of
> doing things.  As a programmable programming language, Lisp has
> limitations.

...but that's obviously true. Is that all?!?


Pascal

-- 
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
From: ···@flownet.com
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <1149356314.852918.225110@h76g2000cwa.googlegroups.com>
Pascal Costanza wrote:
> ···@flownet.com wrote:
>
> >>> What are you hoping to accomplish by caricaturing my position in
> >>> this way?
> >> Dunno. Probably to point out to anyone who's listening in that Common
> >> Lisp is such a great language that even if you think--as you may or
> >> may not--that it has fundamental design flaws that require inventing a
> >> new language it's *still* the best tool to use to implement that
> >> language.
> >
> > Well, let's not get too carried away here.  The language I want to
> > implement in this particular case is identical to Common Lisp but for
> > one tiny semantic change, so the fact that Common Lisp might be the
> > best vehicle for implementing THAT language does not necessarily speak
> > volumes about its superiority for other tasks.  In fact, the reason I
> > am making this particular point is to show that CL is NOT as flexible
> > as its proponents think, and that it DOES lock you into certain ways of
> > doing things.  As a programmable programming language, Lisp has
> > limitations.
>
> ...but that's obviously true.

Well, at least we agree on something.

> Is that all?!?

Of course not.  Not that any of this hasn't been said before, but it
might be worth reiterating in light of this newly discovered common
ground: we agree that the current situation w.r.t. lambda expressions
is confusing.  I desire to address that confusion by generalizing the
current state of affairs and making the language less restrictive, while
you, for reasons passing my understanding [1], prefer to leave things
as they are or, if you really had your druthers, to make the language
more restrictive.

I think that pretty much sums it up.

rg

[1]  My puzzlement arises from the fact that CL tends to pride itself
on the fact that it doesn't dictate a particular programming style.
The term "fascist" is often used derisively in CL circles to describe
languages designed to force you into a particular way of doing things.
From: Pascal Costanza
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <4ee2gnF1e6l79U1@individual.net>
···@flownet.com wrote:
> Pascal Costanza wrote:
>> ···@flownet.com wrote:
>>
>>>>> What are you hoping to accomplish by caricaturing my position in
>>>>> this way?
>>>> Dunno. Probably to point out to anyone who's listening in that Common
>>>> Lisp is such a great language that even if you think--as you may or
>>>> may not--that it has fundamental design flaws that require inventing a
>>>> new language it's *still* the best tool to use to implement that
>>>> language.
>>> Well, let's not get too carried away here.  The language I want to
>>> implement in this particular case is identical to Common Lisp but for
>>> one tiny semantic change, so the fact that Common Lisp might be the
>>> best vehicle for implementing THAT language does not necessarily speak
>>> volumes about its superiority for other tasks.  In fact, the reason I
>>> am making this particular point is to show that CL is NOT as flexible
>>> as its proponents think, and that it DOES lock you into certain ways of
>>> doing things.  As a programmable programming language, Lisp has
>>> limitations.
>> ...but that's obviously true.
> 
> Well, at least we agree on something.
> 
>> Is that all?!?
> 
> Of course not.  Not that any of this hasn't been said before, but it
> might be worth reiterating in light of this newly discovered common
> ground: we agree that the current situation w.r.t. lambda expressions
> is confusing.

It is confusing, but not confusing enough to warrant any action, IMHO.

> I desire to address that confusion by generalizing the
> current state of affairs and make the language less restrictive, while
> you, for reasons passing my understanding [1], prefer to leave things
> as they are, or if you really had your druthers, by making the language
> more restrictive.

I can try to explain my position again:

- Your suggestions so far potentially (!) make the language even more 
confusing. I have come up with examples of which at least one had two 
different possible semantics, and different people had different ideas 
what the correct semantics should be. And I don't think that we can be 
convinced that we have already seen an exhaustive list of potentially 
problematic corner cases.

- I am also not convinced that the desired programming style 
(functional, I assume) is sufficiently captured by the extension that 
you propose. Being able to deal with conses in car positions of forms in 
meaningful ways is only one element, as far as I can tell.

My claim is as follows: People who suggest extending Common Lisp the 
Language to deal with conses in car positions of forms in specific 
meaningful ways have to show

- that the extension has substantial benefits (and not only superficial 
ones).

- that all important corner cases can be easily understood, for example, 
derived from a few principles.

I don't think you have made these points yet.

Note that I am not at all against experimental language extensions. Your 
proposal seems interesting enough to play around with and see what comes 
out of it. But asking for a change of the language specification first 
before showing that this doesn't paint us into any corner is, IMHO, 
doing things in the wrong order. Given the fact that there is no one and 
nothing keeping you from implementing your experimental extensions, I 
simply think that you are asking for too much.

As a sidenote, your original proposal included a hook for users 
(programmers) to influence how conses in car positions of forms are 
actually treated. I am making the above statements in the light of that 
proposal. If you would reduce your proposal to implicitly inserting 
'funcall before conses (and nothing else), my current guess is that it 
would be considerably simpler and more likely to catch on. But that's 
just a gut feeling, the above points would still have to be shown.

To stress this again: It's the responsibility of those who are 
interested in concrete changes to a language to show that such changes 
balance the forces such as the ones mentioned above.

> I think that pretty much sums it up.

Now it does. ;)


Pascal

-- 
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
From: ···@flownet.com
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <1149366495.483731.28250@c74g2000cwc.googlegroups.com>
Pascal Costanza wrote:

> > Of course not.  Not that any of this hasn't been said before, but it
> > might be worth reiterating in light of this newly discovered common
> > ground: we agree that the current situation w.r.t. lambda expressions
> > is confusing.
>
> It is confusing, but not confusing enough to warrant any action, IMHO.

But if action were warranted you would favor erring on the side of
being restrictive, i.e. deprecating ((lambda ...) ...) rather than
generalizing ((...) ...).  That is not in keeping with CL's design
philosophy IMO.
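[For readers following along, the status quo being argued over can be
seen at any REPL.  This is an editorial sketch, not code from either
poster; the commented-out form is the one the proposal would legalize:]

```lisp
;; Legal in standard CL: a literal LAMBDA expression in the car of a form.
((lambda (x) (* x x)) 5)           ; => 25

;; Also legal, and what a Lisp-2 requires when the function is a value:
(let ((f (lambda (x) (* x x))))
  (funcall f 5))                   ; => 25

;; Not legal: any other cons in the car position.  The proposal under
;; discussion would assign such forms a meaning instead of leaving them
;; as errors.
;; ((constantly 42))               ; an error in conforming CL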

> > I desire to address that confusion by generalizing the
> > current state of affairs and make the language less restrictive, while
> > you, for reasons passing my understanding [1], prefer to leave things
> > as they are, or if you really had your druthers, by making the language
> > more restrictive.
>
> I can try to explain my position again:
>
> - Your suggestions so far potentially (!) make the language even more
> confusing. I have come up with examples of which at least one had two
> different possible semantics, and different people had different ideas
> what the correct semantics should be.

Hence the modification to the original proposal making the semantics
user-definable.

> And I don't think that we can be
> convinced that we have already seen an exhaustive list of potentially
> problematic corner cases.

Of course you can't be convinced.  It is always possible that something
has been overlooked no matter how much analysis has been done.  See
e.g.
http://googleresearch.blogspot.com/2006/06/extra-extra-read-all-about-it-nearly.html

> - I am also not convinced that the desired programming style
> (functional, I assume) is sufficiently captured by the extension that
> you propose. Being able to deal with conses in car positions of forms in
> meaningful ways is only one element, as far as I can tell.

One must begin somewhere.  Also, this happens to be the one item that
turns out to be very difficult to do as a user-level extension.
Finally, my actual concern here is not support for FP (though that
would be an ancillary benefit) but rather first-class global lexical
environments.

> My claim is as follows: People who suggest extending Common Lisp the
> Language to deal with conses in car positions of forms in specific
> meaningful ways have to show
>
> - that the extension has substantial benefits (and not only superficial
> ones).

By what metric?  One man's superfice (is that a word?) is another man's
substance.


> - that all important corner cases can be easily understood, for example,
> derived from a few principles.

Well, gee, I don't know how I can make it any plainer than pointing out
yet again that my proposed change is 100% backwards-compatible and
user-configurable, so the only possible corner cases are those brought
about by someone's use of the feature.  And I think you would be hard
pressed to name any feature of any programming language with which it
is not possible to shoot yourself in the foot if you apply enough
imagination to the task.

> I don't think you have made these points yet.

And I think I have.  Now what?

> Note that I am not at all against experimental language extensions.

Really?  What is an example of a language extension that you are not
against?

> Your
> proposal seems interesting enough to play around with and see what comes
> out of it. But asking for a change of the language specification first
> before showing that this doesn't paint us into any corner is, IMHO,
> doing things in the wrong order.

CL has already painted itself into a corner by requiring ((...) ...) to
be an error.  I'm trying to escape from that corner.

> Given the fact that there is no one and
> nothing keeping you from implementing your experimental extensions,

That's not true.  I am able to implement what I want in MCL because by
dint of historical fate I happen to be familiar enough with the inner
workings of that particular implementation to be able to tweak it to do
what I need to do.  That is not the case with the other
implementations.

> As a sidenote, your original proposal included a hook for users
> (programmers) to influence how conses in car positions of forms are
> actually treated. I am making the above statements in the light of that
> proposal. If you would reduce your proposal to implicitly inserting
> 'funcall before conses (and nothing else), my current guess is that it
> would be considerably simpler and more likely to catch on. But that's
> just a gut feeling, the above points would still have to be shown.

I'd be happy either way.  FUNCALL is the only thing that makes sense to
me, but making it user-configurable was a response to your objection
that this might later turn out to be wrong.  You can either be
constraining or not, but you cannot do both simultaneously.
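[For concreteness, the implicit-FUNCALL reading discussed here can be
approximated in portable CL with a macro.  APP is an illustrative name,
not part of either poster's proposal, and it still needs an explicit
marker; the real change would live in the evaluator, where no marker
would be required:]

```lisp
;; APP makes the intent of ((f ...) args...) explicit:
;; it simply wraps the whole call in FUNCALL.
(defmacro app (form &rest args)
  `(funcall ,form ,@args))

(defun adder (n)
  (lambda (x) (+ x n)))

;; Standing in for the proposed ((adder 3) 4):
(app (adder 3) 4)   ; => 7
```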

> To stress this again: It's the responsibility of those who are
> interested in concrete changes to a language to show that such changes
> balance the forces such as the ones mentioned above.

So to sum up:

1.  The cost of what I propose is near zero (for someone familiar with
an implementation), as demonstrated by the one implementation that I
was able to produce myself.

2.  The impact on existing code is zero because a) it only changes the
semantics of code that currently must be an error and b) if you go with
the user-configurable option the user can configure the system to
behave exactly as it currently does, i.e. to continue to produce an
error.

3.  The benefits are potentially significant, but I don't know how I
can convince you of that given the catch-22 conditions that you have
imposed.

rg
From: Pascal Costanza
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <4eeh41F1ckv68U1@individual.net>
···@flownet.com wrote:
> Pascal Costanza wrote:
> 
>>> Of course not.  Not that any of this hasn't been said before, but it
>>> might be worth reiterating in light of this newly discovered common
>>> ground: we agree that the current situation w.r.t. lambda expressions
>>> is confusing.
>> It is confusing, but not confusing enough to warrant any action, IMHO.
> 
> But if action were warranted you would favor erring on the side of
> being restrictive, i.e. deprecating ((lambda ...) ...) rather than
> generalizing ((...) ...)  That is not in keeping with CL's design
> philosophy IMO.

Do you mean the design philosophy stated in Chapter 1.1 of CLtL2, or 
some other design philosophy?

> Finally, my actual concern here is not support for FP (though that
> would be an ancillary benefit) but rather first-class global lexical
> environments.

Why do you need this feature in this context?

>> Note that I am not at all against experimental language extensions.
> 
> Really?  What is an example of a language extension that you are not
> against?

That's a rhetorical question, right?

>> Given the fact that there is no one and
>> nothing keeping you from implementing your experimental extensions,
> 
> That's not true.  I am able to implement what I want in MCL because by
> dint of historical fate I happen to be familiar enough with the inner
> workings of that particular implementation to be able to tweak it to do
> what I need to do.  That is not the case with the other
> implementations.

This is as weird as it gets. It's not only possible to implement your 
extension, it's even straightforward for you to do so. But you're still 
claiming that you are kept from implementing your extension.

Wow.

>> As a sidenote, your original proposal included a hook for users
>> (programmers) to influence how conses in car positions of forms are
>> actually treated. I am making the above statements in the light of that
>> proposal. If you would reduce your proposal to implicitly inserting
>> 'funcall before conses (and nothing else), my current guess is that it
>> would be considerably simpler and more likely to catch on. But that's
>> just a gut feeling, the above points would still have to be shown.
> 
> I'd be happy either way.  FUNCALL is the only thing that makes sense to
> me, but making it user-configurable was a response to your objection
> that this might later turn out to be wrong.

I don't remember making this objection, but if that's the case forget 
about it.

>> To stress this again: It's the responsibility of those who are
>> interested in concrete changes to a language to show that such changes
>> balance the forces such as the ones mentioned above.
> 
> So to sum up:
> 
> 1.  The cost of what I propose is near zero (for someone familiar with
> an implementation), as demonstrated by the one implementation that I
> was able to produce myself.
> 
> 2.  The impact on existing code is zero because a) it only changes the
> semantics of code that currently must be an error and b) if you go with
> the user-configurable option the user can configure the system to
> behave exactly as it currently does, i.e. to continue to produce an
> error.
> 
> 3.  The benefits are potentially significant, but I don't know how I
> can convince you of that given the catch-22 conditions that you have
> imposed.

Take the usual approach: Show examples, write applications that use your 
extension, build an open source distribution, write documentation, write 
papers about it, build a community, etc.

And don't use me (or other people in this or related threads) as 
scapegoats or targets. If your language extension is important, it 
doesn't matter what objections we/I have.


Pascal

-- 
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
From: ···@flownet.com
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <1149437206.305853.294030@i40g2000cwc.googlegroups.com>
Pascal Costanza wrote:
> ···@flownet.com wrote:
> > Pascal Costanza wrote:
> >
> >>> Of course not.  Not that any of this hasn't been said before, but it
> >>> might be worth reiterating in light of this newly discovered common
> >>> ground: we agree that the current situation w.r.t. lambda expressions
> >>> is confusing.
> >> It is confusing, but not confusing enough to warrant any action, IMHO.
> >
> > But if action were warranted you would favor erring on the side of
> > being restrictive, i.e. deprecating ((lambda ...) ...) rather than
> > generalizing ((...) ...)  That is not in keeping with CL's design
> > philosophy IMO.
>
> Do you mean the design philosophy stated in Chapter 1.1 of CLtL2, or
> some other design philosophy?

I mean the part about being a programmable programming language.

> > Finally, my actual concern here is not support for FP (though that
> > would be an ancillary benefit) but rather first-class global lexical
> > environments.
>
> Why do you need this feature in this context?

So that I can implement a simple syntax for accessing top-level lexical
bindings (so that I can implement a new macro system) without a code
walker.
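[Editorial aside: the variable half of this is already expressible in
portable CL, since DEFINE-SYMBOL-MACRO can give a symbol a top-level
meaning without declaring it special; it is the function-position half,
forms whose car is a cons, that has no portable hook.  A minimal sketch
of the variable half, with illustrative names:]

```lisp
;; A common idiom for a "global lexical": a symbol macro backed by
;; ordinary storage.  COUNTER itself is never declared special.
(defvar *counter-storage* 0)
(define-symbol-macro counter *counter-storage*)

counter            ; => 0
(setf counter 5)   ; SETF works through the symbol-macro expansion
counter            ; => 5
```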

> >> Note that I am not at all against experimental language extensions.
> >
> > Really?  What is an example of a language extension that you are not
> > against?
>
> That's a rhetorical question, right?

No, I was serious, but looking back at it I realize that I did misread
your statement.  I thought you had written "Note that I am not against
all experimental language extensions."  So I withdraw the question.

> >> Given the fact that there is no one and
> >> nothing keeping you from implementing your experimental extensions,
> >
> > That's not true.  I am able to implement what I want in MCL because by
> > dint of historical fate I happen to be familiar enough with the inner
> > workings of that particular implementation to be able to tweak it to do
> > what I need to do.  That is not the case with the other
> > implementations.
>
> This is as weird as it gets. It's not only possible to implement your
> extension, it's even straightforward for you to do so.

It was only straightforward for *me*, and only in that one
implementation; it is not straightforward in the others.

> But you're still
> claiming that you are kept from implementing your extension.

I'm claiming there's a significant obstacle in my way.  But I concede
it is not insurmountable.

> And don't use me (or other people in this or related threads) as
> scapegoats or targets.

I am doing no such thing.  I am merely pointing out some facts.

> If your language extension is important, it
> doesn't matter what objections we/I have.

An ironic position for a Lisper to take.  Lispers of all people should
be keenly aware of how difficult it is for even "important" language
features (like parentheses) to overcome prominent objections.

rg
From: Kent M Pitman
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <u7j3wrf9q.fsf@nhplace.com>
···@flownet.com writes:

> So that I can implement a simple syntax for accessing top-level lexical
> bindings (so that I can implement a new macro system) without a code
> walker.

Zetalisp had something for defining lambda macros.  I don't recall the 
actual name of it--something like deflambda-macro.  It was intended for
defining things like interlisp's NLAMBDA macro, but it was for anything
that would be processed as:
 ((operator ... syntax ...) ...more syntax...)
If you had this definitional operator, you'd be able to write
 (defmacro define-var+fun (name bvl &body forms)
   `(progn (defun ,name ,bvl ,@forms)
           (define-symbol-macro ,name #',name)
           (define-lambda-macro ,name ((&rest syntax) &rest more-syntax)
             `(funcall (funcall ,name ,@syntax) ,@more-syntax))
           ',name))
or some such thing.

This would mean the solution was special-purpose to each such variable, but
it would plug-and-play neatly with a Lisp2 without affecting the way other
names were dealt with.
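[Assuming the hypothetical DEFINE-LAMBDA-MACRO existed, the operator
sketched above would be used roughly like this.  None of it is portable
CL; the names and results are illustrative only:]

```lisp
(define-var+fun adder (n)
  (lambda (x) (+ x n)))

(adder 1)            ; the ordinary Lisp-2 call to the DEFUN
(funcall adder 1)    ; the symbol macro also makes ADDER usable as a value
((adder 3) 4)        ; the lambda macro rewrites this to
                     ; (funcall (funcall #'adder 3) 4)  => 7
```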

The problem is that this definitional operator has to be given by the system;
you can't build it up from portable Common Lisp without writing your own 
layered language atop the entire language.  And that's a lot harder than
pestering a vendor to add one simple operator that gets used on a fall-through
of ((xxx)) where it's about to signal an error.

If all you want is that, you needn't invoke the lisp1/lisp2 battle.
From: ···@flownet.com
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <1149458368.437246.214770@h76g2000cwa.googlegroups.com>
Hi Kent!  Good to see you back!

Kent M Pitman wrote:
> ···@flownet.com writes:
>
> > So that I can implement a simple syntax for accessing top-level lexical
> > bindings (so that I can implement a new macro system) without a code
> > walker.
>
> Zetalisp had something for defining lambda macros.  I don't recall the
> actual name of it--something like deflambda-macro.  It was intended for
> defining things like interlisp's NLAMBDA macro, but it was for anything
> that would be processed as:
>  ((operator ... syntax ...) ...more syntax...)
> If you had this definitional operator, you'd be able to write
>  (defmacro define-var+fun (name bvl &body forms)
>    `(progn (defun ,name ,bvl ,@forms)
>            (define-symbol-macro ,name #',name)
>            (define-lambda-macro ,name ((&rest syntax) &rest more-syntax)
>              `(funcall (funcall ,name ,@syntax) ,@more-syntax))
>            ',name))
> or some such thing.
>
> This would mean the solution was special-purpose to each such variable, but
> it would plug-and-play neatly with a Lisp2 without affecting the way other
> names were dealt with.

That would work for me.  However, the objection that has been raised
against my proposal is not that it doesn't fit neatly, but rather that
the whole notion of assigning semantics to ((...) ...) is confusing and
of little value.  That objection applies to lambda macros as well.

> The problem is that this definitional operator has to be given by the system;
> you can't build it up from portable Common Lisp without writing your own
> layered language atop the entire language.  And that's a lot harder than
> pestering a vendor to add one simple operator that gets used on a fall-through
> of ((xxx)) where it's about to signal an error.

That's right, and given what a hard-sell the simpler solution turns out
to be, I wouldn't expect to have much more success with the more
complicated one.

> If all you want is that, you needn't evoke the lisp1/lisp2 battle.

I haven't invoked the lisp1/lisp2 battle (except to point out that
Lisp1 is fairly easily integrated into CL, and that IMO therefore
fighting the lisp1/lisp2 battle is silly).  In fact, my motivation for
wanting this change is described in the excerpt that you quoted, so I'm
not sure why you even bring this up.

rg
From: Pascal Costanza
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <4eh63rF1ec66lU1@individual.net>
···@flownet.com wrote:
> Hi Kent!  Good to see you back!

I second this.

(Ron, another thing we agree on... ;)

> Kent M Pitman wrote:
>> ···@flownet.com writes:
>>
>>> So that I can implement a simple syntax for accessing top-level lexical
>>> bindings (so that I can implement a new macro system) without a code
>>> walker.
>> Zetalisp had something for defining lambda macros.  I don't recall the
>> actual name of it--something like deflambda-macro.  It was intended for
>> defining things like interlisp's NLAMBDA macro, but it was for anything
>> that would be processed as:
>>  ((operator ... syntax ...) ...more syntax...)
>> If you had this definitional operator, you'd be able to write
>>  (define-var+fun (name bvl &body forms)
>>    `(progn (defun ,name ,bvl ,@forms)
>>            (define-symbol-macro ,name #',name)
>>            (define-lambda-macro ,name ((&rest syntax) &rest more-syntax)
>>              `(funcall (funcall ,name ,@syntax) ,@more-syntax))
>>            ',name))
>> or some such thing.
>>
>> This would mean the solution was special-purpose to each such variable, but
>> it would plug-and-play neatly with a Lisp2 without affecting the way other
>> names were dealt with.
> 
> That would work for me.  However, the objection that has been raised
> against my proposal is not that it doesn't fit neatly, but rather that
> the whole notion of assigning semantics to ((...) ...) is confusing and
> of little value.  That objection applies to lambda macros as well.

No, this variation actually looks much better. Assuming that there would 
also be a lambda-macrolet, this would mean that a) it's clear by looking 
at the program text what definitions apply and b) there would be a way 
to control what's going on when you mix code from different libraries.

Question: Does/can this extend to arbitrary depth? That is, would a 
(define-lambda-macro name ...) apply to ((name ...) ...), (((name ...) 
...) ...), and so on?


Pascal

-- 
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
From: ···@flownet.com
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <1149468524.808282.132920@g10g2000cwb.googlegroups.com>
Pascal Costanza wrote:
> ···@flownet.com wrote:
> > Hi Kent!  Good to see you back!
>
> I second this.
>
> (Ron, another thing we agree on... ;)
>
> > Kent M Pitman wrote:
> >> ···@flownet.com writes:
> >>
> >>> So that I can implement a simple syntax for accessing top-level lexical
> >>> bindings (so that I can implement a new macro system) without a code
> >>> walker.
> >> Zetalisp had something for defining lambda macros.  I don't recall the
> >> actual name of it--something like deflambda-macro.  It was intended for
> >> defining things like interlisp's NLAMBDA macro, but it was for anything
> >> that would be processed as:
> >>  ((operator ... syntax ...) ...more syntax...)
> >> If you had this definitional operator, you'd be able to write
> >>  (define-var+fun (name bvl &body forms)
> >>    `(progn (defun ,name ,bvl ,@forms)
> >>            (define-symbol-macro ,name #',name)
> >>            (define-lambda-macro ,name ((&rest syntax) &rest more-syntax)
> >>              `(funcall (funcall ,name ,@syntax) ,@more-syntax))
> >>            ',name))
> >> or some such thing.
> >>
> >> This would mean the solution was special-purpose to each such variable, but
> >> it would plug-and-play neatly with a Lisp2 without affecting the way other
> >> names were dealt with.
> >
> > That would work for me.  However, the objection that has been raised
> > against my proposal is not that it doesn't fit neatly, but rather that
> > the whole notion of assigning semantics to ((...) ...) is confusing and
> > of little value.  That objection applies to lambda macros as well.
>
> No, this variation actually looks much better. Assuming that there would
> also be a lambda-macrolet, this would mean that a) it's clear by looking
> at the program text what definitions apply and b) there would be a way
> to control what's going on when you mix code from different libraries.

(a) is false [1].  There is always the possibility that someone has
mucked with *macroexpand-hook* or the readtable in an anti-social way,
to say nothing of the fact that macro definitions need not come from
the program text.  (Come on people!  This is comp.lang.LISP!  Surely
you understand the distinction between program and source code?)

(b) is true, but having a global transformation does not preclude the
possibility of having the same capability.

It should also be noted that lambda macros are strictly less powerful
than a global transformation, since the former can be implemented in
terms of the latter but not vice versa.
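
For concreteness, the containment claim can be sketched as follows. Everything here is hypothetical: *combination-hook*, the registry, and this define-lambda-macro are illustrations of the proposal, not real Common Lisp facilities.

```lisp
;; Hypothetical protocol: the system funcalls *combination-hook* on any
;; form ((operator ...) ...) that would otherwise signal an error.
;; Lambda macros then fall out as one special case of the global hook:
(defvar *lambda-macros* (make-hash-table :test #'eq))

(defmacro define-lambda-macro (name (inner-args outer-args) &body body)
  ;; BODY computes an expansion from the two argument lists.
  `(setf (gethash ',name *lambda-macros*)
         (lambda (,inner-args ,outer-args) ,@body)))

(defvar *combination-hook*
  (lambda (form)
    ;; FORM is ((name . inner-args) . outer-args)
    (let ((expander (gethash (caar form) *lambda-macros*)))
      (if expander
          (funcall expander (cdar form) (cdr form))
          (error "Illegal function call: ~S" form)))))
```

The reverse direction fails because a lambda macro only ever sees forms whose head names it, so it cannot express a transformation over arbitrary ((...) ...) forms.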

rg

[1] I wish to be very precise about the sense in which it is false: I
claim that the situation w.r.t. program clarity is exactly the same for
lambda macros as for a global transformation.  It is possible to write
both clear and obfuscated code using either proposal, and to write
clear code requires programmers to adhere to a certain discipline in
both cases.  The only difference is that lambda macros dictate and
(partially) enforce a particular discipline.  But it is not at all
clear that 1) the particular discipline enforced by lambda macros is
the One True Discipline and 2) that the cost in terms of reduced
expressive power (e.g. the inability to write a global funcall
transformation) is worth it.
From: Pascal Costanza
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <4eihh5F1f1si6U1@individual.net>
···@flownet.com wrote:
> Pascal Costanza wrote:
>> ···@flownet.com wrote:
>>> Kent M Pitman wrote:
>>>>
>>>> Zetalisp had something for defining lambda macros.  I don't recall the
>>>> actual name of it--something like deflambda-macro.  It was intended for
>>>> defining things like interlisp's NLAMBDA macro, but it was for anything
>>>> that would be processed as:
>>>>  ((operator ... syntax ...) ...more syntax...)
>>>> If you had this definitional operator, you'd be able to write
>>>>  (define-var+fun (name bvl &body forms)
>>>>    `(progn (defun ,name ,bvl ,@forms)
>>>>            (define-symbol-macro ,name #',name)
>>>>            (define-lambda-macro ,name ((&rest syntax) &rest more-syntax)
>>>>              `(funcall (funcall ,name ,@syntax) ,@more-syntax))
>>>>            ',name))
>>>> or some such thing.
>>>>
>>>> This would mean the solution was special-purpose to each such variable, but
>>>> it would plug-and-play neatly with a Lisp2 without affecting the way other
>>>> names were dealt with.
>>> That would work for me.  However, the objection that has been raised
>>> against my proposal is not that it doesn't fit neatly, but rather that
>>> the whole notion of assigning semantics to ((...) ...) is confusing and
>>> of little value.  That objection applies to lambda macros as well.
>> No, this variation actually looks much better. Assuming that there would
>> also be a lambda-macrolet, this would mean that a) it's clear by looking
>> at the program text what definitions apply and b) there would be a way
>> to control what's going on when you mix code from different libraries.
> 
> (a) is false [1].  There is always the possibility that someone has
> mucked with *macroexpand-hook* or the readtable in an anti-social way,

Read the notes about *macroexpand-hook* in the HyperSpec:

"Users or user programs can assign this variable to customize or trace 
the macro expansion mechanism. Note, however, that this variable is a 
global resource, potentially shared by multiple programs; as such, if 
any two programs depend for their correctness on the setting of this 
variable, those programs may not be able to run in the same Lisp image. 
For this reason, it is frequently best to confine its uses to debugging 
situations."

You don't want a feature that is only used for debugging purposes, but 
you want a feature that (intentionally) allows you to change the 
semantics of the language. It should be more carefully designed than 
*macroexpand-hook*.

> to say nothing of the fact that macro definitions need not come from
> the program text.  (Come on people!  This is comp.lang.LISP!  Surely
> you understand the distinction between program and source code?)

It is good to have features that allow you to disregard the contents of 
the source code, but you should also have a chance to provide constructs 
that support fine-grained control at the source code level. It's not an 
either/or situation.

Many constructs in Common Lisp provide a way to control the scope of 
definitions: lexical scope, dynamic scope, shadowing in a package is 
constrained to that package, metaclass definitions affect only classes 
that are instances of that metaclass, and so on. The features that only 
allow you to make changes in a more unrestricted way, like changes to 
*macroexpand-hook*, *readtable*, and so on, are less often used, AFAICT, 
and I think with good reasons.

> (b) is true, but having a global transformation does not preclude the
> possibility of having the same capability.

In principle yes, but you would need to make more general assumptions, 
which is not necessarily a good thing. Compare to the situation with the 
variables that affect printing - the only way to ensure that you get 
standard behavior is by using the relatively gross 
with-standard-io-syntax. A more careful design could have avoided that.

> It should also be noted that lambda macros are strictly less powerful
> than a global transformation, since the former can be implemented in
> terms of the latter but not vice versa.

My impression is that you are currently too invested in the language 
extension that you are working on. That's a common mistake (IMHO) that 
language designers make: they think the feature they are working on is 
so cool and has such beneficial properties that it should be the default 
for everyone. There are quite a few languages that result from such 
thinking. Basically all "everything is an X" languages come from there.

However, in practice such "totalitarian" approaches have little value, 
or only rarely so. It is much more common that you have to mix and match 
programs developed in different styles, and the ability to restrict 
constructs to certain scopes so that they don't interfere with others is 
a lot more important.

> [1] I wish to be very precise about the sense in which it is false: I
> claim that the situation w.r.t. program clarity is exactly the same for
> lambda macros as for a global transformation.  It is possible to write
> both clear and obfuscated code using either proposal, and to write
> clear code requires programmers to adhere to a certain discipline in
> both cases.  The only difference is that lambda macros dictate and
> (partially) enforce a particular discipline.  But it is not at all
> clear that 1) the particular discipline enforced by lambda macros is
> the One True Discipline and 2) that the cost in terms of reduced
> expressive power (e.g. the inability to write a global funcall
> transformation) is worth it.

Based on my experience with changing standard Common Lisp macros around, 
I think you get a lot more than you currently seem to think. It is 
already relatively straightforward to change, say, defun in a way so 
that your whole source code can be affected by it. For example, defun 
can expand into cl:defun and cl:define-symbol-macro, which can give you 
an approximation of a Lisp-1 to a certain degree. I make use of more 
extensive changes in the Closer to MOP library.
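
As a minimal sketch of that technique (the package name lisp1-ish is made up for illustration; Closer to MOP does considerably more):

```lisp
;; A shadowing DEFUN that also installs a symbol macro, so that the
;; bare name in value position evaluates to the function object.
(defpackage :lisp1-ish
  (:use :cl)
  (:shadow #:defun))
(in-package :lisp1-ish)

(defmacro defun (name lambda-list &body body)
  `(progn
     (cl:defun ,name ,lambda-list ,@body)
     (cl:define-symbol-macro ,name #',name)
     ',name))

;; After (defun double (x) (* 2 x)), both (double 3) and
;; (funcall double 3) work -- but a non-symbol expression in operator
;; position still signals an error, which is why this only
;; approximates a Lisp-1.
```

The changes apply only to code compiled in packages that shadow cl:defun, which is the scoping property being argued for here.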

It is true that it is somewhat harder to be "complete" this way, but on 
the other hand I get the advantage that the changes only apply to 
packages that use Closer to MOP, and don't interfere with other packages 
that don't want to use that library.


Pascal

-- 
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
From: Ron Garret
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <rNOSPAMon-ADDBFC.10250405062006@news.gha.chartermi.net>
In article <···············@individual.net>,
 Pascal Costanza <··@p-cos.net> wrote:

> ···@flownet.com wrote:
> > Pascal Costanza wrote:
> >> ···@flownet.com wrote:
> >>> Kent M Pitman wrote:
> >>>>
> >>>> Zetalisp had something for defining lambda macros.  I don't recall the
> >>>> actual name of it--something like deflambda-macro.  It was intended for
> >>>> defining things like interlisp's NLAMBDA macro, but it was for anything
> >>>> that would be processed as:
> >>>>  ((operator ... syntax ...) ...more syntax...)
> >>>> If you had this definitional operator, you'd be able to write
> >>>>  (define-var+fun (name bvl &body forms)
> >>>>    `(progn (defun ,name ,bvl ,@forms)
> >>>>            (define-symbol-macro ,name #',name)
> >>>>            (define-lambda-macro ,name ((&rest syntax) &rest more-syntax)
> >>>>              `(funcall (funcall ,name ,@syntax) ,@more-syntax))
> >>>>            ',name))
> >>>> or some such thing.
> >>>>
> >>>> This would mean the solution was special-purpose to each such variable, 
> >>>> but
> >>>> it would plug-and-play neatly with a Lisp2 without affecting the way 
> >>>> other
> >>>> names were dealt with.
> >>> That would work for me.  However, the objection that has been raised
> >>> against my proposal is not that it doesn't fit neatly, but rather that
> >>> the whole notion of assigning semantics to ((...) ...) is confusing and
> >>> of little value.  That objection applies to lambda macros as well.
> >> No, this variation actually looks much better. Assuming that there would
> >> also be a lambda-macrolet, this would mean that a) it's clear by looking
> >> at the program text what definitions apply and b) there would be a way
> >> to control what's going on when you mix code from different libraries.
> > 
> > (a) is false [1].  There is always the possibility that someone has
> > mucked with *macroexpand-hook* or the readtable in an anti-social way,
> 
> Read the notes about *macroexpand-hook* in the HyperSpec:
> 
> "Users or user programs can assign this variable to customize or trace 
> the macro expansion mechanism. Note, however, that this variable is a 
> global resource, potentially shared by multiple programs; as such, if 
> any two programs depend for their correctness on the setting of this 
> variable, those programs may not be able to run in the same Lisp image. 
> For this reason, it is frequently best to confine its uses to debugging 
> situations."
> 
> You don't want a feature that is only used for debugging purposes, but 
> you want a feature that (intentionally) allows you to change the 
> semantics of the language. It should be more carefully designed than 
> *macroexpand-hook*.

I notice you very carefully avoided saying anything about *readtable*.

I actually agree with you that all else being equal a less hacky design 
is to be preferred.  But in this case all else is not equal.


> > to say nothing of the fact that macro definitions need not come from
> > the program text.  (Come on people!  This is comp.lang.LISP!  Surely
> > you understand the distinction between program and source code?)
> 
> It is good to have features that allow you to disregard the contents of 
> the source code, but you should also have a chance to provide constructs 
> that support fine-grained control at the source code level. It's not an 
> either/or situation.
> 
> Many constructs in Common Lisp provide a way to control the scope of 
> definitions: lexical scope, dynamic scope, shadowing in packages are 
> constrained to that package, metaclass definitions affect only classes 
> that are instances of that metaclass, and so on. The features that allow 
> you to make changes only in a more unrestricted way, like changes to 
> *macroexpand-hook*, *readtable*, and so on, are less often used, AFAICT, 
> and I think with good reasons.

Do you include *read-eval* in "and so on"?  If not, why?  And if so, how 
do you suggest programs read input data safely?  And if your answer is 
to use *readtable* or *read-eval*, why do we not see a rash of 
complaints about different uses of these facilities interfering with 
each other?

> > (b) is true, but having a global transformation does not preclude the
> > possibility of having the same capability.
> 
> In principle yes, but you would need to make more general assumptions, 
> which is not necessarily a good thing.

What "more general assumptions" would I need to make?

> Compare to the situation with the 
> variables that affect printing - the only way to ensure that you get 
> standard behavior is by using the relatively gross 
> with-standard-io-syntax.

First, that is not true.  (It's actually doubly untrue.  It is untrue 
that with-standard-io-syntax ensures standard behavior (whatever that 
means), and it is untrue that this is the only way to achieve it.)  And 
second, I see nothing gross about with-standard-io-syntax.

> A more careful design could have avoided that.

Could have avoided what?  There is no problem here.  You just seem to 
have a personal distaste for dynamic scoping.

> > It should also be noted that lambda macros are strictly less powerful
> > than a global transformation, since the former can be implemented in
> > terms of the latter but not vice versa.
> 
> My impression is that you are currently too invested in the language 
> extension that you are working on.

Your impression is wrong.  The only reason I'm anchoring my argument in 
a specific use case is to forestall the objection that the feature I'm 
advocating has no practical use.

> However, in practice such "totalitarian" approaches have little value, 
> or only rarely so. It is much more common that you have to mix and match 
> programs developed in different styles, and the ability to restrict 
> constructs to certain scopes so that they don't interfere with others is 
> a lot more important.

Yes, and my proposal includes such a mechanism, but it uses dynamic 
rather than lexical scoping.

> > [1] I wish to be very precise about the sense in which it is false: I
> > claim that the situation w.r.t. program clarity is exactly the same for
> > lambda macros as for a global transformation.  It is possible to write
> > both clear and obfuscated code using either proposal, and to write
> > clear code requires programmers to adhere to a certain discipline in
> > both cases.  The only difference is that lambda macros dictate and
> > (partially) enforce a particular discipline.  But it is not at all
> > clear that 1) the particular discipline enforced by lambda macros is
> > the One True Discipline and 2) that the cost in terms of reduced
> > expressive power (e.g. the inability to write a global funcall
> > transformation) is worth it.
> 
> Based on my experience with changing standard Common Lisp macros around, 
> I think you get a lot more than you currently seem to think. It is 
> already relatively straightforward to change, say, defun in a way so 
> that your whole source code can be affected by it. For example, defun 
> can expand into cl:defun and cl:define-symbol-macro, which can give you 
> an approximation of a Lisp-1 to a certain degree. I make use of more 
> extensive changes in the Closer to MOP library.

I am well aware of this, and I actually make extensive use of this 
technique in my current prototype.  That is not the point.

> It is true that it is somewhat harder to be "complete" this way, but on 
> the other hand I get the advantage that the changes only apply to 
> packages that use Closer to MOP, and don't interfere with other packages 
> that don't want to use that library.

And the changes in this case would apply only in the dynamic context 
where *combination-hook* is bound to something other than its standard 
value.

rg
From: Pascal Costanza
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <4ejcfoF1f7kasU1@individual.net>
Ron Garret wrote:
> In article <···············@individual.net>,
>  Pascal Costanza <··@p-cos.net> wrote:
> 
>> You don't want a feature that is only used for debugging purposes, but 
>> you want a feature that (intentionally) allows you to change the 
>> semantics of the language. It should be more carefully designed than 
>> *macroexpand-hook*.
> 
> I notice you very carefully avoided saying anything about *readtable*.

?!?

> I actually agree with you that all else being equal a less hacky design 
> is to be preferred.  But in this case all else is not equal.

What is different?

>> It is good to have features that allow you to disregard the contents of 
>> the source code, but you should also have a chance to provide constructs 
>> that support fine-grained control at the source code level. It's not an 
>> either/or situation.
>>
>> Many constructs in Common Lisp provide a way to control the scope of 
>> definitions: lexical scope, dynamic scope, shadowing in packages are 
>> constrained to that package, metaclass definitions affect only classes 
>> that are instances of that metaclass, and so on. The features that allow 
>> you to make changes only in a more unrestricted way, like changes to 
>> *macroexpand-hook*, *readtable*, and so on, are less often used, AFAICT, 
>> and I think with good reasons.
> 
> Do you include *read-eval* in "and so on"?

No.

> If not, why? 

Because reading (execution of the function 'read) takes place during 
some runtime, so dynamic scoping makes sense in that case.

> And if so, how 
> do you suggest programs read input data safely?  And if your answer is 
> to use *readtable* or *read-eval*, why do we not see a rash of 
> complaints about different uses of these facilities interfering with 
> each other?

When you execute the function 'read, it makes sense to rebind 
*readtable* and *read-eval* beforehand (for the dynamic extent of the 
invocation of 'read).
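
For instance, the usual safe-reading idiom looks like this (read-safely is a made-up name for the sketch):

```lisp
;; Safe reading of untrusted input: disable #. evaluation and use a
;; fresh copy of the standard readtable for the dynamic extent of READ.
(defun read-safely (stream)
  (let ((*read-eval* nil)                    ; no #.(...) evaluation
        (*readtable* (copy-readtable nil)))  ; pristine standard readtable
    (read stream nil :eof)))

;; Usage:
;; (with-input-from-string (s "(1 2 3)")
;;   (read-safely s))   ; => (1 2 3)
```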

The situation is less ideal for changing the read table for some 
compilation unit (that is, some file that contains source code). There 
is no useful hook for the programmer to determine the binding of 
*readtable* for a compilation unit.

Fortunately, ANSI Common Lisp defines that compile-file takes care of 
restoring the binding of *readtable* to its previous value - see 
http://www.lispworks.com/documentation/HyperSpec/Body/f_cmp_fi.htm#compile-file

Unfortunately, that's a hack. There is no way to generalize this to your 
own special variables. There is also no convenient way to switch read 
tables within a single text file, or to change them for some (nested) 
part of your source code.

*macroexpand-hook* is equally inconvenient to use. Since ANSI Common Lisp 
doesn't specify when macro expansion takes place for interpreted code, 
you cannot easily use rebinding for specific parts of your code and 
expect reliable semantics. If *macroexpand-hook* were a (symbol) macro, 
things would be easier, because you could use the idiom sketched in 
http://www.lispworks.com/documentation/HyperSpec/Issues/iss066_w.htm to 
simulate compiler-let.

>>> (b) is true, but having a global transformation does not preclude the
>>> possibility of having the same capability.
>> In principle yes, but you would need to make more general assumptions, 
>> which is not necessarily a good thing.
> 
> What "more general assumptions" would I need to make?

That someone else messes up (parts of) your code without you having 
a handle to protect it.

>> Compare to the situation with the 
>> variables that affect printing - the only way to ensure that you get 
>> standard behavior is by using the relatively gross 
>> with-standard-io-syntax.
> 
> First, that is not true.  (It's actually doubly untrue.  It is untrue 
> that with-standard-io-syntax ensures standard behavior (whatever that 
> means), and it is untrue that this is the only way to achieve it.)  And 
> second, I see nothing gross about with-standard-io-syntax.

There was a proposal, I think by Steve Haflich, to remove the 
*print-xyz* and *read-xyz* variables and replace them with two variables 
*print-context* and *read-context*. Such contexts could be represented 
by objects that contain the various settings. It would then be possible 
to define and use predefined configurations in a relatively 
straightforward way. (Sorry, but I cannot find the reference at the moment.)

>> A more careful design could have avoided that.
> 
> Could have avoided what?  There is no problem here.  You just seem to 
> have a personal distaste for dynamic scoping.

That's a good one. :)

>>> It should also be noted that lambda macros are strictly less powerful
>>> than a global transformation, since the former can be implemented in
>>> terms of the latter but not vice versa.
>> My impression is that you are currently too invested in the language 
>> extension that you are working on.
> 
> Your impression is wrong.  The only reason I'm anchoring my argument in 
> a specific use case is to forestall the objection that the feature I'm 
> advocating has no practical use.
> 
>> However, in practice such "totalitarian" approaches have little value, 
>> or only rarely so. It is much more common that you have to mix and match 
>> programs developed in different styles, and the ability to restrict 
>> constructs to certain scopes so that they don't interfere with others is 
>> a lot more important.
> 
> Yes, and my proposal includes such a mechanism, but it uses dynamic 
> rather than lexical scoping.

...which is of relatively little use due to the fact that the exact 
macro-expansion time is underspecified in ANSI Common Lisp, at least for 
interpreted code. (We had that discussion before.)


Pascal

-- 
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
From: Ron Garret
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <rNOSPAMon-F38478.16115205062006@news.gha.chartermi.net>
In article <···············@individual.net>,
 Pascal Costanza <··@p-cos.net> wrote:

> Ron Garret wrote:
> > In article <···············@individual.net>,
> >  Pascal Costanza <··@p-cos.net> wrote:
> > 
> >> You don't want a feature that is only used for debugging purposes, but 
> >> you want a feature that (intentionally) allows you to change the 
> >> semantics of the language. It should be more carefully designed than 
> >> *macroexpand-hook*.
> > 
> > I notice you very carefully avoided saying anything about *readtable*.
> 
> ?!?

I cited two examples to support my position: *macroexpand-hook* and 
*readtable*.  You responded to one example but not the other.  Your 
response (that it is only for debugging, that the spec includes a 
disclaimer, yadda yadda yadda) applies only to *macroexpand-hook* but 
not to *readtable*.

The fact of the matter is that the situation w.r.t. potential for 
shooting yourself in the foot is exactly the same for *combination-hook* 
as it is for *readtable*.

> > I actually agree with you that all else being equal a less hacky design 
> > is to be preferred.  But in this case all else is not equal.
> 
> What is different?

*combination-hook* provides a superset of the functionality of lambda 
macros.

> >> It is good to have features that allow you to disregard the contents of 
> >> the source code, but you should also have a chance to provide constructs 
> >> that support fine-grained control at the source code level. It's not an 
> >> either/or situation.
> >>
> >> Many constructs in Common Lisp provide a way to control the scope of 
> >> definitions: lexical scope, dynamic scope, shadowing in packages are 
> >> constrained to that package, metaclass definitions affect only classes 
> >> that are instances of that metaclass, and so on. The features that allow 
> >> you to make changes only in a more unrestricted way, like changes to 
> >> *macroexpand-hook*, *readtable*, and so on, are less often used, AFAICT, 
> >> and I think with good reasons.
> > 
> > Do you include *read-eval* in "and so on"?
> 
> No.
> 
> > If not, why? 
> 
> Because reading (execution of the function 'read) takes place during 
> some runtime, so dynamic scoping makes sense in that case.

So?  Does not compilation likewise take place during some runtime?  Or 
do you compile your programs by hand?

> > And if so, how 
> > do you suggest programs read input data safely?  And if your answer is 
> > to use *readtable* or *read-eval*, why do we not see a rash of 
> > complaints about different uses of these facilities interfering with 
> > each other?
> 
> When you execute the function 'read, it makes sense to rebind 
> *readtable* and *read-eval* beforehand (for the dynamic extent of the 
> invocation of 'read).

Likewise for compile and compile-file.  I don't see the difference.

> The situation is less ideal for changing the read table for some 
> compilation unit (that is, some file that contains source code). There 
> is no useful hook for the programmer to determine the binding of 
> *readtable* for a compilation unit.

The word "determine" is ambiguous.  It could mean "control" or it could 
mean "know".  I'll assume the former meaning (since the latter doesn't 
make any sense) and respond:

So what?  Why would you ever care?

> Fortunately, ANSI Common Lisp defines that compile-file takes care of 
> restoring the binding of *readtable* to its previous value - see 
> http://www.lispworks.com/documentation/HyperSpec/Body/f_cmp_fi.htm#compile-file

The same safeguard could be put in place for *combination-hook*.

> Unfortunately, that's a hack. There is no way to generalize this to your 
> own special variables.

Of course there is.  It involves shadowing compile-file.

> There is also no convenient way to switch read 
> tables within a single text file, or to change for some (nested) part of 
> your source code.

That is true, but I don't hear many people clamoring for these things.

> *macroexand-hook* is equally inconvenient to use. Since ANSI Common Lisp 
> doesn't specify when macro expansion takes place for interpreted code, 
> you cannot easily use rebinding for specific parts of your code and 
> expect reliable semantics. If *macroexpand-hook* were a (symbol) macro, 
> things would be easier, because you could use the idiom sketched in 
> http://www.lispworks.com/documentation/HyperSpec/Issues/iss066_w.htm to 
> simulate compiler-let.

Fine, make *combination-hook* a symbol macro then.

> >>> (b) is true, but having a global transformation does not preclude the
> >>> possibility of having the same capability.
> >> In principle yes, but you would need to make more general assumptions, 
> >> which is not necessarily a good thing.
> > 
> > What "more general assumptions" would I need to make?
> 
> That someone else messes your (parts of your) code up without you having 
> a handle to protect your code against it.

And how does *combination-hook* make the situation any worse than it 
currently is in that regard?

Consider:

(defun make-counter () (let ( (x 0) ) (lambda () (incf x))))
(list (funcall (make-counter)) (funcall (make-counter)))

I'll bet you US$100 that you cannot list all of the conditions necessary 
to guarantee that that bit of code returns (1 1).


> >> Compare to the situation with the 
> >> variables that affect printing - the only way to ensure that you get 
> >> standard behavior is by using the relatively gross 
> >> with-standard-io-syntax.
> > 
> > First, that is not true.  (It's actually doubly untrue.  It is untrue 
> > that with-standard-io-syntax ensures standard behavior (whatever that 
> > means), and it is untrue that this is the only way to achieve it.)  And 
> > second, I see nothing gross about with-standard-io-syntax.
> 
> There was a proposal, I think by Steve Haflich, to remove the 
> *print-xyz* and *read-xyz* variables and replace them with two variables 
> *print-context* and *read-context*. Such contexts could be represented 
> by objects that contain the various settings. It would then be possible 
> to define and use predefined configurations in a relatively 
> straightforward way. (Sorry, but I cannot find the reference at the moment.)

I don't see why it is anything other than an elementary exercise to 
implement this at the user level.

> >> A more careful design could have avoided that.
> > 
> > Could have avoided what?  There is no problem here.  You just seem to 
> > have a personal distaste for dynamic scoping.
> 
> That's a good one. :)

I was being quite serious.

> >>> It should also be noted that lambda macros are strictly less powerful
> >>> than a global transformation, since the former can be implemented in
> >>> terms of the latter but not vice versa.
> >> My impression is that you are currently too indulged in the language 
> >> extension that you are working on.
> > 
> > Your impression is wrong.  The only reason I'm anchoring my argument in 
> > a specific use case is to forestall the objection that the feature I'm 
> > advocating has no practical use.
> > 
> >> However, in practice such "totalitarian" approaches have little value, 
> >> or only rarely so. It is much more common that you have to mix and match 
> >> programs developed in different styles, and the ability to restrict 
> >> constructs to certain scopes so that they don't interfere with others is 
> >> a lot more important.
> > 
> > Yes, and my proposal includes such a mechanism, but it uses dynamic 
> > rather than lexical scoping.
> 
> ...which is of relatively little use due to the fact that the exact 
> macro-expansion time is underspecified in ANSI Common Lisp, at least for 
> interpreted code. (We had that discussion before.)

That's true, but it generally tends to fall between read time and run 
time, which is enough of a constraint to make *combination-hook* useful 
as a dynamic variable.

But I really don't care.  We can make *combination-hook* a symbol-macro 
if that will overcome your objections.  (I suspect it won't though.)

rg
From: Ron Garret
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <rNOSPAMon-06F360.17562805062006@news.gha.chartermi.net>
In article <·······························@news.gha.chartermi.net>,
 Ron Garret <·········@flownet.com> wrote:

> But I really don't care.  We can make *combination-hook* a symbol-macro 

Actually, it turns out that doesn't work so good.  A regular macro turns 
out to be much cleaner.  I've written a version that does it that way 
for SBCL.  (Turns out that it's pretty easy to find the spot in the 
source code where this hook needs to go.  You just evaluate ((foo)) and 
grep on the resulting error message!)

The code is here:

http://www.flownet.com/ron/lisp/ir1-combination-hook.lisp

After compiling and loading this file the behavior of the resulting 
system is unchanged, but you can now do this:

* (without-package-locks
    (defmacro sb-c::combination-hook (&rest body)
      (cons 'funcall body)))

SB-C::COMBINATION-HOOK
* ((car (list #'cdr)) '(1 2 3))

(2 3)

Woohoo!  Sing with me now...

It's so easy to hack the code
It's so easy to hack the code
(It's so easy it's so easy...)
People tell me hacking's for fools
Here I go breaking all the rules

(Hey, if Kenny can do it, so can I!)

rg
From: Pascal Costanza
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <4ere8kF1gehu1U1@individual.net>
Ron Garret wrote:
> In article <·······························@news.gha.chartermi.net>,
>  Ron Garret <·········@flownet.com> wrote:
> 
>> But I really don't care.  We can make *combination-hook* a symbol-macro 
> 
> Actually, it turns out that doesn't work so good.  A regular macro turns 
> out to be much cleaner.  I've written a version that does it that way 
> for SBCL.  (Turns out that it's pretty easy to find the spot in the 
> source code where this hook needs to go.  You just evaluate ((foo)) and 
> grep on the resulting error message!)
> 
> The code is here:
> 
> http://www.flownet.com/ron/lisp/ir1-combination-hook.lisp
> 
> After compiling and loading this file the behavior of the resulting 
> system is unchanged, but you can now do this:
> 
> * (without-package-locks (defmacro sb-c::combination-hook (&rest body)
> (cons 'funcall body))
> )
> 
> SB-C::COMBINATION-HOOK
> * ((car (list #'cdr)) '(1 2 3))
> 
> (2 3)
> 
> Woohoo!  Sing with me now...
> 
> It's so easy to hack the code
> It's so easy to hack the code
> (It's so easy it's so easy...)
> People tell me hacking's for fools
> Here I go breaking all the rules
> 
> (Hey, if Kenny can do it, so can I!)

You could do the following:

(eval-when (:compile-toplevel :load-toplevel :execute)
   (defvar *combination-hook*
     (lambda (form environment)
       (compiler-error "Illegal function call."))))

(defmacro combination-hook
   (&whole form &body body &environment environment)
   (declare (ignore body))
   (funcall *combination-hook* form environment))

This would allow you to have both dynamic scoping and lexical scoping 
capabilities, and doesn't involve any hackery at that level.

(I'm still not too happy about changing the semantics for _all_ forms 
that have conses in car positions, but at least it gives me a way to 
control scopes in fine-grained ways and protect my code from influences 
from other code.)
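
A rough usage sketch of how the two scoping modes would combine, 
assuming the definitions above plus your compiler rewrite that turns 
every form with a cons in the car position into a COMBINATION-HOOK 
form (the file name and the FUNCALL translation are only for 
illustration):

;; Dynamic scoping: rebind the hook for the dynamic extent of one
;; compilation.
(let ((*combination-hook*
        (lambda (form environment)
          (declare (ignore environment))
          ;; FORM is the whole (COMBINATION-HOOK (f ...) args...)
          ;; form; rewrite the combination into an explicit FUNCALL.
          (cons 'funcall (rest form)))))
  (compile-file "my-file.lisp"))

;; Lexical scoping: shadow the macro via MACROLET for a delimited
;; region only, leaving the dynamic hook untouched.
(macrolet ((combination-hook (&rest form)
             (cons 'funcall form)))
  ((car (list #'cdr)) '(1 2 3)))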


Pascal

-- 
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
From: Ron Garret
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <rNOSPAMon-60833E.14060208062006@news.gha.chartermi.net>
In article <···············@individual.net>,
 Pascal Costanza <··@p-cos.net> wrote:

> Ron Garret wrote:
> > In article <·······························@news.gha.chartermi.net>,
> >  Ron Garret <·········@flownet.com> wrote:
> > 
> >> But I really don't care.  We can make *combination-hook* a symbol-macro 
> > 
> > Actually, it turns out that doesn't work so good.  A regular macro turns 
> > out to be much cleaner.  I've written a version that does it that way 
> > for SBCL.  (Turns out that it's pretty easy to find the spot in the 
> > source code where this hook needs to go.  You just evaluate ((foo)) and 
> > grep on the resulting error message!)
> > 
> > The code is here:
> > 
> > http://www.flownet.com/ron/lisp/ir1-combination-hook.lisp
> > 
> > After compiling and loading this file the behavior of the resulting 
> > system is unchanged, but you can now do this:
> > 
> > * (without-package-locks (defmacro sb-c::combination-hook (&rest body)
> > (cons 'funcall body))
> > )
> > 
> > SB-C::COMBINATION-HOOK
> > * ((car (list #'cdr)) '(1 2 3))
> > 
> > (2 3)
> > 
> > Woohoo!  Sing with me now...
> > 
> > It's so easy to hack the code
> > It's so easy to hack the code
> > (It's so easy it's so easy...)
> > People tell me hacking's for fools
> > Here I go breaking all the rules
> > 
> > (Hey, if Kenny can do it, so can I!)
> 
> You could do the following:
> 
> (eval-when (:compile-toplevel :load-toplevel :execute)
>    (defvar *combination-hook*
>      (lambda (form environment)
>        (compiler-error "Illegal function call."))))
> 
> (defmacro combination-hook
>    (&whole form &body body &environment environment)
>    (declare (ignore body))
>    (funcall *combination-hook* form environment))
> 
> This would allow you to have both dynamic scoping and lexical scoping 
> capabilities, and doesn't involve any hackery at that level.
> 
> (I'm still not too happy about changing the semantics for _all_ forms 
> that have conses in car positions, but at least it gives me a way to 
> control scopes in fine-grained ways and protect my code from influences 
> from other code.)
> 
> 
> Pascal

Hah!  I actually came up with the same solution.  See my other post (and 
the lexicon code) :-)

rg
From: Pascal Costanza
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <4erdqqF1furm7U1@individual.net>
Ron Garret wrote:
> In article <···············@individual.net>,
>  Pascal Costanza <··@p-cos.net> wrote:
> 
>> Ron Garret wrote:
>>> In article <···············@individual.net>,
>>>  Pascal Costanza <··@p-cos.net> wrote:
>>>
>>>> You don't want a feature that is only used for debugging purposes, but 
>>>> you want a feature that (intentionally) allows you to change the 
>>>> semantics of the language. It should be more carefully designed than 
>>>> *macroexpand-hook*.
>>> I notice you very carefully avoided saying anything about *readtable*.
>> ?!?
> 
> I cited two examples to support my position: *macroexpand-hook* and 
> *readtable*.  You responded to one example but not the other.  Your 
> response (that it is only for debugging, that the spec includes a 
> disclaimer, yadda yadda yadda) applies only to *macroexpand-hook* but 
> not to *readtable*.

I see. But I have discussed *readtable* somewhere else in this thread, 
pointing out that it actually doesn't work that well. (Sorry, I am 
currently too tired to look that up...)

> The fact of the matter is that the situation w.r.t. potential for 
> shooting yourself in the foot is exactly the same for *combination-hook* 
> as it is for *readtable*.

Yes, and that's why I think it's far from optimal.

>>> I actually agree with you that all else being equal a less hacky design 
>>> is to be preferred.  But in this case all else is not equal.
>> What is different?
> 
> *combination-hook* provides a superset of the functionality of lambda 
> macros.

No it doesn't. It doesn't provide lexical scope.

>> Because reading (execution of the function 'read) takes place during 
>> some runtime, so dynamic scoping makes sense in that case.
> 
> So?  Does not compilation likewise take place during some runtime?  Or 
> do you compile your programs by hand?

No, but I don't necessarily have a handle on rebinding some special 
variables before a file is compiled.

>>> And if so, how 
>>> do you suggest programs read input data safely?  And if your answer is 
>>> to use *readtable* or *read-eval*, why do we not see a rash of 
>>> complaints about different uses of these facilities interfering with 
>>> each other?
>> When you execute the function 'read, it makes sense to rebind 
>> *readtable* and *read-eval* beforehand (for the dynamic extent of the 
>> invocation of 'read).
> 
> Likewise for compile and compile-file.  I don't see the difference.

A development environment does not necessarily invoke compile or 
compile-file for compiling a file, and even if it does, I don't 
necessarily have a hook for those functions.

Or do you call compile-file by hand?

>> The situation is less ideal for changing the read table for some 
>> compilation unit (that is, some file that contains source code). There 
>> is no useful hook for the programmer to determine the binding of 
>> *readtable* for a compilation unit.
> 
> The word "determine" is ambiguous.  It could mean "control" or it could 
> mean "know".  I'll assume the former meaning (since the latter doesn't 
> make any sense) and respond:
> 
> So what?  Why would you ever care?
> 
>> Fortunately, ANSI Common Lisp defines that compile-file takes care of 
>> restoring the binding of *readtable* to its previous value - see 
>> http://www.lispworks.com/documentation/HyperSpec/Body/f_cmp_fi.htm#compile-file
> 
> The same safeguard could be put in place for *combination-hook*.

Why only *readtable*, *package* (as specified for compile-file as well), 
and *combination-hook*? Why not a more general mechanism?

And why not a mechanism that allows you to change things in more 
fine-grained scopes, i.e., within a file?

>> Unfortunately, that's a hack. There is no way to generalize this to your 
>> own special variables.
> 
> Of course there is.  It involves shadowing compile-file.

...which wouldn't be called by my development environment...

>> There is also no convenient way to switch read 
>> tables within a single text file, or to change for some (nested) part of 
>> your source code.
> 
> That is true, but I don't hear many people clamoring for these things.

I vaguely recall wanting that, but unfortunately I don't remember the 
details anymore.

>> *macroexpand-hook* is equally inconvenient to use. Since ANSI Common Lisp 
>> doesn't specify when macro expansion takes place for interpreted code, 
>> you cannot easily use rebinding for specific parts of your code and 
>> expect reliable semantics. If *macroexpand-hook* were a (symbol) macro, 
>> things would be easier, because you could use the idiom sketched in 
>> http://www.lispworks.com/documentation/HyperSpec/Issues/iss066_w.htm to 
>> simulate compiler-let.
> 
> Fine, make *combination-hook* a symbol macro then.

That's a gross change of semantics. (But see my next posting.)

>>>>> (b) is true, but having a global transformation does not preclude the
>>>>> possibility of having the same capability.
>>>> In principle yes, but you would need to make more general assumptions, 
>>>> which is not necessarily a good thing.
>>> What "more general assumptions" would I need to make?
>> That someone else messes your (parts of your) code up without you having 
>> a handle to protect your code against it.
> 
> And how does *combination-hook* make the situation any worse than it 
> currently is in that regard?
> 
> Consider:
> 
> (defun make-counter () (let ( (x 0) ) (lambda () (incf x))))
> (list (funcall (make-counter)) (funcall (make-counter)))
> 
> I'll bet you US$100 that you cannot list all of the conditions necessary 
> to guarantee that that bit of code returns (1 1).

But it's simple to protect yourself against outside influences:

(cl:defun make-counter ()
   (cl:let ((x 0))
     (cl:lambda () (cl:incf x))))

(cl:list (cl:funcall (make-counter)) (cl:funcall (make-counter)))

>>>> Compare to the situation with the 
>>>> variables that affect printing - the only way to ensure that you get 
>>>> standard behavior is by using the relatively gross 
>>>> with-standard-io-syntax.
>>> First, that is not true.  (It's actually doubly untrue.  It is untrue 
>>> that with-standard-io-syntax ensures standard behavior (whatever that 
>>> means), and it is untrue that this is the only way to achieve it.)  And 
>>> second, I see nothing gross about with-standard-io-syntax.
>> There was a proposal, I think by Steve Haflich, to remove the 
>> *print-xyz* and *read-xyz* variables and replace them with two variables 
>> *print-context* and *read-context*. Such contexts could be represented 
>> by objects that contain the various settings. It would then be possible 
>> to define and use predefined configurations in a relatively 
>> straightforward way. (Sorry, but I cannot find the reference at the moment.)
> 
> I don't see why it is anything other than an elementary exercise to 
> implement this at the user level.

...because you wouldn't get rid of the original *print-xyz* / *read-xyz* 
variables. (The proposal was about replacing the current approach, not 
about working around it.)

>>>> A more careful design could have avoided that.
>>> Could have avoided what?  There is no problem here.  You just seem to 
>>> have a personal distaste for dynamic scoping.
>> That's a good one. :)
> 
> I was being quite serious.

Then I will leave it as a mystery for you to solve why I still think 
that it's a joke.


Pascal

-- 
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
From: Ron Garret
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <rNOSPAMon-24AFF7.14032708062006@news.gha.chartermi.net>
In article <···············@individual.net>,
 Pascal Costanza <··@p-cos.net> wrote:

> Ron Garret wrote:
> > In article <···············@individual.net>,
> >  Pascal Costanza <··@p-cos.net> wrote:
> > 
> >> Ron Garret wrote:
> >>> In article <···············@individual.net>,
> >>>  Pascal Costanza <··@p-cos.net> wrote:
> >>>
> >>>> You don't want a feature that is only used for debugging purposes, but 
> >>>> you want a feature that (intentionally) allows you to change the 
> >>>> semantics of the language. It should be more carefully designed than 
> >>>> *macroexpand-hook*.
> >>> I notice you very carefully avoided saying anything about *readtable*.
> >> ?!?
> > 
> > I cited two examples to support my position: *macroexpand-hook* and 
> > *readtable*.  You responded to one example but not the other.  Your 
> > response (that it is only for debugging, that the spec includes a 
> > disclaimer, yadda yadda yadda) applies only to *macroexpand-hook* but 
> > not to *readtable*.
> 
> I see. But I have discussed *readtable* somewhere else in this thread, 
> pointing out that it actually doesn't work that well. (Sorry, I am 
> currently too tired to look that up...)

I don't dispute that.  It is nonetheless a part of Common Lisp as it 
currently stands.

> > The fact of the matter is that the situation w.r.t. potential for 
> > shooting yourself in the foot is exactly the same for *combination-hook* 
> > as it is for *readtable*.
> 
> Yes, and that's why I think it's far from optimal.

Nothing in CL is optimal.

One of the things that distinguishes CL from Scheme is the quality 
metric used (implicitly) by their respective communities.  Scheme values 
optimality.  CL values practicality.  That is why Scheme has call/cc and 
CL has unwind-protect, catch/throw, handler-case, etc.

> >>> I actually agree with you that all else being equal a less hacky design 
> >>> is to be preferred.  But in this case all else is not equal.
> >> What is different?
> > 
> > *combination-hook* provides a superset of the functionality of lambda 
> > macros.
> 
> No it doesn't. It doesn't provide lexical scope.

Not by itself.  But it is straightforward to implement lambda macros in 
terms of *combination-hook*.  The reverse is not possible.
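
In sketch form, with made-up names (this assumes the hook receives 
the whole original form ((op ...) args...)):

;; A registry of "lambda macro" expanders, dispatched from the one
;; global hook.  DEFINE-LAMBDA-MACRO and the registry are hypothetical.
(defvar *lambda-macros* (make-hash-table :test 'eq))

(defmacro define-lambda-macro (name (&rest args) &body body)
  ;; ARGS destructure the operator, e.g. (COMPOSE F G) binds F and G.
  `(setf (gethash ',name *lambda-macros*)
         (lambda (,@args) ,@body)))

(defun dispatch-combination (form environment)
  ;; Intended as the value of *combination-hook*.
  (declare (ignore environment))
  (let* ((head (first form))            ; the cons in car position
         (expander (and (symbolp (first head))
                        (gethash (first head) *lambda-macros*))))
    (if expander
        ;; Expand the operator, then apply it to the original args.
        (list* 'funcall (apply expander (rest head)) (rest form))
        (error "Illegal function call: ~S" form))))

With that in place, (define-lambda-macro compose (f g) `(lambda (x) 
(funcall ,f (funcall ,g x)))) makes ((compose #'1+ #'1+) 40) expand 
into a FUNCALL of the composed closure.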

> >> Because reading (execution of the function 'read) takes place during 
> >> some runtime, so dynamic scoping makes sense in that case.
> > 
> > So?  Does not compilation likewise take place during some runtime?  Or 
> > do you compile your programs by hand?
> 
> No, but I don't necessarily have a handle on rebinding some special 
> variables before a file is compiled.

If you don't have control over the values of *package*, *readtable*, 
*read-base*, *read-eval* etc. before you compile a file then you already 
have bigger problems than *combination-hook* will cause you.

> >>> And if so, how 
> >>> do you suggest programs read input data safely?  And if your answer is 
> >>> to use *readtable* or *read-eval*, why do we not see a rash of 
> >>> complaints about different uses of these facilities interfering with 
> >>> each other?
> >> When you execute the function 'read, it makes sense to rebind 
> >> *readtable* and *read-eval* beforehand (for the dynamic extent of the 
> >> invocation of 'read).
> > 
> > Likewise for compile and compile-file.  I don't see the difference.
> 
> A development environment does not necessarily invoke compile or 
> compile-file for compiling a file, and even if it does, I don't 
> necessarily have a hook for those functions.
> 
> Or do you call compile-file by hand?

I actually very rarely call compile-file at all because I use MCL where 
EVAL is defined as (funcall (compile nil (lambda () body))) and the 
compiler is fast enough to make that practical.

> >> The situation is less ideal for changing the read table for some 
> >> compilation unit (that is, some file that contains source code). There 
> >> is no useful hook for the programmer to determine the binding of 
> >> *readtable* for a compilation unit.
> > 
> > The word "determine" is ambiguous.  It could mean "control" or it could 
> > mean "know".  I'll assume the former meaning (since the latter doesn't 
> > make any sense) and respond:
> > 
> > So what?  Why would you ever care?
> > 
> >> Fortunately, ANSI Common Lisp defines that compile-file takes care of 
> >> restoring the binding of *readtable* to its previous value - see 
> >> http://www.lispworks.com/documentation/HyperSpec/Body/f_cmp_fi.htm#compile-file
> > 
> > The same safeguard could be put in place for *combination-hook*.
> 
> Why only *readtable*, *package* (as specified for compile-file as well), 
> and *combination-hook*? Why not a more general mechanism?
>
> And why not a mechanism that allows you to change things in more 
> fine-grained scopes, i.e., within a file?

Like what?


> >> Unfortunately, that's a hack. There is no way to generalize this to your 
> >> own special variables.
> > 
> > Of course there is.  It involves shadowing compile-file.
> 
> ...which wouldn't be called by my development environment...

Then you need to get yourself a new development environment.  I 
recommend MCL, where pretty much everything is fully customizable.

> >> There is also no convenient way to switch read 
> >> tables within a single text file, or to change for some (nested) part of 
> >> your source code.
> > 
> > That is true, but I don't hear many people clamoring for these things.
> 
> I vaguely recall wanting that, but unfortunately I don't remember the 
> details anymore.

Then I'll tell you the same thing you told me: make a proposal.


> >> *macroexpand-hook* is equally inconvenient to use. Since ANSI Common Lisp 
> >> doesn't specify when macro expansion takes place for interpreted code, 
> >> you cannot easily use rebinding for specific parts of your code and 
> >> expect reliable semantics. If *macroexpand-hook* were a (symbol) macro, 
> >> things would be easier, because you could use the idiom sketched in 
> >> http://www.lispworks.com/documentation/HyperSpec/Issues/iss066_w.htm to 
> >> simulate compiler-let.
> > 
> > Fine, make *combination-hook* a symbol macro then.
> 
> That's a gross change of semantics. (But see my next posting.)

Indeed, and it turns out not to work well at all.  (I actually tried to 
implement it that way and it turned into a freakin' mess.)  But I think 
a regular macro would work.

Actually, in my lexicon code I have both: the dynamic variable 
*current-lexicon* contains the current lexicon, and the macro 
(current-lexicon) expands to the macro-expansion-time value of 
*current-lexicon*.  This seems to work quite well, and allows the 
implementation of e.g. with-lexicon by shadowing the definition of 
current-lexicon (without stars) using macrolet.  I think that solution 
may make us both happy.
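
In sketch form (the bodies here are my approximations, not the actual 
lexicon code):

(defvar *current-lexicon* nil)

;; (CURRENT-LEXICON) expands to the macro-expansion-time value of the
;; special variable, quoted so the value survives evaluation.
(defmacro current-lexicon ()
  `',*current-lexicon*)

;; WITH-LEXICON shadows the macro lexically via MACROLET, so code in
;; BODY sees LEXICON while code outside still sees the dynamic value.
(defmacro with-lexicon (lexicon &body body)
  `(macrolet ((current-lexicon () ',lexicon))
     ,@body))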

> >>>>> (b) is true, but having a global transformation does not preclude the
> >>>>> possibility of having the same capability.
> >>>> In principle yes, but you would need to make more general assumptions, 
> >>>> which is not necessarily a good thing.
> >>> What "more general assumptions" would I need to make?
> >> That someone else messes your (parts of your) code up without you having 
> >> a handle to protect your code against it.
> > 
> > And how does *combination-hook* make the situation any worse than it 
> > currently is in that regard?
> > 
> > Consider:
> > 
> > (defun make-counter () (let ( (x 0) ) (lambda () (incf x))))
> > (list (funcall (make-counter)) (funcall (make-counter)))
> > 
> > I'll bet you US$100 that you cannot list all of the conditions necessary 
> > to guarantee that that bit of code returns (1 1).
> 
> But it's simple to protect yourself against outside influences:
> 
> (cl:defun make-counter ()
>    (cl:let ((x 0))
> >      (cl:lambda () (cl:incf x))))
> 
> (cl:list (cl:funcall (make-counter)) (cl:funcall (make-counter)))

First, that's cheating.  No one writes code that way.  And second, even 
with your new version of the code I'll still make you the same wager.

> >>>> Compare to the situation with the 
> >>>> variables that affect printing - the only way to ensure that you get 
> >>>> standard behavior is by using the relatively gross 
> >>>> with-standard-io-syntax.
> >>> First, that is not true.  (It's actually doubly untrue.  It is untrue 
> >>> that with-standard-io-syntax ensures standard behavior (whatever that 
> >>> means), and it is untrue that this is the only way to achieve it.)  And 
> >>> second, I see nothing gross about with-standard-io-syntax.
> >> There was a proposal, I think by Steve Haflich, to remove the 
> >> *print-xyz* and *read-xyz* variables and replace them with two variables 
> >> *print-context* and *read-context*. Such contexts could be represented 
> >> by objects that contain the various settings. It would then be possible 
> >> to define and use predefined configurations in a relatively 
> >> straightforward way. (Sorry, but I cannot find the reference at the 
> >> moment.)
> > 
> > I don't see why it is anything other than an elementary exercise to 
> > implement this at the user level.
> 
> ...because you wouldn't get rid of the original *print-xyz* / *read-xyz* 
> variables. (The proposal was about replacing the current approach, not 
> about working around it.)

It is important to distinguish between a proposal and its intentions.  
The intent (as I understand it) is to simplify control of the I/O 
environment.  It seems to me that that intent can be accomplished at the 
user level, and without actually getting rid of *print-xyz*/*read-xyz* 
(which would break backwards compatibility, which is highly valued in 
the CL community).

> >>>> A more careful design could have avoided that.
> >>> Could have avoided what?  There is no problem here.  You just seem to 
> >>> have a personal distaste for dynamic scoping.
> >> That's a good one. :)
> > 
> > I was being quite serious.
> 
> Then I will leave it as a mystery for you to solve why I still think 
> that it's a joke.

Oh, I understand why you thought it was a joke.  Do you understand why I 
was serious?

rg
From: Pascal Costanza
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <4etro0F1gph1hU1@individual.net>
Ron Garret wrote:
> In article <···············@individual.net>,
>  Pascal Costanza <··@p-cos.net> wrote:
> 
>>> I cited two examples to support my position: *macroexpand-hook* and 
>>> *readtable*.  You responded to one example but not the other.  Your 
>>> response (that it is only for debugging, that the spec includes a 
>>> disclaimer, yadda yadda yadda) applies only to *macroexpand-hook* but 
>>> not to *readtable*.
>> I see. But I have discussed *readtable* somewhere else in this thread, 
>> pointing out that it actually doesn't work that well. (Sorry, I am 
>> currently too tired to look that up...)
> 
> I don't dispute that.  It is nonetheless a part of Common Lisp as it 
> currently stands.

So?

>>> The fact of the matter is that the situation w.r.t. potential for 
>>> shooting yourself in the foot is exactly the same for *combination-hook* 
>>> as it is for *readtable*.
>> Yes, and that's why I think it's far from optimal.
> 
> Nothing in CL is optimal.

...but few things in CL are far from optimal. ;)

>>>>> I actually agree with you that all else being equal a less hacky design 
>>>>> is to be preferred.  But in this case all else is not equal.
>>>> What is different?
>>> *combination-hook* provides a superset of the functionality of lambda 
>>> macros.
>> No it doesn't. It doesn't provide lexical scope.
> 
> Not by itself.  But it is straightforward to implement lambda macros in 
> terms of *combination-hook*.  The reverse is not possible.

...but you have to provide the macro definition as part of your library, 
together with the protocol between the macro and *combination-hook*, 
otherwise it wouldn't work. (And that's what you do now, so everything 
is good.)

>>>> Fortunately, ANSI Common Lisp defines that compile-file takes care of 
>>>> restoring the binding of *readtable* to its previous value - see 
>>>> http://www.lispworks.com/documentation/HyperSpec/Body/f_cmp_fi.htm#compile-
>>>> fil
>>>> e
>>> The same safeguard could be put in place for *combination-hook*.
>> Why only *readtable*, *package* (as specified for compile-file as well), 
>> and *combination-hook*? Why not a more general mechanism?
>>
>> And why not a mechanism that allows you to change things in more 
>> fine-grained scopes, i.e., within a file?
> 
> Like what?

Very rough sketch:

(with-readtable (some-readtable)
   ...) ; <= that code is read with some-readtable

[Of course this cannot work, because the whole form is first read with 
the "surrounding" readtable, but you get the idea.]

[Yes, I know, at least Allegro provides an extension in this regard.]

>>>> Unfortunately, that's a hack. There is no way to generalize this to your 
>>>> own special variables.
>>> Of course there is.  It involves shadowing compile-file.
>> ...which wouldn't be called by my development environment...
> 
> Then you need to get yourself a new development environment.  I 
> recommend MCL, where pretty much everything is fully customizable.

I'd prefer if the customizability did not depend on the development 
environment. (But your point is well-taken.)

>>>> There is also no convenient way to switch read 
>>>> tables within a single text file, or to change for some (nested) part of 
>>>> your source code.
>>> That is true, but I don't hear many people clamoring for these things.
>> I vaguely recall wanting that, but unfortunately I don't remember the 
>> details anymore.
> 
> Then I'll tell you the same thing you told me: make a proposal.

No, thanks. I am already working on other things that are of more 
importance to me.

>>>>> What "more general assumptions" would I need to make?
>>>> That someone else messes your (parts of your) code up without you having 
>>>> a handle to protect your code against it.
>>> And how does *combination-hook* make the situation any worse than it 
>>> currently is in that regard?
>>>
>>> Consider:
>>>
>>> (defun make-counter () (let ( (x 0) ) (lambda () (incf x))))
>>> (list (funcall (make-counter)) (funcall (make-counter)))
>>>
>>> I'll bet you US$100 that you cannot list all of the conditions necessary 
>>> to guarantee that that bit of code returns (1 1).
>> But it's simple to protect yourself against outside influences:
>>
>> (cl:defun make-counter ()
>>    (cl:let ((x 0))
>>      (cl:lambda () (cl:incf x))))
>>
>> (cl:list (cl:funcall (make-counter)) (cl:funcall (make-counter)))
> 
> First, that's cheating.  No one writes code that way.

I do, occasionally, exactly for the reasons I have mentioned - i.e., to 
distinguish between "native" and shadowed definitions (in my case, of 
CLOS operators).

> And second, even 
> with your new version of the code I'll still make you the same wager.

You could perform a delete-package or rename-package on (find-package 
"COMMON-LISP"), but that's gross and completely underspecified, so 
anything could happen. Changing definitions in the common-lisp package 
is also undefined behavior according to ANSI Common Lisp, but here I 
trust that if people do that they know exactly what they do.

Someone could declare x as special, but I can protect my code from this 
by placing it in my own package. I would consider code that declares 
variables in foreign packages as special to be malicious (and I am 
explicitly not concerned about security issues here, that's a completely 
different topic).

Apart from these things, I don't see what else could potentially go 
wrong here.
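
The hazard I mean can be made concrete. A sketch (the DECLAIM stands in 
for the "malicious" foreign declaration; nothing here is code anyone 
should actually write):

```lisp
;; In some other, "malicious" file, in the *same* package:
(declaim (special x))   ; proclaims X special everywhere

;; Now the LET in MAKE-COUNTER establishes a *dynamic* binding of X,
;; which is unwound when MAKE-COUNTER returns.  The returned closure
;; no longer captures a private lexical X; when called, it reads the
;; current dynamic value of X, which is unbound at that point, so it
;; signals an UNBOUND-VARIABLE error.
(defun make-counter () (let ((x 0)) (lambda () (incf x))))
```

Putting my code in its own package protects against this, because the 
foreign file's X is then a different symbol from my-package::X.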

>>>> There was a proposal, I think by Steve Haflich, to remove the 
>>>> *print-xyz* and *read-xyz* variables and replace them with two variables 
>>>> *print-context* and *read-context*. Such contexts could be represented 
>>>> by objects that contain the various settings. It would then be possible 
>>>> to define and use predefined configurations in a relatively 
>>>> straightforward way. (Sorry, but I cannot find the reference at the 
>>>> moment.)
>>> I don't see why it is anything other than an elementary exercise to 
>>> implement this at the user level.
>> ...because you wouldn't get rid of the original *print-xyz* / *read-xyz* 
>> variables. (The proposal was about replacing the current approach, not 
>> about working around it.)
> 
> It is important to distinguish between a proposal and its intentions.  
> The intent (as I understand it) is to simplify control of the I/O 
> environment.

No, the intent was to get rid of special variables because they incur a 
performance overhead in multi-threaded implementations of Common Lisp. 
At least, the number of predefined special variables shouldn't be too 
high. (If I remember correctly.)
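
As far as I remember it, the proposal might have looked roughly like 
this; all of the names here are hypothetical, none of them are ANSI 
Common Lisp:

```lisp
;; Bundle the *print-xyz* settings into one object, so that only a
;; single special variable remains.
(defstruct print-context
  (base 10) (radix nil) (circle nil) (pretty t) (escape t))

(defvar *print-context* (make-print-context))

(defmacro with-print-context ((context) &body body)
  `(let ((*print-context* ,context)) ,@body))

;; A conforming printer would then consult *print-context* instead of
;; the dozen separate *print-xyz* variables.
```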

> It seems to me that that intent can be accomplished at the 
> user level, and without actually getting rid of *print-xyz*/*read-xyz* 
> (which would break backwards compatibility, which is highly valued in 
> the CL community).
> 
>>>>>> A more careful design could have avoided that.
>>>>> Could have avoided what?  There is no problem here.  You just seem to 
>>>>> have a personal distaste for dynamic scoping.
>>>> That's a good one. :)
>>> I was being quite serious.
>> Then I will leave it as a mystery for you to solve why I still think 
>> that it's a joke.
> 
> Oh, I understand why you thought it was a joke.  Do you understand why I 
> was serious?

Yes.


Pascal

-- 
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
From: Ron Garret
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <rNOSPAMon-1744F0.17191811062006@news.gha.chartermi.net>
In article <···············@individual.net>,
 Pascal Costanza <··@p-cos.net> wrote:

> Ron Garret wrote:
> > In article <···············@individual.net>,
> >  Pascal Costanza <··@p-cos.net> wrote:
> > 
> >>> I cited two examples to support my position: *macroexpand-hook* and 
> >>> *readtable*.  You responded to one example but not the other.  Your 
> >>> response (that it is only for debugging, that the spec includes a 
> >>> disclaimer, yadda yadda yadda) applies only to *macroexpand-hook* but 
> >>> not to *readtable*.
> >> I see. But I have discussed *readtable* somewhere else in this thread, 
> >> pointing out that it actually doesn't work that well. (Sorry, I am 
> >> currently too tired to look that up...)
> > 
> > I don't dispute that.  It is nonetheless a part of Common Lisp as it 
> > currently stands.
> 
> So?

So *combination-hook* can be reasonably expected to cause no more 
problems than *readtable* does.  Since *readtable* does not seem to 
cause unacceptable difficulties there seems to be no reason to believe 
that *combination-hook* would cause unacceptable difficulties.
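
To make the analogy concrete, here is a minimal sketch of the proposed 
protocol; *combination-hook* is my proposal, not part of ANSI CL, and 
the details are assumptions:

```lisp
;; Hypothetical: when the language processor meets a form whose CAR is
;; neither a symbol nor LAMBDA, it calls the hook with the whole form
;; and processes the result instead.  The default signals an error,
;; preserving current behavior:
(defvar *combination-hook*
  (lambda (form)
    (error "~S is not a legal combination." form)))

;; A user could then make ((f ...) args...) mean
;; (funcall (f ...) args...):
(setq *combination-hook*
      (lambda (form)
        `(funcall ,(car form) ,@(cdr form))))
```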

> >>>>> I actually agree with you that all else being equal a less hacky design 
> >>>>> is to be preferred.  But in this case all else is not equal.
> >>>> What is different?
> >>> *combination-hook* provides a superset of the functionality of lambda 
> >>> macros.
> >> No it doesn't. It doesn't provide lexical scope.
> > 
> > Not by itself.  But it is straightforward to implement lambda macros in 
> > terms of *combination-hook*.  The reverse is not possible.
> 
> ...but you have to provide the macro definition as part of your library, 

Why?

> together with the protocol between the macro and *combination-hook*, 
> otherwise it wouldn't work.

Huh?

> (And that's what you do know, so everything is good.)

Huh????

> >>>> Fortunately, ANSI Common Lisp defines that compile-file takes care of 
> >>>> restoring the binding of *readtable* to its previous value - see 
> >>>> http://www.lispworks.com/documentation/HyperSpec/Body/f_cmp_fi.htm#compile-file
> >>> The same safeguard could be put in place for *combination-hook*.
> >> Why only *readtable*, *package* (as specified for compile-file as well), 
> >> and *combination-hook*? Why not a more general mechanism?
> >>
> >> And why not a mechanism that allows you to change things in more 
> >> fine-grained scopes, i.e., within a file?
> > 
> > Like what?
> 
> Very rough sketch:
> 
> (with-readtable (some-readtable)
>    ...) ; <= that code is read with some-readtable
> 
> [Of course this cannot work, because the whole form is first read with 
> the "surrounding" readtable, but you get the idea.]

No, I don't actually.  The READ-EVAL-PRINT loop is called that for a 
reason.  The separation of reading and evaluation is fundamental to 
Lisp.  Have you read this?

http://www.nhplace.com/kent/PS/Ambitious.html


> >>>>> What "more general assumptions" would I need to make?
> >>>> That someone else messes your (parts of your) code up without you having 
> >>>> a handle to protect your code against it.
> >>> And how does *combination-hook* make the situation any worse than it 
> >>> currently is in that regard?
> >>>
> >>> Consider:
> >>>
> >>> (defun make-counter () (let ( (x 0) ) (lambda () (incf x))))
> >>> (list (funcall (make-counter)) (funcall (make-counter)))
> >>>
> >>> I'll bet you US$100 that you cannot list all of the conditions necessary 
> >>> to guarantee that that bit of code returns (1 1).
> >> But it's simple to protect yourself against outside influences:
> >>
> >> (cl:defun make-counter ()
> >>    (cl:let ((x 0))
> >>      (cl:lambda () (cl:incf x))))
> >>
> >> (cl:list (cl:funcall (make-counter)) (cl:funcall (make-counter)))
> > 
> > First, that's cheating.  No one writes code that way.
> 
> I do, occasionally, exactly for the reasons I have mentioned - i.e., to 
> distinguish between "native" and shadowed definitions (in my case, of 
> CLOS operators).
>
> > And second, even 
> > with your new version of the code I'll still make you the same wager.
> 
> You could perform a delete-package or rename-package on (find-package 
> "COMMON-LISP"), but that's gross and completely underspecified, so 
> anything could happen. Changing definitions in the common-lisp package 
> is also undefined behavior according to ANSI Common Lisp, but here I 
> trust that if people do that they know exactly what they do.
> 
> Someone could declare x as special, but I can protect my code from this 
> by placing it in my own package. I would consider code that declares 
> variables in foreign packages as special to be malicious (and I am 
> explicitly not concerned about security issues here, that's a completely 
> different topic).
> 
> Apart from these things, I don't see what else could potentially go 
> wrong here.

Does that mean you accept my wager?

> >>>> There was a proposal, I think by Steve Haflich, to remove the 
> >>>> *print-xyz* and *read-xyz* variables and replace them with two variables 
> >>>> *print-context* and *read-context*. Such contexts could be represented 
> >>>> by objects that contain the various settings. It would then be possible 
> >>>> to define and use predefined configurations in a relatively 
> >>>> straightforward way. (Sorry, but I cannot find the reference at the 
> >>>> moment.)
> >>> I don't see why it is anything other than an elementary exercise to 
> >>> implement this at the user level.
> >> ...because you wouldn't get rid of the original *print-xyz* / *read-xyz* 
> >> variables. (The proposal was about replacing the current approach, not 
> >> about working around it.)
> > 
> > It is important to distinguish between a proposal and its intentions.  
> > The intent (as I understand it) is to simplify control of the I/O 
> > environment.
> 
> No, the intent was to get rid of special variables because they incur a 
> performance overhead in multi-threaded implementations of Common Lisp. 
> At least, the number of predefined special variables shouldn't be too 
> high. (If I remember correctly.)

OK, so the goal is efficiency, not simplicity.  My next comment still 
applies:

> > It seems to me that that intent can be accomplished at the 
> > user level, and without actually getting rid of *print-xyz*/*read-xyz* 
> > (which would break backwards compatibility, which is highly valued in 
> > the CL community).

rg
From: Kent M Pitman
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <ur71v5g4l.fsf@nhplace.com>
Ron Garret <·········@flownet.com> writes:

> In article <···············@individual.net>,
>  Pascal Costanza <··@p-cos.net> wrote:
> 
> > Ron Garret wrote:
> > > In article <···············@individual.net>,
> > >  Pascal Costanza <··@p-cos.net> wrote:
> > > 
> > >> (cl:defun make-counter ()
> > >>    (cl:let ((x 0))
> > >>      (cl:lambda () (cl:incf x))))
> > >>
> > >> (cl:list (cl:funcall (make-counter)) (cl:funcall (make-counter)))
> > > 
> > > First, that's cheating.  No one writes code that way.
> > 
> > I do, occasionally, exactly for the reasons I have mentioned - i.e., to 
> > distinguish between "native" and shadowed definitions (in my case, of 
> > CLOS operators).

Sorry for not replying to this upthread where it belongs, but I haven't
the time to dredge up the right post.

I just wanted to say that in my mind, the very essence of the
difference between "language design" and "tool design" is that
language design is not allowed to presuppose that code doesn't 
have to work well if someone uses a style you don't want.
Tool design does this kind of presupposition all the time.
Language design is based on semantics, and semantics are not
tripped up by unusual syntaxes because they obey stronger laws.

Admittedly, CL doesn't have the strongest semantics in the world.  In
part this is historical, in part presentational, and in part the
absence of anyone on the committee whose passion was doing a
semantics.  But that doesn't mean it doesn't aspire to be well-formed.
And it doesn't mean some of us involved in putting that aspiration
into motion aren't irked when we see a push to move in a direction
that feels like "heuristic semantics" or "hackish implementation suffices
as semantics".

If you want to build another language, by all means do that.
But my preference would be that you didn't do it at CL's expense.
From: Ron Garret
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <rNOSPAMon-BB8B12.22150311062006@news.gha.chartermi.net>
In article <·············@nhplace.com>,
 Kent M Pitman <······@nhplace.com> wrote:

> Ron Garret <·········@flownet.com> writes:
> 
> > In article <···············@individual.net>,
> >  Pascal Costanza <··@p-cos.net> wrote:
> > 
> > > Ron Garret wrote:
> > > > In article <···············@individual.net>,
> > > >  Pascal Costanza <··@p-cos.net> wrote:
> > > > 
> > > >> (cl:defun make-counter ()
> > > >>    (cl:let ((x 0))
> > > >>      (cl:lambda () (cl:incf x))))
> > > >>
> > > >> (cl:list (cl:funcall (make-counter)) (cl:funcall (make-counter)))
> > > > 
> > > > First, that's cheating.  No one writes code that way.
> > > 
> > > I do, occasionally, exactly for the reasons I have mentioned - i.e., to 
> > > distinguish between "native" and shadowed definitions (in my case, of 
> > > CLOS operators).
> 
> Sorry for not replying to this upthread where it belongs, but I haven't
> the time to dredge up the right post.

You need a better newsreader.

> I just wanted to say that in my mind, the very essence of the
> difference between "language design" and "tool design" is that
> language design is not allowed to presuppose that code doesn't 
> have to work well if someone uses a style you don't want.

You mean, like for example, if someone mucks with the readtable?

> Tool design does this kind of presupposition all the time.
> Language design is based on semantics, and semantics are not
> tripped up by unusual syntaxes because they obey stronger laws.

You know, that's pretty funny coming from someone who played a leading 
role in the definition of a language whose semantics are defined in part 
by:
 
http://www.cs.cmu.edu/Groups/AI/html/hyperspec/HyperSpec/Body/sec_2-2.html

> Admittedly, CL doesn't have the strongest semantics in the world.

That's the understatement of the day.

> But that doesn't mean it doesn't aspire to be well-formed.
> And it doesn't mean some of us involved in putting that aspiration
> into motion aren't irked when we see a push to move in a direction
> that feels like "heuristic semantics" or "hackish implementation suffices
> as semantics".

Huh?  I really don't understand what you're referring to here.  The 
design of *combination-hook* is completely isomorphic in every respect 
with the design of *readtable* (and *macroexpand-hook*), so I don't 
see how you could be irked by one and not the other.

> If you want to build another language, by all means do that.
> But my preference would be that you didn't do it at CL's expense.

I'm sorry, but I really don't understand what is bothering you.

rg
From: Kent M Pitman
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <uu06qfnb0.fsf@nhplace.com>
Ron Garret <·········@flownet.com> writes:

> You need a better newsreader.

I find this mildly insulting.  It suggests that I am unaware of the
fact that there is a rich field of choices in the newsreader space.

But the truth is that the particular operation I commented on is not
make or break for me.  Newsreader goodness/badness is not linearly
ordered.  It's a multi-dimensional space in which there's no global
notion of "better" or "worse" such that you should be judging my 
newsreader choice.

This kind of remark just distracts from useful dialog and consumes my
limited available time needlessly.  It responds to an apology that I
preemptively offered in hopes of heading off raised eyebrows about my
responding to the wrong message.  I guess even an apology in advance 
doesn't work.  Next time I'll just not respond to such 
messages.
From: Ron Garret
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <rNOSPAMon-DC6D1B.09494012062006@news.gha.chartermi.net>
In article <·············@nhplace.com>,
 Kent M Pitman <······@nhplace.com> wrote:

> Ron Garret <·········@flownet.com> writes:
> 
> > You need a better newsreader.
> 
> I find this mildly insulting.

Then I apologize.

rg
From: Pascal Costanza
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <4f8kjjF1h0kboU1@individual.net>
Ron Garret wrote:
> In article <···············@individual.net>,
>  Pascal Costanza <··@p-cos.net> wrote:
> 
>> Ron Garret wrote:
>>> In article <···············@individual.net>,
>>>  Pascal Costanza <··@p-cos.net> wrote:
>>>
>>>>> I cited two examples to support my position: *macroexpand-hook* and 
>>>>> *readtable*.  You responded to one example but not the other.  Your 
>>>>> response (that it is only for debugging, that the spec includes a 
>>>>> disclaimer, yadda yadda yadda) applies only to *macroexpand-hook* but 
>>>>> not to *readtable*.
>>>> I see. But I have discussed *readtable* somewhere else in this thread, 
>>>> pointing out that it actually doesn't work that well. (Sorry, I am 
>>>> currently too tired to look that up...)
>>> I don't dispute that.  It is nonetheless a part of Common Lisp as it 
>>> currently stands.
>> So?
> 
> So *combination-hook* can be reasonably expected to cause no more 
> problems than *readtable* does.  Since *readtable* does not seem to 
> cause unacceptable difficulties there seems to be no reason to believe 
> that *combination-hook* would cause unacceptable difficulties.

*readtable* is in effect at read time, while *combination-hook* would be 
in effect at macroexpansion time. Consider the following example:

(defmacro m (...)
   `((some-expansion ...) ...))

The meaning of the expansion of this macro will be determined by the 
binding/value *combination-hook* will have at the time the expansion is 
processed. There is no straightforward way to protect the generated code 
from this.

If there was a lexically scoped version of *combination-hook*, you could 
do this:

(defmacro m (...)
   `(macrolet ((combination-hook (...) ...))
      ((some-expansion ...) ...)))

...but [and this is a point Kent is trying to make] you can only do this 
when you are aware of the existence of combination-hook. Programmers can 
be aware of all the influences on the meaning of code that are specified 
in ANSI Common Lisp, but not of influences in future and/or extended 
versions of Common Lisp. So you are effectively proposing a new language.

The fact that currently, an expression of the form ((some-symbol ...) 
...) is an error (unless some-symbol is 'lambda) doesn't help you much. 
Your extension could render erroneous code meaningful, which potentially 
makes it harder to debug code. I would be interested to learn early on 
that the expansion one of my macros creates is erroneous.

>>>>>>> I actually agree with you that all else being equal a less hacky design 
>>>>>>> is to be preferred.  But in this case all else is not equal.
>>>>>> What is different?
>>>>> *combination-hook* provides a superset of the functionality of lambda 
>>>>> macros.
>>>> No it doesn't. It doesn't provide lexical scope.
>>> Not by itself.  But it is straightforward to implement lambda macros in 
>>> terms of *combination-hook*.  The reverse is not possible.
>> ...but you have to provide the macro definition as part of your library, 
> 
> Why?

So that I can control my own expansions in the way I have sketched above.

>> together with the protocol between the macro and *combination-hook*, 
>> otherwise it wouldn't work.
> 
> Huh?

If you would provide only (the special variable) *combination-hook*, I 
wouldn't be able to protect my code by using a macro wrapper around 
*combination-hook* defined by myself, because your language processor 
would still go exclusively through *combination-hook*.

>>>>>> Fortunately, ANSI Common Lisp defines that compile-file takes care of 
>>>>>> restoring the binding of *readtable* to its previous value - see 
>>>>>> http://www.lispworks.com/documentation/HyperSpec/Body/f_cmp_fi.htm#compile-file
>>>>> The same safeguard could be put in place for *combination-hook*.
>>>> Why only *readtable*, *package* (as specified for compile-file as well), 
>>>> and *combination-hook*? Why not a more general mechanism?
>>>>
>>>> And why not a mechanism that allows you to change things in more 
>>>> fine-grained scopes, i.e., within a file?
>>> Like what?
>> Very rough sketch:
>>
>> (with-readtable (some-readtable)
>>    ...) ; <= that code is read with some-readtable
>>
>> [Of course this cannot work, because the whole form is first read with 
>> the "surrounding" readtable, but you get the idea.]
> 
> No, I don't actually.

Instead of writing this:

#.(setq *readtable* *some-readtable*)

... some code ...

#.(setq *readtable* *old-readtable*)

...together with the code that saves the old readtable before changing 
it, I would simply like to have a lexically scoped change of readtables.
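
What ANSI CL gives you today is only the dynamic, file-scoped version: 
compile-file and load bind *readtable* afresh around processing a file. 
A sketch (*some-readtable* is just the placeholder from above):

```lisp
;; File-scoped (dynamic, not lexical) readtable switch: because
;; COMPILE-FILE and LOAD establish their own bindings of *READTABLE*,
;; this SETQ only affects the remainder of this one file.
(eval-when (:compile-toplevel :load-toplevel :execute)
  (setq *readtable* *some-readtable*))
```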


Pascal

-- 
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
From: Ron Garret
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <rNOSPAMon-2927BE.14315613062006@news.gha.chartermi.net>
In article <···············@individual.net>,
 Pascal Costanza <··@p-cos.net> wrote:

> Ron Garret wrote:
> > In article <···············@individual.net>,
> >  Pascal Costanza <··@p-cos.net> wrote:
> > 
> >> Ron Garret wrote:
> >>> In article <···············@individual.net>,
> >>>  Pascal Costanza <··@p-cos.net> wrote:
> >>>
> >>>>> I cited two examples to support my position: *macroexpand-hook* and 
> >>>>> *readtable*.  You responded to one example but not the other.  Your 
> >>>>> response (that it is only for debugging, that the spec includes a 
> >>>>> disclaimer, yadda yadda yadda) applies only to *macroexpand-hook* but 
> >>>>> not to *readtable*.
> >>>> I see. But I have discussed *readtable* somewhere else in this thread, 
> >>>> pointing out that it actually doesn't work that well. (Sorry, I am 
> >>>> currently too tired to look that up...)
> >>> I don't dispute that.  It is nonetheless a part of Common Lisp as it 
> >>> currently stands.
> >> So?
> > 
> > So *combination-hook* can be reasonably expected to cause no more 
> > problems than *readtable* does.  Since *readtable* does not seem to 
> > cause unacceptable difficulties there seems to be no reason to believe 
> > that *combination-hook* would cause unacceptable difficulties.
> 
> *readtable* is in effect at read time, while *combination-hook* would be 
> in effect at macroexpansion time.

So?  Why is that significant?

> Consider the following example:
> 
> (defmacro m (...)
>    `((some-expansion ...) ...))
> 
> The meaning of the expansion of this macro will be determined by the 
> binding/value *combination-hook* will have at the time the expansion is 
> processed. There is no straightforward way to protect the generated code 
> from this.

Of course there is:

(setq *combination-hook* (lambda (form) (error ...)))

> If there was a lexically scoped version of *combination-hook*, you could 
> do this:
> 
> (defmacro m (...)
>    `(macrolet ((combination-hook (...) ...))
>       ((some-expansion ...) ...)))
> 
> ...but [and this is a point Kent is trying to make] you can only do this 
> when you are aware of the existence of combination-hook. Programmers can 
> be aware of all the influences on the meaning of code that are specified 
> in ANSI Common Lisp, but not of influences in future and/or extended 
> versions of Common Lisp. So you are effectively proposing a new language.

That is true, but it is true for *any* change to the language.  If that 
is to be accepted as a valid argument against *combination-hook* then no 
change to the language would ever be possible (which pretty much seems 
to be the current lamentable state of things).

> The fact that currently, an expression of the form ((some-symbol ...) 
> ...) is an error (unless some-symbol is 'lambda) doesn't help you much. 
> Your extension could render erroneous code meaningful, which potentially 
> makes it harder to debug code.

Do you seriously think that this will be a problem in practice?  Do you 
really think that producing ((...) ...) by accident is a common 
occurrence?

> I would be interested to learn early on 
> that the expansion one of my macros creates is erroneous.

Then do this:

(setq *combination-hook* (lambda (form) (error ...)))

Even better, put it in your init.lisp file.  Better still, build 
yourself an environment where macroexpand, macroexpand-1, compile and 
compile-file are shadowed and invoke the above code before calling the 
corresponding functions in the common-lisp package.

> >>>>>>> I actually agree with you that all else being equal a less hacky 
> >>>>>>> design 
> >>>>>>> is to be preferred.  But in this case all else is not equal.
> >>>>>> What is different?
> >>>>> *combination-hook* provides a superset of the functionality of lambda 
> >>>>> macros.
> >>>> No it doesn't. It doesn't provide lexical scope.
> >>> Not by itself.  But it is straightforward to implement lambda macros in 
> >>> terms of *combination-hook*.  The reverse is not possible.
> >> ...but you have to provide the macro definition as part of your library, 
> > 
> > Why?
> 
> So that I can control my own expansions in the way I have sketched above.

No, I meant why do *I* have to provide the code for define-lambda-macro?  
Why is it not sufficient that *you* can provide that macro if you want 
it?

> >> together with the protocol between the macro and *combination-hook*, 
> >> otherwise it wouldn't work.
> > 
> > Huh?
> 
> If you would provide only (the special variable) *combination-hook*, I 
> wouldn't be able to protect my code by using a macro wrapper around 
> *combination-hook* defined by myself, because your language processor 
> would still go exclusively through *combination-hook*.

No, you would have to dynamically bind (or set) *combination-hook* 
before your own code was macroexpanded.  I do not see why you consider 
this a great burden.

> >>>>>> Fortunately, ANSI Common Lisp defines that compile-file takes care of 
> >>>>>> restoring the binding of *readtable* to its previous value - see 
> >>>>>> http://www.lispworks.com/documentation/HyperSpec/Body/f_cmp_fi.htm#compile-file
> >>>>> The same safeguard could be put in place for *combination-hook*.
> >>>> Why only *readtable*, *package* (as specified for compile-file as well), 
> >>>> and *combination-hook*? Why not a more general mechanism?
> >>>>
> >>>> And why not a mechanism that allows you to change things in more 
> >>>> fine-grained scopes, i.e., within a file?
> >>> Like what?
> >> Very rough sketch:
> >>
> >> (with-readtable (some-readtable)
> >>    ...) ; <= that code is read with some-readtable
> >>
> >> [Of course this cannot work, because the whole form is first read with 
> >> the "surrounding" readtable, but you get the idea.]
> > 
> > No, I don't actually.
> 
> Instead of writing this:
> 
> #.(setq *readtable* *some-readtable*)
> 
> ... some code ...
> 
> #.(setq *readtable* *old-readtable*)
> 
> ...together with the code that saves the old readtable before changing 
> it, I would simply like to have a lexically scoped change of readtables.

Sorry, I still don't get it.  "Lexically scoped readtable" seems to me 
like an oxymoron.  Lexical scope is a static property of Lisp code, 
which is to say, of trees of cons cells.  The readtable is a data 
structure that governs the behavior of a dynamic process that operates 
on character streams.  These two concepts seem to me fundamentally 
incompatible with one another.  I have no idea what "lexically scoped 
readtable" could possibly mean without changing one of the fundamental 
characteristics of Lisp that currently distinguishes it from other 
languages and is the source of much of its power.

Perhaps you could clarify this for me?

rg

P.S. One way to get "lexical scoping" for combinations is to simply 
specify that ((...) ...) macroexpands into some canonical standard form 
like (combo-macro (...) ...).  COMBO-MACRO is a macro that by default is 
defined thusly:

(defmacro combo-macro (args &body body)
  (error "Combinations are by default not legal in Common Lisp"))

but which can be redefined by the user.  Would you find that acceptable?
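
Under that proposal, a user who wants combinations to mean something 
would simply redefine it. A sketch against the hypothetical COMBO-MACRO 
protocol above (none of this is ANSI CL):

```lisp
;; Redefining the hypothetical COMBO-MACRO so that
;; ((lambda (x) (* x x)) 5), which canonically expands into
;; (combo-macro (lambda (x) (* x x)) 5), means
;; (funcall (lambda (x) (* x x)) 5):
(defmacro combo-macro (head &body args)
  `(funcall ,head ,@args))
```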

P.P.S.  Have you taken a look at my lexicon code?  Lexicons offer a very 
nice solution to the combination-hook problem because with a 
first-class top-level lexical environment you can actually have multiple 
simultaneous top-level definitions for the COMBO-MACRO macro.  This is 
one of the most dramatic examples I know of to illustrate the 
shortcomings of using packages as ersatz modules.  You cannot modularize 
COMBO-MACRO using packages precisely because they are a read-time and 
not a macroexpand-time (which is to say not a lexical) concept.

P.P.P.S. You can get something that kind of looks like lexical scoping 
but isn't for COMBO-MACRO by using *package*, e.g.:

(defmacro cl::combo-macro (&rest stuff)
  (if (eq *package* (find-package 'common-lisp))
    (error ...)
    `(,(intern "COMBO-MACRO" *package*) ,@stuff)))

or, slightly less hacky:

(defmacro combo-macro (&rest stuff &environment env)
  (macroexpand `(,(cdr (assoc *package* cl::*combo-macro-alist*)) ,@stuff) env))

(defmacro cl:define-combo-macro (&rest expansion)
  (some-code-that-does-the-right-thing-to cl::*combo-macro-alist* based 
on the current value of *package*))

or maybe:

(defmacro combo-macro (&rest stuff &environment env)
  (macroexpand `(,(get (intern (package-name *package*)
                               (find-package 'org.alu.combo-macro))
                       'org.alu.combo-macro::combo-macro) ,@stuff) env))


Of course, this isn't really lexical scoping, but the situation now is 
no worse than it is with any other Lisp code.

rg
From: Pascal Costanza
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <4f8rqfF1hm65cU1@individual.net>
Ron Garret wrote:
> In article <···············@individual.net>,
>  Pascal Costanza <··@p-cos.net> wrote:
> 
>> Ron Garret wrote:
>>> In article <···············@individual.net>,
>>>  Pascal Costanza <··@p-cos.net> wrote:
>>>
>>>> Ron Garret wrote:
>>>>> In article <···············@individual.net>,
>>>>>  Pascal Costanza <··@p-cos.net> wrote:
>>>>>
>>>>>>> I cited two examples to support my position: *macroexpand-hook* and 
>>>>>>> *readtable*.  You responded to one example but not the other.  Your 
>>>>>>> response (that it is only for debugging, that the spec includes a 
>>>>>>> disclaimer, yadda yadda yadda) applies only to *macroexpand-hook* but 
>>>>>>> not to *readtable*.
>>>>>> I see. But I have discussed *readtable* somewhere else in this thread, 
>>>>>> pointing out that it actually doesn't work that well. (Sorry, I am 
>>>>>> currently too tired to look that up...)
>>>>> I don't dispute that.  It is nonetheless a part of Common Lisp as it 
>>>>> currently stands.
>>>> So?
>>> So *combination-hook* can be reasonably expected to cause no more 
>>> problems than *readtable* does.  Since *readtable* does not seem to 
>>> cause unacceptable difficulties there seems to be no reason to believe 
>>> that *combination-hook* would cause unacceptable difficulties.
>> *readtable* is in effect at read time, while *combination-hook* would be 
>> in effect at macroexpansion time.
> 
> So?  Why is that significant?

I have provided the explanation why that's significant in the rest of my 
posting.

>> Consider the following example:
>>
>> (defmacro m (...)
>>    `((some-expansion ...) ...))
>>
>> The meaning of the expansion of this macro will be determined by the 
>> binding/value *combination-hook* will have at the time the expansion is 
>> processed. There is no straightforward way to protect the generated code 
>> from this.
> 
> Of course there is:
> 
> (setq *combination-hook* (lambda (form) (error ...)))

This doesn't work if I want to use third-party libraries that assume a 
different semantics. In fact, my macro m above could be expanded in code 
that is controlled by that third-party library, and vice versa. There 
must be a way to control this, if you want such a general-purpose feature.

>> If there was a lexically scoped version of *combination-hook*, you could 
>> do this:
>>
>> (defmacro m (...)
>>    `(macrolet ((combination-hook (...) ...))
>>       ((some-expansion ...) ...)))
>>
>> ...but [and this is a point Kent is trying to make] you can only do this 
>> when you are aware of the existence of combination-hook. Programmers can 
>> be aware of all the influences on the meaning of code that are specified 
>> in ANSI Common Lisp, but not of influences in future and/or extended 
>> versions of Common Lisp. So you are effectively proposing a new language.
> 
> That is true, but it is true for *any* change to the language.

No.

>> The fact that currently, an expression of the form ((some-symbol ...) 
>> ...) is an error (unless some-symbol is 'lambda) doesn't help you much. 
>> Your extension could render erroneous code meaningful, which potentially 
>> makes it harder to debug code.
> 
> Do you seriously think that this will be a problem in practice?

Yes.

> Do you 
> really think that producing ((...) ...) by accident is a common 
> occurrence?

The common bugs are not the ones that are hard to find.

>> I would be interested to learn early on 
>> that the expansion one of my macros creates is erroneous.
> 
> Then do this:
> 
> (setq *combination-hook* (lambda (form) (error ...)))
> 
> Even better, put it in your init.lisp file.  Better still, build 
> yourself an environment where macroexpand, macroexpand-1, compile and 
> compile-file are shadowed and invoke the above code before calling the 
> corresponding functions in the common-lisp package.

This wouldn't work. See above.

>>>>>>>>> I actually agree with you that all else being equal a less hacky 
>>>>>>>>> design 
>>>>>>>>> is to be preferred.  But in this case all else is not equal.
>>>>>>>> What is different?
>>>>>>> *combination-hook* provides a superset of the functionality of lambda 
>>>>>>> macros.
>>>>>> No it doesn't. It doesn't provide lexical scope.
>>>>> Not by itself.  But it is straightforward to implement lambda macros in 
>>>>> terms of *combination-hook*.  The reverse is not possible.
>>>> ...but you have to provide the macro definition as part of your library, 
>>> Why?
>> So that I can control my own expansions in the way I have sketched above.
> 
> No, I meant why do *I* have to provide the code for define-lambda-macro?  
> Why is it not sufficient that *you* can provide that macro if you want 
> it?

I wasn't worried about defining define-lambda-macro. I was worried about 
*combination-hook* not being lexically scoped.

>>>> together with the protocol between the macro and *combination-hook*, 
>>>> otherwise it wouldn't work.
>>> Huh?
>> If you would provide only (the special variable) *combination-hook*, I 
>> wouldn't be able to protect my code by using a macro wrapper around 
>> *combination-hook* defined by myself, because your language processor 
>> would still go exclusively through *combination-hook*.
> 
> No, you would have to dynamically bind (or set) *combination-hook* 
> before your own code was macroexpanded.  I do not see why you consider 
> this a great burden.

I explained this quite a few times by now. The problem is with mixing 
and matching code from different libraries.

> P.S. One way to get "lexical scoping" for combinations is to simply 
> specify that ((...) ...) macroexpands into some canonical standard form 
> like (combo-macro (...) ...).  COMBO-MACRO is a macro that by default is 
> defined thusly:
> 
> (defmacro combo-macro (args &body body)
>   (error "Combinations are by default not legal in Common Lisp"))
> 
> but which can be redefined by the user.  Would you find that acceptable?

I guess so.
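To make this concrete: under that proposal, a user opting in could 
redefine the macro to get, say, Scheme-like call semantics (a sketch; 
it assumes the processor rewrites ((op ...) args...) into 
(combo-macro (op ...) args...)):

(defmacro combo-macro (operator &body arguments)
  ;; ((f ...) x y) arrives as (combo-macro (f ...) x y)
  ;; and becomes an explicit funcall.
  `(funcall ,operator ,@arguments))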

> P.P.S.  Have you taken a look at my lexicon code?  Lexicons offer a very 
> nice solution to the combination-hook problem because with first-class 
> top-level lexical environment you can actually have multiple 
> simultaneous top-level definitions for the COMBO-MACRO macro.  This is 
> one of the most dramatic examples I know of to illustrate the 
> shortcomings of using packages as ersatz modules.  You cannot modularize
> COMBO-MACRO using packages precisely because they are a read-time and 
> not a macroexpand-time (which is to say not a lexical) concept.
> 
> P.P.P.S. You can get something that kind of looks like lexical scoping 
> but isn't for COMBO-MACRO by using *package*, e.g.:
[...]

What's wrong with macrolet?


Pascal

-- 
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
From: Ron Garret
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <rNOSPAMon-221245.15302213062006@news.gha.chartermi.net>
In article <···············@individual.net>,
 Pascal Costanza <··@p-cos.net> wrote:

> > P.S. One way to get "lexical scoping" for combinations is to simply 
> > specify that ((...) ...) macroexpands into some canonical standard form 
> > like (combo-macro (...) ...).  COMBO-MACRO is a macro that by default is 
> > defined thusly:
> > 
> > (defmacro combo-macro (args &body body)
> >   (error "Combinations are by default not legal in Common Lisp"))
> > 
> > but which can be redefined by the user.  Would you find that acceptable?
> 
> I guess so.

Hallelujah!

rg
From: Ron Garret
Subject: COMBO-MACRO (was: Re: Relative merits of Lisp-1 vs. Lisp-2?)
Date: 
Message-ID: <rNOSPAMon-599490.15454813062006@news.gha.chartermi.net>
In article <·······························@news.gha.chartermi.net>,
 Ron Garret <·········@flownet.com> wrote:

> In article <···············@individual.net>,
>  Pascal Costanza <··@p-cos.net> wrote:
> 
> > > P.S. One way to get "lexical scoping" for combinations is to simply 
> > > specify that ((...) ...) macroexpands into some canonical standard form 
> > > like (combo-macro (...) ...).  COMBO-MACRO is a macro that by default is 
> > > defined thusly:
> > > 
> > > (defmacro combo-macro (args &body body)
> > >   (error "Combinations are by default not legal in Common Lisp"))
> > > 
> > > but which can be redefined by the user.  Would you find that acceptable?
> > 
> > I guess so.
> 
> Hallelujah!

The next question is: what do we actually call this thing?  I don't 
think COMBO-MACRO is a particularly good choice.  I kinda like:

COMMON-LISP-USER::|((...) ...)|

That is at once suggestive and sufficiently obscure that no one will 
stumble upon it by accident.

Questions:

1.  Is it too obscure?

2.  Should it be in cl-user or its own package?  What should that 
package be called?  (Wow, we could get really perverse and make it be 
|((...) ...)|::|((...) ...)|.  <shudder> )

rg
From: Raffael Cavallaro
Subject: Re: COMBO-MACRO (was: Re: Relative merits of Lisp-1 vs. Lisp-2?)
Date: 
Message-ID: <200606140025288930-raffaelcavallaro@pasdespamsilvousplaitmaccom>
On 2006-06-13 18:45:48 -0400, Ron Garret <·········@flownet.com> said:

> |((...) ...)|::|((...) ...)|

I didn't know that braille identifiers were legal in common lisp.
;^)
From: Ron Garret
Subject: Re: COMBO-MACRO (was: Re: Relative merits of Lisp-1 vs. Lisp-2?)
Date: 
Message-ID: <rNOSPAMon-628776.23235013062006@news.gha.chartermi.net>
In article 
<···································@pasdespamsilvousplaitmaccom>,
 Raffael Cavallaro 
 <················@pas-d'espam-s'il-vous-plait-mac.com> wrote:

> On 2006-06-13 18:45:48 -0400, Ron Garret <·········@flownet.com> said:
> 
> > |((...) ...)|::|((...) ...)|
> 
> I didn't know that braille identifiers were legal in common lisp.
> ;^)

With vertical bars there are few limits to the syntactic havoc you can 
wreak:

? (defmacro |:) := := \/ :\\:| (&rest |...|) `(defun ,@|...|))
|:) := := / :\\:|
? (|:) := := / :\\:| |:.:| (|::| |""|) (cons |::| |""|))
|:.:|
? (|:.:| '|[ () :': :': () ::| '| :. : .:' :'|)
(|[ () :': :': () ::| . | :. : .:' :'|)
? 

And that's standard CL syntax!  Imagine the potential carnage if you 
started mucking with the readtable in antisocial ways!

rg
From: Pascal Costanza
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <4f9ucbF1gks3nU1@individual.net>
Ron Garret wrote:
> In article <···············@individual.net>,
>  Pascal Costanza <··@p-cos.net> wrote:
> 
>>> P.S. One way to get "lexical scoping" for combinations is to simply 
>>> specify that ((...) ...) macroexpands into some canonical standard form 
>>> like (combo-macro (...) ...).  COMBO-MACRO is a macro that by default is 
>>> defined thusly:
>>>
>>> (defmacro combo-macro (args &body body)
>>>   (error "Combinations are by default not legal in Common Lisp"))
>>>
>>> but which can be redefined by the user.  Would you find that acceptable?
>> I guess so.
> 
> Hallelujah!

Note that I think that we already had a better solution. It was roughly 
this:

(defvar *combination-hook*
   (lambda (form environment)
     (error "Combinations not supported in ANSI Common Lisp.")))

(defmacro combination-hook (&whole form &environment env &rest body)
   (declare (ignore body))
   (funcall *combination-hook* form env))

...together with the protocol that the language processor uses the 
macro, not the special variable. This gives you the ability to change 
the semantics for the whole language, and me the ability to protect my 
code against such changes. (I would actually prefer the lambda-macro 
facility which I still think gives you already all you need in practice, 
but you seem to insist on the 'forall' semantics of a combination hook.)
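Under that protocol a macro writer can pin down the meaning of 
generated combinations lexically (a sketch; it assumes the processor 
rewrites ((op ...) args...) into (combination-hook (op ...) args...) 
before macroexpansion):

(defmacro m (f x)
  ;; The local MACROLET shadows the global COMBINATION-HOOK, so this
  ;; expansion keeps funcall semantics no matter what the value of
  ;; *combination-hook* happens to be at expansion time.
  `(macrolet ((combination-hook (operator &rest arguments)
                `(funcall ,operator ,@arguments)))
     ((some-expansion ,f) ,x)))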

The lookup of a 'global' definition of combo-macro via its symbol name 
in *package* is, IMHO, a horrible idea.

But finally note, since it seems to me that it's important to stress 
this again, it's neither you nor me nor anyone else who can 
single-handedly decide whether the proposed extension, or which of its 
design alternatives, is "good", especially not in a vacuum, as is 
currently the case. Whether the extension balances the various forces 
well can only be shown by implementing "real" programs with this 
extension and seeing how well they work in practice.

(I am still wondering why partial Lisp-1 semantics are so important for 
Lexicons. If I understand that concept correctly, it would still be useful 
in a pure Lisp-2 setting.)


Pascal

-- 
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
From: Ron Garret
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <rNOSPAMon-3705CE.08462314062006@news.gha.chartermi.net>
In article <···············@individual.net>,
 Pascal Costanza <··@p-cos.net> wrote:

> Ron Garret wrote:
> > In article <···············@individual.net>,
> >  Pascal Costanza <··@p-cos.net> wrote:
> > 
> >>> P.S. One way to get "lexical scoping" for combinations is to simply 
> >>> specify that ((...) ...) macroexpands into some canonical standard form 
> >>> like (combo-macro (...) ...).  COMBO-MACRO is a macro that by default is 
> >>> defined thusly:
> >>>
> >>> (defmacro combo-macro (args &body body)
> >>>   (error "Combinations are by default not legal in Common Lisp"))
> >>>
> >>> but which can be redefined by the user.  Would you find that acceptable?
> >> I guess so.
> > 
> > Hallelujah!
> 
> Note that I think that we already had a better solution. It was roughly 
> this:
> 
> (defvar *combination-hook*
>    (lambda (form environment)
>      (error "Combinations not supported in ANSI Common Lisp.")))
> 
> (defmacro combination-hook (&whole form &environment env &rest body)
>    (declare (ignore body))
>    (funcall *combination-hook* form env))

OK, that works for me too.

> ...together with the protocol that the language processor uses the 
> macro, not the special variable. This gives you the ability to change 
> the semantics for the whole language, and me the ability to protect my 
> code against such changes.

I can change the whole language if I get to redefine the macro.  But no 
matter, I like the hybrid solution too.

> (I would actually prefer the lambda-macro 
> facility which I still think gives you already all you need in practice, 
> but you seem to insist on the 'forall' semantics of a combination hook.)

Because I want to be able to turn CL into Scheme if I want to :-)


> The lookup of a 'global' definition of combo-macro via its symbol name 
> in *package* is, IMHO, a horrible idea.

Yes, I thought so too.  But looking up a lexical definition of 
combo-macro in the current lexicon is a much better idea :-)

> But finally note, since it seems to me that it's important to stress 
> this again, it's neither you nor me nor anyone else who can 
> single-handedly decide whether the proposed extensions, or which of its 
> design alternatives, are "good", especially not in a vacuum, as is 
> currently the case. Whether the extension balances the various forces 
> well can only be shown by implementing "real" programs with this 
> extension and seeing how well they work in practice.

Implementations for MCL and SBCL will be forthcoming.  I can't make 
heads or tails out of the CLisp code so someone will have to help me 
with that.  (BTW, if anyone wants to quote me a price to do this please 
email me a brief proposal.)

> (I am still wondering why partial Lisp-1 semantics are so important for 
> Lexicons. If I understand that concept correctly, it would still be useful 
> in a pure Lisp-2 setting.)

You are conflating a number of different issues and motivations here.  
Partial Lisp-1 semantics are not "important FOR lexicons."  For a while 
I thought that I needed combo-hook (or lambda macros) to implement ^FOO 
syntax because I was stuck on the assumption that ^ ought to be a reader 
macro.  But it turns out that I can do it a different way (which turns 
out to have some ancillary benefits that surprised me, so thanks for not 
giving in too quickly :-)

You are correct when you say that lexicons are useful in a pure Lisp-2 
setting.  It is possible (I'm fairly certain -- I haven't actually tried 
it because you have to shadow all the binding forms, which is a royal 
pain) to introduce lexicons into CL at the user level in a way that is 
more or less completely transparent.  But one of the features of 
lexicons is that you can use them to implement both Lisp-1 and Lisp-2 
semantics in a very straightforward way.  In my original implementation 
(when they were still called locales) you could make a lexicon be either 
a Lisp-1 lexicon or a Lisp-2 lexicon.  In the current implementation the 
blend is even more seamless.  If you bind a variable FOO in a lexicon 
you actually bind two variables: FOO and ^FOO.  FOO is a Lisp-1 variable 
and ^FOO is a pseudo-Lisp-2 variable.  (I call it a pseudo-Lisp-2 
variable because its two bindings are local and top-level, not value and 
function.  But it gives you the same functionality w.r.t. macros as long 
as the value of the top-level binding is a function.)  You can also make 
"real" Lisp-2 variables with separate function and value namespaces.  
(That's what the original implementation had.)

So you see, lexicons and combo-hook are two largely orthogonal issues.  
But even now that I know that I don't need combo-hook to implement 
lexicons, I still want combo-hook.  Figuring out why is left as an 
exercise.  :-)

rg
From: Kent M Pitman
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <uejxo5rvy.fsf@nhplace.com>
Ron Garret <·········@flownet.com> writes:

> In article <···············@individual.net>,
>  Pascal Costanza <··@p-cos.net> wrote:
> 
> > Ron Garret wrote:
...
> > Note that I think that we already had a better solution. It was roughly 
> > this:
> > 
> > (defvar *combination-hook*
> >    (lambda (form environment)
> >      (error "Combinations not supported in ANSI Common Lisp.")))
> > 
> > (defmacro combination-hook (&whole form &environment env &rest body)
> >    (declare (ignore body))
> >    (funcall *combination-hook* form env))
> 
> OK, that works for me too.

For the record, not that at this point I expect to change your mind, Ron:

Not for me.  This is exactly backward of what one wants since it uses 
lexicality to bootstrap dynamic effect.  This should be done lexically so
that each program module can be proven correct independently.

At minimum, you need to have a global macro named something like
IMPLICIT-CALL (because combination-hook is a dumb macro name, IMO--macro
names should be verbish in nature, not nounish).  It needs to be an error
to redefine globally because if anyone does so they will risk redefining
someone else's global definition; you could define it to be ok to MACROLET
it within a lexical contour since then you'd be able to show that the
code that sees it knows what it's doing.

If you're going to do something functional, it should be lexical.
Otherwise, the most you should do is *enable-combinations* with a T/NIL
value, letting people either opt in or not.  Saying that people disagree
on the semantics means you should talk more about the semantics, not that
you should enable a Tower of Babel by creating a hook for that.
From: Ron Garret
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <rNOSPAMon-21CFD3.02072917062006@news.gha.chartermi.net>
In article <·············@nhplace.com>,
 Kent M Pitman <······@nhplace.com> wrote:

> Ron Garret <·········@flownet.com> writes:
> 
> > In article <···············@individual.net>,
> >  Pascal Costanza <··@p-cos.net> wrote:
> > 
> > > Ron Garret wrote:
> ...
> > > Note that I think that we already had a better solution. It was roughly 
> > > this:
> > > 
> > > (defvar *combination-hook*
> > >    (lambda (form environment)
> > >      (error "Combinations not supported in ANSI Common Lisp.")))
> > > 
> > > (defmacro combination-hook (&whole form &environment env &rest body)
> > >    (declare (ignore body))
> > >    (funcall *combination-hook* form env))
> > 
> > OK, that works for me too.
> 
> For the record, not that at this point I expect to change your mind, Ron:
> 
> Not for me.  This is exactly backward of what one wants since it uses 
> lexicality to bootstrap dynamic effect.  This should be done lexically so
> that each program module can be proven correct independently.
> 
> At minimum, you need to have a global macro named something like
> IMPLICIT-CALL (because combination-hook is a dumb macro name, IMO--macro
> names should be verbish in nature, not nounish).  It needs to be an error
> to redefine globally because if anyone does so they will risk redefining
> someone else's global definition; you could define it to be ok to MACROLET
> it within a lexical contour since then you'd be able to show that the
> code that sees it knows what it's doing.

Your remarks are most puzzling.

First you say that what Pascal and I agreed to is "exactly backward."  
Then you say that we "need to have a global macro".  But that is exactly 
what we do have.  Then you say the name we chose is stupid.  Please take 
a look at this message which I posted several days ago:

http://groups.google.com/group/comp.lang.lisp/msg/f8cccda083f304ef

where I pretty much say exactly that.

It seems to me that the only thing you actually disagree with us on is 
whether or not the combo-hook macro (whatever it ends up being called) 
ought to be globally redefinable by the user.  That hardly seems like 
"exactly backwards" to me.

> If you're going to do something functional, it should be lexical.

I have no idea what you mean by "something functional" here.  This is a 
macro-expansion-time thing we're talking about here.  There are no 
functions, only S-expressions.

I'll also note that the problem with making combinations lexical is that 
CL has no top-level lexical scopes.  (It only has packages, but that 
only helps if there's an actual symbol being read, which in this case 
there is not.)  As I have noted elsewhere, adding lexicons to the 
language would solve this problem, but I'm having enough trouble trying 
to get agreement on this trivial little thing that I think trying to get 
lexicons added would be quite hopeless.

> Otherwise, the most you should do is *enable-combinations* with a T/NIL
> value, letting people either opt in or not.  Saying that people disagree
> on the semantics means you should talk more about the semantics,

With whom?

> not that
> you should enable a Tower of Babel by creating a hook for that.

Once again, who are you, and what have you done with Kent Pitman?

Enabling a Tower of Babel is the whole point of Lisp.  That's what being 
a programmable programming language is all about.  That's why we have 
macros and symbol macros and the readtable (to say nothing of all those 
little irritating silly parentheses).

rg
From: Kent M Pitman
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <uirna58pk.fsf@nhplace.com>
Ron Garret <·········@flownet.com> writes:

> In article <···············@individual.net>,
>  Pascal Costanza <··@p-cos.net> wrote:
> 
> > Ron Garret wrote:
...
> > > *combination-hook* provides a superset of the functionality of lambda 
> > > macros.
> > 
> > No it doesn't. It doesn't provide lexical scope.
> 
> Not by itself.  But it is straightforward to implement lambda macros in 
> terms of *combination-hook*.  The reverse is not possible.

In the latter days of Maclisp, it was effectively cannibalized for the
purpose of prototyping its successor(s), which was primarily expected
to be NIL (the so-called "New Implementation of Lisp" being developed
for the VAX) at the time, though Common Lisp also came along shortly
after and ended up being the winner.

A number of options were provided for "customizing" the language, but they
came to be properly seen, as Pascal astutely observes here, as big-switch
ways of turning the language into an implementation substrate for another
language, not really as customizations of the current language.

It's strictly true what you say Ron that it would implement the feature you
want, but it does so at the expense of any other uses of that switch, since
such uses are likely to be incompatible.  Consequently, the nightmare we
had in the late 1970's with libraries only loading if they were loaded with
other "culturally compatible" libraries is likely.

In case anyone's curious, since it's ill-documented (and would be even if
I'd get off my tail and get the Maclisp manual online, as I keep being on
the verge of doing if I weren't so busy with work), the latter-day features 
included:

 * a switch to allow hunks [a sort of n-ary cons with (cxr n hunk) being
   the accessor and hunks being like a "cons with a middle" so that the
   operations car and cdr were slot 0 and 1, but other slots were after
   allocated in what visibly seemed like "in between" (even though 
   internally they were "at the start")] being able to masquerade as EITHER
   a cons (that is, the "middle" - slots other than 0 and 1 - was invisible
   to most programs and they answered true to consp) or else as a special
   opaque type that had a "handler" so that we could experiment with 
   class systems.

 * strings were either symbols with a special property (sometimes called
   fake strings) or were specially hacked things that stored contents
   rationally (instead of in "pname" (symbol-name) format, which was  
   in that language a linked list of 36-bit integers [it was a 36-bit
   machine], each integer having 5 7-bit ascii characters in it, and access
   time growing badly as the strings got long). 

I think there were other features like this that were incompatible, too,
but those are the big ones.  People diverged into camps who liked one or
another paradigm, but you couldn't co-load all of the support.  It's a bad
plan for a unified community.

Almost better, even though it seems more special purpose, to have 
*open-open-funcall-mode* [and to somehow explain away lambda and setf
as exceptions] because at least an implementation with that isn't tempted
to say "oh, there are OTHER things you could do with *combination-hook*"
and then to develop warring factions.
From: Ron Garret
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <rNOSPAMon-CD3A3E.16574211062006@news.gha.chartermi.net>
In article <·············@nhplace.com>,
 Kent M Pitman <······@nhplace.com> wrote:

> A number of options were provided for "customizing" the language, but they
> came to be properly seen, as Pascal astutely observes here, as big-switch
> ways of turning the language into an implementation substrate for another
> language, not really as customizations of the current language.

Sure.  So?  Isn't that (being an implementation substrate for other 
languages) one of the things CL is supposed to be for?

> It's strictly true what you say Ron that it would implement the feature you
> want, but it does so at the expense of any other uses of that switch, since
> such uses are likely to be incompatible.  Consequently, the nightmare we
> had in the late 1970's with libraries only loading if they were loaded with
> other "culturally compatible" libraries is likely.

No.  There are certainly ways to shoot yourself in the foot with 
*combination-hook*, but it takes only a tiny modicum of programmer 
discipline to avoid them.  The situation is no different from all the 
other global variables controlling the behavior of various aspects of 
the system now.

> In case anyone's curious, since it's ill-documented (and would be even if
> I'd get off my tail and get the Maclisp manual online, as I keep being on
> the verge of doing if I weren't so busy with work), the latter-day features 
> included:
> 
>  * a switch to allow hunks [a sort of n-ary cons with (cxr n hunk) being
>    the accessor and hunks being like a "cons with a middle" so that the
>    operations car and cdr were slot 0 and 1, but other slots were after
>    allocated in what visibly seemed like "in between" (even though 
>    internally they were "at the start"] being able to masquerade as EITHER
>    a cons (that is, the "middle" - slots other than 0 and 1 - was invisible
>    to most programs and they answered true to consp) or else as a special
>    opaque type that had a "handler" so that we could experiment with 
>    class systems.
> 
>  * strings were either symbols with a special property (sometimes called
>    fake strings) or were specially hacked things that stored contents
>    rationally (instead of in "pname" (symbol-name) format, which was  
>    in that language a linked list of 36-bit integers [it was a 36-bit
>    machine], each integer having 5 7-bit ascii characters in it, and access
>    time growing badly as the strings got long). 
> 
> I think there were other features like this that were incompatible, too,
> but those are the big ones.  People diverged into camps who liked one or
> another paradigm, but you couldn't co-load all of the support.  It's a bad
> plan for a unified community.

I don't understand this at all.  Leaving aside my impression that these 
things are just incredibly bad ideas (they seem to be just 
non-transparent efficiency hacks), how do they result in 
incompatibilities?

> Almost better, even though it seems more special purpose, to have 
> *open-open-funcall-mode* [and to somehow explain away lambda and setf
> as exceptions] because at least an implementation with that isn't tempted
> to say "oh, there are OTHER things you could do with *combination-hook*"
> and then to develop warring factions.

I think you're being paranoid.  I don't see a lot of wars breaking out 
over the use of *readtable*.  Why should *combination-hook* be any 
different?

rg
From: Kent M Pitman
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <uver75gds.fsf@nhplace.com>
Ron Garret <·········@flownet.com> writes:

> In article <·············@nhplace.com>,
>  Kent M Pitman <······@nhplace.com> wrote:
> 
> > A number of options were provided for "customizing" the language, but they
> > came to be properly seen, as Pascal astutely observes here, as big-switch
> > ways of turning the language into an implementation substrate for another
> > language, not really as customizations of the current language.
> 
> Sure.  So?  Isn't that (being an implementation substrate for other 
> languages) one of the things CL is supposed to be for?

Not at the expense of program combination, no.

Common Lisp's design just about everywhere presupposes the desire to
be able to load cooperative modules that were not written with
specific knowledge of which would be useful to which others.

That is, Common Lisp's design was a SPECIFIC reaction to the sudden
shift from small-address-space situations like Maclisp served (where
you couldn't load much into a Lisp at all just because 256K words was
darned small and we all knew it, we just also knew that a megaword of
memory (called a "moby") cost about a million 1970's-dollars (the
equivalent of even more money today), so we made reasonable
compromises to tolerate the small size) to the large-address-space
situation where you could suddenly get several megabytes for small numbers
of thousands or tens of thousands of dollars.  That shift meant that
it was possible to co-load various large applications that had been
independently written and they could hope to plug and play together.

If you go back and study the history, you'll remember that the reason
Common Lisp came about at all was that DARPA was tired of funding 
incompatible efforts and it wanted to build applications where the parts
were developed separately but could be put together.  For this, it was
leaning toward Interlisp until the Maclisp-subfamily of dialects said
"we're really all the same, and we'll prove it by joining to be Common Lisp".
That design was a strong move to overcome DARPA's predisposition to accept
Interlisp, which had the larger installed base.  

DARPA would never have bought Common Lisp if we'd said "it will have
many features, but one of them will not be that applications can expect
to plug and play without prior cooperative communication to assure they
aren't stepping on each others' toes."

Sections like

  11.1.2.1.1 Constraints on the COMMON-LISP Package for
             Conforming Implementations
     http://www.lispworks.com/documentation/HyperSpec/Body/11_abaa.htm

  11.1.2.1.2 Constraints on the COMMON-LISP Package for 
             Conforming Programs
     http://www.lispworks.com/documentation/HyperSpec/Body/11_abab.htm

are testimony to the intent of the vendors not to have one program step
on another.

That doesn't mean that we didn't intend to be an implementation
substrate.  It means we didn't intend to be ONLY an implementation
substrate, which is a lot of what Maclisp had started to become--a
veritable Tower of Babel where there were pockets of compatibility,
but a number of packages were not cross-compatible.

The Lisp Machine's invention of the package system refined the clumsy
notion of separated obarrays in Maclisp.  Obarrays gave the technical 
power of a package, but were so clumsy to use that almost no one used them
other than the compiler itself, which had a special obarray that separated
it from user code so that executing user code in the compiler didn't break
the compiler.  But the fact that there was no syntax for them meant they
really weren't usable by regular users.

Maclisp was also not really lexical in any serious way (other than the
compiler's propensity for simply ignoring the dynamic "semantics" of 
variables in the interpreter).  Interpreted variable bindings had dynamic
scope, but compiled bindings had sort-of-lexical scope (other than that
closures didn't get created except in a hodgepodge of specialized
circumstances).

Maclisp was historically important for many things it did, but perhaps one
of its lasting contributions was the number of errors made in its design
that led us to do things better in the future...

... if we remember the lessons.

Otherwise, well, I suppose the obligatory conclusion is:

   Those who don't learn from the lessons of (programming 
   language design) history are doomed to reinvent Maclisp.

> 
> > It's strictly true what you say Ron that it would implement the feature you
> > want, but it does so at the expense of any other uses of that switch, since
> > such uses are likely to be incompatible.  Consequently, the nightmare we
> > had in the late 1970's with libraries only loading if they were loaded with
> > other "culturally compatible" libraries is likely.
> 
> No.  There are certainly ways to shoot yourself in the foot with 
> *combination-hook*, but it takes only a tiny modicum of programmer 
> discipline to avoid them.  The situation is no different from all the 
> other global variables controlling the behavior of various aspects of 
> the system now.

The other variables you cite are problematic at best already, but are
marginally justified because everyone knows about them.  They are there
in the core language, and anyone learning the language knows to bind 
things like *print-level*, *read-eval*, etc. if they run an application
that might be affected.

The same can simply not be said about an implementation extension that
is equivalently powerful.  A portable application will not know to protect
itself.

This situation of independent module design in CL is inevitably 
characterized by the Prisoner's Dilemma.  It is a critical aspect of
that dilemma that
 * the prisoners do not communicate
 * the prisoners have knowledge of all the choices
 * the prisoners know the scoring effect of them making choices in isolation
   and the scoring effect of others making choices in isolation

The CL community shares both rules and style guidelines that embrace and
exploit reasoning in the style of the Prisoner's Dilemma as applied to
decisions about how to affect global variables, how to affect syntax, how
to do export of symbols, etc.

In the case of an implementation extension not shared by the entire
community, you have non-communication among prisoners but incomplete
knowledge about choices and scoring, so unprepared applications are
doomed to be improperly designed because they cannot protect
themselves.

If anything, I'm _more_ comfortable with *read-eval* and *print-length*
being characterized as an "attractive nuisance"
 [ref: http://en.wikipedia.org/wiki/Attractive_nuisance ]
and having people be more conservative about their uses than saying that
these are existence proofs that it is ok to not worry about the 
attractive nuisance potential of something like a *combination-hook*.

There is certainly value in these special variables, however I think it's
still important to stay on guard for their Dark Side and to develop good
strong programming guidelines that work to diminish the ill effects.

While it might be possible for you to develop a theory of how this could
be done for *combination-hook*, you haven't done so, so I cannot critique
your theory.  It is not reasonable in my theory of public debate for you
to claim that the mere possibility that you could develop such a theory of
good style is enough to trump my concern that you might not.  That's like
saying it's fine to build nuclear power plants without regulation because
I can't prove that every way you could do it will lead to a problem.  It's
sufficient in my book that there are reasonably foreseeable ways you could get
to a problem and beyond that it's your burden in wanting to promote the idea
to show that these cannot happen.  You've not done that.

> > In case anyone's curious, since it's ill-documented (and would be even if
> > I'd get off my tail and get the Maclisp manual online, as I keep being on
> > the verge of doing if I weren't so busy with work), the latter-day features 
> > included:
> > 
> >  * a switch to allow hunks [a sort of n-ary cons with (cxr n hunk) being
> >    the accessor and hunks being like a "cons with a middle" so that the
> >    operations car and cdr were slot 0 and 1, but other slots were
> >    allocated afterward in what visibly seemed like "in between" (even though
> >    internally they were "at the start")] being able to masquerade as EITHER
> >    a cons (that is, the "middle" - slots other than 0 and 1 - was invisible
> >    to most programs and they answered true to consp) or else as a special
> >    opaque type that had a "handler" so that we could experiment with 
> >    class systems.
> > 
> >  * strings were either symbols with a special property (sometimes called
> >    fake strings) or were specially hacked things that stored contents
> >    rationally (instead of in "pname" (symbol-name) format, which was  
> >    in that language a linked list of 36-bit integers [it was a 36-bit
> >    machine], each integer having 5 7-bit ascii characters in it, and access
> >    time growing badly as the strings got long). 
> > 
> > I think there were other features like this that were incompatible, too,
> > but those are the big ones.  People diverged into camps who liked one or
> > another paradigm, but you couldn't co-load all of the support.  It's a bad
> > plan for a unified community.
> 
> I don't understand this at all.  Leaving aside my impression that these 
> things are just incredibly bad ideas (they seem to be just 
> non-transparent efficiency hacks), how do they result in 
> incompatibilities?

By creating situations that were mutually incompatible when loaded together.

That's probably an overbrief answer.  But I don't have time for something
better.

> > Almost better, even though it seems more special purpose, to have 
> > *open-open-funcall-mode* [and to somehow explain away lambda and setf
> > as exceptions] because at least an implementation with that isn't tempted
> > to say "oh, there are OTHER things you could do with *combination-hook*
> > and then to develop warring factions.
> 
> I think you're being paranoid.  I don't see a lot of wars breaking out 
> over the use of *readtable*.  Why should *combination-hook* be any 
> different?

First, *readtable* is bound per-file. That creates some partial boundary
against problems.

Second, *readtable* is indeed overpowerful and people routinely talk about
the issue of changes to the readtable from one application infecting another.
So it's not a good model of safety.

Third, *readtable* affects readtime, not runtime, and it's
traditionally easier to isolate that.  It's rare (not impossible, just
rare) for a reader for one module to need to call code from another
independently-written module.  A huge percentage of all custom reader
applications assume that they will only see or call code from their
own module while in recursive calls, so that doing (let ((*readtable*
*my-readtable*)) (read)) is often extremely safe.
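Concretely, the module-local readtable idiom described above can be sketched like this (the readtable, the bracket syntax, and the function names are illustrative inventions, not anything from this thread):

```lisp
;; Sketch of a module keeping its read macros in a private readtable,
;; assuming hypothetical names *MY-READTABLE* and READ-MY-SYNTAX.
(defvar *my-readtable* (copy-readtable nil))

;; Install a [ ... ] list syntax only in the private readtable.
(set-macro-character #\[
                     (lambda (stream char)
                       (declare (ignore char))
                       (read-delimited-list #\] stream t))
                     nil
                     *my-readtable*)
(set-macro-character #\] (get-macro-character #\)) nil *my-readtable*)

;; Reads with the custom syntax active, without disturbing any other
;; module's use of the global *READTABLE*.
(defun read-my-syntax (stream)
  (let ((*readtable* *my-readtable*))
    (read stream)))
```

The safety Kent describes comes from the binding being dynamic and local: once READ-MY-SYNTAX returns, the ambient readtable is untouched.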

But, in fact, speaking now to the exceptions (and elaborating on my
second point), it's common for people to have problems where one
readtable contains some readmacros you need and another contains
others because indeed the modularity that CL promotes is not 100%
optimal here.  You might sometimes wish for something more like a
dynamic chain (a search list of sorts) but then applications would
have to be debugged on a per-application basis--it wouldn't be
possible to prove that an individual module works without knowing that
others were not involved.  And that's where this whole discussion
began--the desire to avoid that.
From: Ron Garret
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <rNOSPAMon-D11CBD.21590811062006@news.gha.chartermi.net>
In article <·············@nhplace.com>,
 Kent M Pitman <······@nhplace.com> wrote:

> Ron Garret <·········@flownet.com> writes:
> 
> > In article <·············@nhplace.com>,
> >  Kent M Pitman <······@nhplace.com> wrote:
> > 
> > > A number of options were provided for "customizing" the language, but 
> > > they
> > > came to be properly seen, as Pascal astutely observes here, as big-switch
> > > ways of turning the language into an implementation substrate for another
> > > language, not really as customizations of the current language.
> > 
> > Sure.  So?  Isn't that (being an implementation substrate for other 
> > languages) one of the things CL is supposed to be for?
> 
> Not at the expense of program combination, no.

Then why is *readtable* part of the language?


> DARPA would never have bought Common Lisp if we'd said "it will have
> many features, but one of them will not be that applications can expect
> to plug and play without prior cooperative communication to assure they
> aren't stepping on each others' toes."

But that is (was) in fact the state of affairs.  (Which is to say, that 
applications *could* have stepped on each others toes, and still can, 
but mostly don't because people by and large avoid doing the things that 
cause programs to step on each other's toes.)

BTW, at the risk of stating the obvious, the economic situation w.r.t. 
CL has changed since 1985.  What DARPA would or wouldn't have bought 20 
years ago perhaps ought not be our most pressing concern today.

> Sections like
> 
>   11.1.2.1.1 Constraints on the COMMON-LISP Package for
>              Conforming Implementations
>      http://www.lispworks.com/documentation/HyperSpec/Body/11_abaa.htm
> 
>   11.1.2.1.2 Constraints on the COMMON-LISP Package for 
>              Conforming Programs
>      http://www.lispworks.com/documentation/HyperSpec/Body/11_abab.htm
> 
> are testimony to the intent of the vendors not to have one program step
> on another.

And the existence of *readtable* (to say nothing of the absence of 
first-class top-level global environments a.k.a. modules/lexicons) is 
testimony to their failure to entirely achieve this goal.


> That doesn't mean that we didn't intend to be an implementation
> substrate.  It means we didn't intend to be ONLY an implementation
> substrate, which is a lot of what Maclisp had started to become--a
> veritable Tower of Babel where there were pockets of compatibility,
> but a number of packages were not cross-compatible.
> 
> The Lisp Machine's invention of the package system refined the clumsy
> notion of separated obarrays in Maclisp.  Obarrays gave the technical 
> power of a package, but were so clumsy to use that almost no one used them
> other than the compiler itself, which had a special obarray that separated
> it from user code so that executing user code in the compiler didn't break
> the compiler.  But the fact that there was no syntax for them meant they
> really weren't usable by regular users.
> 
> Maclisp was also not really lexical in any serious way (other than the
> compiler's propensity for simply ignoring the dynamic "semantics" of 
> variables in the interpreter).  Interpreted variable bindings had dynamic
> scope, but compiled bindings had sort-of-lexical scope (other than that
> closures didn't get created except in a hodgepodge of specialized
> circumstances).
> 
> Maclisp was historically important for many things it did, but perhaps one
> of its lasting contributions was the number of errors made in its design
> that led us to do things better in the future...
> 
> ... if we remember the lessons.
> 
> Otherwise, well, I suppose the obligatory conclusion is:
> 
>    Those who don't learn from the lessons of (programming 
>    language design) history are doomed to reinvent Maclisp.

You know, your argument amounts to this: "MacLisp did a lot of stupid 
things.  Therefore *combination-hook* is a bad idea."  But you haven't 
actually said how *combination-hook* is actually like any of the bad 
things that MacLisp did.


> > > It's strictly true what you say Ron that it would implement the feature 
> > > you
> > > want, but it does so at the expense of any other uses of that switch, 
> > > since
> > > such uses are likely to be incompatible.  Consequently, the nightmare we
> > > had in the late 1970's with libraries only loading if they were loaded 
> > > with
> > > other "culturally compatible" libraries is likely.
> > 
> > No.  There are certainly ways to shoot yourself in the foot with 
> > *combination-hook*, but it takes only a tiny modicum of programmer 
> > discipline to avoid them.  The situation is no different from all the 
> > other global variables controlling the behavior of various aspects of 
> > the system now.
> 
> The other variables you cite are problematic at best already, but are
> marginally justified because everyone knows about them.  They are there
> in the core language, and anyone learning the language knows to bind 
> things like *print-level*, *read-eval*, etc. if they run an application
> that might be affected.
> 
> The same can simply not be said about an implementation extension that
> is equivalently powerful.  A portable application will not know to protect
> itself.

Hogwash.  As I have pointed out repeatedly, there is a very simple way 
that an application can "protect" itself against any mischief that might 
be wrought by *combination-hook*: Don't use ((...) ...) syntax.  Since 
this syntax is currently an error, all error-free CL code automatically 
does this anyway.

The same cannot be said for *readtable*.
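To make the shape of the disputed proposal concrete: nothing below is standard Common Lisp; *COMBINATION-HOOK* and its protocol exist only as the extension debated in this thread, and the sketch assumes one plausible protocol (a function from head form and argument forms to a replacement form):

```lisp
;; Hypothetical sketch only.  The idea under debate is that a form
;; ((f ...) arg ...), currently a syntax error in CL, would be handed
;; to a user-supplied hook at macroexpansion time.
(defvar *combination-hook* nil)

;; One possible hook: the Scheme-style reading, where the head form is
;; evaluated as a function expression and FUNCALLed.
(defun scheme-style-combination (head args)
  `(funcall ,head ,@args))

;; With (setf *combination-hook* #'scheme-style-combination), a
;; cooperating compiler would rewrite
;;   ((compose #'1+ #'1+) 3)
;; into
;;   (funcall (compose #'1+ #'1+) 3)
```

Kent's objection is precisely that two libraries could install incompatible hooks; Ron's reply is that code which never writes ((...) ...) forms is unaffected either way.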


> If anything, I'm _more_ comfortable with *read-eval* and *print-length*
> being characterized as an "attractive nuisance"
>  [ref: http://en.wikipedia.org/wiki/Attractive_nuisance ]
> and having people be more conservative about their uses than saying that
> these are existence proofs that it is ok to not worry about the 
> attractive nuisance potential of something like a *combination-hook*.
> 
> There is certainly value in these special variables, however I think it's
> still important to stay on guard for their Dark Side and to develop good
> strong programming guidelines that work to diminish the ill effects.
> 
> While it might be possible for you to develop a theory of how this could
> be done for *combination-hook*, you haven't done so

I certainly have.  See above.

> so I cannot critique your theory.

That may well be, but not for the reason that you state.

>  It is not reasonable in my theory of public debate for you
> to claim that the mere possibility that you could develop such a theory of
> good style is enough to trump my concern that you might not.

Well, in MY theory of public debate it is not reasonable for YOU to 
simply ignore a significant portion of my position and then claim that 
my position is lacking that portion that you have ignored.


> > I don't understand this at all.  Leaving aside my impression that these 
> > things are just incredibly bad ideas (they seem to be just 
> > non-transparent efficiency hacks), how do they result in 
> > incompatibilities?
> 
> By creating situations that were mutually incompatible when loaded together.
> 
> That's probably an overbrief answer.

It's not so much the brevity as the utter lack of content.  There are a 
lot of things that were done back in the old days that in retrospect 
were just Bad Programming Practice.  I strongly suspect that the 
incompatibilities were simply the result of someone doing something 
obviously stupid, and not an inherent conflict between these features 
that you cite.


> But I don't have time for something better.

That's a pretty lame excuse.  It's pretty unfair of you to cite an 
example in support of your position and then fail to explain how it 
actually supports your position.  It leaves people with the impression 
that you have actually supported your position when in fact you have not.


> > > Almost better, even though it seems more special purpose, to have 
> > > *open-open-funcall-mode* [and to somehow explain away lambda and setf
> > > as exceptions] because at least an implementation with that isn't tempted
> > > to say "oh, there are OTHER things you could do with *combination-hook*
> > > and then to develop warring factions.
> > 
> > I think you're being paranoid.  I don't see a lot of wars breaking out 
> > over the use of *readtable*.  Why should *combination-hook* be any 
> > different?
> 
> First, *readtable* is bound per-file. That creates some partial boundary
> against problems.

Fine, let *combination-hook* be bound per file.  Or make it a 
(combination-hook) macro or *combination-hook* symbol-macro to make it 
lexically scoped.


> Second, *readtable* is indeed overpowerful and people routinely talk about
> the issue of changes to the readtable from one application infecting another.
> So it's not a good model of safety.

I do not dispute that.  It is nonetheless the current state of things, 
and it has served the language reasonably well.  I don't think 
*readtable* tops very many things-I'd-like-to-see-changed-in-CL lists.


> Third, *readtable* affects readtime, not runtime, and it's
> traditionally easier to isolate that.

OK, who are you really, and what have you done with Kent Pitman?

First, *combination-hook* affects macroexpansion time, not runtime.

Second, being able to invoke the Lisp reader to parse S-expressions at 
run time is one of the things that is often cited as a feature that 
distinguishes Lisp from other languages.  The fact that read time is NOT 
isolated from run time is one of the hallmarks of Lisp.

If you were really Kent Pitman I would not have to explain these things 
to you.


> It's rare (not impossible, just
> rare) for a reader for one module to need to call code from another
> independently-written module.  A huge percentage of all custom reader
> applications assume that they will only see or call code from their
> own module while in recursive calls, so that doing (let ((*readtable*
> *my-readtable*)) (read)) is often extremely safe.

And why would you expect that the (let ((*combination-hook* ...)) 
(compile-file ...)) idiom will not serve just as well?


> But, in fact, speaking now to the exceptions (and elaborating on my
> second point), it's common for people to have problems where one
> readtable contains some readmacros you need and another contains
> others because indeed the modularity that CL promotes is not 100%
> optimal here.  You might sometimes wish for something more like a
> dynamic chain (a search list of sorts) but then applications would
> have to be debugged on a per-application basis--it wouldn't be
> possible to prove that an individual module works without knowing that
> others were not involved.  And that's where this whole discussion
> began--the desire to avoid that.

I don't understand what you mean by a "dynamic chain" in this context.  
But it doesn't really matter.  What bothers me about your position is 
the way you flip-flop about whether or not CL's design is "good enough."  
You are perennially one of the leading resistors of change (I have never 
once seen you support a proposed change, nor offer one of your own), 
implying that you think CL in its current form is "good enough."  But 
when someone proposes a design that has all the same problems that an 
existing feature has, you oppose it on the grounds that it has those 
problems, and it is therefore NOT good enough.  I find that lack of 
consistency in your quality metric most frustrating.

rg
From: Kent M Pitman
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <uy7w2fnuj.fsf@nhplace.com>
Ron Garret <·········@flownet.com> writes:

> In article <·············@nhplace.com>,
>  Kent M Pitman <······@nhplace.com> wrote:
> 
> > Ron Garret <·········@flownet.com> writes:
> > 
> > > In article <·············@nhplace.com>,
> > >  Kent M Pitman <······@nhplace.com> wrote:
> > > 
> > > > A number of options were provided for "customizing" the language, but 
> > > > they
> > > > came to be properly seen, as Pascal astutely observes here, as big-switch
> > > > ways of turning the language into an implementation substrate for another
> > > > language, not really as customizations of the current language.
> > > 
> > > Sure.  So?  Isn't that (being an implementation substrate for other 
> > > languages) one of the things CL is supposed to be for?
> > 
> > Not at the expense of program combination, no.
> 
> Then why is *readtable* part of the language?

You again ignore history.  The design was not "from scratch".  It was
based on prior implementations.  A large number of existing systems
customized syntax, so we continued that tradition.  *readtable* added
modularity, it didn't diminish it, relative to prior implementations.

Also, as I said before, when things like this are in the core
language, all users are aware of them.  Had you made your
*combination-hook* suggestion in the design of the base language, I
don't know that it would have been accepted (primarily because
sympathy to a Scheme-compatible style was lower then than I perceive
it to be now).  However, again speaking relatively, you would have had
fewer obstacles because you wouldn't have had the obstacle of "some
programs won't know about it" since by definition (so to speak), if it
were in the language, I couldn't argue that some programs wouldn't
know about it and couldn't protect themselves.  

Had this [*combination-hook*] been raised in a historical context,
though, my sense is that it would have failed because there was a
contingent who affirmatively wanted to oppose CL turning into Scheme.
But among those who were reasoning from first principles, I think you'd
have gotten the best audience by simply proposing that combinations be
allowed, period.  I'm trying in this thread to articulate my sense of
the well-founded set of reasons why a hook whose sole purpose seems to
be to provide a Turing-Machine-like hook with a single end-goal purpose
would have been, and should have been, rejected.

Moreover, the right way to do this, as has been earlier noted in other
messages, is lexically, not dynamically.

> > DARPA would never have bought Common Lisp if we'd said "it will have
> > many features, but one of them will not be that applications can expect
> > to plug and play without prior cooperative communication to assure they
> > aren't stepping on each others' toes."
> 
> But that is (was) in fact the state of affairs.  (Which is to say, that 
> applications *could* have stepped on each others toes, and still can, 
> but mostly don't because people by and large avoid doing the things that 
> cause programs to step on each other's toes.)
>
> BTW, at the risk of stating the obvious, the economic situation w.r.t. 
> CL has changed since 1985.  What DARPA would or wouldn't have bought 20 
> years ago perhaps ought not be our most pressing concern today.

You asked:

 "Isn't that (being an implementation substrate for other 
  languages) one of the things CL is supposed to be for?"

What CL is derives from history.  I did not answer in terms of today,
I answered in terms of history since that's where "intent in design"
came into play.

Your desire to dismiss DARPA as relevant today, though, denies the
fact that what we built into CL was a desire for modules to respect
the prisoner's dilemma, as I explained.

In case it went past you way earlier, I have FAR LESS concern with
your desire to support combinations than I have with your desire to
introduce a feature that allows me to implement not only combinations
"your way" but competing theories that are incompatible.  You may have
thought I was being tongue-in-cheek or dismissive, but I might not be
having this debate with you if you had suggested just *ALLOW-COMBINATIONS*
of T or NIL.  My concern is the needless problem of Turing Machine 
willfulness that you introduce in the form you do.

If this were only about *allow-combinations*, I would mostly not have
an issue, although I do think (as noted below) that "sparseness" is a
concern.  Whether that should be a dominating concern, I can't say.
But it is something that people have to weigh, and hence this issue is
still not a slam dunk.

> > Sections like
> > 
> >   11.1.2.1.1 Constraints on the COMMON-LISP Package for
> >              Conforming Implementations
> >      http://www.lispworks.com/documentation/HyperSpec/Body/11_abaa.htm
> > 
> >   11.1.2.1.2 Constraints on the COMMON-LISP Package for 
> >              Conforming Programs
> >      http://www.lispworks.com/documentation/HyperSpec/Body/11_abab.htm
> > 
> > are testimony to the intent of the vendors not to have one program step
> > on another.
> 
> And the existence of *readtable* (to say nothing of the absence of 
> first-class top-level global environments a.k.a. modules/lexicons) is 
> testimony to their failure to entirely achieve this goal.

I'm not sure that success/failure is so binary as you're making it out.

And personally, I think there are rules of good style that cause it to
work better.

But as I said, if you push, I'm more likely to say that *READTABLE* was
bad than that your feature is good.  And I don't know to what point, since
it's not going away.

> > Maclisp was historically important for many things it did, but perhaps one
> > of its lasting contributions was the number of errors made in its design
> > that led us to do things better in the future...
> > 
> > ... if we remember the lessons.
> > 
> > Otherwise, well, I suppose the obligatory conclusion is:
> > 
> >    Those who don't learn from the lessons of (programming 
> >    language design) history are doomed to reinvent Maclisp.
> 
> You know, your argument amounts to this: "MacLisp did a lot of stupid 
> things.  Therefore *combination-hook* is a bad idea."  But you haven't 
> actually said how *combination-hook* is actually like any of the bad 
> things that MacLisp did.

I'll leave the coherence of my argument for others reading this thread
to sort out.

Debates rarely change the minds of the participants, especially on
issues of opinion where the participants hold strongly opposing
views. What debates do is give others who are not thus caught up a
chance to observe the ideas in play and to inform their own opinions.

I simply don't think that what I've said reduces to what you've
reduced it to, but I won't now debate that.  That seems an invitation
to either get into a debate about English semantics, or else to rehash
what I've said.

I've laid out why I think what I think, and others following the
discussion will either think I'm ranting without any point (as you seem
to suggest I am) or they will not.

> > The same can simply not be said about an implementation extension that
> > is equivalently powerful.  A portable application will not know to protect
> > itself.
> 
> Hogwash.  As I have pointed out repeatedly, there is a very simple way 
> that an application can "protect" itself against any mischief that might 
> be wrought by *combination-hook*: Don't use ((...) ...) syntax.  Since 
> this syntax is currently an error, all error-free CL code automatically 
> does this anyway.

If you are reading data that might be ill-formed, defining your language
not to signal errors about what might be a symptom of ill-formedness is
not necessarily a benign change.

I don't expect this argument to convince you.  Frankly, I expect you
to dismiss my concern about this issue as irrelevant and unimportant,
even while engaging in a discussion with me about how I should not
dismiss your concern about the need for Scheme-like combinations in a 
language that is not Scheme.  But perhaps you will surprise me.  I'm not
saying you're incapable of that--I'm just saying I don't expect it.

(I personally think the desire to use that syntax is irrelevant and
unimportant _to me_, however I do not _dismiss_ the issue nor the
arguments for it because I try to recognize the arguments,
sensibilities, and perceived needs of others as relevant to them.  I
try not to weigh the absolute need in any material way other than when
those needs intrude on either a particular other user's need and when
those needs intrude on the overall design principles of the language.)

In this case, I know that "sparseness", as I've called it,
[ http://groups.google.com/group/comp.lang.lisp/msg/54067d895793587a 
  http://groups.google.com/group/comp.lang.lisp/msg/feed2ddf5be6e90e ]
is a design principle that I and others in the CL committee
have explicitly and implicitly relied upon at various times.

Hence, as I understand the term hogwash 
  1. Worthless, false, or ridiculous speech or writing; nonsense.
  [ http://dictionary.reference.com/search?q=hogwash ]
you are simply dismissing out of hand the notion that I might have a
reason for my belief.

Moreover, beyond the issue of sparseness, I have repeatedly cited the
fact that your hook is so general that it simultaneously adds not just
what you want, but when you say (to re-quote you from above):

> this syntax is currently an error, all error-free CL code automatically 
> does this anyway.

you forget that if you added the feature you want you would be 
simultaneously creating a space of code for which this statement was not
true, since others might use the new hook you create incompatibly, and
hence you are proposing the addition of a feature which is its own
downfall from an internal consistency point of view.

> The same cannot be said for *readtable*.

You're talking to someone who argued (unsuccessfully and perhaps only
even half-heartedly since there were mixed points of view on this,
even within myself) for the non-inclusion of EQUAL in the language,
based on the same concerns as people raised when wanting to keep COPY
out of the language.  Ditto for FIXNUM.

I've said above, and I will reiterate here.  *readtable* was not added from
first principles.  Ditto FIXNUM and EQUAL.  They were added because they were
known quantities that had been used successfully for years, notwithstanding
theoretical arguments about why they could send the language into a tailspin.
Users would have rebelled had they not been there.  Since *COMBINATION-HOOK*
was not of that kind, the same arguments simply would not have applied.

Whether my arguments apply now is anyone's guess--I don't know who
your target is in this interchange.   The modern market of ideas can
speak for itself--people can chime in with their present values, I guess.
But I presume they like CL and want to know how it came to be and why it
made the choices it did, and I'm offering my personal view on that. The
views of others there at the time might even differ from mine, but I'm not
asking them not to speak, nor am I trying to speak for them.  I'm just doing
my part.  My specialty is history, not because I was trained for it but because
I lived at least a bit of it, often as much by accident as intent since my
presence in many amazing situations was often just chance.

> >  It is not reasonable in my theory of public debate for you
> > to claim that the mere possibility that you could develop such a theory of
> > good style is enough to trump my concern that you might not.
> 
> Well, in MY theory of public debate it is not reasonable for YOU to 
> simply ignore a significant portion of my position and then claim that 
> my position is lacking that portion that you have ignored.

I guess we've each stated our positions.  Now it's for the audience to
sort out.  It'll be fun to tune in CNN, Fox News, etc. tomorrow and
hear what the pundits on Hardcons and Lisp Gang have to say, won't it?

> > By creating situations that were mutually incompatible when loaded together.
> > 
> > That's probably an overbrief answer.
> 
> It's not so much the brevity as the utter lack of content.  There are a 
> lot of things that were done back in the old days that in retrospect 
> were just Bad Programming Practice.  I strongly suspect that the 
> incompatibilities were simply the result of someone doing something 
> obviously stupid, and not an inherent conflict between these features 
> that you cite.

Learning from history is rarely done by just seeing something happen twice.
You have to reapply situations that are not superficially the same at just
the right time, and not at other times.  I've laid out why I thought this
matters.  I hear you as saying you think it doesn't.  Again, I'll be watching
the pundits to find out whose argument was more persuasive.

> > But I don't have time for something better.
> 
> That's a pretty lame excuse.  It's pretty unfair of you to cite an 
> example in support of your position and then fail to explain how it 
> actually supports your position.  It leaves people with the impression 
> that you have actually supported your position when in fact you have not.

Fair or not, I have finite time.  It also seems unfair that in a
supposedly expanding 4-dimensional universe, only space is expanding
and not time. You'd think we'd find that as time went on, we'd get not
only galaxies moving farther apart, but hours as well, so that we had
more and more time to post on comp.lang.lisp.

As so recently noted, I have spent a lot of time away, and I'm at risk
of that again.  I have a heavy schedule of work.  My goal in my
participation here is less to win or lose a debate, and more to upload
information from my brain into a more durable form.  I trust that if I
have not seemed coherent to you, I may have to someone.  Or, if not, then
speaking longer won't help things.  If I'm just babbling incoherently,
better I cut that off than make people wade through twice as much babble.
It sounded to me coherent, but maybe that's just an advancing dementia
setting in... or maybe I was just never coherent in the first place.
From: Ron Garret
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <rNOSPAMon-091EB4.09480212062006@news.gha.chartermi.net>
In article <·············@nhplace.com>,
 Kent M Pitman <······@nhplace.com> wrote:

> Moreover, the right way to do this, as has been earlier noted in other
> messages, is lexically, not dynamically.

There are lexically scoped versions of my proposal.  Why do you ignore 
those?


> In case it went past you way earlier, I have FAR LESS concern with
> your desire to support combinations than I have with your desire to
> introduce a feature that allows me to implement not only combinations
> "your way" but competing theories that are incompatible.  You may have
> thought I was being tongue-in-cheek or dismissive, but I might not be
> having this debate with you if you had suggested just *ALLOW-COMBINATIONS*
> of T or NIL.  My concern is the needless problem of Turing Machine 
> willfulness that you introduce in the form you do.

Just for the record, I would be perfectly happy with a boolean 
*ALLOW-COMBINATIONS*.  (That was actually my original suggestion.)  I 
changed it because there was disagreement about what the semantics of 
combinations in CL ought to be.


> > And the existence of *readtable* (to say nothing of the absence of 
> > first-class top-level global environments a.k.a. modules/lexicons) is 
> > testimony to their failure to entirely achieve this goal.
                                  ^^^^^^^^
> 
> I'm not sure that success/failure is so binary as you're making it out.

I'm not making it out to be binary.  Note the highlighted word.


> > > The same can simply not be said about an implementation extension that
> > > is equivalently powerful.  A portable application will not know to 
> > > protect
> > > itself.
> > 
> > Hogwash.  As I have pointed out repeatedly, there is a very simple way 
> > that an application can "protect" itself against any mischief that might 
> > be wrought by *combination-hook*: Don't use ((...) ...) syntax.  Since 
> > this syntax is currently an error, all error-free CL code automatically 
> > does this anyway.
> 
> If reading data that might be ill-formed, defining your language to
> not signal errors about what might be a symptom of ill-formedness is
> not necessarily a benign change.
> 
> I don't expect this argument to convince you.  Frankly, I expect you
> to dismiss my concern about this issue as irrelevant and unimportant,
> even while engaging in a discussion with me about how I should not
> dismiss your concern about the need for Scheme-like combinations in a 
> language that is not Scheme.  But perhaps you will surprise me.  I'm not
> saying you're incapable of that--I'm just saying I don't expect it.

I do not dismiss your concern as irrelevant or unimportant.  I will 
point out the following however:

1.  This argument would apply to *any* proposed new feature, not just 
*combination-hook*.

2.  Lisp already has a feature called macros whose entire purpose is to 
take data patterns which were previously ill-formed and render them 
semantically meaningful.  Not only is this feature not generally 
considered to be non-benign, it is considered by many to be of high 
utility.

3.  I don't have any data to back this up, but I strongly suspect that 
cases of programmers accidentally typing combinations into their code 
are quite rare.  Cases where such accidental combinations would 
accidentally lead to semantically meaningful behavior in the presence of 
*combination-hook* would be rarer still.

4.  At the risk of repeating myself, any program can ensure complete 
protection from any potentially deleterious effects of 
*combination-hook* simply by ensuring that it is set to its default 
value.

> Hence, as I understand the term hogwash 
>   1. Worthless, false, or ridiculous speech or writing; nonsense.
>   [ http://dictionary.reference.com/search?q=hogwash ]
> you are simply dismissing out of hand the notion that I might have a
> reason for my belief.

Not at all.  I simply think that your reasons are without merit (which 
is not the same thing as dismissing your beliefs as irrelevant or 
unimportant.  If I thought that I wouldn't bother to respond.)


> > this syntax is currently an error, all error-free CL code automatically 
> > does this anyway.
> 
> you forget that if you added the feature you want you would be 
> simultaneously creating a space of code for which this statement was not
> true, since others might use the new hook you create incompatibly, and
> hence you are proposing the addition of a feature which is its own
> downfall from an internal consistency point of view.

Yes, that is a legitimate concern.  My answer (again at the risk of 
repeating myself) is that this is also true of features that are already 
in the language and empirically we have not ended up with a morass of 
incompatible libraries due to judicious use of these features.  There is 
no reason to expect the situation w.r.t. *combination-hook* to be any 
different.


> > The same cannot be said for *readtable*.
> 
> You're talking to someone who argued (unsuccessfully and perhaps only
> even half-heartedly since there were mixed points of view on this,
> even within myself) for the non-inclusion of EQUAL in the language,
> based on the same concerns as people raised when wanting to keep COPY
> out of the language.  Ditto for FIXNUM.

Yes, I know.  Do you believe that subsequent history has vindicated your 
position?  In other words, do you believe that the inclusion of these 
things in the language has resulted in substantial damage?  How do you 
think the world would be different today if you had gotten your way?


> I've said above, and I will reiterate here.  *readtable* was not added from
> first principles.  Ditto FIXNUM and EQUAL.  They were added because they were
> known quantities that had been used successfully for years, notwithstanding
> theoretical arguments about why they could send the language into a tailspin.

And in retrospect do you think that those theoretical arguments were 
correct?

> Users would have rebelled had they not been there.  Since *COMBINATION-HOOK*
> was not of that kind, the same arguments simply would not have applied.

But users *do* rebel.  But the rebellion is not as visible now because 
they rebel by using Scheme (or Python).

> Whether my arguments apply now is anyone's guess--I don't know who
> your target is in this interchange.

Aren't you curious?


> > > By creating situations that were mutually incompatible when loaded 
> > > together.
> > > 
> > > That's probably an overbrief answer.
> > 
> > It's not so much the brevity as the utter lack of content.  There are a 
> > lot of things that were done back in the old days that in retrospect 
> > were just Bad Programming Practice.  I strongly suspect that the 
> > incompatibilities were simply the result of someone doing something 
> > obviously stupid, and not an inherent conflict between these features 
> > that you cite.
> 
> Learning from history is rarely done by just seeing something happen twice.
> You have to reapply situations that are not superficially the same at just
> the right time, and not at other times.  I've laid out why I thought this
> matters.  I hear you as saying you think it doesn't.  Again, I'll be watching
> the pundits to find out whose argument was more persuasive.
> 
> > > But I don't have time for something better.
> > 
> > That's a pretty lame excuse.  It's pretty unfair of you to cite an 
> > example in support of your position and then fail to explain how it 
> > actually supports your position.  It leaves people with the impression 
> > that you have actually supported your position when in fact you have not.
> 
> Fair or not, I have finite time.  It also seems unfair that in a
> supposedly expanding 4-dimensional universe, only space is expanding
> and not time. You'd think we'd find that as time went on, we'd get not
> only galaxies moving farther apart, but hours as well, so that we had
> more and more time to post on comp.lang.lisp.
> 
> As so recently noted, I have spent a lot of time away, and I'm at risk
> of that again.  I have a heavy schedule of work.  My goal in my
> participation here is less to win or lose a debate, and more to upload
> information from my brain into a more durable form.  I trust that if I
> have not seemed coherent to you, I may have to someone.  Or, if not, then
> speaking longer won't help things.  If I'm just babbling incoherently,
> better I cut that off than make people wade through twice as much babble.
> It sounded to me coherent, but maybe that's just an advancing dementia
> setting in... or maybe I was just never coherent in the first place.

So you don't have time to write anything to explain why cxr and hacky 
strings introduce fundamental incompatibilities into the language, but 
you do have time to write three paragraphs explaining why you don't have 
time.

OK.

rg
From: Kent M Pitman
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <uac8c5or0.fsf@nhplace.com>
Ron Garret <·········@flownet.com> writes:

> So you don't have time to write anything to explain why cxr and hacky 
> strings introduce fundamental incompatibilities into the language, but 
> you do have time to write three paragraphs explaining why you don't have 
> time.

This remark is hugely offensive to me.

I was going to spend some time this weekend reading comp.lang.lisp, but at
seeing this I have suddenly decided I have better ways to spend my time.

In general, anyone who isn't convinced by my arguments on technical
matters is welcome to retain their own opinion unchanged; the burden
is on me to make a more convincing argument if I so choose.  But they
may not challenge my use of time, nor insist that I go around with
them until they are convinced.  This being a volunteer operation, I
speak when I feel I have something to offer and I reserve the right to
not speak when I feel that I either have nothing to offer or I feel
that for whatever personal reason, it's not my choice to do so.

Civil conversation is the price of my presence/participation here.

And I have to add that I feel some sense of shame to be goaded even into
making chiding remarks of this kind.  It troubles me to be dragged down
even that far into the mud.  That's another reason why I tend to opt out.
It beats staying around to bicker further in a way I'm sure no one wants.
From: Ron Garret
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <rNOSPAMon-A5CBBB.01220317062006@news.gha.chartermi.net>
In article <·············@nhplace.com>,
 Kent M Pitman <······@nhplace.com> wrote:

> Ron Garret <·········@flownet.com> writes:
> 
> > So you don't have time to write anything to explain why cxr and hacky 
> > strings introduce fundamental incompatibilities into the language, but 
> > you do have time to write three paragraphs explaining why you don't have 
> > time.
> 
> This remark is hugely offensive to me.

That is unfortunate.  The remark was merely a statement of objective 
fact.  Furthermore, it served what I feel to be a legitimate rhetorical 
purpose.  See below.

> I was going to spend some time this weekend reading comp.lang.lisp, but at
> seeing this I have suddenly decided I have better ways to spend my time.

Apparently not.

> In general, anyone who isn't convinced by my arguments on technical
> matters is welcome to retain their own opinion unchanged; the burden
> is on me to make a more convincing argument if I so choose.  But they
> may not challenge my use of time, nor insist that I go around with
> them until they are convinced.

And I made no such challenge nor did I make any such insistence.  I 
merely pointed out the fact that you had time to write what you wrote.  
Making that observation had a legitimate rhetorical purpose, namely, to 
show that your stated reason for not offering better support for your 
argument was questionable, and that therefore there might actually be 
some other reason for not offering that additional support, like for 
example that your position might in fact be wrong.

> This being a volunteer operation, I
> speak when I feel I have something to offer and I reserve the right to
> not speak when I feel that I either have nothing to offer or I feel
> that for whatever personal reason, it's not my choice to do so.

Absolutely.

> Civil conversation is the price of my presence/participation here.

I make every effort to be civil in this forum.  But I do not believe it 
is uncivil to point out a potentially salient fact.

> And I have to add that I feel some sense of shame to be goaded even into
> making chiding remarks of this kind.  It troubles me to be dragged down
> even that far into the mud.  That's another reason why I tend to opt out.
> It beats staying around to bicker further in a way I'm sure no one wants.

Indeed.  I would much rather you took the time to explain why cxr and 
hacky strings introduce fundamental incompatibilities into the language, 
because I really don't understand how that could possibly be the case.

rg
From: Kent M Pitman
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <u3bekqh4s.fsf@nhplace.com>
Pascal Costanza <··@p-cos.net> writes:

> ···@flownet.com wrote:
> > Hi Kent!  Good to see you back!
> 
> I second this.
> 
> (Ron, another thing we agree on... ;)

Actually, I've been super-busy with work, and will likely continue to be 
for the next few months at least...  But I had a bit of extra time this
weekend.  (And I can't complain really--it's always better to have too much
work than too little.)

> No, this variation actually looks much better. Assuming that there
> would also be a lambda-macrolet, this would mean that a) it's clear by
> looking at the program text what definitions apply and b) there would
> be a way to control what's going on when you mix code from different
> libraries.

Zetalisp didn't have a binding form, perhaps because the intended use was
the implementation of a language-level primitive, not the willy-nilly 
creation of user-defined syntax.

But I don't think it's unreasonable to want a binding form, though since
in my example the binding form is just doing the service of trampolining 
from the function to a variable, unless you have some other use in mind for
the form, it shouldn't matter if you simply bind the variable once you've
defined that it will work that way globally.  (I guess you're saying you'd
like your local variables not to have to be defined globally, and
that's probably quite reasonable.)

> Question: Does/can this extend to arbitrary depth? That is, would a
> (define-lambda-macro name ...) apply to ((name ...) ...), (((name ...)
> ...) ...), and so on?

I doubt it.  But with about 10 seconds thought I don't see any reason
it shouldn't.  (I don't think there should be a different kind of operator
for that layer, so it'd end up undefined otherwise.  Might as well have it
just recycle the definition from the second layer.)
From: Pascal Costanza
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <4egffpF1es85dU1@individual.net>
···@flownet.com wrote:
> Pascal Costanza wrote:
>> ···@flownet.com wrote:
>>> Pascal Costanza wrote:
>>>
>>>>> Of course not.  Not that any of this hasn't been said before, but it
>>>>> might be worth reiterating in light of this newly discovered common
>>>>> ground: we agree that the current situation w.r.t. lambda expressions
>>>>> is confusing.
>>>> It is confusing, but not confusing enough to warrant any action, IMHO.
>>> But if action were warranted you would favor erring on the side of
>>> being restrictive, i.e. deprecating ((lambda ...) ...) rather than
>>> generalizing ((...) ...)  That is not in keeping with CL's design
>>> philosophy IMO.
>> Do you mean the design philosophy stated in Chapter 1.1 of CLtL2, or
>> some other design philosophy?
> 
> I mean the part about being a programmable programming language.

There are quite a few areas in which Common Lisp is not programmable.

>>> Finally, my actual concern here is not support for FP (though that
>>> would be an ancillary benefit) but rather first-class global lexical
>>> environments.
>> Why do you need this feature in this context?
> 
> So that I can implement a simple syntax for accessing top-level lexical
> bindings (so that I can implement a new macro system) without a code
> walker.

I don't see why you would need to treat conses in car positions of forms 
specially to make this work. Care to elaborate?


Pascal

-- 
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
From: Pascal Costanza
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <4e96bmF1c98v1U1@individual.net>
Ron Garret wrote:
> In article <···············@individual.net>,
>  Pascal Costanza <··@p-cos.net> wrote:
> 
>> The fact that the 
>> same form gives rise to two reasonable, but completely different 
>> interpretations _is_ confusing.
> 
> On that view, many things in CL are already confusing.

Sure.

> Personally I 
> find the idea of ((...) ...) == (funcall (...) ...) to be quite clear, 
> unambiguous, and easy to understand.  It also has the nice property that 
> it subsumes the ((lambda ...) ...) rule, which is also (you must at this 
> point agree) quite confusing.

Yes, I agree, ((lambda ...) ...) is also confusing.

>>>> The fact that you have an existence proof is clear 
>>>> evidence that I cannot disallow you to do this.
>>> But I am only able to do it by resorting to implementation-specific 
>>> hacks (or a full code walker, which offends my sensibilities for 
>>> something so trivial).  And that is a direct result of you and people 
>>> like you who wish to impose gratuitous constraints on the language on 
>>> the grounds that *they* find it confusing or useless or whatever.
>> I wasn't a member of the ANSI CL standardization committee.
> 
> No, but you are a member of the faction that supports treating their 
> work as immutable gospel.

This is incorrect. We only disagree in how change is actually achieved.

>>>> If I remember correctly, your motivation to enable treating conses in 
>>>> the car of a form in a meaningful way was to make a certain programming 
>>>> style more convenient.
>>> That is one reason.  It is not the only reason.
>> What are your other reasons?
> 
> I am working on adding a system of first-class global lexical 
> environments to CL (originally called locales after the terminology in 
> T, now called lexicons to avoid confusion with I18N).  It turns out that 
> adding these to the language has all sorts of happy consequences, 
> including the seamless integration of Lisp-1 and Lisp-2 semantics 
> without a code walker, in addition to a host of ancillary benefits which 
> I will not detail here.  I'll just say that great swaths of complexity 
> and controversy can be mowed down and discarded if you have first-class 
> global lexical environments.
> 
> One of the additional benefits is (possibly) a new macro system which 
> combines the simplicity of Lisp-1 semantics and the simplicity of Lisp-2 
> macros while avoiding the risk of accidental name clashes, which some 
> people might find an attractive alternative to what is currently 
> available.  But to prototype the system I need to be able to convert 
> (^foo ...) into something like (funcall (top-level-binding 'foo) ...).
> 
> If I have ((...) ...) == (funcall (...) ...) then this is trivial.  All 
> I need is a reader macro for #\^, and everything works just fine.  But 
> without that I have to resort to one of a number of horrible hacks, a 
> code walker being among the least horrible of them.
> 
> There are other useful things one can do as well, but that's the one 
> that has my attention at the moment.

I am looking forward to reading about, and playing with, the whole 
thing. I am skeptical that you can achieve all your goals, but I am 
happy to be proved wrong.


Pascal

-- 
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
From: Ron Garret
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <rNOSPAMon-E9EF1A.16162601062006@news.gha.chartermi.net>
In article <···············@individual.net>,
 Pascal Costanza <··@p-cos.net> wrote:

> > No, but you are a member of the faction that supports treating their 
> > work as immutable gospel.
> 
> This is incorrect. We only disagree in how change is actually achieved.

That's true, but the positions are not symmetric.  Your position is that 
the first step for one who wants change is to produce a prototype.  But 
the change I want is motivated in large part by the fact that producing 
a prototype is much harder than it otherwise would be if the change I 
wish to see were implemented first.  So by insisting that I produce a 
prototype you are in effect insisting that I do a lot of work, part of 
the effect of which would be to render moot the reason for wanting the 
change in the first place.  That hardly seems fair.

> I am looking forward to reading about, and playing with, the whole 
> thing.

I'm working on it.  I just realized that my implementation-specific hack 
works in OpenMCL, so I may be releasing this thing sooner than I 
thought.  But it is not the highest thing on my priority list.

rg
From: Ron Garret
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <rNOSPAMon-D05F4B.16252101062006@news.gha.chartermi.net>
In article <·······························@news.gha.chartermi.net>,
 Ron Garret <·········@flownet.com> wrote:

> I just realized that my implementation-specific hack 
> works in OpenMCL, so I may be releasing this thing sooner than I 
> thought.

Here's the patch that will allow MCL and OpenMCL to treat ((...) ...) as 
(funcall (...) ...)

http://www.flownet.com/ron/lisp/nx1-combination-hook.lisp

I know that the file contains more than the half-dozen line change I've 
been advertising, but that's because I need to make two tiny changes to 
two very large functions.  If you compare those large functions to the 
original sources (at least in MCL) you will see that the changes are in 
fact quite tiny.  If I published this change as a diff instead of a 
standalone file it would be much smaller.

NOTE: this code was originally written for an older version of regular 
MCL, and I have given it only the most cursory testing in OpenMCL, and 
then only on an old version.  It is very likely buggy.  But it works in 
at least a few straightforward cases.

rg
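
The semantics the patch installs can be sketched portably as a
source-to-source rewrite.  This is a hypothetical illustration only, not
the code in the linked file, which hooks the MCL compiler directly:

```lisp
;; Hypothetical sketch: rewrite ((...) ...) into (funcall (...) ...).
;; LAMBDA expressions in operator position are already valid CL, so
;; they are left alone.  This is NOT the actual nx1-combination-hook
;; patch, just an illustration of the semantics it implements.
(defun rewrite-combination (form)
  "If FORM's operator is itself a compound form (and not a LAMBDA
expression), rewrite FORM into an explicit FUNCALL."
  (if (and (consp form)
           (consp (car form))
           (not (eq (caar form) 'lambda)))
      `(funcall ,(car form) ,@(cdr form))
      form))

;; (rewrite-combination '((compose #'car #'cdr) x))
;; => (FUNCALL (COMPOSE #'CAR #'CDR) X)
```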
From: Pascal Costanza
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <4ea517F1d3f87U1@individual.net>
Ron Garret wrote:
> In article <···············@individual.net>,
>  Pascal Costanza <··@p-cos.net> wrote:
> 
>>> No, but you are a member of the faction that supports treating their 
>>> work as immutable gospel.
>> This is incorrect. We only disagree in how change is actually achieved.
> 
> That's true, but the positions are not symmetric.  Your position is that 
> the first step for one who wants change is to produce a prototype.  But 
> the change I want is motivated in large part by the fact that producing 
> a prototype is much harder than it otherwise would be if the change I 
> wish to see were implemented first.  So by insisting that I produce a 
> prototype you are in effect insisting that I do a lot of work, part of 
> the effect of which would be to render moot the reason for wanting the 
> change in the first place.  That hardly seems fair.

Yes, sometimes life is not fair. ;)


Pascal

-- 
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
From: Paul F. Dietz
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <LoGdnVyvmtxH3h3ZnZ2dnUVZ_t6dnZ2d@dls.net>
Ron Garret wrote:

> So by insisting that I produce a 
> prototype you are in effect insisting that I do a lot of work, part of 
> the effect of which would be to render moot the reason for wanting the 
> change in the first place.  That hardly seems fair.

Are you laboring under the misapprehension that we have any
obligation to listen to anything you say?  You want to take
up our time, it's up to *you* to make the case.

	Paul
From: ···@flownet.com
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <1149280573.313854.317310@f6g2000cwb.googlegroups.com>
Paul F. Dietz wrote:
> Ron Garret wrote:
>
> > So by insisting that I produce a
> > prototype you are in effect insisting that I do a lot of work, part of
> > the effect of which would be to render moot the reason for wanting the
> > change in the first place.  That hardly seems fair.
>
> Are you laboring under the misapprehension that we have any
> obligation to listen to anything you say?

Of course not.  Why would you even think such a thing?

> You want to take up our time

I want to do nothing of the sort.  No one is forcing anyone to read my
postings.

rg
From: Thomas Schilling
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <4ebjahF1e4q1pU1@news.dfncis.de>
Ron Garret wrote:

> One of the additional benefits is (possibly) a new macro system which 
> combines the simplicity of Lisp-1 semantics and the simplicity of Lisp-2 
> macros while avoiding the risk of accidental name clashes, which some 
> people might find an attractive alternative to what is currently 
> available.  But to prototype the system I need to be able to convert 
> (^foo ...) into something like (funcall (top-level-binding 'foo) ...).
> 
> If I have ((...) ...) == (funcall (...) ...) then this is trivial.  All 
> I need is a reader macro for #\^, and everything works just fine.  But 
> without that I have to resort to one of a number of horrible hacks, a 
> code walker being among the least horrible of them.

I'm probably missing something, but what's wrong with expanding

  (^foo a b c)

to

  ((lambda (&rest r) (apply (top-level-binding 'foo) r)) a b c)

apart from performance issues?  (I mean for sake of a prototype.  OTOH
some compilers might even optimize this indirection away.)

(BTW, I'm looking forward to reading about your ideas on first-class
environments.  Will this also imply first-class modules, or am I
dreaming here?)
From: ···@flownet.com
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <1149317503.254175.178790@i39g2000cwa.googlegroups.com>
Thomas Schilling wrote:
> Ron Garret wrote:
>
> > One of the additional benefits is (possibly) a new macro system which
> > combines the simplicity of Lisp-1 semantics and the simplicity of Lisp-2
> > macros while avoiding the risk of accidental name clashes, which some
> > people might find an attractive alternative to what is currently
> > available.  But to prototype the system I need to be able to convert
> > (^foo ...) into something like (funcall (top-level-binding 'foo) ...).
> >
> > If I have ((...) ...) == (funcall (...) ...) then this is trivial.  All
> > I need is a reader macro for #\^, and everything works just fine.  But
> > without that I have to resort to one of a number of horrible hacks, a
> > code walker being among the least horrible of them.
>
> I'm probably missing something, but what's wrong with expanding
>
>   (^foo a b c)
>
> to
>
>   ((lambda (&rest r) (apply (top-level-binding 'foo) r)) a b c)
>
> apart from performance issues?  (I mean for sake of a prototype.  OTOH
> some compilers might even optimize this indirection away.)

Because then it doesn't do the Right Thing when ^foo appears anywhere
other than the CAR of a form.
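
To make the positional asymmetry concrete, here is a hypothetical
illustration using the thread's TOP-LEVEL-BINDING, which is not
standard CL:

```lisp
;; In operator position, Thomas's expansion works:
;;   (^foo a b c)
;;   => ((lambda (&rest r) (apply (top-level-binding 'foo) r)) a b c)
;; But in argument position the desired reading is just the value:
;;   (mapcar ^foo xs)   ; should read as (mapcar (top-level-binding 'foo) xs)
;; A reader macro on #\^ sees only the token it introduces, not whether
;; that token sits in the CAR of the enclosing form, so it cannot pick
;; the right expansion by itself -- hence the appeal to ((...) ...)
;; syntax, where the wrapping happens uniformly at evaluation.
```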

> (BTW, I'm looking forward to reading about your ideas on first-class
> environments.  Will this also imply first-class modules, or am I
> dreaming here?)

That depends on how you define "module".  On some reasonable
interpretations the words are synonyms.

rg
From: Alexander Schmolck
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <yfslksea4ot.fsf@oc.ex.ac.uk>
···@flownet.com writes:

> > I'm probably missing something, but what's wrong with expanding
> >
> >   (^foo a b c)
> >
> > to
> >
> >   ((lambda (&rest r) (apply (top-level-binding 'foo) r)) a b c)
> >
> > apart from performance issues?  (I mean for sake of a prototype.  OTOH
> > some compilers might even optimize this indirection away.)
> 
> Because then it doesn't do the Right Thing when ^foo appears anywhere
> other than the CAR of a form.

It can't be much more than 4 lines of code to check if it does, right?

'as
From: ···@flownet.com
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <1149352711.061048.29600@y43g2000cwc.googlegroups.com>
Alexander Schmolck wrote:
> ···@flownet.com writes:
>
> > > I'm probably missing something, but what's wrong with expanding
> > >
> > >   (^foo a b c)
> > >
> > > to
> > >
> > >   ((lambda (&rest r) (apply (top-level-binding 'foo) r)) a b c)
> > >
> > > apart from performance issues?  (I mean for sake of a prototype.  OTOH
> > > some compilers might even optimize this indirection away.)
> >
> > Because then it doesn't do the Right Thing when ^foo appears anywhere
> > other than the CAR of a form.
>
> It can't be much more than 4 lines of code to check if it does, right?

And exactly how do you propose to check?

rg
From: Alexander Schmolck
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <yfsac8tam2x.fsf@oc.ex.ac.uk>
···@flownet.com writes:

> Alexander Schmolck wrote:
> > ···@flownet.com writes:
> >
> > > > I'm probably missing something, but what's wrong with expanding
> > > >
> > > >   (^foo a b c)
> > > >
> > > > to
> > > >
> > > >   ((lambda (&rest r) (apply (top-level-binding 'foo) r)) a b c)
> > > >
> > > > apart from performance issues?  (I mean for sake of a prototype.  OTOH
> > > > some compilers might even optimize this indirection away.)
> > >
> > > Because then it doesn't do the Right Thing when ^foo appears anywhere
> > > other than the CAR of a form.
> >
> > It can't be much more than 4 lines of code to check if it does, right?
> 
> And exactly how do you propose to check?

(defvar *list-reader* (get-macro-character #\( ))
(set-macro-character #\( (lambda (stream c &aux (carp (char= #\^ (peek-char nil stream))))
                          (let ((list (funcall *list-reader* stream c)))
                            (if carp `((lambda (&rest #1=#:r) (apply ,(car list) #1#)) ,@(cdr list)) list))))

CL-USER> (set-macro-character #\^ (lambda (stream c) (declare (ignore c)) `(top-level-binding ',(read stream))))

T
CL-USER> '(^car ^not-car)
((LAMBDA (&REST #:R) (APPLY (TOP-LEVEL-BINDING (QUOTE CAR)) #:R)) (TOP-LEVEL-BINDING (QUOTE NOT-CAR)))

'as
From: Kent M Pitman
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <u7j3xsug5.fsf@nhplace.com>
Alexander Schmolck <··········@gmail.com> writes:

> ···@flownet.com writes:
> 
> > Alexander Schmolck wrote:
> > > ···@flownet.com writes:
> > >
> > > > > I'm probably missing something, but what's wrong with expanding
> > > > >
> > > > >   (^foo a b c)
> > > > >
> > > > > to
> > > > >
> > > > >   ((lambda (&rest r) (apply (top-level-binding 'foo) r)) a b c)
> > > > >
> > > > > apart from performance issues?  (I mean for sake of a prototype.  OTOH
> > > > > some compilers might even optimize this indirection away.)
> > > >
> > > > Because then it doesn't do the Right Thing when ^foo appears anywhere
> > > > other than the CAR of a form.
> > >
> > > It can't be much more than 4 lines of code to check if it does, right?
> > 
> > And exactly how do you propose to check?
> 
> (defvar *list-reader* (get-macro-character #\( ))
> (set-macro-character #\( (lambda (stream c &aux (carp (char= #\^ (peek-char nil stream))))
>                           (let ((list (funcall *list-reader* stream c)))
>                             (if carp `((lambda (&rest #1=#:r) (apply ,(car list) #1#)) ,@(cdr list)) list))))
> 
> CL-USER> (set-macro-character #\^ (lambda (stream c) (declare (ignore c)) `(top-level-binding ',(read stream))))
> 
> T
> CL-USER> '(^car ^not-car)
> ((LAMBDA (&REST #:R) (APPLY (TOP-LEVEL-BINDING (QUOTE CAR)) #:R)) (TOP-LEVEL-BINDING (QUOTE NOT-CAR)))
> 
> 'as

Print this out as a big sign to hang on your wall:

  "Decisions about what evaluation context you're in
   are very risky to make at read-time."
         --Kent Pitman

If you don't understand what the sign is telling you, attach the
following piece of "code" underneath it as "clarification":

(defmacro oops (&rest expressions)
  `(list
    '(,@expressions occurs at the start of a list but
      ,@expressions does not)
    'occurs 'quoted 'but
    (,@expressions 'occurs 'at 'the 'start 'of 'a 'list 'but
     ,@expressions 'does 'not)
    'does 'not))

(defmacro spoo (expressions)
  `(oops ,@expressions))

For extra credit, you can worry about non-macro-based decision-making
as well, such as where defun and funcall and apply become involved.
But the point is made.
From: Alexander Schmolck
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <yfsu0708w1q.fsf@oc.ex.ac.uk>
Kent M Pitman <······@nhplace.com> writes:

> Alexander Schmolck <··········@gmail.com> writes:
> 
> > ···@flownet.com writes:
> > 
> > > Alexander Schmolck wrote:
> > > > ···@flownet.com writes:
> > > >
> > > > > > I'm probably missing something, but what's wrong with expanding
> > > > > >
> > > > > >   (^foo a b c)
> > > > > >
> > > > > > to
> > > > > >
> > > > > >   ((lambda (&rest r) (apply (top-level-binding 'foo) r)) a b c)
> > > > > >
> > > > > > apart from performance issues?  (I mean for sake of a prototype.  OTOH
> > > > > > some compilers might even optimize this indirection away.)
> > > > >
> > > > > Because then it doesn't do the Right Thing when ^foo appears anywhere
> > > > > other than the CAR of a form.
> > > >
> > > > It can't be much more than 4 lines of code to check if it does, right?
> > > 
> > > And exactly how do you propose to check?
> > 
> > (defvar *list-reader* (get-macro-character #\( ))
> > (set-macro-character #\( (lambda (stream c &aux (carp (char= #\^ (peek-char nil stream))))
> >                           (let ((list (funcall *list-reader* stream c)))
> >                             (if carp `((lambda (&rest #1=#:r) (apply ,(car list) #1#)) ,@(cdr list)) list))))
> > 
> > CL-USER> (set-macro-character #\^ (lambda (stream c) (declare (ignore c)) `(top-level-binding ',(read stream))))
> > 
> > T
> > CL-USER> '(^car ^not-car)
> > ((LAMBDA (&REST #:R) (APPLY (TOP-LEVEL-BINDING (QUOTE CAR)) #:R)) (TOP-LEVEL-BINDING (QUOTE NOT-CAR)))
> > 
> > 'as
> 
> Print this out as a big sign to hang on your wall:
> 
>   "Decisions about what evaluation context you're in
>    are very risky to make at read-time."
>          --Kent Pitman
> 
> If you don't understand what the sign is telling you

I understand this. But I thought that, depending on how exactly Ron intends to
use his ^ syntax, some monstrous but trivially implemented hack like the above
might well do for PROTOTYPING purposes. I would think my time better spent
pursuing some horrendous and limited but localized and quickly implemented
workaround that's enough to develop my idea to a level that allows evaluation
of its usefulness than, without a really compelling and developed use case,
clamouring for a language change to allow a simpler implementation of my idea
(regardless of the merit of my change proposal). If the above hack is
insufficient, surely leveraging some preexisting and freely available
code-walker ought to suffice for coming up with a prototype?

'as
From: ···@flownet.com
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <1149466820.835150.250250@f6g2000cwb.googlegroups.com>
Alexander Schmolck wrote:

> I understand this. But I thought that depending on how exactly Ron intends to
> use his ^ syntax some monstrous but trivially implemented hack like the above
> might well do for PROTOTYPING purposes.

I appreciate the sentiment, but alas, the way in which I intend to use
the caret syntax is in a macro system for a Lisp-1.  So things like
this have to work:

(defmacro ...
  (let ((top-level-foo ^foo))
    `(,top-level-foo ...)))

rg
From: ········@googlemail.com
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <1149454633.785468.86680@i39g2000cwa.googlegroups.com>
> >   ((lambda (&rest r) (apply (top-level-binding 'foo) r)) a b c)
> >
> > apart from performance issues?  (I mean for sake of a prototype.  OTOH
> > some compilers might even optimize this indirection away.)
>
> Because then it doesn't do the Right Thing when ^foo appears anywhere
> other than the CAR of a form.

Don't know what the Right Thing is for your ideas, but at least my
suggested variants are both eta-convertible/-equivalent (in
lambda-calculus speak), even in the presence of dynamic variables in
function arguments.  But you're experienced enough to know for yourself
what works and what doesn't ..

> > (BTW, I'm looking forward reading about your ideas on first-class
> > environments.  Will this also imply first-class modules, or am I
> > dreaming here?)
>
> That depends on how you define "module".  On some reasonable
> interpretations the words are synonyms.

I'd say: modules = interfaces + implementations.  "First-class modules"
would then allow for, e.g., dynamic substitution of implementations for
one interface.  Maybe also inheritance/extensibility.  But I'm still
not sure what features a Lisp-friendly module system should have.
Haven't done too much research either.  So I'll simply take a look at
what you've done :)

-ts
From: Ron Garret
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <rNOSPAMon-5AAA8F.23452904062006@news.gha.chartermi.net>
In article <·······················@i39g2000cwa.googlegroups.com>,
 ········@googlemail.com wrote:

> > >   ((lambda (&rest r) (apply (top-level-binding 'foo) r)) a b c)
> > >
> > > apart from performance issues?  (I mean for sake of a prototype.  OTOH
> > > some compilers might even optimize this indirection away.)
> >
> > Because then it doesn't do the Right Thing when ^foo appears anywhere
> > other than the CAR of a form.
> 
> Don't know what the Right Thing is for your ideas,

^foo should return the top-level binding of foo.  So, among other 
invariants, two occurrences of ^foo without an intervening assignment 
should always be eq to each other.
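
A minimal runnable sketch of that invariant (hypothetical code for illustration only, not Ron's actual implementation; the representation of a binding as a one-element list is an arbitrary choice):

```lisp
;; Hypothetical sketch: intern exactly one binding object per name,
;; so two reads of ^foo without an intervening assignment are EQ.
(defvar *top-level-bindings* (make-hash-table :test 'eq))

(defun top-level-binding (name)
  "Return the unique top-level binding cell for NAME, creating it on demand."
  (or (gethash name *top-level-bindings*)
      (setf (gethash name *top-level-bindings*) (list name))))

(eq (top-level-binding 'foo) (top-level-binding 'foo))  ; => T
```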


> > > (BTW, I'm looking forward reading about your ideas on first-class
> > > environments.  Will this also imply first-class modules, or am I
> > > dreaming here?)
> >
> > That depends on how you define "module".  On some reasonable
> > interpretations the words are synonyms.
> 
> I'd say: modules = interfaces + implementations.  "First-class modules"
> would then allow for, e.g., dynamic substitution of implementations for
> one interface.  Maybe also inheritance/extensibility.  But I'm still
> not sure what features a Lisp-friendly module system should have.
> Haven't done too much research either.  So I'll simply take a look at
> what you've done :)

Have a look at http://www.flownet.com/ron/locales.pdf to get an idea of 
what I'm up to.  (Note: locales are now called lexicons to avoid 
confusion with I18N.)

rg
From: Didier Verna
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <mux64jmy1lg.fsf@uzeb.lrde.epita.fr>
Pascal Costanza wrote:

> I mostly agree with your assessment. But note that a functional programming
> style is somewhat more convenient with a Lisp-1. It's easier to express
> things like (((f x) y) z) whereas in a Lisp-2 you would have to say (funcall
> (funcall (f x) y) z). So if you have a strong preference for functional
> programming (or would like to explore it more deeply) you might want to try
> out a Lisp-1 (i.e., Scheme).

        Maybe I'm missing something but I don't see what this (I mean the need
for funcall) has to do with being a Lisp-2 at all. Consider

(defun make-adder (n)
  (lambda (x) (+ n x)))

((make-adder 10) 3)


I never understood why Common Lisp doesn't like that. Knowing that this
expression must be a function call, we expect that the evaluation of the first
object in the list, (make-adder 10) returns a function, which it does. In
turn, (make-adder 10) being a function call, we expect that make-adder
evaluates to a function (that's the only place where there is a Lisp-2 thing),
which it does. What's wrong with that ?
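
For reference, what the standard requires here: the car of a compound form must be a symbol or a literal lambda expression, so the intermediate call needs FUNCALL in portable CL:

```lisp
(defun make-adder (n)
  (lambda (x) (+ n x)))

;; ((make-adder 10) 3)        ; not valid CL: the car is neither a
;;                            ; symbol nor a lambda expression
(funcall (make-adder 10) 3)   ; => 13
((lambda (x) (+ x 10)) 3)     ; => 13; a literal lambda form IS allowed
```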


-- 
Didier Verna, ······@lrde.epita.fr, http://www.lrde.epita.fr/~didier

EPITA / LRDE, 14-16 rue Voltaire   Tel.+33 (1) 44 08 01 85
94276 Le Kremlin-Bicêtre, France   Fax.+33 (1) 53 14 59 22   ······@xemacs.org
From: Pascal Costanza
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <4e59pgF1d61iuU1@individual.net>
Didier Verna wrote:
> Pascal Costanza wrote:
> 
>> I mostly agree with your assessment. But note that a functional programming
>> style is somewhat more convenient with a Lisp-1. It's easier to express
>> things like (((f x) y) z) whereas in a Lisp-2 you would have to say (funcall
>> (funcall (f x) y) z). So if you have a strong preference for functional
>> programming (or would like to explore it more deeply) you might want to try
>> out a Lisp-1 (i.e., Scheme).
> 
>         Maybe I'm missing something but I don't see what this (I mean the need
> for funcall) has to do with being a Lisp-2 at all. Consider
> 
> (defun make-adder (n)
>   (lambda (x) (+ n x)))
> 
> ((make-adder 10) 3)
> 
> 
> I never understood why Common-Lisp doesn't like that. Knowing that this
> expression must be a function call, we expect that the evaluation of the first
> object in the list, (make-adder 10) returns a function, which it does. In
> turn, (make-adder 10) being a function call, we expect that make-adder
> evaluates to a function (that's the only place where there is a Lisp-2 thing),
> which it does. What's wrong with that ?

((make-adder 10) 3) and (let ((x (make-adder 10))) (x 3)) would not be 
the same.

Pascal

-- 
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
From: Didier Verna
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <muxirnmwcwl.fsf@uzeb.lrde.epita.fr>
Pascal Costanza wrote:

>> What's wrong with that ?
>
> ((make-adder 10) 3) and (let ((x (make-adder 10))) (x 3)) would not be the
> same.

        I fail to see the point. Since you know you're in a Lisp-2, you know
(x 3) will need a functional value for x, which is not what let provides (you
have flet for that), so I'm not expecting those two expressions to be the same
at all. I just want to be able to write the first one and I don't see a good
reason why it's not possible.

-- 
Didier Verna, ······@lrde.epita.fr, http://www.lrde.epita.fr/~didier

EPITA / LRDE, 14-16 rue Voltaire   Tel.+33 (1) 44 08 01 85
94276 Le Kremlin-Bicêtre, France   Fax.+33 (1) 53 14 59 22   ······@xemacs.org
From: Pascal Costanza
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <4e62huF1d5oraU1@individual.net>
Didier Verna wrote:
> Pascal Costanza wrote:
> 
>>> What's wrong with that ?
>> ((make-adder 10) 3) and (let ((x (make-adder 10))) (x 3)) would not be the
>> same.
> 
>         I fail to see the point. Since you know you're in a Lisp-2, you know
> (x 3) will need a functional value for x, which is not what let provides (you
> have flet for that), so I'm not expecting those two expressions to be the same
> at all. I just want to be able to write the first one and I don't see a good
> reason why it's not possible.

It's not a serious issue, but it could be confusing, especially when 
refactoring your code. Consider taking the complement of a function:

(not ((complement f) ...)) <=> (funcall f ...)
(not (f ...)) <=> ((complement #'f) ...)

It seems to me that this would make Lisp-2 harder to explain without 
gaining any expressiveness.
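
The refactoring cost in concrete, portable CL: switching between a predicate and its complement forces #' and FUNCALL into the picture at the same time (SMALL-P is just an illustrative predicate):

```lisp
(defun small-p (n) (< n 10))

(not (small-p 42))                   ; => T
(funcall (complement #'small-p) 42)  ; => T, the same test, but now both
                                     ; #' and FUNCALL appear
```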


Pascal

-- 
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/
From: Ron Garret
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <rNOSPAMon-909874.11393131052006@news.gha.chartermi.net>
In article <···············@individual.net>,
 Pascal Costanza <··@p-cos.net> wrote:

> Didier Verna wrote:
> > Pascal Costanza wrote:
> > 
> >>> What's wrong with that ?
> >> ((make-adder 10) 3) and (let ((x (make-adder 10))) (x 3)) would not be the
> >> same.
> > 
> >         I fail to see the point. Since you know you're in a Lisp-2, you 
> >         know
> > (x 3) will need a functional value for x, which is not what let provides 
> > (you
> > have flet for that), so I'm not expecting those two expressions to be the 
> > same
> > at all. I just want to be able to write the first one and I don't see a 
> > good
> > reason why it's not possible.
> 
> It's not a serious issue, but it could be confusing, especially when 
> refactoring your code. Consider taking the complement of a function:
> 
> (not ((complement f) ...)) <=> (funcall f ...)
> (not (f ...)) <=> ((complement #'f) ...)

So what's the problem?

> It seems to me that this would make Lisp-2 harder to explain without 
> gaining any expressiveness.

Common Lisp is already plenty hard to explain.  I doubt this would make 
matters much worse than they already are.

And the problem with forcing ((...) ...) to be an error is not losing 
expressiveness, it's losing the ability to implement certain kinds of 
DSLs (like the new macro system I'm working on) in a straightforward 
manner.  I can hack my implementation to do the Right Thing with ((...) 
...) in two lines of code, and then I can implement the key feature of 
my macro system (the ability to refer directly to a top-level binding) 
in another two lines of code.  But to do it in portable CL requires many 
dozens of lines of very hacky code (because I have to shoehorn the 
binding into a symbol to trick CL into calling it).  It's very annoying.

rg
From: Burton Samograd
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <87r72aui27.fsf@gmail.com>
Ron Garret <·········@flownet.com> writes:

> And the problem with forcing ((...) ...) to be an error is not losing 
> expressiveness, it's losing the ability to implement certain kinds of 
> DSLs (like the new macro system I'm working on) in a straightforward 
> manner.  I can hack my implementation to do the Right Thing with ((...) 
> ...) in two lines of code, and then I can implement the key feature of 
> my macro system (the ability to refer directly to a top-level binding) 
> in another two lines of code.  But to do it in portable CL requires many 
> dozens of lines of very hacky code (because I have to shoehorn the 
> binding into a symbol to trick CL into calling it).  It's very annoying.

I'm glad I wasn't the only one that found this behaviour annoying.
The conciseness of ((..) ...) and not having to think about two
namespaces is much more freeing.  There are (somewhat contrived, IMO)
reasons for two namespaces, but I fail to see the benefits after
running right into the consequences at the start.

I like(d) Lisp because I thought it had untyped symbols; as I used CL I
found out it does not.  I *want* to be able to stuff a function
reference into any symbol, because that makes sense (especially given
my interest in metaprogramming); now there are exceptions that get in
the way of creativity, and that bothers me.

Yes, maybe I'll have to move to Scheme.  I like this camp though...

-- 
burton samograd						 kruhft .at. gmail
'programmed piano ep' now available : http://kruhft.boldlygoingnowhere.org
From: Ron Garret
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <rNOSPAMon-21ABC9.12242531052006@news.gha.chartermi.net>
In article <··············@gmail.com>,
 Burton Samograd <······@gmail.com> wrote:

> Ron Garret <·········@flownet.com> writes:
> 
> > And the problem with forcing ((...) ...) to be an error is not losing 
> > expressiveness, it's losing the ability to implement certain kinds of 
> > DSLs (like the new macro system I'm working on) in a straightforward 
> > manner.  I can hack my implementation to do the Right Thing with ((...) 
> > ...) in two lines of code, and then I can implement the key feature of 
> > my macro system (the ability to refer directly to a top-level binding) 
> > in another two lines of code.  But to do it in portable CL requires many 
> > dozens of lines of very hacky code (because I have to shoehorn the 
> > binding into a symbol to trick CL into calling it).  It's very annoying.
> 
> I'm glad I wasn't the only one that found this behaviour annoying.
> The conciseness of ((..)...) and not having to think about two name
> spaces is much more freeing.  There are (somewhat contrived IMO)
> reasons for two name spaces, but I fail to see the benefits after
> running right into the consequences at the start.
> 
> I like(d) lisp because it has untyped symbols; as I used CL I found
> out it did not.  I *want* to be able to stuff a function reference
> into any symbol, because it makes sense (especially with my
> interest in metaprogramming), now there are exceptions that get in the
> way of creativity that bother me. 
> 
> Yes, maybe I'll have to move to Scheme.  I like this camp though...

Scheme has its own set of annoyances.  On the whole I find CL less 
annoying, but ((...) ...) is particularly irksome because it's a 
gratuitous restriction that is impossible to fix at the user level.

rg
From: Burton Samograd
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <87mzcyugbt.fsf@gmail.com>
Ron Garret <·········@flownet.com> writes:

> In article <··············@gmail.com>,
>  Burton Samograd <······@gmail.com> wrote:
>> Yes, maybe I'll have to move to Scheme.  I like this camp though...
>
> Scheme has its own set of annoyances.  On the whole I find CL less 
> annoying, but ((...) ...) is particularly irksome because it's a 
> gratuitous restriction that is impossible to fix at the user level.

Do the (lisp-1 ...) macros that were previously posted help with
that?  I admit I never tried them.

-- 
burton samograd						 kruhft .at. gmail
'programmed piano ep' now available : http://kruhft.boldlygoingnowhere.org
From: Ron Garret
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <rNOSPAMon-D6CD22.14200631052006@news.gha.chartermi.net>
In article <··············@gmail.com>,
 Burton Samograd <······@gmail.com> wrote:

> Ron Garret <·········@flownet.com> writes:
> 
> > In article <··············@gmail.com>,
> >  Burton Samograd <······@gmail.com> wrote:
> >> Yes, maybe I'll have to move to Scheme.  I like this camp though...
> >
> > Scheme has its own set of annoyances.  On the whole I find CL less 
> > annoying, but ((...) ...) is particularly irksome because it's a 
> > gratuitous restriction that is impossible to fix at the user level.
> 
> Does the (lisp-1 ...) macros that were previously posted help with
> that?  I admit, I never tried them.

Probably not, but it depends on what you mean by "the (lisp-1 ...) 
macros that were previously posted" and "that".  For ((...) ...) there 
is no help to be had because the standard specifies that this must be an 
error unless the first list is a lambda form.  (BTW, it just occurred to 
me that the fact that lambda forms are allowed blows away Pascal's 
argument against ((...) ...) because ((lambda ...) ...) and (let ((x 
(lambda ...))) (x ...)) are both allowed but do not mean the same thing.  
So CL already has that problem.)
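
The asymmetry spelled out as runnable CL:

```lisp
((lambda (x) (* x x)) 4)     ; => 16; lambda forms are special-cased

;; (let ((x (lambda (y) (* y y))))
;;   (x 4))                  ; error: X has no function binding

(let ((x (lambda (y) (* y y))))
  (funcall x 4))             ; => 16, the portable spelling
```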

rg
From: Marco Baringer
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <m2mzcx3mxq.fsf@bese.it>
Ron Garret <·········@flownet.com> writes:

> In article <··············@gmail.com>,
>  Burton Samograd <······@gmail.com> wrote:
>
>> Ron Garret <·········@flownet.com> writes:
>> 
>> > In article <··············@gmail.com>,
>> >  Burton Samograd <······@gmail.com> wrote:
>> >> Yes, maybe I'll have to move to Scheme.  I like this camp though...
>> >
>> > Scheme has its own set of annoyances.  On the whole I find CL less 
>> > annoying, but ((...) ...) is particularly irksome because it's a 
>> > gratuitous restriction that is impossible to fix at the user level.
>> 
>> Does the (lisp-1 ...) macros that were previously posted help with
>> that?  I admit, I never tried them.
>
> Probably not, but it depends on what you mean by "the (lisp-1 ...) 
> macros that were previously posted" and "that".

if by "the lisp-1 macros" he meant the ones Hoan Ton-That wrote and I
mentioned in this thread, and "that" means allowing the car of a form
to be any form which returns a function, then the answer is yes.
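
A naive sketch of that rewriting idea (this is not Hoan Ton-That's actual code; a real version needs a full code walker to descend into LET, IF, and other special forms):

```lisp
;; Toy illustration only: rewrite ((f x) y) into (funcall (f x) y)
;; at the top of the form, leaving literal lambda forms alone.
(defmacro lisp-1 (form)
  (if (and (consp form)
           (consp (car form))
           (not (eq (caar form) 'lambda)))
      `(funcall ,(car form) ,@(cdr form))
      form))

;; (lisp-1 ((make-adder 10) 3)) macroexpands to
;; (FUNCALL (MAKE-ADDER 10) 3)
```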

-- 
-Marco
Ring the bells that still can ring.
Forget the perfect offering.
There is a crack in everything.
That's how the light gets in.
	-Leonard Cohen
From: Burton Samograd
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <87bqtdv74x.fsf@gmail.com>
Marco Baringer <··@bese.it> writes:

> Ron Garret <·········@flownet.com> writes:
>
>> In article <··············@gmail.com>,
>>  Burton Samograd <······@gmail.com> wrote:
>>
>>> Ron Garret <·········@flownet.com> writes:
>>> 
>>> > In article <··············@gmail.com>,
>>> >  Burton Samograd <······@gmail.com> wrote:
>>> >> Yes, maybe I'll have to move to Scheme.  I like this camp though...
>>> >
>>> > Scheme has its own set of annoyances.  On the whole I find CL less 
>>> > annoying, but ((...) ...) is particularly irksome because it's a 
>>> > gratuitous restriction that is impossible to fix at the user level.
>>> 
>>> Does the (lisp-1 ...) macros that were previously posted help with
>>> that?  I admit, I never tried them.
>>
>> Probably not, but it depends on what you mean by "the (lisp-1 ...) 
>> macros that were previously posted" and "that".
>
> if by "the lisp-1 macros" he meant the ones Hoan Ton-That wrote and i
> mentioned in this thread, and "that" means allowing the car of a form
> to be any form which returns a function, then the answer is yes.

Sorry, I was in a rush and not very specific.  You are correct in what
I was referring to.

-- 
burton samograd						 kruhft .at. gmail
'programmed piano ep' now available : http://kruhft.boldlygoingnowhere.org
From: Ron Garret
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <rNOSPAMon-E92F2D.15511731052006@news.gha.chartermi.net>
In article <···············@individual.net>,
 Pascal Costanza <··@p-cos.net> wrote:

> Didier Verna wrote:
> > Pascal Costanza wrote:
> > 
> >> I mostly agree with your assessment. But note that a functional 
> >> programming
> >> style is somewhat more convenient with a Lisp-1. It's easier to express
> >> things like (((f x) y) z) whereas in a Lisp-2 you would have to say 
> >> (funcall
> >> (funcall (f x) y) z). So if you have a strong preference for functional
> >> programming (or would like to explore it more deeply) you might want to 
> >> try
> >> out a Lisp-1 (i.e., Scheme).
> > 
> >         Maybe I'm missing something but I don't see what this (I mean the 
> >         need
> > for funcall) has to do with being a Lisp-2 at all. Consider
> > 
> > (defun make-adder (n)
> >   (lambda (x) (+ n x)))
> > 
> > ((make-adder 10) 3)
> > 
> > 
> > I never understood why Common-Lisp doesn't like that. Knowing that this
> > expression must be a function call, we expect that the evaluation of the 
> > first
> > object in the list, (make-adder 10) returns a function, which it does. In
> > turn, (make-adder 10) being a function call, we expect that make-adder
> > evaluates to a function (that's the only place where there is a Lisp-2 
> > thing),
> > which it does. What's wrong with that ?
> 
> ((make-adder 10) 3) and (let ((x (make-adder 10))) (x 3)) would not be 
> the same.

Kind of like ((lambda ...) ...) and (let ((x (lambda ...))) (x ...)) 
aren't the same.  (I pointed this out in another branch of this thread.  
I just wanted to put the response here as well to complete the 
historical record.)

rg
From: Peter Seibel
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <m2wtc1ixzk.fsf@gigamonkeys.com>
Ron Garret <·········@flownet.com> writes:

> In article <···············@individual.net>,
>  Pascal Costanza <··@p-cos.net> wrote:
>
>> Didier Verna wrote:
>> > Pascal Costanza wrote:
>> > 
>> >> I mostly agree with your assessment. But note that a functional 
>> >> programming
>> >> style is somewhat more convenient with a Lisp-1. It's easier to express
>> >> things like (((f x) y) z) whereas in a Lisp-2 you would have to say 
>> >> (funcall
>> >> (funcall (f x) y) z). So if you have a strong preference for functional
>> >> programming (or would like to explore it more deeply) you might want to 
>> >> try
>> >> out a Lisp-1 (i.e., Scheme).
>> > 
>> >         Maybe I'm missing something but I don't see what this (I mean the 
>> >         need
>> > for funcall) has to do with being a Lisp-2 at all. Consider
>> > 
>> > (defun make-adder (n)
>> >   (lambda (x) (+ n x)))
>> > 
>> > ((make-adder 10) 3)
>> > 
>> > 
>> > I never understood why Common-Lisp doesn't like that. Knowing that this
>> > expression must be a function call, we expect that the evaluation of the 
>> > first
>> > object in the list, (make-adder 10) returns a function, which it does. In
>> > turn, (make-adder 10) being a function call, we expect that make-adder
>> > evaluates to a function (that's the only place where there is a Lisp-2 
>> > thing),
>> > which it does. What's wrong with that ?
>> 
>> ((make-adder 10) 3) and (let ((x (make-adder 10))) (x 3)) would not be 
>> the same.
>
> Kind of like ((lambda ...) ...) and (let ((x (lambda ...))) (x ...)) 
> aren't the same.  (I pointed this out in another branch of this thread.  
> I just wanted to put the response here as well to complete the 
> historical record.)

Of course that's arguably an inconsistency introduced by the punny
LAMBDA macro which could easily have been left out of the language if
not for the desire to be able to write a ISLISP compatibility layer in
portable Common Lisp. As long as we're completing the historical
record.

-Peter

-- 
Peter Seibel           * ·····@gigamonkeys.com
Gigamonkeys Consulting * http://www.gigamonkeys.com/
Practical Common Lisp  * http://www.gigamonkeys.com/book/
From: ········@gmail.com
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <1149355905.305075.227150@c74g2000cwc.googlegroups.com>
···········@gmail.com wrote:
> What are the relative merits of a Lisp-1 (one namespace for both
> functions and variables) versus a Lisp-2 (one namespace for functions,
> and one for variables)? I can think of the following:
>
> For Lisp-1:
>  * No #'
>  * It's always clear what a name refers to; can't do potentially
> confusing things like (foo foo)
>  * Somewhat simplified language description, somewhat simplified
> implementation
>
> For Lisp-2:
>  * Can't accidentally shadow function names (especially important for
> safe non-hygienic macros)
>  * The size of the namespace is doubled, because you can reuse each
> name twice
>
> Looking at the above lists, I would tentatively choose Lisp-2, since
> the arguments for Lisp-1 are mostly aesthetic, and the arguments for
> Lisp-2 are quite practical. But, what do you think? Are there any items
> you would add to the lists?


To me the sharpquote is a big garish road sign pointing out that the
behavior of the code is about to change (i.e., the code inside a lambda
definition is evaluated every time the function is called, not right
then), or that there's going to be some FUNCALL tomfoolery.

thus it makes the difference between

    (let ((x (+ 4 y)))
       #'(lambda () x))

and

    #'(lambda () (+ 4 y))

or

    (foo arg arg arg)

and

    (foo #'fn arg arg)

much more visually intuitive
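
The timing point above, made concrete (with *Y* special so the rebinding is visible):

```lisp
(defvar *y* 1)

(defvar *closure-a*
  (let ((x (+ 4 *y*)))     ; (+ 4 *y*) evaluated NOW, at creation time
    (lambda () x)))

(defvar *closure-b*
  (lambda () (+ 4 *y*)))   ; (+ 4 *y*) evaluated at each call

(setf *y* 100)
(funcall *closure-a*)      ; => 5, captured the old value
(funcall *closure-b*)      ; => 104, sees the new value
```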

Nick
From: Kent M Pitman
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <ubqt9syjm.fsf@nhplace.com>
···········@gmail.com writes:

> What are the relative merits of a Lisp-1 (one namespace for both
> functions and variables) versus a Lisp-2 (one namespace for functions,
> and one for variables)? I can think of the following:
> 
> For Lisp-1:
>  * No #'
>  * It's always clear what a name refers to; can't do potentially
> confusing things like (foo foo)
>  * Somewhat simplified language description, somewhat simplified
> implementation
> 
> For Lisp-2:
>  * Can't accidentally shadow function names (especially important for
> safe non-hygienic macros)
>  * The size of the namespace is doubled, because you can reuse each
> name twice
> 
> Looking at the above lists, I would tentatively choose Lisp-2, since
> the arguments for Lisp-1 are mostly aesthetic, and the arguments for
> Lisp-2 are quite practical. But, what do you think? Are there any items
> you would add to the lists?

For Lisp-1                For Lisp-2

 * No #'                  * #'

That is, No #' is a goal of people who want no #'. But some of us WANT
#', so make sure to have "#'" as a positive on the other camp.  It's easy
to have this discussion go awry if you assume everyone is trading against
the same sets of goods and bads.

Consider one extreme end of the abortion debate, disregarding the other:

For Pro-Life:           For Pro-Choice:

 * No babies killed.    * Babies killed.

This does not fairly reflect the debate.  Because from the pro-Choice side,
the issues are not viewed symmetrically.  That is, from the extreme end of
the other side, consider:

For Anti-Abortion:                For Pro-Choice:

 * Mothers raped and enslaved.    * Mothers not raped and enslaved.

This, of course, doesn't capture things either.  Properly placed, you have
to assume that some people are simply perceiving and valuing things 
differently such that a fair analysis shows

For Pro-Life:

 * People who perceive that this issue is all about life
   find that avoiding abortion avoids messy moral issues for them.

For Pro-Choice:

 * People who perceive that this issue is only somewhat about life,
   since for some of them life does not begin "at an instant" or else
   the soul does not "enter the body" at a known time or "a mother's
   already invested life is to be weighed differently than that of a
   few cells which has no investment" or ...

My point here is to give you a framework for understanding a debate in
which the sides simply do not have the same goals, frames of reference,
etc.  It is very difficult to reach common ground, and it is even more 
difficult if you assume that there must be a single solution.

To get back to Lisp1/Lisp2, the thing I find most frustrating is not
the relatively simple issue that there are two different worlds here,
but that a lot of the debate often takes on a meta-flavor of "one of
these worlds must be shown to be uniquely Right and the other uniquely
Wrong".  I also find it somewhat amusing and somewhat sad that in my
experience, the people who line up to push Lisp1 often do so not just
because they like the list of features Lisp1 has, but because they
often seem to project a sensibility about the world that by force of
will they can make it more simple and effectively rule the issues of
the more complex world to be "out of bound" or "improper".  So it becomes
a meta-debate of the form Language1/Language2, where in one world you
have only Scheme but in the other world you admit that both Scheme and
CL have a legitimate place, and it seems so much simpler to say that there
should be only Scheme because the option of living with both Scheme and
CL seems impossibly awful.  On the other hand, the advocates of Lisp2 seem
to so much more often say "I could tolerate sometimes using Scheme if I
didn't have to think I had to use it all the time."  So there's the same
kind of odd bias as I often find in the abortion debate, where if you listen
carefully, it's as if the pro-life community thinks the pro-choice people
are saying "always abortion" when they're saying "think about it".  Choice
is about tolerance, not about pushing one's way.  Lisp2 is about choice,
not about pushing one's way.

I'm actually rather a fan of Lisp Omega, I just haven't seen it worked
through. I tried and failed with my meager available time in my
http://www.nhplace.com/kent/Half-Baked area.

And so I come back to the notion that the reason CL has #' is not just
a "patch" where it couldn't figure out how not to, but many of us like
having a marker on something that says "I'm using this name as a function.
I don't do that very often. Take note."  To do without that would be to
risk confusion.  So we find it comforting.   It's almost uniformly assumed by
the Lisp1 community that all people would find #' offensive, but you can
count me as someone who does not.

I can understand the desire to not have to use it, but that's really an
orthogonal point.  CL could have been designed not to require it, and it's
easy enough to write a binding form that sets up the situation on a by-need
basis, it just can't be done to make binding "generally" do that.
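For concreteness, here is a minimal sketch of such a by-need binding
form; the macro name and design are mine, purely illustrative, and not
standard CL:

```lisp
;; Hypothetical WITH-FUNCTION macro: binds VALUE in *both* namespaces,
;; so inside BODY the name can be called directly, (f ...), and also
;; passed bare, without #' or FUNCALL.
(defmacro with-function ((name value) &body body)
  (let ((v (gensym)))
    `(let* ((,v ,value)
            (,name ,v))             ; value namespace: pass NAME bare
       (flet ((,name (&rest args)   ; function namespace: call (NAME ...)
                (apply ,v args)))
         ,@body))))

;; (with-function (f #'1+)
;;   (list (f 3) (mapcar f '(1 2))))
;; => (4 (2 3))
```

That is, the "no #'" convenience can be had locally, where wanted,
without making binding do it globally.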

Going back then to your chart:

For Lisp-1:
>  * It's always clear what a name refers to; can't do potentially
> confusing things like (foo foo)

I'd add:

For Lisp-2:
>  * It's always clear what a name refers to; can't do potentially
> confusing things like (foo foo)

That is, both of these things are always clear.  It's just a question
of what they mean.  One means "call the function in variable foo on itself". 
The other means "call the function in fname foo on the object in variable 
foo".  I don't see that either of those is unclear.

You wrote:

> For Lisp-1:
>
>  * Somewhat simplified language description, somewhat simplified
> implementation

I'd add:

> For Lisp-2:
>
> * Somewhat simplified programs.

That is, every time you make a language do more, you make programs do less.

Then again, Scheme makes the language do more with macro hygiene. That is
not a simplification of the implementation, and it's probably not even a
simplification of the language.

It gets quite complicated to talk about simple and complex because when
things have consequences, and consequences are able to combine into the
infinities, counting consequences and assessing complexity is often more
complex than people who want a simple, fun debate are willing to do.

You omit another point entirely:  which language is "more isomorphic to
how people think".  I claim (without proof, alas, but at least I'm being
up front about that) that a language that is "more like people think" is
easier to reason about.  The Lisp1 people often assume "simpler" is what
makes it easier to reason about, but I'm not sure that's the same, and I'm
not sure it goes without saying.

I make the bold (albeit scientifically unsupported) claim the Lisp2
model is more isomorphic to how people think.  I base my claim on the
observable fact that there are no extant human languages in which each
word has only one meaning, suggesting to me that humans have tons of
wetware for dispatching for meaning based on context, which wetware
would be going to waste if you were to only have one assigned meaning
at any given time.

The claim that is often made in reverse by the Lisp1 community is that
it's easier to teach the Lisp1 way.  That requires proof, though they
rarely assume so and rarely offer any; humans often know immediately
about the notion of context-dependent meaning and are surprised when
they can't use it to get out of a bind (pardon pun) they've made for
themselves by global naming in a Lisp1.  The entire biz of macro
hygiene may be regarded as elegant by the Scheme community, but it is
rarely brought up in discussions of the "decreased language size" you
supposedly get by not having a Lisp2.  Yet packages plus Lisp2 are the
reason CL has no semantic problem of the kind that made Scheme
unusable absent hygiene: the Lisp1 design made things too cluttered.

You write:
> For Lisp-2:
>
>  * The size of the namespace is doubled, because you can reuse each
> name twice

So double an infinity is bigger, eh?  Hmmm.  I'm not sure I really buy
this one.  I think you're better off making an argument in terms of
what I'll call "intellectual real estate".  I've often seen the law
make a distinction between "real estate" and "ip" based, as far as I
can tell, on the notion that land is "real" and "finite" and therefore
has intrinsic value in a way that a copyable or destroyable commodity
does not.  But if you examine intellectual property more closely, you
see that IP is not devoid of real, exhaustible commodities.  Short
names are scarce in any language or naming system, so the desirable
names will always go for a better price.  This explains why short
names sell for more in the domain name system on the internet.
Of course, there's no end of ability to make new namespace systems,
but even those will eventually fall victim to a naming problem, and at
some point the length of the naming system will be too long and you'll
need meta-naming systems.  But within any one of these, there's 
competition for the short names or for other scarce resources ("screen
real estate" has long been cited, and is a kind of specific instance
of this).  So the issue isn't about the number of names, but about the
number of names you'd reasonably be willing to use, and that number, while
people might disagree on it, is something most people would agree is 
quite finite.  So I will say that the number of available "good names"
is doubled, which is much more interesting than the whole namespace
being doubled.

Then again, mixed alphabetic case could do a lot more than this issue ever
will for adding names, so maybe it's a point without a difference.

- - - -

I personally like Lisp-2 because it reflects how I really think about
names.  If I know a person named John, I never get him confused with
where to go to the bathroom. "the john" and "John" are just different
and their grammatical use is enough to make sure there is no confusion.
The idea that a program should be randomly insisting that in all contexts
I mean the same thing is just laughable to me.

Do I mind someone inventing a language that does this?  No, I'm able to 
tolerate a lot.  Tolerance in the world makes room for a great many
fun things.  Intolerance does not.  My life is enriched by others doing
things differently, even on their own terms that surprise me.  My life is
not enriched by people inventing reasons to phase me out, nor do I suspect
that theirs is thusly enriched either.  I therefore use Scheme on its own
terms, but I expect CL to behave on its own terms.  I continue to believe
that a language in the Lisp Omega style could bridge the gap, but it's
not something that's been shown.  I think it could be a fun thesis for
someone...

Incidentally, if you read Gabriel's/my Lisp1/Lisp2 paper, you'll see there
are also some speed issues you didn't address in your discussion.  A Lisp-2
is capable of being faster without a lot of additional proof technology.
It can make some better assumptions about why people are using variables
than an all-purpose one-namespace system can.  A Lisp-1 can make up for that
in most cases by clever compilers, but that's far from "for free".
From: Pascal Bourguignon
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <87r72569t6.fsf@thalassa.informatimago.com>
Kent M Pitman <······@nhplace.com> writes:
> ···········@gmail.com writes:
>
>> What are the relative merits of a Lisp-1 (one namespace for both
>> functions and variable) versus a Lisp-2 (one namespace for functions,
>> and one for variables)? I can think of the following:
>> 
>> For Lisp-1:
>>  * No #'
>>  * It's always clear what a name refers to; can't do potentially
>> confusing things like (foo foo)
>>  * Somewhat simplified language description, somewhat simplified
>> implementation
>> 
>> For Lisp-2:
>>  * Can't accidentally shadow function names (especially important for
>> safe non-hygienic macros)
>>  * The size of the namespace is doubled, because you can reuse each
>> name twice
>> 
>> Looking at the above lists, I would tentatively choose Lisp-2, since
>> the arguments for Lisp-1 are mostly aesthetic, and the arguments for
>> Lisp-2 are quite practical. But, what do you think? Are there any items
>> you would add to the lists?
>
> For Lisp-1                For Lisp-2
>
>  * No #'                  * #'
>
> That is, No #' is a goal of people who want no #'. But some of us WANT
> #', so make sure to have "#'" as a positive on the other camp.  It's easy
> to have this discussion go awry if you assume everyone is trading against
> the same sets of goods and bads.

Yes, but if the objective is to have (function ...), you can write:

   (define (function x) (if (procedure? x) x (error "Not a procedure!")))

in scheme too...

 
(define (deriv f x delta-x)
   (/ (- ((function f) (+ x delta-x)) ((function f) x))
      delta-x))

And since I find funcall slightly more readable:

(define (funcall f . args) (apply f args))

(define (deriv f x delta-x)
   (/ (- (funcall (function f) (+ x delta-x)) (funcall (function f) x))
      delta-x))

So this is mostly orthogonal. 
(We can also have macros allowing us to write "lisp-1" like in Common Lisp).


So, is the question: "What's the relative merit of one notation over another?"
or: "What's the relative merit of one implementation over another?"?
As you said, compilers may be able to do more or less well with one
or another solution.

-- 
__Pascal Bourguignon__                     http://www.informatimago.com/

NOTE: The most fundamental particles in this product are held
together by a "gluing" force about which little is currently known
and whose adhesive power can therefore not be permanently
guaranteed.
From: levy
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <1149452884.397885.61010@h76g2000cwa.googlegroups.com>
I don't think I can add too much to this Lisp-1 vs. Lisp-2 thing, but
still, here is my 2 cents.

What happens with the whole issue if we forget about storing programs
in text files?

I think the whole linking (binding) and editing becomes quite different
and thus the whole question may even become superfluous. Also, what is
displayed on the screen becomes a policy question or rather the user's
taste (even on the same project). Of course this idea introduces a
number of difficult other questions, but I would be very glad to see
such a lisp system.

The program is a complex data structure linked (bound) together and
displayed with a projection based on user preferences. Then I think
editing becomes the real question: how to store references, what
information shall we put there, do we allow multiple names (per
language/user/long-short/etc.), how do we find the editing context to
bind names?

Clearly these are not new ideas, but maybe show the question from a
different point of view.

levy
From: Kent M Pitman
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <uy7wcp27e.fsf@nhplace.com>
"levy" <················@gmail.com> writes:

> I don't think I can add too much to this Lisp-1 vs. Lisp-2 thing, but
> still, here is my 2 cents.
> 
> What happens with the whole issue if we forget about storing programs
> in text files?
> 
> I think the whole linking (binding) and editing becomes quite different
> and thus the whole question may even become superfluous. Also, what is
> displayed on the screen becomes a policy question or rather the user's
> taste (even on the same project). Of course this idea introduces a
> number of difficult other questions, but I would be very glad to see
> such a lisp system.
> 
> The program is a complex data structure linked (bound) together and
> displayed with a projection based on user preferences. Then I think
> editing becomes the real question: how to store references, what
> information shall we put there, do we allow multiple names (per
> language/user/long-short/etc.), how do we find the editing context to
> bind names?
> 
> Clearly these are not new ideas, but maybe show the question from a
> different point of view.

It's certainly useful to go ahead and think along the lines that you're 
thinking.  In a sense, the areas of "automatic programming",
"artificial intelligence", "aspect-oriented programming", and 
"layered languages" all are pursuing this same notion from different 
angles.

But I think the answer is that none of this obviates the importance of
language syntax and semantics unless you think that by adding these
other layers no one will ever look back at the low-level again.

For example, there are issues of good coding that are largely ignored
in the issuing of assembly code.  So if you think Lisp will be relegated
to assembly code, then perhaps it doesn't matter.

But the more likely metaphor would be something like a Linux desktop vs
the underlying file-based interface.  Some users continue to modify the
/etc/ directory manually while others use automatic tools that maintain
the editing.  A consequence of this is that although there are layered 
tools, the underlying structures must be kept intelligible and well-ordered
to humans, and consequently the semantics still matters.  I suspect that
while a lot of interesting tools will get layered onto Lisp as time goes
on, it will still be necessary to look inside.

But that's just my personal guess.  It could also go hybrid.  There
are some areas of electronics where people still "look at" resistors
and transistors and whatnot, such that the layout matters, while there
are others that are entirely managed by computer design programs and
really not laid out in a way that could be thought of as "appropriate
for people" since optimizing for fastest-execution-speed is more
important than for easiest-debugging or fastest-human-reading or
anything like that.
From: levy
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <1149494258.277121.205880@g10g2000cwb.googlegroups.com>
Kent M Pitman wrote:

> But I think the answer is that none of this obviates the importance of
> language syntax and semantics unless you think that by adding these
> other layers no one will ever look back at the low-level again.
I believe syntax becomes a matter of taste (preferences, policy), but
clearly semantics will not.

As for the lower layers, one can always look at the assembly code by
switching to a different projection which may not even get materialized
until the code is actually run. Clearly there can be another projection
that shows stuff in the current syntax for backward compatibility.

Also one can look at compilation at any level as a projection and any
intermediate result can be shown to the user. If programs are stored in
structured form then it becomes easier to annotate information for
lower levels which can be hidden from various other views.

I think the text file approach is very limited: there is one widely
used projection (almost the identity transformation) and it's extremely
difficult to create new transformations and annotate information.

levy
From: Kent M Pitman
Subject: Re: Relative merits of Lisp-1 vs. Lisp-2?
Date: 
Message-ID: <umzcrn583.fsf@nhplace.com>
"levy" <················@gmail.com> writes:

> Kent M Pitman wrote:
> 
> > But I think the answer is that none of this obviates the importance of
> > language syntax and semantics unless you think that by adding these
> > other layers no one will ever look back at the low-level again.
>
> I believe syntax becomes a matter of taste (preferences, policy), but
> clearly semantics will not.

But semantics is really an aesthetic in many ways, too, unless it's an
unpredictable semantics.  There are things with hugely unaesthetic semantics 
for direct programming (TeX and Java fill that bill for me) yet they make
great assembly languages (i.e., targets of back-end code for a higher level
language) because they are highly predictable for their respective tasks.

Also, to the specific point at hand, if you know a particular feature is 
weak or even just not what you want, you can program around it.

There are not too few names in a Lisp1.  There is an infinity of
names, just as in Lisp2.  With appropriate name mangling (which is how
compilers often choose to implement namespacing anyway), you can
project two namespaces onto one.  With appropriate 
idiomatic uses in the compiler, the issue of ((foo...) ...) never comes
up because you can just write (funcall (funcall ...) ...) and paper it
over at the level above.  Syntax, you might say, but at some level what
isn't?  At some level, there's just a Turing Machine + syntax.
You can "work around" a lack of let bindings by clever renaming, which is
sort of what the lambda calculus does when it does beta reduction.
You can work around a lack of certain kinds of data structures by reshaping
other data flow paths to contain more data (the would-be contents of a
struct) or indexes of data stored elsewhere (as call-by-reference was emulated
in both my Fortran->Lisp translator for Macsyma and I believe also the
one Symbolics had as a product).  
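The mangling idea can be sketched in a few lines; the naming
convention here is purely illustrative:

```lisp
;; Project a "function cell" onto a single namespace by deriving a
;; second symbol per name.  A Lisp-2-on-Lisp-1 translator (or the
;; reverse) could rewrite (foo x) to reference FOO/FUNCTION while a
;; bare foo references FOO itself, keeping the two worlds apart
;; inside one symbol table.
(defun function-cell-name (symbol)
  (intern (concatenate 'string (symbol-name symbol) "/FUNCTION")
          (symbol-package symbol)))

;; (function-cell-name 'foo) => FOO/FUNCTION
```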

In the end, the only "semantics" is that of the machine and the rest is
just data.  And even the machine can be emulated, so that "semantics" can
be handwaved away, too, layer after layer.

I used to think semantics and syntax were different, but at some point
I realized it was just a matter of semantics... or is it syntax?

> As for the lower layers, one can always look at the assembly code by
> switching to a different projection which may not even get materialized
> until the code is actually run. Clearly there can be another projection
> that shows stuff in the current syntax for backward compatibility.
> 
> Also one can look at compilation at any level as a projection and any
> intermediate result can be shown to the user. If programs are stored in
> structured form then it becomes easier to annotate information for
> lower levels which can be hidden from various other views.

I've answered this somewhat above.  I agree with pieces of this but 
have a different take on some of it, and it's hard to tease it out in the
time I had this morning.
 
> I think the text file approach is very limited, there is one widely
> used projection (almost identity transformation) and it's extremely
> difficult to create new transformations and annotate information.

On this, I agree the text file is very limited.  And I agree it's useful
to have other approaches.  But I don't agree that any given other approach
gives fundamentally more power.  What gives the power, I think, is the
ability to choose.

I've sometimes said that the key to intelligence is not the choice of 
representation but the ability to choose representation dynamically in an
appropriate way.

On another day, we can talk about whether there's any difference between
dynamic and static. (The argument may go something like the above for syntax
and semantics.)