From: Simon H.
Subject: Why seperate function namespaces?
Date: 
Message-ID: <e9904ec5.0304300245.402055bf@posting.google.com>
Of all the functional languages I've used (Scheme, OCaml, etc) I
notice that Lisp is the only one with a seperate namespace for
functions.  It just recently struck me that it seems to be a rather
odd and counter-intuitive way of doing things, especially since I
can't think of a single other language that does it, however I assume
there's some reason for it...  Or at least an excuse.  ;-)

Pax,
S

From: Kaz Kylheku
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <cf333042.0304300932.782a5b46@posting.google.com>
······@mail.com (Simon H.) wrote in message news:<····························@posting.google.com>...
> Of all the functional languages I've used (Scheme, OCaml, etc) I
> notice that Lisp is the only one with a seperate namespace for
> functions.

I'm going to assume that you are not a troll, and point you to this:

  http://www.c2.com/cgi/wiki?SingleNamespaceLisp
  http://www.dreamsongs.com/Separation.html
From: Michael J. Ferrador
Subject: LISP-2 (Function namespace) example for CMUCL prompt
Date: 
Message-ID: <3EB02FA9.DAE1084B@orn.com>
While looking at my init/rc (.cmucl-init) file and reading some Lisp-2,
symbol-properties stuff, I came up with the code below.

But since it's on CliKi, http://www.cliki.net/CMUCL%20Hints,
as well as http://www.cons.org/cmucl/doc/prompt.html,
and I'm new to CL, I thought I would submit it to the
critical flame of C.L.L



(in-package :common-lisp)

(defvar *last-package* nil "cache previous package")
(defvar my-prompt "value of package nick or name + >")   ; was *cached-prompt*

(defun my-prompt ()
  (unless (eq *last-package* *package*)
    (setf my-prompt                                      ; was *cached-prompt*
          (concatenate 'string (or (first (package-nicknames *package*))
                                   (package-name *package*))
                       "> "))
    (setf *last-package* *package*))
  my-prompt)                                             ; was *cached-prompt*

(setf *prompt* #'my-prompt)



- It may seem a trite example just to save 1 symbol, at the possible expense
of clarity. But now I see the spreadsheet-like value OR function distinction,
and how you can have BOTH in a Lisp-2 (and more namespaces still in CL).
From: Adam Warner
Subject: Re: Why separate function namespaces?
Date: 
Message-ID: <pan.2003.05.01.02.30.23.117517@consulting.net.nz>
Hi Kaz Kylheku,

>   http://www.c2.com/cgi/wiki?SingleNamespaceLisp
>   http://www.dreamsongs.com/Separation.html

Wow, the wiki link is particularly impressive. I've never seen/grokked
funcall used with quoted symbols to indirect upon a function.
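
Here's a minimal sketch of what I mean (FOO is just a placeholder name,
not from the wiki page): FUNCALL with a quoted symbol looks up the
symbol's global function binding at call time, whereas #'FOO captures
the current function object:

(defun foo (n) (* n n))

(defvar *indirect* 'foo)   ; a symbol, looked up at each call
(defvar *direct*   #'foo)  ; the function object itself

(funcall *indirect* 3)     ; => 9
(funcall *direct* 3)       ; => 9

(defun foo (n) (* n n n))  ; redefine FOO

(funcall *indirect* 3)     ; => 27, sees the new global definition
(funcall *direct* 3)       ; => 9, still the old function object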

I've just been playing with redefining local functions and I managed to
come up with this code that seems legal (and could be wrapped in a macro
to avoid use of funcall within the visible body of the code):

;;can't redefine local functions?
(labels ((foo (n) (* n n))
	 (foo-foo (n) (* (foo n) (foo n))))
  (print (foo 2))
  (print (foo-foo 2)))

;;try a different approach
(let* ((foo (lambda (n) (* n n)))
       (foo-foo (lambda (n) (* (funcall foo n) (funcall foo n)))))
  (flet ((foo (n) (funcall foo n))
	 (foo-foo (n) (funcall foo-foo n)))
    (print (foo 2))
    (print (foo-foo 2))
    (setf foo (lambda (n) (* n n n)))
    (print (foo 2)) ;;expecting 8
    (print (foo-foo 2)))) ;;expecting 64

CMUCL 18 January 2003 build 4523 can't compile it:
 
; Python version 1.0, VM version Intel x86 on 01 MAY 03 02:24:34 pm.
; Compiling: /home/adam/t/c.lisp 01 MAY 03 02:20:12 pm

; Compiling labels ((foo # #) (foo-foo # #)): 
; Compiling let* ((foo #) (foo-foo #)): 


Error in function common-lisp::assert-error:
   The assertion (eq c::env
                     (c::lambda-environment
                      (c::lambda-var-home c::thing))) failed.

Restarts:
  0: [continue] Retry assertion.
  1: [abort   ] Return to Top-Level.

Debug  (type H for help)

(common-lisp::assert-error (eq c::env (c::lambda-environment #)) nil nil)
Source: 
; File: target:code/macros.lisp
(restart-case (error cond) (continue nil :report (lambda # #) nil))
0] 

CLISP and SBCL work fine. Can anyone confirm that the bug is still in a
more recent version of CMUCL? If it is I'll report it to the mailing list.

Regards,
Adam
From: Jochen Schmidt
Subject: Re: Why separate function namespaces?
Date: 
Message-ID: <b8q5mn$3ev$00$1@news.t-online.com>
Adam Warner wrote:

> Hi Kaz Kylheku,
> 
>>   http://www.c2.com/cgi/wiki?SingleNamespaceLisp
>>   http://www.dreamsongs.com/Separation.html
> 
> Wow, the wiki link is particularly impressive. I've never seen/grokked
> funcall used with quoted symbols to indirect upon a function.
> 
> I've just been playing with redefining local functions and I managed to
> come up with this code that seems legal (and could be wrapped in a macro
> to avoid use of funcall within the visible body of the code):
> 
> ;;can't redefine local functions?
> (labels ((foo (n) (* n n))
> (foo-foo (n) (* (foo n) (foo n))))
>   (print (foo 2))
>   (print (foo-foo 2)))
> 
> ;;try a different approach
> (let* ((foo (lambda (n) (* n n)))
>        (foo-foo (lambda (n) (* (funcall foo n) (funcall foo n)))))
>   (flet ((foo (n) (funcall foo n))
> (foo-foo (n) (funcall foo-foo n)))
>     (print (foo 2))
>     (print (foo-foo 2))
>     (setf foo (lambda (n) (* n n n)))
>     (print (foo 2)) ;;expecting 8
>     (print (foo-foo 2)))) ;;expecting 64

Try yet another approach:

(defmacro with-single-namespace ((&rest fns) &body forms)
  `(macrolet (,@(mapcar (lambda (fn) `(,fn (&rest args)
                                        (funcall ,',fn ,@,'args))) fns))
     ,@forms))

(with-single-namespace (foo foo-foo)
  (let* ((foo (lambda (n) (* n n)))
         (foo-foo (lambda (n) (* (foo n) (foo n)))))
    (print (foo 2))
    (print (foo-foo 2))
    (setf foo (lambda (n) (* n n n)))
    (print (foo 2))
    (print (foo-foo 2))))

ciao,
Jochen
From: Jeff Caldwell
Subject: Re: Why separate function namespaces?
Date: 
Message-ID: <Ne1sa.9827$Jf.4555989@news1.news.adelphia.net>
FWIW,

(defmacro with-single-namespace ((&rest fns) &body forms)
   `(macrolet (,@(mapcar (lambda (fn) `(,fn (&rest args)
                                        (funcall ,`,fn ,@,'args))) fns))
                  ,@forms))

Error: A comma appears outside the scope of a backquote (or there are 
too many commas).

(defmacro with-single-namespace ((&rest fns) &body forms)
   `(macrolet (,@(mapcar (lambda (fn) `(,fn (&rest args)
                                       `(funcall ,',fn ,@,'args))) fns))
                  ,@forms))
WITH-SINGLE-NAMESPACE
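
Roughly speaking, the corrected macro rewrites each listed name, when
used in operator position, into a FUNCALL through the like-named
variable. My reading of the expansion (a sketch, not mechanically
verified):

(with-single-namespace (foo)
  (print (foo 2)))

;; expands more or less into:

(macrolet ((foo (&rest args) `(funcall ,'foo ,@args)))  ; i.e. `(funcall foo ,@args)
  (print (foo 2)))

;; so (foo 2) in operator position becomes (funcall foo 2), a call
;; through the lexical *variable* FOO.
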
Jochen Schmidt wrote:
...

> Try yet another approach:
> 
> (defmacro with-single-namespace ((&rest fns) &body forms)
>   `(macrolet (,@(mapcar (lambda (fn) `(,fn (&rest args)
>                                         (funcall ,',fn ,@,'args))) fns))
>      ,@forms))
...
From: Jochen Schmidt
Subject: Re: Why separate function namespaces?
Date: 
Message-ID: <b8q7iq$2hk$02$2@news.t-online.com>
Jeff Caldwell wrote:

> FWIW,
> 
> (defmacro with-single-namespace ((&rest fns) &body forms)
>    `(macrolet (,@(mapcar (lambda (fn) `(,fn (&rest args)
>                                         (funcall ,`,fn ,@,'args))) fns))
>                   ,@forms))
> 
> Error: A comma appears outside the scope of a backquote (or there are
> too many commas).
> 
> (defmacro with-single-namespace ((&rest fns) &body forms)
>    `(macrolet (,@(mapcar (lambda (fn) `(,fn (&rest args)
>                                        `(funcall ,',fn ,@,'args))) fns))
>                   ,@forms))

Yes, I realized this after sending the message. I accidentally omitted the
backquote when I split the line to fit into 76 characters.

ciao,
Jochen
From: Jochen Schmidt
Subject: Re: Why separate function namespaces?
Date: 
Message-ID: <b8q79s$2hk$02$1@news.t-online.com>
Jochen Schmidt wrote:

 
> Try yet another approach:
> 
> (defmacro with-single-namespace ((&rest fns) &body forms)
>   `(macrolet (,@(mapcar (lambda (fn) `(,fn (&rest args)
>                                         `(funcall ,',fn ,@,'args))) fns))
>      ,@forms))

Sorry, my editing for the limited line length omitted the inner backquote
before the funcall...

> 
> (with-single-namespace (foo foo-foo)
>   (let* ((foo (lambda (n) (* n n)))
>          (foo-foo (lambda (n) (* (foo n) (foo n)))))
>     (print (foo 2))
>     (print (foo-foo 2))
>     (setf foo (lambda (n) (* n n n)))
>     (print (foo 2))
>     (print (foo-foo 2))))
> 
> ciao,
> Jochen
From: Adam Warner
Subject: Re: Why separate function namespaces?
Date: 
Message-ID: <pan.2003.05.01.04.37.52.869121@consulting.net.nz>
Hi Jochen Schmidt,

>> Try yet another approach:
>> 
>> (defmacro with-single-namespace ((&rest fns) &body forms)
>>   `(macrolet (,@(mapcar (lambda (fn) `(,fn (&rest args)
>>                                         `(funcall ,',fn ,@,'args))) fns))
>>      ,@forms))
> 
> Sorry my editing because of limited linelength omitted the inner backquote
> before the funcall...
> 
>> 
>> (with-single-namespace (foo foo-foo)
>>   (let* ((foo (lambda (n) (* n n)))
>>          (foo-foo (lambda (n) (* (foo n) (foo n)))))
>>     (print (foo 2))
>>     (print (foo-foo 2))
>>     (setf foo (lambda (n) (* n n n)))
>>     (print (foo 2))
>>     (print (foo-foo 2))))

That's very impressive. Thanks Jochen.

Regards,
Adam
From: Pascal Costanza
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <b8ocn1$rrg$1@f1node01.rhrz.uni-bonn.de>
Simon H. wrote:
> Of all the functional languages I've used (Scheme, OCaml, etc) I
> notice that Lisp is the only one with a seperate namespace for
> functions.  It just recently struck me that it seems to be a rather
> odd and counter-intuitive way of doing things, especially since I
> can't think of a single other language that does it, however I assume
> there's some reason for it...  Or at least an excuse.  ;-)

Here is a paper that explains the difference: 
http://www.nhplace.com/kent/Papers/Technical-Issues.html

There are several reasons why I like a separate function namespace more 
than a unified namespace. The most important ones are:

+ Functions and values are different, i.e. (eql 3 (lambda () 3)) returns 
nil for very good reasons. So in effect it's more intuitive to treat 
them differently IMHO. (but see below)

+ You can very easily implement macros that treat variables and 
functions differently. You don't have to submit to any kind of 
orthogonality aesthetics in case it turns out to be unnatural.

+ If you use a certain name for a global function, you cannot also use it 
as a local variable in a unified namespace without shadowing the function. 
For example, in Common Lisp it's perfectly ok to say something like 
(let ((list ...)) (dosomething ...)), although LIST is also a predefined 
function. This can be very handy (see the sketch below).
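
A throwaway illustration of that last point (my own example, nothing deep):

(defun pair-stats (list)              ; LIST is a fine parameter name...
  (list (first list) (length list)))  ; ...and LIST the function still works

(pair-stats '(a b c))  ; => (A 3)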

However, a unified namespace can be more intuitive when you are mainly 
programming in a functional/applicative style. Common Lisp doesn't have 
a preference for a particular programming style, so that's not as 
important as in, say, Scheme.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Dorai Sitaram
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <b8r82h$7au$1@news.gte.com>
In article <············@f1node01.rhrz.uni-bonn.de>,
Pascal Costanza  <········@web.de> wrote:
>
>There are several reasons why I like a separate function namespace more 
>than a unified namespace. The most important ones are:
>
>+ Functions and values are different, i.e. (eql 3 (lambda () 3)) returns 
>nil for very good reasons. So in effect it's more intuitive to treat 
>them differently IMHO. (but see below)

I saw below, but the example still eludes me.  The
non-equality of 3 and (lambda () 3) is assured whether
the namespace is split or not, so what exactly
are you highlighting with this example...?
From: Pascal Costanza
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <costanza-9EE5AC.20112701052003@news.netcologne.de>
In article <············@news.gte.com>,
 ····@goldshoe.gte.com (Dorai Sitaram) wrote:

> I saw below, but the example still eludes me.  The
> non-equality of 3 and (lambda () 3) is assured whether
> the namespace is split or not, so what exactly
> are you highlighting with this example...?

Both expressions always evaluate to 3, so there is no obvious reason why 
they shouldn't be regarded equal. However in general, you cannot 
determine such properties of functions - you cannot give a reasonable 
semantics for (function-eql (lambda (...) ...) (lambda (...) ...)). So 
there is a fundamental difference between values and functions.

You always have to keep this fundamental difference in mind, whether the 
language tries to hide it or not.


Pascal

-- 
"If I could explain it, I wouldn't be able to do it."
A.M.McKenzie
From: Erann Gat
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <gat-0105031258520001@k-137-79-50-101.jpl.nasa.gov>
In article <······························@news.netcologne.de>, Pascal
Costanza <········@web.de> wrote:

> In article <············@news.gte.com>,
>  ····@goldshoe.gte.com (Dorai Sitaram) wrote:
> 
> > I saw below, but the example still eludes me.  The
> > non-equality of 3 and (lambda () 3) is assured whether
> > the namespace is split or not, so what exactly
> > are you highlighting with this example...?
> 
> Both expressions always evaluate to 3, so there is no obvious reason why 
> they shouldn't be regarded equal.

Er, no.  (lambda () 3) does not evaluate to 3.  ((lambda () 3)) does, but
that's not the same thing.

> However in general, you cannot 
> determine such properties of functions - you cannot give a reasonable 
> semantics for (function-eql (lambda (...) ...) (lambda (...) ...)). So 
> there is a fundamental difference between values and functions.

It's not so clear that you can give a reasonable semantics of value-eql
for all values.  Consider:

#1=(1 2 #1# #1#)  and #1=(1 2 #2=(1 2 #2#) #2#)

Are these value-eql or not?

> You always have to keep this fundamental difference in mind, whether the 
> language tries to hide it or not.

The difference between functions and values is not fundamental.  It does,
however, seem to be very strongly ingrained in the human psyche.

E.
From: Pascal Costanza
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <costanza-7B51FE.02262002052003@news.netcologne.de>
In article <····················@k-137-79-50-101.jpl.nasa.gov>,
 ···@jpl.nasa.gov (Erann Gat) wrote:

> It's not so clear that you can give a reasonable semantics of value-eql
> for all values.  Consider:
> 
> #1=(1 2 #1# #1#)  and #1=(1 2 #2=(1 2 #2#) #2#)

You're right. I stand corrected.

Pascal

-- 
"If I could explain it, I wouldn't be able to do it."
A.M.McKenzie
From: Dorai Sitaram
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <b8rqb2$7h5$1@news.gte.com>
In article <······························@news.netcologne.de>,
Pascal Costanza  <········@web.de> wrote:
>In article <············@news.gte.com>,
> ····@goldshoe.gte.com (Dorai Sitaram) wrote:
>
>> I saw below, but the example still eludes me.  The
>> non-equality of 3 and (lambda () 3) is assured whether
>> the namespace is split or not, so what exactly
>> are you highlighting with this example...?
>
>Both expressions always evaluate to 3, so there is no obvious reason why 
>they shouldn't be regarded equal. 

Ah, you are making a little joke without the smiley
prop, yes?  The .de in your address kind of
blinded me to that possibility...
From: Pascal Costanza
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <costanza-C564A1.02273202052003@news.netcologne.de>
In article <············@news.gte.com>,
 ····@goldshoe.gte.com (Dorai Sitaram) wrote:

> >Both expressions always evaluate to 3, so there is no obvious reason why 
> >they shouldn't be regarded equal. 
> 
> Ah, you are making a little joke without the smiley
> prop, yes? 

No, I had the wrong mental model and made a mistake.


Pascal
From: Dorai Sitaram
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <b8tqkp$902$1@news.gte.com>
In article <······························@news.netcologne.de>,
Pascal Costanza  <········@web.de> wrote:
>In article <············@news.gte.com>,
> ····@goldshoe.gte.com (Dorai Sitaram) wrote:
>
>> >Both expressions always evaluate to 3, so there is no obvious reason why 
>> >they shouldn't be regarded equal. 
>> 
>> Ah, you are making a little joke without the smiley
>> prop, yes? 
>
>No, I had the wrong mental model and made a mistake.

Well, um, OK, I guess.  I do wish you'd take this
as a lesson, and curb your overeagerness in dissing
Lisp-1 and Scheme with bogus reasons.  It is a bit too
mealy-mouthed to say "I had the wrong mental model and
made a mistake" when your "reasons" are questioned.  :-)
From: Pascal Costanza
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <b8tsme$aoc$1@f1node01.rhrz.uni-bonn.de>
Dorai Sitaram wrote:
> In article <······························@news.netcologne.de>,
> Pascal Costanza  <········@web.de> wrote:

>>No, I had the wrong mental model and made a mistake.
> 
> Well, um, OK, I guess.  I do wish you'd take this
> as a lesson, and curb your overeagerness in dissing
> Lisp-1 and Scheme with bogus reasons.  It is a bit too
> mealy-mouthed to say "I had the wrong mental model and
> made a mistake" when your "reasons" are questioned.  :-)

Well, fortunately this wasn't the only reason I gave, and I don't 
think the others were equally bogus. ;-)

To be serious: Most of the time when people argue for a Lisp-2 they 
point to practical advantages, whereas the practical advantages of a 
Lisp-1 weren't clear to me before this thread - I haven't read any good 
expositions in this regard yet. In this thread, Tom Lord has given the 
first practical advantage of a Lisp-1 that I can relate to, namely that 
it is easier to refactor your code in a Lisp-1.

In Common Lisp, if you have code like this:

(defun f (x) (...))
(defun g (x) (... (f x) ...))

(g 5)

...and you want to switch from using a global function to a parameter 
you have to replace all the calls of the global functions as follows:

(defun g (x f) (... (funcall f x) ...))

(g 5 #'f)

Whereas in Scheme, when you start from:

(define (f x) (...))
(define (g x) (... (f x) ...))

(g 5)

You can switch to the parameterized version like this:

(define (g x f) (... (f x) ...))

(g 5 f)

The important point here is that the body of g doesn't change at all. 
That's a real advantage I didn't know about before.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Erann Gat
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <gat-0205030753280001@k-137-79-50-101.jpl.nasa.gov>
In article <············@f1node01.rhrz.uni-bonn.de>, Pascal Costanza
<········@web.de> wrote:

> Whereas in Scheme, when you start from:
> 
> (define (f x) (...))
> (define (g x) (... (f x) ...))
> 
> (g 5)
> 
> You can switch to the parameterized version like this:
> 
> (define (g x f) (... (f x) ...))
> 
> (g 5 f)
> 
> The important point here is that the body of g doesn't change at all. 
> That's a real advantage I didn't know about before.

If you think about it you will realize that this "advantage" is
*precisely* the same as the classic "disadvantage" of a Lisp-1.  Sometimes
you want a lexical named F to shadow the global function F, sometimes you
don't (the classic examples are when F is spelled "LIST" or "CAR").

The Right Answer IMO is to have first-class dynamic environments that have
separate function and value maps.  At the user's option the function and
value maps can be set to the same map object thus providing the semantics
of a Lisp-1, or to two separate objects, thus providing the semantics of a
Lisp-2.  That way you get the best of both worlds.

E.
From: Pascal Costanza
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <b8u1kk$kqe$1@f1node01.rhrz.uni-bonn.de>
Erann Gat wrote:

> The Right Answer IMO is to have first-class dynamic environments that have
> separate function and value maps.  At the user's option the function and
> value maps can be set to the same map object thus providing the semantics
> of a Lisp-1, or to two separate objects, thus providing the semantics of a
> Lisp-2.  That way you get the best of both worlds.

Do you know Lisps that offer such constructs?

Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Erann Gat
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <gat-0205031017010001@k-137-79-50-101.jpl.nasa.gov>
In article <············@f1node01.rhrz.uni-bonn.de>, Pascal Costanza
<········@web.de> wrote:

> Erann Gat wrote:
> 
> > The Right Answer IMO is to have first-class dynamic environments that have
> > separate function and value maps.  At the user's option the function and
> > value maps can be set to the same map object thus providing the semantics
> > of a Lisp-1, or to two separate objects, thus providing the semantics of a
> > Lisp-2.  That way you get the best of both worlds.
> 
> Do you know Lisps that offer such constructs?

Not yet :-)

T had first-class environments, but it was a Lisp-1.  But I think T's
design makes a good starting point.

E.
From: Kent M Pitman
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <sfw1xzhxrnf.fsf@shell01.TheWorld.com>
Pascal Costanza <········@web.de> writes:

> In Common Lisp, if you have code like this:
> 
> (defun f (x) (...))
> (defun g (x) (... (f x) ...))
> 
> (g 5)
> 
> ...and you want to switch from using a global function to a parameter
> you have to replace all the calls of the global functions as follows:
> 
> (defun g (x f) (... (funcall f x) ...))
> 
> (g 5 #'f)

In Common Lisp, if you have code like this:

(defvar *table* (make-hash-table))

(defun register (name element list) 
  (push (cons element list) (gethash name *table*)))

... and you want to use some new function in the body, you don't have to
go checking the bound variable list to see if it's a good name.  Also, when
changing the name of a lexical variable, you don't have to modify existing
references to functions that might be in use. e.g., deciding to use list
instead of cons in what's stored in the table above:

(defvar *table* (make-hash-table))

(defun register (name element list) 
  (push (list element list) (gethash name *table*)))

These examples always look stupid in the small but they are common enough
in the large.  Scheme programmers know these are an issue and that's why they
religiously misspell names of system functions when using them as arguments.
A Scheme programmer would say the above would never happen either because
he would write

 (define (register name element lst) ...)

to start with, so that LIST was not accidentally bound in the first place
or else the Scheme programmer would make the false claim that no one ever
wants to use the name of any system function as a variable.  

Common Lisp is full of functions that are utilities for manipulating data.
A great many of these have a constructor by the same name as the type.
Many of the manipulators of the type take just one instance of the type,
and using the type name as the argument name is natural.

Historical Aside: A lot of proto-Common Lisp programmers, at the birth
of CLOS, were dubious about how often they'd want multiple dispatch.
Many of those with much experience in what was then still called
object-oriented programming (Smalltalk, Flavors, etc.) kept insisting
that multiple dispatch just didn't come up in their code and that
probably CLOS didn't need it.  But the CLOS committee argued that it
didn't come up because people knew it was going to be a pain and
mentally steered things away from something they might prefer to use
if it were there. (I personally was a bit in the middle; I didn't
think the CLOS committee's prediction was likely to be true, but I
didn't see the harm in going their way anyway, just to see.)  My
personal subjective assessment is that the CLOS committee turned out
to be right.  The incidence rate of multiple dispatch is dramatically
higher than would have been predicted by people from those other
backgrounds where it wasn't available (including myself).

And I think Scheme people should not be so quick to say that no one wants
the separation between functions and variables.

Human languages overload the meanings of words according to type.
We've had this conversation before.  Do a google groups search for
"Como como como." (Spanish sentence "I eat how I eat" which
illustrates the differing uses of "como" depending on parts of the
sentence.)  The English sentence "Buffalo buffalo buffalo." likewise
has different meanings based on context.  I seem to recall that there
was an extended conversation of other languages in which this happens
and you can read about it in the archives.  But the point is that that
human wetware does not, left to its own devices, naturally avoid
varying the meaning of words based on syntactic placement in
sentences.  Lots of (perhaps almost all) natural languages seem to
gravitate toward this.  My informal assumption about why this is so is
"because they can" and "because it would be foolish to have a
capability and not use it", given how strong a need we have for
concise expression when getting complex thoughts out of our brains in
finite form.

Consequently, the spartan "I will deny myself a capability my brain is
obviously perfectly well capable of exploiting fully" attitude of the
Scheme community does not sit well with me.  Maybe indeed it makes the
language more complicated to implement; I just don't care.  I care
only that (a) it is _possible_ to implement the language and (b) the
language is easy to use.  In other words, ease of use and ease of
implementation are frequently at odds, and I prefer ease of use.  I want
short user programs, not short language definitions.  I think designers
should carefully check that it's feasible to implement what they design,
so that implementors are not held to impossible standards (pardon the
pun), but beyond that their allegiance should be to users.

Users who don't want to use same-named variables and functions are welcome
not to use all the variable names available to them.

As to the issue of funcall and function (#'), those were explicit design
decisions as well.  Think of funcall and function as (very approximately)
marshall/unmarshall operations for grabbing something from universe and
forcing it back.  CL was designed to _permit_ this kind of thing, but it
was an explicit decision of the design committee not to have it be happening
by accident.  The people in the room at the time very clearly said "we
don't want to be _that_ FP".  The use of #' makes it easy to find the places
where something stored in the function domain is creeping out of its normal
home and moving to someplace else.  The use of the FUNCALL makes it easy to
find where functional parameters are being called.  

> Whereas in Scheme, when you start from:
> 
> (define (f x) (...))
> (define (g x) (... (f x) ...))
> 
> (g 5)
> 
> You can switch to the parameterized version like this:
> 
> (define (g x f) (... (f x) ...))
> 
> (g 5 f)
> 
> The important point here is that the body of g doesn't change at
> all. That's a real advantage I didn't know about before.

And it's only half the cases that a real language designer would have to
consider in making a good choice.  That is, there are techniques for
exhaustively exploring a syntax space methodically so you don't get blindsided
by things you should have thought of.  And doing so yields the fact 
that the world is more complex than you cite.  Consider:

 * Renaming a bound variable might intentionally capture a contained name.
 * Renaming a bound variable might unintentionally capture a contained name.
 * Renaming a contained name might cause it to be intentionally captured.
 * Renaming a contained name might cause it to be unintentionally captured.

Your example looks at the case where you set up the circumstances for the
intentional case and you neglect where it happens by accident.  But in the
full world, the situation is balanced.  The issue is not that the things
programmers need when renaming are hard in one language and easy in another, 
but rather that each language makes some renamings easy and some hard.
The case you need to make is therefore not one of "I might want to do x"
but "doing x is more statistically predicted".

And the things that we in the CL community observed is that Scheme
people routinely get name collisions because they so densely pack
their single namespace, such that they are often voluntarily
misspelling their function names. We decided we don't like this.

My guess is that people reject Lisp2 more out of kneejerk hatred of anything
different than because they've given it a chance and found it to be
unworkable.  I also see bogus theories of aesthetic appeal that usually
begin with the implied assertion "The only possible theory of aesthetics
is the following" and then go on to show why we've violated that, without
ever asking "Might there be another possible theory of aesthetics?"  But
then, I think the same mindset that leads one to believe that there is only
one namespace needed is the same mindset that leads one to believe that
there is only one theory of aesthetics needed.  So there ya go.

All in all, I mostly think the communities are best off separated from
one another because of deep divisions in how people think which are revealed
by these superficial issues.  It's better if you really disagree with me
on this that you go find people you agree with, than that you try to
convince me otherwise.

 "It's easier to learn to like something 
   than to learn to not like something."
 -- unknown source

I think the underlying point of the saying, though, is that learning to like
something is about learning to see that it has value, and understanding that
all things in the world are about trade-offs between values and problems.
Understanding something, and even liking it, is not a way of putting aside
what you thought before--it's monotonic.  It's about understanding that there
are multiple ways to think about something.  Can you think in Lisp1? Certainly.
But can you think in Lisp2? Certainly, too.  At that point, it's a simple
choice.  Lisp2 people choose it not because they are crazy, but because they
choose it.  Lisp1 people seem to think Lisp2 is not a choice.  I don't care
if they ever choose it.  I do care that they come to understand that it is
a valid choice.

 "There are two kinds of people in the world.
  People who think there are two kinds of people,
  and people who don't."
 -- another unknown source
From: Pascal Costanza
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <b8u8bu$hvk$1@f1node01.rhrz.uni-bonn.de>
Kent,

Thanks a lot for your excellent posting. [1]

As an addendum to the current thread, someone else has sent me a 
solution for avoiding the introduction of funcall in the example I 
gave:

> Pascal Costanza <········@web.de> writes:
> 
>>In Common Lisp, if you have code like this:
>>
>>(defun f (x) (...))
>>(defun g (x) (... (f x) ...))
>>
>>(g 5)
>>
>>...and you want to switch from using a global function to a parameter
>>you have to replace all the calls of the global functions as follows:
>>
>>(defun g (x f) (... (funcall f x) ...))
>>
>>(g 5 #'f)

...what you can also do in this case is the following:

(defun g (x fn)
   (flet ((f (x) (funcall fn x)))
     (... (f x) ...)))

(g 5 #'f)

So even this one is not a real issue in a Lisp-2.

You could even write a macro for that (not tested!):

(defmacro with-function (f &body body)
   (with-gensyms (args)
     `(flet ((,f (&rest ,args) (apply ,f ,args)))
        ,@body)))

...and then...

(defun g (x f)
   (with-function f
     (... (f x) ...)))


Thank god I can stick to Common Lisp! ;) Or to put it differently, I 
still wonder what the merits of a Lisp-1 are...


Pascal


[1] I also especially like the historical notes you give every now and 
then. (Perhaps it'd be a good idea to collect them and make them 
available somewhere...)

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Erann Gat
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <gat-0205031032360001@k-137-79-50-101.jpl.nasa.gov>
In article <············@f1node01.rhrz.uni-bonn.de>, Pascal Costanza
<········@web.de> wrote:

> You could even write a macro for that (not tested!):
> 
> (defmacro with-function (f &body body)
>    (with-gensyms (args)
>      `(flet ((,f (&rest ,args) (apply ,f ,args)))
>         ,@body)))
> 
> ...and then...
> 
> (defun g (x f)
>    (with-function f
>      (... (f x) ...)))

Say, that's a cute trick.  We could even take it one step further:

(defmacro define ((fn &rest args) &body body)
  `(defun ,fn ,args (with-functions ,args ,@body)))
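
(WITH-FUNCTIONS isn't defined anywhere in this thread; presumably it just
wraps WITH-FUNCTION around each name. One possible reading, as untested
as the rest:)

(defmacro with-functions ((&rest fs) &body body)
  (if fs
      `(with-function ,(first fs)
         (with-functions ,(rest fs) ,@body))
      `(progn ,@body)))

;; Note that this wraps *every* argument, function-valued or not, which
;; is harmless as long as the non-function ones are never called.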

E.
From: Pascal Costanza
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <3EB2B797.7040105@web.de>
Erann Gat wrote:
> In article <············@f1node01.rhrz.uni-bonn.de>, Pascal Costanza
> <········@web.de> wrote:
> 
> 
>>You could even write a macro for that (not tested!):
>>
>>(defmacro with-function (f &body body)
>>   (with-gensyms (args)
>>     `(flet ((,f (&rest ,args) (apply ,f ,args)))
>>        ,@body)))
>>
>>...and then...
>>
>>(defun g (x f)
>>   (with-function f
>>     (... (f x) ...)))
> 
> 
> Say, that's a cute trick.  We could even take it one step further:
> 
> (defmacro define ((fn &rest args) &body body)
>   `(defun ,fn ,args (with-functions ,args ,@body)))

...and then:

(defmacro define (car &body body)
   (if (listp car)
     (destructuring-bind (fn &rest args) car
       `(defun ,fn ,args (with-functions ,args ,@body)))
     (with-gensyms (sym)
       `(progn (defvar ,sym ,@body)
               (define-symbol-macro ,car ,sym)))))

;-)

Would it make sense to analyze body in the second case in order to turn 
this into a function in the case of (define f (lambda (...) ...))?


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Thomas F. Burdick
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <xcvy91pdwl5.fsf@conquest.OCF.Berkeley.EDU>
Pascal Costanza <········@web.de> writes:

> Thank god I can stick to Common Lisp! ;) Or to put it differently, I
> still wonder what the merits of a Lisp-1 are...

Convenience.  Of course, "convenience" only makes sense when you
understand what domain you're talking about -- so, convenience in
functional programming.  This particular convenience feature has
*huge* drawbacks for other programming styles, so CL's Lisp-7-ness is
generally a feature.  But if you *really* *really* want to program in
an all-functional style, what you lose from a Lisp-1 is probably less
than what you gain.  Lisp being Lisp, you can make yourself a little
Lisp-1 world inside of a Lisp-2, and work inside of there, but in that
case you're really in your own world (I stand by my wording here :).

BTW, if you still don't get how a Lisp-1 is significantly convenient
to use, try learning some Scheme.  Just be sure to put your brain in
academic-exercise mode first.

-- 
           /|_     .-----------------------.                        
         ,'  .\  / | No to Imperialist war |                        
     ,--'    _,'   | Wage class war!       |                        
    /       /      `-----------------------'                        
   (   -.  |                               
   |     ) |                               
  (`-.  '--.)                              
   `. )----'                               
From: Kent M Pitman
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <sfw8ytpchr5.fsf@shell01.TheWorld.com>
Pascal Costanza <········@web.de> writes:

> (defmacro with-function (f &body body)
>    (with-gensyms (args)
>      `(flet ((,f (&rest ,args) (apply ,f ,args)))

          (declare (ignorable #',f))

>         ,@body)))

You want this so that others can insert this macro into situations where
in fact the f argument may or may not be used functionally.
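
Putting that together with the original (untested) macro, the combined
sketch would look something like this -- WITH-GENSYMS being the usual
utility macro, not part of standard CL:

(defmacro with-function (f &body body)
  (with-gensyms (args)
    `(flet ((,f (&rest ,args) (apply ,f ,args)))
       (declare (ignorable #',f))
       ,@body)))

With the IGNORABLE declaration in place, wrapping WITH-FUNCTION around a
body that never calls F functionally no longer provokes an
unused-local-function warning.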
From: Joe Marshall
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <vfwpcivr.fsf@ccs.neu.edu>
Pascal Costanza <········@web.de> writes:

> Thank god I can stick to Common Lisp! ;) Or to put it differently, I
> still wonder what the merits of a Lisp-1 are...

You might want to take a look at `Structure and Interpretation of
Classical Mechanics' by Sussman and Wisdom.  They use Scheme to
explain Lagrangian and Hamiltonian mechanics.  In many of the
examples, the objects being dealt with are higher order functions.
For instance, an object's position may be described as a function of
time to coordinates.  A coordinate transformation is a function from
one coordinate system to another.  Applying the coordinate transform
to the function allows you to view an object's position in a different
coordinate frame.  You end up with some very abstract functions and
some deep nesting.  This leads to an abundance of FUNCALLs in the
Lisp-2 version.

;; Scheme version
(define ((F->C F) local)
  (->local (time local)
           (F local)
           (+ (((partial 0) F) local)
              (* (((partial 1) F) local) 
                 (velocity local)))))

;; CL version
(defun F->C (F)
  #'(lambda (local)
      (->local (time local)
               (funcall F local)
               (+ (funcall (funcall (partial 0) F) local)
                  (* (funcall (funcall (partial 1) F) local)
                     (velocity local))))))
From: Pascal Costanza
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <costanza-E189AA.22384905052003@news.netcologne.de>
In article <············@ccs.neu.edu>, Joe Marshall <···@ccs.neu.edu> 
wrote:

> Pascal Costanza <········@web.de> writes:
> 
> > Thank god I can stick to Common Lisp! ;) Or to put it differently, I
> > still wonder what the merits of a Lisp-1 are...
> 

[...]
>This leads to an abundance of FUNCALLs in the
> Lisp-2 version.
> 
> ;; Scheme version
> (define ((F->C F) local)
>   (->local (time local)
>            (F local)
>            (+ (((partial 0) F) local)
>               (* (((partial 1) F) local) 
>                  (velocity local)))))
> 
> ;; CL version
> (defun F->C (F)
>   #'(lambda (local)
>       (->local (time local)
>                (funcall F local)
>                (+ (funcall (funcall (partial 0) F) local)
>                   (* (funcall (funcall (partial 1) F) local)
>                      (velocity local))))))

Thanks for this good example.

In the meantime I have thought about it and come to the following 
conclusion: The advantage of a Lisp-1 doesn't lie specifically in the 
unified namespace. Instead, if you want to do XFP ("extreme functional 
programming" ;) you regularly want to apply functions that are 
themselves results of higher-order functions. This means that you not 
only want the argument positions of an s-expression to be evaluated, 
but you also want the function position to be a "normal" arbitrary 
expression. (And the example you posted illustrates this nicely.)

So ((some-expression) ...) should evaluate (some-expression) in order to 
determine the actual function to be applied. Now, when you have (f ...), 
f should also be interpreted as a "normal" expression - and this 
consequently means that f must be a variable, and not something 
special.

So Schemers don't want Lisp-1 in the first place, but only as a 
consequence of another feature in their language.

I feel enlightened. ;) [1]


Pascal

[1] ...but this doesn't mean that I will switch to Scheme. I don't want 
to do XFP... ;)
From: Erann Gat
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <gat-0505031455440001@k-137-79-50-101.jpl.nasa.gov>
In article <······························@news.netcologne.de>, Pascal
Costanza <········@web.de> wrote:

> So ((some-expression) ...) should evaluate (some-expression) in order to 
> determine the actual function to be applied.

Note that this feature could be added as a backwards-compatible extension
to Common Lisp.

E.
From: Pascal Costanza
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <b97oa0$ru6$1@f1node01.rhrz.uni-bonn.de>
Erann Gat wrote:
> In article <······························@news.netcologne.de>, Pascal
> Costanza <········@web.de> wrote:
> 
>>So ((some-expression) ...) should evaluate (some-expression) in order to 
>>determine the actual function to be applied.
> 
> Note that this feature could be added as a backwards-compatible extension
> to Common Lisp.

You need a code walker in order to implement this, right? Or is there a 
simpler way?


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Jeff Caldwell
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <41isa.10246$Jf.4807181@news1.news.adelphia.net>
The expressions Dorai posted do not both evaluate to 3.

CL-USER 1 > 3
3

CL-USER 2 > (lambda () 3)
#'(LAMBDA NIL 3)

CL-USER 3 > ((lambda () 3))
3

Jeff


Pascal Costanza wrote:
>  ····@goldshoe.gte.com (Dorai Sitaram) wrote:
> 
>>I saw below, but the example still eludes me.  The
>>non-equality of 3 and (lambda () 3) is assured whether
>>the namespace is split or not, so what exactly
>>are you highlighting with this example...?
...
> Both expressions always evaluate to 3, so there is no obvious reason why 
> they shouldn't be regarded equal.
> 
> Pascal
> 
From: Tj
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <ccc7084.0304301258.51cd5297@posting.google.com>
······@mail.com (Simon H.) wrote in message news:<····························@posting.google.com>...
> It just recently struck me that it seems to be a rather
> odd and counter-intuitive way of doing things, especially since I
> can't think of a single other language that does it, however I assume
> there's some reason for it...  Or at least an excuse.  ;-)

Actually, most languages that are like C do this.  Go ahead, define
blah and blah().  So at least the language is as good as C. ;P

I recall a post by a Symbolics guy on the LL1 mailing list mentioning that
being a lisp-1 would have been likely had there not been such strong
customer pressure; a tool was written to assist porting, but that
apparently wasn't good enough.

Though the extent of my CL experience is exploring Allegro and Corman,
and reading things like PAIP, it's obvious CL is more powerful than any
language I use.  And yet the separate namespace feels like a big zit on
the language.  Though I think it's immature of me to find asymmetry ugly,
since oftentimes chaos gives resources useful for elegant solutions.

Tj
From: Gareth McCaughan
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <slrnbb0g8f.on.Gareth.McCaughan@g.local>
"Tj" wrote:

>  ······@mail.com (Simon H.) wrote in message news:<····························@posting.google.com>...
> > It just recently struck me that it seems to be a rather
> > odd and counter-intuitive way of doing things, especially since I
> > can't think of a single other language that does it, however I assume
> > there's some reason for it...  Or at least an excuse.  ;-)
> 
> Actually, most languages that are like C do this.  Go ahead, define
> blah and blah().  So at least the language is as good as C. ;P

Um, no.

    -------- bad.c begins ---------
    int x;
    int x(double y) { return 1; }
    -------- bad.c ends -----------

    $ gcc -c bad.c
    bad.c:2: `x' redeclared as different kind of symbol
    bad.c:1: previous declaration of `x'

-- 
Gareth McCaughan  ················@pobox.com
.sig under construc
From: Tj
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <ccc7084.0304302026.43a3aba1@posting.google.com>
Gareth McCaughan <················@pobox.com> wrote in message news:<······························@g.local>...
> "Tj" wrote:
> > Actually, most languages that are like C do this.  Go ahead, define
> > blah and blah().  So at least the language is as good as C. ;P
> 
> Um, no.

Thank you for correcting an incredibly embarrassing error.  I've been
programming in Java too much and should be perfectly suited for
management.  I'll refrain from posting on programming for a month.

Tj
From: Pascal Costanza
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <costanza-DEF2B8.23150030042003@news.netcologne.de>
In article <···························@posting.google.com>,
 ··········@yahoo.com (Tj) wrote:

> Though the extent of my CL is exploring Allegro and Corman, and
> reading things like PAIP, it's obvious CL is more powerful than any
> language I use.  And yet, it's like a big zit on the language.  Though
> I think it's immature of me to find asymmetry ugly, since oftentimes
> chaos gives resources useful for elegant solutions.

I have the following definitions in my startup files:

(defun open-curl-macro-char (stream char)
  (declare (ignore char))
  (let ((forms (read-delimited-list #\} stream t)))
    `(funcall ,@forms)))

(set-macro-character #\{ #'open-curl-macro-char)
(set-macro-character #\} (get-macro-character #\)))


Now I can write {f args} instead of (funcall f args). Of course, you 
still have to know that you want to access the value, and not the 
function, but is this really such a big deal?
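
For example (just an illustration; SQUARE is a made-up name):

(let ((square (lambda (n) (* n n))))
  {square 4})   ; reads as (funcall square 4) => 16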

As far as I can see it, there are some pragmatic advantages of having a 
Lisp-2, whereas the reasons brought up for Lisp-1 are purely of an 
aesthetical nature. Or am I missing something?

Pascal

-- 
"If I could explain it, I wouldn't be able to do it."
A.M.McKenzie
From: Erann Gat
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <gat-3004031457240001@k-137-79-50-101.jpl.nasa.gov>
In article <······························@news.netcologne.de>, Pascal
Costanza <········@web.de> wrote:

> In article <···························@posting.google.com>,
>  ··········@yahoo.com (Tj) wrote:
> 
> > Though the extent of my CL is exploring Allegro and Corman, and
> > reading things like PAIP, it's obvious CL is more powerful than any
> > language I use.  And yet, it's like a big zit on the language.  Though
> > I think it's immature of me to find asymmetry ugly, since oftentimes
> > chaos gives resources useful for elegant solutions.
> 
> I have the following definitions in my startup files:
> 
> (defun open-curl-macro-char (stream char)
>   (declare (ignore char))
>   (let ((forms (read-delimited-list #\} stream t)))
>     `(funcall ,@forms)))
> 
> (set-macro-character #\{ #'open-curl-macro-char)
> (set-macro-character #\} (get-macro-character #\)))
> 
> 
> Now I can write {f args} instead of (funcall f args). Of course, you 
> still have to know that you want to access the value, and not the 
> function, but is this really such a big deal?
> 
> As far as I can see it, there are some pragmatic advantages of having a 
> Lisp-2, whereas the reasons brought up for Lisp-1 are purely of an 
> aesthetical nature. Or am I missing something?

I disagree with the implication that aesthetic and pragmatic issues are
disjoint.  Programming languages are user interfaces, so there is
considerable overlap between the aesthetic and the pragmatic.

That said, the case for the aesthetics of Lisp-1 is not at all a
slam-dunk.  Simpler and more aesthetic are not equivalent.  (Bauhaus
architecture is simpler than Frank Gehry's style.  Does that necessarily
make it more aesthetic?)  Since there is already an asymmetry between the
CAR and the CDR in Lisp, it's not at all clear that having the same
evaluation semantics for both is necessarily the Right Thing even from a
purely aesthetic point of view.

E.
From: Pascal Costanza
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <costanza-30FB90.01353901052003@news.netcologne.de>
In article <····················@k-137-79-50-101.jpl.nasa.gov>,
 ···@jpl.nasa.gov (Erann Gat) wrote:

> > As far as I can see it, there are some pragmatic advantages of having a 
> > Lisp-2, whereas the reasons brought up for Lisp-1 are purely of an 
> > aesthetical nature. Or am I missing something?
> 
> I disagree with the implication that aesthetic and pragmatic issues are
> disjoint.  Programming languages are user interfaces, so there is
> considerable overlap between the aesthetic and the pragmatic.

In general, you are right. But I don't see the practical advantages of a 
Lisp-1, and the arguments brought up for a Lisp-1 always sound like 
purely aesthetical considerations to me. So my impression is that in 
this concrete case the two concerns seem to be separated.

> That said, the case for the aesthetics of Lisp-1 is not at all a
> slam-dunk.  Simpler and more aesthetic are not equivalent.  (Bauhaus
> architecture is simpler than Frank Gehry's style.  Does that necessarily
> make it more aesthetic?)  Since there is already an asymmetry between the
> CAR and the CDR in Lisp, it's not at all clear that having the same
> evaluation semantics for both is necessarily the Right Thing even from a
> purely aesthetic point of view.

Again, I agree. I was only talking about the arguments put forward by 
Lisp-1 advocates. I haven't heard yet about actual pragmatic advantages 
of Lisp-1. Are there any?

I can't imagine that there are cases in which you can actually forget 
about the difference between values and functions. In a Lisp-1, it only 
looks as if there is no difference. So what's the point?


Pascal

-- 
"If I could explain it, I wouldn't be able to do it."
A.M.McKenzie
From: Tom Lord
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <vb0qnbadvign54@corp.supernews.com>
	Pascal:

        > But I don't see the practical advantages of a 
	> Lisp-1,


It's hard to point to _absolute_ advantages for either lisp-1 or
lisp-2 because they aren't very different -- just different in
emphasis and syntax optimizations.

In lisp-2, I could program in a style where I always use funcall
and then I'm programming in lisp-1.

In lisp-1, I could program in a style where I always use something
like (get 'symbol 'function) in the CAR of expressions, and then 
I'm programming in lisp-2.

It's a case of "Anything you can do, I can do .... well, pretty much
the same way."
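
For instance, a bit of CL written in a deliberately lisp-1-ish style,
funcalling everything (a throwaway sketch of my own):

(let ((add   (lambda (a b) (+ a b)))
      (twice (lambda (f x) (funcall f (funcall f x)))))
  (funcall twice (lambda (n) (funcall add n n)) 3))   ; => 12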


	> I haven't heard yet about actual pragmatic advantages of
	> Lisp-1. Are there any?

	> I can't imagine that there are cases in which you can
	> actually forget about the difference between values and
	> functions. In a Lisp-1, it only looks as if there is no
	> difference. So what's the point?

A few non-absolute pragmatic advantages of lisp-1:

* Um... lexical scoping?

  My CL (as compared to my Scheme) is certainly rusty.  Perhaps
  I am full of s here and I'll humbly accept the fish-slap if so:

  Aren't function binding slots per-symbol, and thus not lexically
  scoped?  In lisp-1, for example, I can take a function body, and
  wrap it in a `let' that shadows some globally defined function, and
  it all just works.  I don't need to go through the body adding
  FUNCALL or taking FUNCALL out should I happen to remove the shadow
  binding.


* macros

  Especially macros that implement binding constructs or have
  side-effects on bindings, and macros that have free variables.  In
  lisp-1, to choose a trivial example, I can have a single macro for
  `swap-values' where in lisp-2 I'd need `swap-values' and
  `swap-function-bindings' or worse (see "exploratory programming",
  below).


* pedagogy

  I'm not a college professor, but I'd bet a quarter that lisp-1 is
  easier to teach, simply because there's less to learn.  Sure,
  everybody messes up with `(let ((list ....)) ...)' -- once.  (After
  which their intuitive understanding of both the evaluation rule and
  lexical scoping is improved.)


* automatic code transforms

  A generalization of macros.   The fewer primitive constructs
  in your language, the easier it is to write high-level transforms.
  No need for `(cond ((eq (car foo) 'funcall) ...) ...)'.


* simpler implementation

  Consider a simple meta-circular interpreter.  The lisp-1 version is
  smaller and simpler.  I gather people don't _really_ transform
  everything-binding-related to lambda in the radical manner of
  RABBIT.SCM in production compilers, but I'm not so sure it's really
  a dead technique.


And the big one, I think, though the most abstract:

* exploratory programming

  In lisp-2, I have to decide whether a given function should be
  treated as the value of a variable or the function slot binding
  of a symbol.    My decision then becomes spread throughout 
  the code in the form of the presence or absense of FUNCALL.
  Then I change my mind about that decision.

  Worse: I have a package that assumes that decision goes one way, 
  and applications that use that package that way.   Then I want to 
  use the same package in an application that makes the decision the
  other way.

And the "general principles" one, though this is less directly a
pragmatic issue:

* why stop at 2?

  Maybe, like C, I want a third binding for, say, structure types.
  The step from 1 to 2 is almost always wrong.   Either stop at 1,
  or make it N.   But that's just a rule of thumb.

  Binding is binding is binding.   Why do we need 2?



So, give me lisp-1 any day --- and, though I'm seemingly the only
person in the world who wants it this way, make it a lisp-1 where
"nil" and "false" (however you spell them) are the same value.

-t
From: Erann Gat
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <gat-3004032218160001@192.168.1.51>
In article <··············@corp.supernews.com>, ····@emf.emf.net (Tom
Lord) wrote:

> * why stop at 2?

As others have already pointed out, CL doesn't actually stop at 2. 
However, I think there is an argument to be made for stopping at 2: there is
an asymmetry between handling the CAR and the CDR during program
evaluation.  It is arguable that this asymmetry should extend to bindings.

E.
From: Tim Bradshaw
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <ey3k7db1nru.fsf@cley.com>
* Tom Lord wrote:
> * Um... lexical scoping?

Um.  FLET, LABELS?
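
For instance (a throwaway example, not Tom's code) -- FLET shadows a
globally defined function lexically, just as LET shadows a variable:

(defun greet () 'global)

(defun try-it ()
  (flet ((greet () 'local))
    (greet)))

(try-it)   ; => LOCAL  -- the FLET binding shadows the global lexically
(greet)    ; => GLOBAL -- the global definition is untouched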

>   My CL (as compared to my Scheme) is certainly rusty. 
Yes, it is.


> * why stop at 2?

>   Maybe, like C, I want a third binding for, say, structure types.
>   The step from 1 to 2 is almost always wrong.   Either stop at 1,
>   or make it N.   But that's just a rule of thumb.

Maybe, like C, CL has done that.  The normal definition, I think,
makes CL a Lisp7:

- Functions & macros
- lexical variables
- special variables
- types and classes
- labels (for GO)
- block names
- symbols in quoted expressions (such as tags for THROW).

(defvar name nil)                       ;1

(defun name (name)                      ;2, 3
  (catch 'name                          ;4
    (block name                         ;5
      (tagbody
       (when (numberp (locally (declare (special name)) name))
         (go name))                     ;6
       (throw 'name name)
       name                             ;7
       (return-from name (+ name (locally (declare (special name)) 
                                   name)))))))

Typical programs probably add from several to many more namespaces.
Of course, typical Scheme programs add lots of namespaces too, but
that doesn't count, because, erm... well because you don't have to
mention this inconvenient fact when teaching the language, I guess.

--tim
From: Tom Lord
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <vb0vp0dts7lle6@corp.supernews.com>
        >> * Um... lexical scoping?

        > Um.  FLET, LABELS?


Ok, there's the fish-slap.  Blush blush.  Thanks.


	>> My CL (as compared to my Scheme) is certainly rusty. 
	> Yes, it is.

And, yup.  Sure.   

But wait a minute: Ick!

So now, if I want to make a macro that defines a new binding
construct:  what I need different versions of the macro depending on
where I want function bindings and where variable?

I'm looking at something like my 4/5-line (lisp-1) `let-values' or
`let*-values'.  Now I gather that the CL reply here is that, where I'd
want to use such a thing, I know that I'm looking at variable bindings
not function bindings and callers will use FUNCALL.  But that's not
really true:

For the namespace convenience of function bindings, I pay the cost of
having to spread throughout my code decisions about whether or not 
I'm referring to function bindings or variable bindings.   And if I
change my mind about that, or have two uses for my code that would
make that decision differently, I'm screwed.
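
For concreteness, here's a rough, untested sketch of what a CL analogue
of my `let-values' might look like (the names and the example are made
up), built on MULTIPLE-VALUE-BIND.  Note that anything bound this way
has to be called through FUNCALL at every use site:

;; Behaves like LET*-VALUES: each ((var ...) form) binds the values
;; of FORM for the rest of the body.
(defmacro let-values (bindings &body body)
  (if (null bindings)
      `(progn ,@body)
      (destructuring-bind ((vars form) . rest) bindings
        `(multiple-value-bind ,vars ,form
           (let-values ,rest ,@body)))))

(let-values (((quot rem) (floor 17 5))
             ((div) (values #'/)))
  (funcall div quot rem))       ; must say FUNCALL, not (div quot rem)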

Function bindings and variable bindings are apparently pretty
isomorphic -- so why have both?

The only good arguments I've seen for lisp-2 are about namespace
convenience such as regarding the function called `list' -- and that's
as much about a poorly chosen function name as anything else; and it's
really not all that big a deal to learn to work around regardless of
the function name.

I've seen lots of comments along the lines of "it usually doesn't
matter, you know when you should use funcall."  That's not really a
"pro" argument -- that's an "it's not too hard to live with" argument.


     > Maybe, like C, CL has done that.  The normal definition, I think,
     > makes CL a Lisp7:

     > - Functions & macros
     > - lexical variables
     > - special variables
     > - types and classes
     > - labels (for GO)
     > - block names
     > - symbols in quoted expressions (such as tags for THROW).


Ick again.


	> Typical programs probably add from several to many more
	> namespaces.  Of course, typical Scheme programs add lots of
	> namespaces too, but that doesn't count, because, erm... well
	> because you don't have to mention this inconvenient fact
	> when teaching the language, I guess.

Hmm.   

First, I'm not exactly a Scheme (in the sense of R^nRS) fan.  I am a
lisp-1 fan and I think there are some good ideas in Scheme.

In my not-quite-standard-Scheme lisp-1 ("systas scheme"), functions,
macros, lexical variables, special variables, types and classes, and
block names all share a namespace.   I don't have labels, per se.
Exception names do, in fact, have their own little namespace and I
regard that as a flaw in the design (at least I'm consistent).

Yes, lots of programs introduce "namespaces" of a sort -- heck, many
hash-tables effectively count as one.  Generally these don't change
the evaluation rules of the language, though.

-t
From: Tim Bradshaw
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <ey3y91r6pyv.fsf@cley.com>
* Tom Lord wrote:

> For the namespace convenience of function bindings, I pay the cost of
> having to spread throughout my code decisions about whether or not 
> I'm referring to function bindings or variable bindings.   And if I
> change my mind about that, or have two uses for my code that would
> make that decision differently, I'm screwed.

I really can't see a case where this matters.  If you decide to
change your code so something wants to be an operator binding (I made
this term up: I mean `function or macro binding') instead of simply a
normal binding then you already have to change really a whole lot of
stuff: everything that said (funcall op x ...) now needs to say (op x ...).
About the least of your problems is going to be that the binding
construct is different.  Of course, maybe this change is all in
machine generated code, but then who cares anyway.

And note, even when you have a single namespace you *still* need two
binding constructs, because you need to know whether the scope of the
name being bound includes the binding itself (let / letrec).
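
In CL terms that's the FLET / LABELS distinction.  A trivial
illustration (assuming no surrounding definition of FACT):

(labels ((fact (n) (if (zerop n) 1 (* n (fact (1- n))))))
  (fact 5))   ; => 120: the binding of FACT is visible inside itself

(flet ((fact (n) (if (zerop n) 1 (* n (fact (1- n))))))
  (fact 5))   ; error: the inner call to FACT refers to whatever
              ; FACT means *outside* the FLET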

> Yes, lots of programs introduce "namespaces" of a sort -- heck, many
> hash-tables effectively count as one.  Generally these don't change
> the evaluation rules of the language, though.

Now this is just stupid.  Of course they do, unless you never write
nontrivial macros and so live entirely within the semantics of the
base language.  Common Lisp people don't do that - they define
languages on top of CL with their own evaluation rules all the time.

I think I'm probably wasting my time here though, so I'll stop.

--tim
From: Thomas A. Russ
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <ymiissuocql.fsf@sevak.isi.edu>
Well, there are a number of people (including me) who think that
multiple namespaces are a real boon.  This is especially the case with
large systems.

That is why Common Lisp goes to all that trouble to define packages:  to
increase the number of namespaces so as to make name collisions less
likely to occur.  It is the same with the other namespaces.  I think
this is one of the key design issues on which we disagree, since uniform
namespaces make it more difficult to manage large systems.

Now, you may, of course, think differently about some of the
philosophical issues, but I tend to think of functions and values as
being separate things.  In other words, when I am thinking about what
functions are available, I don't automatically also think of what values
are available.  I tend to concentrate on one or the other.  This may, of
course, just be training from working a long time in a Lisp-2, but
(ignoring the function-as-data paradigm) there is a fundamental
difference between a name that denotes a functional value and one that
denotes a non-functional value.

That difference is that if you try to apply a non-functional value to
some arguments you will get an error.  Even in Scheme, doing something
like

   (let ((x 3))  (x 10))     ==>  An error.

In order to use a functional value as a function, you really do need to
know that it is a function.  It is syntactically distinguished by both
lisp-1 and lisp-2 languages since functions are used differently than
values.

Now even in a lisp-2 language you can bind function values to lexical
variables using LET.  You can then treat it just the same as
any other value.  For example the following is legal:

  (let ((f #'(lambda (a b) (* a b))))
     (print f)
     (cons 'b f))
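
And to actually invoke it as a function you go back through FUNCALL,
of course; for instance:

  (let ((f #'(lambda (a b) (* a b))))
     (funcall f 6 7))    ; => 42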

It also occurs to me that having a lisp-1 makes writing macros a lot
trickier.  The problem is that if you have a macro that expands into
some code, you have to worry not only about variable capture, but you
also have to worry about a redefinition or shadowing of the function
names as well.  I'm not familiar enough with Scheme's hygienic macros to
know if this is really a problem or not.

What if one wrote a macro FIRST that took the CAR of a list?  In CL this
might look like

    (defmacro first (x) `(car ,x))

Well, then what if one were to write some snippet of code like the
following:

     (let ((car 'ford)
           (desired-vehicles '(bmw mercedes)))
        (eq? car (first desired-vehicles)))

Does the lexical binding of CAR to 'FORD shadow the expanded code from
the macro FIRST and cause it to fail?  If so, I claim that this is a bad
property since one doesn't want to have to know what functions a macro
uses in its expansion in order to be sure of avoiding problems.
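
In a lisp-2 like CL, for what it's worth, the answer is no.  A quick,
untested sketch (renaming the macro, since redefining CL:FIRST itself
is not conforming):

    (defmacro my-first (x) `(car ,x))

    (let ((car 'ford)
          (desired-vehicles '(bmw mercedes)))
      (eq car (my-first desired-vehicles)))  ; => NIL, and no error:
                                             ; the expansion's CAR is an
                                             ; operator reference; the LET
                                             ; binds CAR only as a variable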

Now you can run into the same situation in CL with respect to capture if
you use FLET for functions.

One answer is, of course, not to use built-in function names as variable
names, ever.  Of course, this doesn't necessarily help you if the macro
expands into a user-defined function that you didn't know about.
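
A contrived sketch of that last case, with made-up names:

    (defun frob (x) (* x 10))
    (defmacro times-ten (x) `(frob ,x))   ; expands into the helper FROB

    (flet ((frob (x) (+ x 1)))            ; innocently reuses the name
      (times-ten 4))                      ; => 5, not 40: the expansion
                                          ; now sees the local FROB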

In fact, I would claim that in a Lisp-1 system, in order to safely use
it, you really need to know about the entire namespace of defined
functions, so that you can choose your lexical variable names to avoid
conflicts.  Or else you give up on macros.

Separate namespaces really do provide some real advantages, so I think
that having "namespace convenience" is a real bonus.  If I didn't like
all of the conveniences of Common Lisp, I would be programming in C.



-- 
Thomas A. Russ,  USC/Information Sciences Institute          ···@isi.edu    
From: Adrian Kubala
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <Pine.LNX.4.44.0305021319410.25452-100000@gwen.sixfingeredman.net>
On 1 May 2003, Thomas A. Russ wrote:
> there is a fundamental difference between a name that denotes a
> functional value and one that denotes a non-functional value.

You could say that about any type. To someone used to a functional style,
having to specify that you want the /functional/ binding makes about as
much sense as having to do:

	(let ((x 3))
	   (print (int-value x)))
From: Erann Gat
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <gat-0205031224450001@k-137-79-50-101.jpl.nasa.gov>
In article
<········································@gwen.sixfingeredman.net>, Adrian
Kubala <······@sixfingeredman.net> wrote:

> On 1 May 2003, Thomas A. Russ wrote:
> > there is a fundamental difference between a name that denotes a
> > functional value and one that denotes a non-functional value.
> 
> You could say that about any type. To someone used to a functional style,
> having to specify that you want the /functional/ binding makes about as
> much sense as having to do:
> 
>         (let ((x 3))
>            (print (int-value x)))

That's not quite as outrageous as it might seem.  For a long time BASIC
had separate namespaces for integers, floats, and strings (denoted X%, X,
and X$).  And Hungarian notation, which I am given to understand is still
very much in fashion in certain circles, is basically a user-level hack to
make a separate name space for every type.

E.
From: Tim Bradshaw
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <ey37k9958vd.fsf@cley.com>
* Erann Gat wrote:

> That's not quite as outrageous as it might seem.  For a long time
> BASIC had separate namespaces for integers, floats, and strings
> (denoted X%, X, and X$).  And Hungarian notation, which I am given
> to understand is still very much in fashion in certain circles, is
> basically a user-level hack to make a separate name space for every
> type.

Perl.
From: Erann Gat
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <gat-0205032348540001@192.168.1.51>
In article <···············@cley.com>, Tim Bradshaw <···@cley.com> wrote:

> * Erann Gat wrote:
> 
> > That's not quite as outrageous as it might seem.  For a long time
> > BASIC had separate namespaces for integers, floats, and strings
> > (denoted X%, X, and X$).  And Hungarian notation, which I am given
> > to understand is still very much in fashion in certain circles, is
> > basically a user-level hack to make a separate name space for every
> > type.
> 
> Perl.

Oh yeah.  That too.

:-)

E.
From: Kent M Pitman
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <sfwznm47uvg.fsf@shell01.TheWorld.com>
Adrian Kubala <······@sixfingeredman.net> writes:

> On 1 May 2003, Thomas A. Russ wrote:
> > there is a fundamental difference between a name that denotes a
> > functional value and one that denotes a non-functional value.
> 
> You could say that about any type. To someone used to a functional style,
> having to specify that you want the /functional/ binding makes about as
> much sense as having to do:
> 
> 	(let ((x 3))
> 	   (print (int-value x)))

Of course, you can say it feels like anything you like. But it's not really
the same.  Because there is no reason that PRINT couldn't print the
integer x directly.  There IS a reason that 

 (f x)

and 

 (funcall f x)

are different.  That's exactly to allow programs in which the user's set
of required operations (e.g., CAR, CDR, CONS) does not inhibit his ability
to use various variables (e.g., if he's dealing with automobiles, to use
the variable CAR).  Consider that if DOLIST were a macro that expands into
references to the CAR function,  the following program would need to work:

 (defun wash (car-list)
   (dolist (car car-list)
     (wash1 car)))

This works in Scheme because they added infinite hair in their hygienic
macro system in order to remember whose car FIRST expands into; but it works
in CL because our rules and customs for macro expansion and namespace 
separation do not lead to any name conflict using a _far simpler_ macro
system.  Not just simpler to implement (which I don't care about at all),
but simpler to _use_, which I care about a lot.
From: Christopher C. Stacy
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <u1xzf1mdl.fsf@dtpq.com>
>>>>> On 02 May 2003 20:10:27 -0400, Kent M Pitman ("Kent") writes:

 Kent> Adrian Kubala <······@sixfingeredman.net> writes:
 >> On 1 May 2003, Thomas A. Russ wrote:
 >> > there is a fundamental difference between a name that denotes a
 >> > functional value and one that denotes a non-functional value.
 >> 
 >> You could say that about any type. To someone used to a functional style,
 >> having to specify that you want the /functional/ binding makes about as
 >> much sense as having to do:
 >> 
 >> (let ((x 3))
 >> (print (int-value x)))

 Kent> Of course, you can say it feels like anything you like. But it's not really
 Kent> the same.  Because there is no reason that PRINT couldn't print the
 Kent> integer x directly.  There IS a reason that 

 Kent>  (f x)

 Kent> and 

 Kent>  (funcall f x)

 Kent> are different.  That's exactly to allow programs in which the user's set
 Kent> of operations required (e.g., CAR, CDR, CONS) do not inhibit his ability
 Kent> to use various variables (e.g., if he's dealing with automobiles, to use
 Kent> the variable CAR).  Consider that if DOLIST were a macro that expands into

And for telephony applications I've developed, Call Detail Records.
From: Adrian Kubala
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <Pine.LNX.4.44.0305052357290.3309-100000@gwen.sixfingeredman.net>
On 2 May 2003, Kent M Pitman wrote:
> [...]
> This works in Scheme because they added infinite hair in their hygeinic
> macro system in order to remember whose car FIRST expands into; but it works
> in CL because our rules and customs for macro expansion and namespace
> separation do not lead to any name conflict using a _far simpler_ macro
> system.  Not just simpler to implement (which I don't care about at all),
> but simpler to _use_, which I care about a lot.

In other words, CL exports the complexity involved in making macros
hygienic into the whole rest of the language, and it /still/ doesn't
completely solve the problem of unintentionally-shadowed bindings.
From: Kent M Pitman
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <sfwr87ck43i.fsf@shell01.TheWorld.com>
Adrian Kubala <······@sixfingeredman.net> writes:

> On 2 May 2003, Kent M Pitman wrote:
> > [...]
> > This works in Scheme because they added infinite hair in their hygeinic
> > macro system in order to remember whose car FIRST expands into; but it works
> > in CL because our rules and customs for macro expansion and namespace
> > separation do not lead to any name conflict using a _far simpler_ macro
> > system.  Not just simpler to implement (which I don't care about at all),
> > but simpler to _use_, which I care about a lot.
> 
> In other words, CL exports the complexity involved in making macros
> hygenic into the whole rest of the language,

And the power.  That is, the so-called complexity is useful for other
kinds of namespacing that Scheme does not aspire to do.  That ought
not be taken as a value judgment on Scheme, but neither should you
take it as a slight on CL.  CL uses a package system that partitions
not just programs but data; Scheme's lexical contours can only
partition programs.  Each has their use.  All in all, though, it's easier
to implement Scheme using CL data structures than CL using Scheme data 
structures.  In CL, you can just make a SCHEME package and put Scheme 
there; in Scheme, you can't use Scheme symbols at all straightforwardly
as CL symbols.

> and it /still/ doesn't
> completely solve the problem of unintentionally-shadowed bindings.

I don't believe that's so.  I believe it solves the problem to the
exact same extent that Scheme does.

This sounds like flamebait, since you did not cite a problem, merely made
a broad, sweeping claim with no foundation.

If you are having trouble figuring out how to write something material
in CL, perhaps you could offer an example of a real programming problem
that CL inhibits you from solving.  But I doubt that is so.

Whatever theoretical problems you're having are probably due to fighting
the paradigm rather than trying to learn it.

If CL isn't your cup of tea, you're welcome not to use it.  The rest
of us are using it just fine, and have been for several decades now.
I've been employed at more than one vendor and can say with some
reliability that, to round numbers, no complaints are ever received
about the package system failing to allow them to write macros that
stand up under heavy stress.
From: Erann Gat
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <gat-0605030914380001@192.168.1.51>
In article <···············@shell01.TheWorld.com>, Kent M Pitman
<······@world.std.com> wrote:

>  no complaints are ever received
> about the package system failing to allow them to write macros that
> stand up under heavy stress.

This is true, but one does regularly hear other complaints about the
package system, the most common being:

(foo)

Error: undefined function FOO  ; Doh!  Forgot to load a module

(require 'foo)

Loading FOO

(use-package 'foo)

Error: importing symbol FOO:FOO would result in a name conflict ; AAARRGGHH!



This situation gets particularly annoying when there are hundreds of
symbols involved.
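
The usual ways out, for what they're worth (assuming the conflict really
is just with the FOO that got interned when I typed (foo)):

(unintern 'foo)              ; forget the accidentally-interned symbol
(use-package 'foo)           ; now succeeds

;; or, to resolve the conflict in favor of the package's symbol:
(shadowing-import 'foo:foo)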

E.
From: Adrian Kubala
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <Pine.LNX.4.44.0305061754470.8930-100000@gwen.sixfingeredman.net>
On 6 May 2003, Kent M Pitman wrote:
> Adrian Kubala <······@sixfingeredman.net> writes:
> > In other words, CL exports the complexity involved in making macros
> > hygenic into the whole rest of the language,
>
> All in all, though, it's easier to implement Scheme using CL data
> structures than CL using Scheme data structures.  In CL, you can just
> make a SCHEME package and put Scheme there; in Scheme, you can't use
> Scheme symbols at all straightforwardly as CL symbols.

That should hardly be taken as an argument for CL -- the purpose of a
language is to limit the number of easily-expressible programs. Structured
languages cannot express "goto" -- however, since they can express almost
all the "good" uses of goto, and none of the "bad" ones, they are
preferred. Not that I'm prepared to argue this, but one could say that
such is the case with Lisp 1 + hygienic macros -- it allows one to do all
the useful things one does in Lisp 2, while avoiding more possible bugs.
From: Duane Rettig
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <4he87y5zz.fsf@beta.franz.com>
Adrian Kubala <······@sixfingeredman.net> writes:

> On 6 May 2003, Kent M Pitman wrote:
> > Adrian Kubala <······@sixfingeredman.net> writes:
> > > In other words, CL exports the complexity involved in making macros
> > > hygenic into the whole rest of the language,
> >
> > All in all, though, it's easier to implement Scheme using CL data
> > structures than CL using Scheme data structures.  In CL, you can just
> > make a SCHEME package and put Scheme there; in Scheme, you can't use
> > Scheme symbols at all straightforwardly as CL symbols.
> 
> That should hardly be taken as an argument for CL -- the purpose of a
> language is to limit the number of easily-expressible programs.

That is _definitely_ not Common Lisp's purpose.  Your statement might be
accurate for a _specific_ language, but not all languages. Perhaps Scheme
desires to restrict, but CL definitely desires to enable.

> Structured
> languages cannot express "goto" -- however, since they can express almost
> all the "good" uses of goto, and none of the "bad" ones, they are
> preferred.

Again, CL doesn't claim to be a Structured language, any more than it
claims to be a functional language (although it can allow either or both
kinds of programming).  Note that CL _does_ have goto, as in CL:GO.

> Not that I'm prepared to argue this, but one could say that
> such is the case with Lisp 1 + hygenic macros -- it allows one to do all
> the useful things one does in Lisp 2, while avoiding more possible bugs.

This is a different argument than you are making above.   It may indeed be
the case that CL gives you a bigger gun to hang yourself with, or more rope
with which to shoot yourself in the foot, and I like it like that.

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Kent M Pitman
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <sfwel3bfwt5.fsf@shell01.TheWorld.com>
Adrian Kubala <······@sixfingeredman.net> writes:

> On 6 May 2003, Kent M Pitman wrote:
> > Adrian Kubala <······@sixfingeredman.net> writes:
> > > In other words, CL exports the complexity involved in making macros
> > > hygenic into the whole rest of the language,
> >
> > All in all, though, it's easier to implement Scheme using CL data
> > structures than CL using Scheme data structures.  In CL, you can just
> > make a SCHEME package and put Scheme there; in Scheme, you can't use
> > Scheme symbols at all straightforwardly as CL symbols.
> 
> That should hardly be taken as an argument for CL -- the purpose of a
> language is to limit the number of easily-expressible programs. Structured
> languages cannot express "goto" -- however, since they can express almost
> all the "good" uses of goto, and none of the "bad" ones, they are
> preferred. Not that I'm prepared to argue this, but one could say that
> such is the case with Lisp 1 + hygenic macros -- it allows one to do all
> the useful things one does in Lisp 2, while avoiding more possible bugs.

One could say this, but they'd be wrong. ;)

I had an actual mid-term test in compiler design at MIT on which I was
asked:  Gotos are (a) good (b) bad.  I have to say, I just don't believe
in absolute good and absolute bad in this sense.  It was this that caused
me to realize I was studying religion, not science.

I also don't believe the purpose of the language should be to limit the number
of easily expressible programs.  Mostly I think the opposite.  You need
more exposition here.
From: Joe Marshall
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <3cjqdhc2.fsf@ccs.neu.edu>
Kent M Pitman <······@world.std.com> writes:

> I had an actual mid-term test in compiler design at MIT on which I was
> asked:  Gotos are (a) good (b) bad.  

I hope you answered `yes'.
From: Nikodemus Siivola
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <b99k34$59dad$1@midnight.cs.hut.fi>
Adrian Kubala <······@sixfingeredman.net> wrote:

> the purpose of a language is to limit the number of easily-expressible
> programs

What!? That's like saying "the purpose of harmony is chaos", or any other
oxymoron of your choice... Please explain.

Cheers,

  -- Nikodemus
From: Kent M Pitman
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <sfwr87bedun.fsf@shell01.TheWorld.com>
Nikodemus Siivola <········@kekkonen.cs.hut.fi> writes:

> Adrian Kubala <······@sixfingeredman.net> wrote:
> 
> > the purpose of a language is to limit the number of easily-expressible
> > programs
> 
> What!? That's like saying "purpose of harmony is chaos", or any other
> oxymoron of your choise... Please explain.

I hate to speak for another, so Adrian can correct me if he's wrong, but
I have to assume that what he means implies that languages allow one to
express 'good' things and 'bad' things and that to the extent that one can
make illegal the bad things that one can easily reach, one has reduced the
amount of runtime grief.  This is sort of like childproofing your house.

This argument is presently playing out in US politics where it comes as
a surprise to a generation of people who have grown up without the need
to contemplate the design of their own political system--they are merely
consumers of it--and they have been surprised to learn that 'freedom'
doesn't imply 'safety' nor vice versa.  Worse, many (falsely) imagine
that this is something the founding fathers did not intend and are busy
'fixing' things by making things more safe, even at the expense of freedom.

It is certainly the case that one can trade freedom for safety, but many
of us don't want this kind of trade.

Reference: Disneyland with the Death Penalty
           Wired Magazine, Issue 1.04, Sep/Oct 1993
           http://www.wired.com/wired/archive/1.04/gibson.html
 
  "They that can give up essential liberty to obtain a
   little temporary safety deserve neither liberty nor safety."
     -- Benjamin Franklin
        Historical Review of Pennsylvania
        http://www.bartleby.com/100/245.1.html

I don't want to push the predictive nature of this metaphor too
precisely because there are some fundamental properties of the
linguistic world that are different than the real world. For example,
in the linguistic realm, one does not _live_ in the language and one
can use languages for one purpose and not another.  However, the basic
notion that freedom and safety are in tension with one another does
mostly carry over, and is worth giving serious heed to.  And for those
of us to whom linguistic freedom is important, the value of using
tools that let us express ourselves the way we want and not in some
government-approved "safe" way is not to be ignored even if the
consequences are less dramatic in the linguistic domain.
From: Michael Livshin
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <s3llxjuluu.fsf@laredo.verisity.com.cmm>
Kent M Pitman <······@world.std.com> writes:

> I hate to speak for another, so Adrian can correct me if he's wrong

priceless.  :)

> , but I have to assume that what he means implies that languages
> allow one to express 'good' things and 'bad' things and that to the
> extent that one can make illegal the bad things that one can easily
> reach, one has reduced the amount of runtime grief.  This is sort of
> like childproofing your house.

let's see.  how do you view CL's lack of general pointers to
arbitrary memory locations, like those they have in C?  is that a
good thing or a bad thing?

the problem with hygienic macros, in my view, is not that they restrict
you, but precisely that they don't buy you anything if you are using
CL.

the (forced) abstaining from using pointers, on the other hand,
obviously does buy you some beneficial language properties.

so I wouldn't want to make grand sweeping claims about desirability
of semantic restrictions or lack thereof.  it's always a trade-off.

-- 
Purely applicative languages are poorly applicable.
                -- Alan Perlis
From: Franz Kafka
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <cb6ua.4695$c22.3161@news01.roc.ny.frontiernet.net>
Michael Livshin:
>
> How do you view CL's lack of general pointers to
> arbitrary memory locations, like those they have in C?  is that a
> good thing or a bad thing?
>

Very Very Very Good Thing!!!!!!!!

I personally think that the lack of pointers is a good thing. It is one of
the
major reasons why I chose Lisp over C for most of my coding.

I found that pointers gave me way too much rope, and
I ended up trashing the system by mangling memory locations
when I was learning C.

I've read in one of my comp. sci. books that you need pointers
to write interesting programs, but I found that I could
write those same programs in Lisp without using pointers,
and without worrying about trashing a memory location.

I think pointers are like goto--not in function, so don't
flame about that, but because there are many other
ways to do what pointers do--and they are all
a lot safer.

Lisp hides pointers--like loops hide goto. They're
still there, but not as easy to shoot your
hand off with.
From: Franz Kafka
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <4i7ua.5293$oj.1402@news02.roc.ny.frontiernet.net>
"Franz Kafka" <Symbolics _ XL1201 _ Sebek _ Budo _ Kafka @ hotmail . com>
wrote in message ························@news01.roc.ny.frontiernet.net...
> Michael Livshin:
> >
> > How do you view CL's lack of general pointers to
> > arbitrary memory locations, like those they have in C?  is that a
> > good thing or a bad thing?
> >
>
> Very Very Very Good Thing!!!!!!!!
>

Even though I think pointers are bad, I don't think Lisp should
avoid them if, as Pascal said, they have uses.

Lisp provides many ways to do the same task, and
many features so that programmers can write
what they want to write more easily.

I also think that a call/cc-like operator
should have been included in ANSI CL
so that Lisp programmers could use it
to implement new control structures.
I know it is going to be harder in CL than
it was in Scheme--don't post about that--CL
is a bigger language.

The avoid-it mentality of the 'goto' crowd
is to be avoided. If you need to add a
new looping construct to Lisp, (go <tag>)
is one way to do it.
From: Pascal Costanza
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <b9atut$s7a$1@f1node01.rhrz.uni-bonn.de>
Michael Livshin wrote:

> the (forced) abstaining from using pointers, on the other hand,
> obviously does buy you some beneficial language properties.
> 
> so I wouldn't want to make grand sweeping claims about desirability
> of semantic restrictions or lack thereof.  it's always a trade-off.

No, it's not. The lack of pointers means that it is relatively hard to 
implement a straightforward FFI. The lack of a powerful UFFI is 
perceived to be one of the most prominent obstacles in making Common 
Lisp more popular. Or to put it the other way around: as soon as you 
include an FFI in Common Lisp, you need to include pointers in some 
way or another.
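
For concreteness, every FFI ends up handing the programmer something 
like the following (the FFI: operators here are hypothetical, standing 
in for whatever a UFFI-style or implementation-specific interface 
actually provides):

(let ((buf (ffi:malloc 256)))           ; BUF is a raw machine address
  (unwind-protect
      (progn
        (ffi:call "getcwd" buf 256)     ; C writes through the pointer
        (ffi:pointer->string buf))
    (ffi:free buf)))                    ; and you free it by hand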

The idea behind language restrictions is always that the world would be 
a better place if everyone behaved the same, at least in certain 
respects. However, the world doesn't function like that - you can create 
secluded compartments, but when the lack of certain facilities hinders 
people from expressing certain things, they will switch to a different 
compartment. It doesn't make sense to deny this fact.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Franz Kafka
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <2b7ua.4701$yb2.4570@news01.roc.ny.frontiernet.net>
"Pascal Costanza" <········@web.de> wrote in message
·················@f1node01.rhrz.uni-bonn.de...
> Michael Livshin wrote:
>
> > the (forced) abstaining from using pointers, on the other hand,
> > obviously does buy you some beneficial language properties.
> >
> > so I wouldn't want to make grand sweeping claims about desirability
> > of semantic restrictions or lack thereof.  it's always a trade-off.
>
> No, it's not. The lack of pointers means that it is relatively hard to
> implement a straightforward FFI. The lack of a powerful UFFI is
> perceived to be one of the most prominent obstacles in making Common
> Lisp more popular. Or to put it the other way around: As soon as you
> include an FFI into Common Lisp, you need to include pointers in some
> way or the other.
>

Lisp provides pointers--they're just under the hood. If there were a portable
way to get at pointers, that function should start with '%', because this
would warn Lisp programmers that using it could corrupt
memory.

'(a) ;; [value: A]---pointer--->[value:Nil]

some of Lisp's "destructive operators" deal directly with pointers.

Because linked data structures are built into the language,
the lack of pointers is not as big of an issue.
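
A tiny example of that pointer-like sharing, with no raw pointers
in sight:

(let* ((a (list 1 2 3))
       (b a))               ; B references the same conses A does
  (setf (car a) 99)         ; a "destructive operator"
  b)                        ; => (99 2 3) -- the change shows through B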

If the programmer had full access to pointers it would
be harder to garbage collect. If anyone can
think of how to implement pointers in such a way
that people who need them can use them but
programmers are not forced to use them, that
would be great. I just don't know how this
would be done, or whether it could be done in a
standardized way.

I only know one thing: all Common Lisps would
have to treat them in the same way, or the Lisp
code that used them would not be portable.
From: Michael Livshin
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <s3he86vrz3.fsf@laredo.verisity.com.cmm>
Pascal Costanza <········@web.de> writes:

> Michael Livshin wrote:
>
>> the (forced) abstaining from using pointers, on the other hand,
>> obviously does buy you some beneficial language properties.
>> so I wouldn't want to make grand sweeping claims about desirability
>> of semantic restrictions or lack thereof.  it's always a trade-off.
>
> The idea behind language restrictions is always that the world would
> be a better place if everyone would behave the same, at least in
> certain respects. However, the world doesn't function like that - you
> can create secluded compartments, but when the lack of certain
> facilities hinders people to express certain things, they will switch
> to a different compartment. It doesn't make sense to deny this fact.

I'm not denying this fact.  I think that dividing the world into
compartments (delineated by semantic restrictions) is clearly a good
thing.  I'm not sure what you are arguing against.  I think that a
property stated as "if your program limits itself to core CL + safe
extensions, then it won't crash" is very nice.

let's try a (hopefully) better example of desirable semantic
restriction in CL: the lack of call/cc.  I guess the benefits (such
as implementable `unwind-protect') are pretty obvious, and the need
to introduce call/cc in order to interface to other compartments is
not very pressing.

(a world where the most popular computing platform is the Scheme
 Machine would be another matter entirely, of course.  but then, there
 would be the much more serious problem of having the lambda calculus
 be part of the elementary school curriculum...)

-- 
In many cases, writing a program which depends on supernatural insight
to solve a problem is easier than writing one which doesn't.
                                                        -- Paul Graham
From: Pascal Costanza
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <b9b180$tmm$1@f1node01.rhrz.uni-bonn.de>
Michael Livshin wrote:

> let's try a (hopefully) better example of desirable semantic
> restriction in CL: the lack of call/cc.  I guess the benefits (such
> as implementable `unwind-protect') are pretty obvious, and the need
> to introduce call/cc in order to interface to other compartments is
> not very pressing.

The lack of call/cc in Common Lisp is not a deliberate restriction, but 
rather a consequence of the fact that it is hard to integrate with other 
features of the language. In general, programming language features do 
not combine very well, and that's when language designers have to make 
choices.

Kent Pitman once had a description of the problems involved when 
integrating call/cc into Lisp (unwind-protect vs. continuations) - I am 
not able to find it right now... (Kent?)


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Kent M Pitman
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <sfwof2ex2ac.fsf@shell01.TheWorld.com>
Pascal Costanza <········@web.de> writes:

> The lack of call/cc in Common Lisp is not a deliberate restriction,
> but rather a consequence of the fact that it is hard to integrate with
> other features of the language. In general, programming language
> features do not combine very well, and that's when language designers
> have to make choices.
> 
> Kent Pitman once had a description of the problems involved when
> integrating call/cc into Lisp (unwind-protect vs. continuations) - I
> am not able to find it right now... (Kent?)

http://www.nhplace.com/kent/PFAQ/unwind-protect-vs-continuations.html
From: Lars Brinkhoff
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <85smrq505y.fsf@junk.nocrew.org>
Pascal Costanza <········@web.de> writes:
> Kent Pitman once had a description of the problems involved when
> integrating call/cc into Lisp (unwind-protect vs. continuations) - I
> am not able to find it right now.

http://www.nhplace.com/kent/PFAQ/unwind-protect-vs-continuations.html
From: Michael Livshin
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <s3d6iuvpr0.fsf@laredo.verisity.com.cmm>
Pascal Costanza <········@web.de> writes:

> Michael Livshin wrote:
>
>> let's try a (hopefully) better example of desirable semantic
>> restriction in CL: the lack of call/cc.  I guess the benefits (such
>> as implementable `unwind-protect') are pretty obvious, and the need
>> to introduce call/cc in order to interface to other compartments is
>> not very pressing.
>
> The lack of call/cc in Common Lisp is not a deliberate restriction,
> but rather a consequence of the fact that it is hard to integrate with
> other features of the language. In general, programming language
> features do not combine very well, and that's when language designers
> have to make choices.

pointers are a generalization of memory access.
call/cc is a generalization of control flow.

for all I know, you could implement a Common Lisp on top of Scheme,
using call/cc to implement the various control features.  as long as
you don't provide call/cc as a user-level feature, you can provide
a working `unwind-protect' in such an implementation.

it's the same with pointers.  CL implementations obviously use them.
FFI's use them.  but they are not in the ANSI spec for a good reason:
their availability to the user would make too many important
assumptions and invariants incorrect, and perhaps would make some ANSI
CL features completely unimplementable (no, "you can use this, but be
careful with pointers" doesn't cut it, thankyouverymuch).  so, just
like call/cc.

(the question whether a restriction is deliberate or not is orthogonal
 and irrelevant.)

-- 
The PROPER way to handle HTML postings is to cancel the article, then
hire a hitman to kill the poster, his wife and kids, and fuck his dog
and smash his computer into little bits. Anything more is just
extremism.                                    -- Paul Tomblin, in SDM
From: Pascal Costanza
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <b9b3hn$vvq$1@f1node01.rhrz.uni-bonn.de>
Michael Livshin wrote:

> for all I know, you could implement a Common Lisp on top of Scheme,
> using call/cc to implement the various control features.  as long as
> you don't provide call/cc as a user-level feature, you can provide
> working `unwing-protect' in such implementation.

Not quite. Assume you have stored a continuation before you call 
unwind-protect, and within the extent of that unwind-protect you call 
the stored continuation. Does this mean that the cleanup form should be 
executed or not? There are cases in which it makes sense and there are 
cases in which it doesn't make sense. How do you distinguish between them?
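
For reference, the contract at stake is CL's: control may leave the 
protected form early, and the cleanup then runs exactly once, e.g.

(block out
  (unwind-protect
      (return-from out 'early)        ; non-local exit from the body
    (format t "~&cleaning up~%")))    ; still runs, exactly once

What has no CL counterpart is control coming *back into* the protected 
form afterwards.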

It seems to be hard to come up with a good language design in this 
respect. Kent Pitman's article on this issue seems to suggest that it is 
unsolvable (but I am not sure whether I understood this correctly).

Combined with the fact that there seems to be no real need for 
continuations in the "real world" [1], it's really hard to justify the 
inclusion of such a feature.

Pointers are totally different in this regard because it's immediately 
clear what you could use them for. (It's better to talk about FFI here, 
and not about pointers as an isolated feature.)


Pascal


[1] http://makeashorterlink.com/?F5CF25974

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Kent M Pitman
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <sfwissmx1w8.fsf@shell01.TheWorld.com>
Pascal Costanza <········@web.de> writes:

> It seems to be hard to come up with a good language design in this
> respect. Kent Pitman's article on this issue seems to suggest that 
> [the integration of call/cc and unwind-protect] is unsolvable 
> (but I am not sure whether I understood this correctly).

I think it's unsolvable without changing Scheme.  And you might as well
be asking to change the church's icon from a cross to a pistol or a 
guillotine when you ask to change the way continuations are managed
in Scheme.  Nevertheless, the article to which I previously alluded
( http://www.nhplace.com/kent/PFAQ/unwind-protect-vs-continuations.html )
does contain proposed fixes, just for purposes of conversation.  I don't
seriously expect anyone to accept the fixes, but not because I think the
proposals are not technically sound.  There is too much community vested
interest in keeping call/cc as it is, and so they'd rather just close their
eyes and pretend that there's no problem.  That, at least, is my assessment.
YMMV.

> Combined with the fact that there seems to be no real need for
> continuations in the "real world" [1], it's really hard to justify 
> the inclusion of such a feature.

I don't know if I'd go so far as to say no 'need' in the sense of 'no use'.
There is call for them, and they could be handy if we had them, but the
price is too high.  And if we offered call/cc in the way I propose, it
probably would just annoy the Scheme community.  Hmmm.... (sound of 
wheels turning)

> Pointers are totally different in this regard because it's immediately
> clear what you could use them for. (It's better to talk about FFI
> here, and not about pointers as an isolated feature.)

Actually, I think the desire to have pointers instead of a GC'd world,
and the desire to have unhygienic macros instead of hygienic ones are
pretty analogous.  The difference is not in the structure of the argument
but rather in the need of the community.  This community has expressed
a need for automatic memory management and a willingness to give up the
freedom of pointers to get it.  This community has not expressed a need
for hygiene (because we have it in other ways), and so we have no desire
to give up the full generality of our macro system for a more restrictive
one.
From: Michael Livshin
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <s38ytivnfo.fsf@laredo.verisity.com.cmm>
Pascal Costanza <········@web.de> writes:

> Michael Livshin wrote:
>
>> for all I know, you could implement a Common Lisp on top of Scheme,
>> using call/cc to implement the various control features.  as long as
>> you don't provide call/cc as a user-level feature, you can provide
>> working `unwing-protect' in such implementation.
>
> Not quite. Assume you have stored a continuation before you call
> unwind-protect, and within the extent of that unwind-protect you call
> the stored continuation.

how am I supposed to /call/ it, exactly?  I don't have any way to do
it.  I'm writing in CL, remember?

> Kent Pitman's article on this issue seems to suggest that it
> is unsolvable (but I am not sure whether I understood this
> correctly).

that's my understanding of it, too.

> Pointers are totally different in this regard because it's immediately
> clear what you could use them for. (It's better to talk about FFI
> here, and not about pointers as an isolated feature.)

it's indeed important to finally decide what you are talking about,
yes.

let's see:

I said that sometimes it is a good idea to restrict a language,
because well-chosen restrictions let you assume all kinds of useful
things.  as an example, I brought up general pointers.

you said that you need pointers for FFI's and at best, given the
existence of the so-called Real World, we can talk about
compartmentalization and not outright omission of certain semantic
features from the language.

I said "fine", and brought up call/cc as a possibly better example.  I
thought this example to be better /exactly/ because there's no
Real-World-related need to include it in CL.  I hoped this would help
you see my point.

but it didn't help.  oh well.

-- 
Due to the holiday next Monday, there will be no garbage collection.
From: Kent M Pitman
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <sfwy91iiywe.fsf@shell01.TheWorld.com>
Michael Livshin <······@cmm.kakpryg.net> writes:

> > Kent Pitman's article on this issue seems to suggest that it
> > is unsolvable (but I am not sure whether I understood this
> > correctly).
> 
> that's my understanding of it, too.

Btw, I just this minute made a change to the article 
 http://www.nhplace.com/kent/PFAQ/unwind-protect-vs-continuations.html
in order to add some examples to make it clear what the options
for fixing it would look like in terms of code.

If you read the version where the proposed changes had no code examples,
you might want to take a second look.

Hopefully this will avoid the sense that the problem is pragmatic
and will make it more clear that the problem is political.
From: William D Clinger
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <b84e9a9f.0305071107.95eddb1@posting.google.com>
Kent M Pitman wrote:
> Btw, I just this minute made a change to the article 
>  http://www.nhplace.com/kent/PFAQ/unwind-protect-vs-continuations.html
> in order to add some examples to make it clear what the options
> for fixing it would look like in terms of code.
> 
> If you read the version where the proposed changes had no code examples,
> you might want to take a second look.
> 
> Hopefully this will avoid the sense that the problem is pragmatic
> and will make it more clear that the problem is political.

Actually, I think the problem is that we don't understand why you
aren't willing to consider implementing UNWIND-PROTECT as a simple
macro and registering it via the SRFI process.

Your article is written as though there were some technical impediment
to this, but I've never been able to figure out what it might be.

Will
From: Kent M Pitman
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <sfw65om8qym.fsf@shell01.TheWorld.com>
······@qnci.net (William D Clinger) writes:

> Kent M Pitman wrote:
> > Btw, I just this minute made a change to the article 
> >  http://www.nhplace.com/kent/PFAQ/unwind-protect-vs-continuations.html
> > in order to add some examples to make it clear what the options
> > for fixing it would look like in terms of code.
> > 
> > If you read the version where the proposed changes had no code examples,
> > you might want to take a second look.
> > 
> > Hopefully this will avoid the sense that the problem is pragmatic
> > and will make it more clear that the problem is political.
> 
> Actually, I think the problem is that we don't understand why you
> aren't willing to consider implementing UNWIND-PROTECT as a simple
> macro and registering it via the SRFI process.
> 
> Your article is written as though there were some technical impediment
> to this, but I've never been able to figure out what it might be.

The fact that it requires a change to call/cc.

I presented the need for a change to the authors' mailing list, and they
rejected the idea.

While unwind-protect might be doable using secret facilities on a
given implementation, I don't see how it can be doable by
straightforward rewrite.  I'd rather see Scheme fixed than have secret
fixes everywhere that no one talks about.  You're welcome to show me
what the macro might expand into that works portably.
From: William D Clinger
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <b84e9a9f.0305080807.4efb9f3@posting.google.com>
Kent M Pitman wrote:
> > Actually, I think the problem is that we don't understand why you
> > aren't willing to consider implementing UNWIND-PROTECT as a simple
> > macro and registering it via the SRFI process.
> > 
> > Your article is written as though there were some technical impediment
> > to this, but I've never been able to figure out what it might be.
> 
> The fact that it requires a change to call/cc.

I understand that you are claiming that the semantics you desire requires
a change to call/cc, but I have not seen you present any evidence in
support of that claim.  Furthermore, implementing UNWIND-PROTECT and both
of your proposed solutions in portable IEEE/ANSI/R5RS Scheme seems trivial
to me, so I must be missing something.

> I presented the need for a change to the authors mailing list, and they
> rejected the idea.
> 
> While unwind-protect might be doable using secret facilities on a
> given implementation, i don't see how it can be doable by
> straightforward rewrite.  I'd rather see Scheme fixed than have secret
> fixes everywhere that no one talks about.  You're welcome to show me
> what the macro might expand into that works portably.

; An implementation of Common Lisp's UNWIND-PROTECT
; in Scheme, complete with Common Lisp's restrictions
; on continuations.
; Uses SRFI 23: Error reporting mechanism.

(define-syntax unwind-protect
  (syntax-rules ()
    ((unwind-protect protected-form cleanup-form ...)
     (let ((entered? #f)
           (exited? #f)
           (errormsg
            "You entered or exited an UNWIND-PROTECT more than once"))
       (dynamic-wind
        (lambda ()
          (if entered?
              (error errormsg)
              (set! entered? #t)))
        (lambda () protected-form)
        (lambda ()
          (if exited?
              (error errormsg)
              (begin (set! exited? #t)
                     cleanup-form ...))))))))


; An implementation of one-shot continuations.
; Uses SRFI 23: Error reporting mechanism.

(define (call-with-one-shot-continuation f)
  (define (broken . args)
    (error "You called a one-shot continuation more than once."))
  (call-with-current-continuation
   (lambda (k)
     (unwind-protect (f (lambda (v) (k v)))
                     (set! k broken)))))


; The following code is completely untested, but should give the idea.

; An implementation of Kent Pitman's
; "Proposed fix #1: Change call-with-current-continuation",
; http://www.nhplace.com/kent/PFAQ/unwind-protect-vs-continuations.html

(define call/cc-fix-1
  (lambda (one-shot? f)
    (if one-shot?
        (call-with-one-shot-continuation f)
        (call-with-current-continuation f))))


; An implementation of Kent Pitman's
; "Proposed fix #2: Change continuations themselves",
; http://www.nhplace.com/kent/PFAQ/unwind-protect-vs-continuations.html

(define call/cc-fix-2
  (lambda (f)
    (define (broken . args)
      (error "Hey, you promised not to call me again!"))
    (call-with-current-continuation
     (lambda (k)
      (f (lambda (last-use? value)
           (let ((return k))
             (if last-use?
                 (set! k broken))
             (return value))))))))


Will
From: Kent M Pitman
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <sfwllxhl7s8.fsf@shell01.TheWorld.com>
······@qnci.net (William D Clinger) writes:

> I understand that you are claiming that the semantics you desire requires
> a change to call/cc, but I have not seen you present any evidence in
> support of that claim.  Furthermore, implementing UNWIND-PROTECT and both
> of your proposed solutions in portable IEEE/ANSI/R5RS Scheme seems trivial
> to me, so I must be missing something.

See my reply to Joe Marshall.
From: Joe Marshall
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <1xz9bkyx.fsf@ccs.neu.edu>
Kent M Pitman <······@world.std.com> writes:

> While unwind-protect might be doable using secret facilities on a
> given implementation, i don't see how it can be doable by
> straightforward rewrite.  

(define-syntax unwind-protect
  (syntax-rules ()
    ((_ body cleanup ...) 
     (unwind-protect-function
      (lambda () body)
      (lambda () cleanup ...)))))

(define (unwind-protect-function body-form cleanup-form)
  (let ((cleaned-up #f))
    (dynamic-wind
     (lambda () 
       (if cleaned-up
           (unwind-protect-error)))
     body-form
     (lambda ()
       (cleanup-form)
       (set! cleaned-up #t)))))

(define (unwind-protect-error)
  (error "Attempt to re-enter a dynamic state with a one-shot cleanup."))
From: Erann Gat
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <gat-0805030855220001@192.168.1.51>
In article <············@ccs.neu.edu>, Joe Marshall <···@ccs.neu.edu> wrote:

> Kent M Pitman <······@world.std.com> writes:
> 
> > While unwind-protect might be doable using secret facilities on a
> > given implementation, i don't see how it can be doable by
> > straightforward rewrite.  
> 
> (define-syntax unwind-protect
>   (syntax-rules ()
>     ((_ body cleanup ...) 
>      (unwind-protect-function
>       (lambda () body)
>       (lambda () cleanup ...)))))
> 
> (define (unwind-protect-function body-form cleanup-form)
>   (let ((cleaned-up #f))
>     (dynamic-wind
>      (lambda () 
>        (if cleaned-up
>            (unwind-protect-error)))
>      body-form
>      (lambda ()
>        (cleanup-form)
>        (set! cleaned-up #t)))))
> 
> (define (unwind-protect-error)
>   (error "Attempt to re-enter a dynamic state with a one-shot cleanup."))

You probably want:

(define (unwind-protect-function body-form cleanup-form)
   ...
     body-form
     (lambda ()
       (set! cleaned-up #t)
       (cleanup-form)))))

Otherwise if cleanup-form does a non-local exit then you could get
multiple entries without an error.

E.
From: Rob Warnock
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <Zs2dnbceieydJSGjXTWc-g@speakeasy.net>
Erann Gat <···@jpl.nasa.gov> wrote:
+---------------
| Joe Marshall <···@ccs.neu.edu> wrote:
| > (define (unwind-protect-function body-form cleanup-form)
| >   (let ((cleaned-up #f))
| >     (dynamic-wind
| >      (lambda () 
| >        (if cleaned-up
| >            (unwind-protect-error)))
| >      body-form
| >      (lambda ()
| >        (cleanup-form)
| >        (set! cleaned-up #t)))))
|
| You probably want:
| (define (unwind-protect-function body-form cleanup-form)
|    ...
|      body-form
|      (lambda ()
|        (set! cleaned-up #t)
|        (cleanup-form)))))
| 
| Otherwise if cleanup-form does a non-local exit then you could get
| multiple entires without an error.
+---------------

But cleanup-form is not *allowed* to do a non-local exit!! R5RS says:

	The effect of using a captured continuation to enter or exit
	the dynamic extent of a call to BEFORE or AFTER is undefined.

[I ran into this when I was trying to write a "safe-eval" in Scheme,
what would have been in CL simply IGNORE-ERRORS wrapped around EVAL.
Can't be done in Scheme, at least not portably in conformance to R5RS...]
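
Spelled out, the CL version really is just:

(defun safe-eval (form)
  (ignore-errors (eval form)))    ; NIL plus the condition on failure

(safe-eval '(/ 1 0))              ; => NIL, #<DIVISION-BY-ZERO ...>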


-Rob

-----
Rob Warnock, PP-ASEL-IA		<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Kent M Pitman
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <sfwsmrps9u7.fsf@shell01.TheWorld.com>
Joe Marshall <···@ccs.neu.edu> writes:

> Kent M Pitman <······@world.std.com> writes:
> 
> > While unwind-protect might be doable using secret facilities on a
> > given implementation, i don't see how it can be doable by
> > straightforward rewrite.  
> 
> (define-syntax unwind-protect
>   (syntax-rules ()
>     ((_ body cleanup ...) 
>      (unwind-protect-function
>       (lambda () body)
>       (lambda () cleanup ...)))))
> 
> (define (unwind-protect-function body-form cleanup-form)
>   (let ((cleaned-up #f))
>     (dynamic-wind
>      (lambda () 
>        (if cleaned-up
>            (unwind-protect-error)))
>      body-form
>      (lambda ()
>        (cleanup-form)
>        (set! cleaned-up #t)))))
> 
> (define (unwind-protect-error)
>   (error "Attempt to re-enter a dynamic state with a one-shot cleanup."))

This will implement unwind-protect only at the expense of NOT allowing
you to do the things dynamic-wind is intended for, such as implementing
dynamic binding.

The whole point of dynamic-wind is to implement things like special
variables in a multi-tasked (time-sliced) environment.  Escape procedures
are closed over the dynamic state, so you can grab the state at various
points and set the process aside and then continue it later.  If any
of the dynamic state contains an unwind-protect under your implementation,
then any attempt to suspend the process for resumption later would close
the file.

This is like saying that with-open-file would open a file for exactly one
scheduler quantum.

I don't count this as a reasonable solution.
From: Joe Marshall
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <ptmt9x3k.fsf@ccs.neu.edu>
> Joe Marshall <···@ccs.neu.edu> writes:
> [unwind-protect implementation]

Kent M Pitman <······@world.std.com> writes:

> This will implement unwind-protect only at the expense of NOT allowing
> you to do the things dynamic-wind is intended for, such as implementing
> dynamic binding.

It does not interfere with dynamic binding.

> The whole point of dynamic-wind is to implement things like special
> variables in a multi-tasked (time-sliced) environment.  

I have to disagree with this for a number of reasons:

  1.  It can implement dynamic binding in a single-threaded system.

  2.  While dynamic binding can be emulated with dynamic-wind in a
      single-threaded system, that is not `the whole point'.

  3.  Your objection is based on the assumption of a particular
      implementation, to wit, a shallow-bound set of dynamic variables
      implemented by using dynamic-wind and a multitasking system
      implemented by user-level continuation swapping.

  3a. Deep-bound or thread-local dynamic variables would not have a
      problem. 

  3b. `fluid-let' need not be implemented with dynamic-wind.  (Nor
       should it.  Common Lisp doesn't implement special variables
       with UNWIND-PROTECT.)

  3c. Time-slicing is but one way to multitask.  Each thread needs to
      maintain its own dynamic state.

  3d. `User-level' continuations can create a crude approximation to a
      time-sliced system, but a true multithreaded system requires
      *much* more support.  This support would, no doubt, have access
      to raw thread state and control points.  The implementation
      would presumably handle things such that a program that runs
      correctly on a single-threaded system would continue to run
      correctly (barring resource contention) on a multi-threaded
      system.

> I don't count this as a reasonable solution.

It isn't under your assumptions.  But these assumptions would preclude
UNWIND-PROTECT on CommonLisp, too.
From: Kent M Pitman
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <sfw1xz9s5xq.fsf@shell01.TheWorld.com>
Joe Marshall <···@ccs.neu.edu> writes:

> Kent M Pitman <······@world.std.com> writes:
> 
> It does not interfere with dynamic binding.

Yes, it does.
 
You're thinking I mean it in the literal sense of dynamic variables.
I mean, more generally, the establishment of dynamic state.
That is why it's _called_ DYNAMIC-wind.

If memory serves (I don't have a copy of the paper handy, and it's
very long ago) this primitive is introduced in Steele's paper
"Macaroni is Better than Spaghetti" and this _is_ what it was
created for originally.  Establishing dynamic state in a multi-tasking
system.

To be honest, I don't know what dynamic-wind is in Scheme for, but I
think it is short-sighted.  The kinds of things that people implement
using it are often conflicting with one another, and involve presumptions
about protocol.  (The analog of the kind of protocol that things like
'caller saves' or 'callee saves' occupy--that is, a certain kind of
discipline about when to do something and when NOT to do something.)  Yet
there is no published set of presumptions, so everyone makes up their own.
And this is an accident waiting to happen.

I also don't see any way that this is going to generalize to 
non-single-threaded systems.  By contrast, CL's special variables and
CL's unwind-protect _do_ generalize to multi-threading, because we have
running implementations that do this without any special magic.

The fact is that the contract of dynamic-wind is to allow a "re-entrant"
unwind.  That's why there is a wind form.  Otherwise, you would just
do:  (begin (wind) (dynamic-wind body unwind)) since there would only
be one call.  The reason wind is given to dynamic-wind is to allow 
you to re-enter with structured state.  (call/cc already gives you the
ability to re-enter in an unstructured way.)  
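
A minimal illustration of that contract, using nothing beyond the
standard call/cc and dynamic-wind semantics (run as a single program):

(define saved-k #f)
(define been-here #f)

(dynamic-wind
 (lambda () (display "wind") (newline))
 (lambda ()
   (call-with-current-continuation
    (lambda (k) (set! saved-k k)))          ; capture a point inside the extent
   (display "body") (newline))
 (lambda () (display "unwind") (newline)))

(if (not been-here)
    (begin (set! been-here #t)
           (saved-k #f)))                   ; re-enter once from outside

The second entry prints wind, body, unwind again: the wind thunk gets
its chance to re-establish the dynamic state before the body resumes.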

> > I don't count this as a reasonable solution.
> 
> It isn't under your assumptions.  But these assumptions would preclude
> UNWIND-PROTECT on CommonLisp, too.

I have no idea what you mean on this last point.
From: Joe Marshall
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <k7d19uo0.fsf@ccs.neu.edu>
Kent M Pitman <······@world.std.com> writes:

> Joe Marshall <···@ccs.neu.edu> writes:
> 
> > Kent M Pitman <······@world.std.com> writes:
> > 
> > It does not interfere with dynamic binding.
> 
> Yes, it does.
>  
> You're thinking I mean it in the literal sense of dynamic variables.
> I mean, more generally, the establishment of dynamic state.
> That is why it's _called_ DYNAMIC-wind.

Consider a single-threaded system.  The dynamic binding of variables
and the establishment and disestablishment of dynamic state will be
synchronized.  There is no difficulty here.

> If memory serves (I don't have a copy of the paper handy, and it's
> very long ago) this primitive is introduced in Steele's paper
> "Macaroni is Better than Spaghetti" and this _is_ what it was
> created for originally.  Establishing dynamic state in a multi-tasking
> system.

No, it is for maintaining dynamic state in a system with non-local
exits (and, since the continuations are first-class, non-local
entrances). 

> To be honest, I don't know what dynamic-wind is in Scheme for, but I
> think it is short-sighted.  

It is analogous to UNWIND-PROTECT.  Without it, there is no way to
reliably manage resources in the presence of non-local exits.

> The kinds of things that people implement using it are often
> conflicting with one another, and involve presumptions about
> protocol.  (The analog of the kind of protocol that things like
> 'caller saves' or 'callee saves' occupy--that is, a certain kind
> of discipline about when to do something and when NOT to do
> something.)  Yet there is no published set of presumptions, so
> everyone makes up their own.  And this is an accident waiting to
> happen.

Certainly.  Dynamic state is, by its nature, non-local.

> I also don't see any way that this is going to generalize to 
> non-single-threaded systems.  By contrast, CL's special variables and
> CL's unwind-protect _do_ generalize to multi-threading, because we have
> running implementations that do this without any special magic.

[note discussion at end]

> The fact is that the contract of dynamic-wind is to allow a "re-entrant"
> unwind.  That's why there is a wind form.  Otherwise, you would just
> do:  (begin (wind) (dynamic-wind body unwind)) since there would only
> be one call.  The reason wind is given to dynamic-wind is to allow 
> you to re-enter with structured state.  (call/cc already gives you the
> ability to re-enter in an unstructured way.)  

Yes.  I think we all agree that re-entering a dynamic state is
possible under some circumstances, but impossible under others.  Both
Will and I gave a simple mechanism for detecting re-entrancy and
preventing it for those cases where it would be impossible.

> > > I don't count this as a reasonable solution.
> > 
> > It isn't under your assumptions.  But these assumptions would preclude
> > UNWIND-PROTECT on CommonLisp, too.
> 
> I have no idea what you mean on this last point.

Suppose I assert this:

    'UNWIND-PROTECT cannot work in Common Lisp because there are two
kinds of unwinds:  those that restore the saved values of special
variables and those that are used for cleanup.  The scheduler would be
unable to determine which ones were which, therefore the special
bindings would be visible in all the tasks.'

Needless to say, this statement is broken on so many levels that it is
hard to know where to begin to refute it.  Primarily, though, the
problem is that it assumes that special variables are implemented via
UNWIND-PROTECT, that there is a particular *required* implementation
of multitasking and that there is a particular *required*
implementation of dynamic variables.

As we all are aware, the CL standard says *nothing* about multitasking
and *nothing* about how special variables are supposed to be
implemented.

Neither does the Scheme standard.  It doesn't even have dynamic
variables!  So how can one legitimately object that DYNAMIC-WIND
doesn't work when combined with an imagined implementation of two
non-standard language extensions?

There are (several) Scheme systems that extend the Scheme language
with both multitasking and dynamic variables.  The two I am most
familiar with, MzScheme and MIT Scheme, both use the same strategy to
integrate these extensions with the base language:

    1.  Dynamic variables are bound on a per-thread basis.  (In
        MzScheme, these are parameterizations, in MIT Scheme they are
        fluid variables.)  DYNAMIC-WIND is not used as an
        implementation technique.

    2.  DYNAMIC-WINDs are not wound/unwound on thread switches.  (In a
        multiprocessor system there may not even *be* thread
        switches.)

    3.  CALL-WITH-CURRENT-CONTINUATION is not used to mimic context
        switching.

This is exactly analogous to what nearly every Common Lisp
implementation does.  The only difference being that Scheme allows you
to `throw back down the stack', so you need a way to wind in the other
direction (if such winding makes sense to attempt).
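
For instance, a minimal sketch with MzScheme-style parameters (MIT
Scheme's fluid variables are spelled differently but behave
analogously):

(define depth (make-parameter 0))   ; a dynamic variable, initially 0

(define (show) (display (depth)) (newline))

(show)                              ; prints 0
(parameterize ((depth 1))           ; dynamically bind depth to 1
  (show))                           ; prints 1
(show)                              ; prints 0 again

The binding lives in the current parameterization, which is per-thread
state; a context switch neither winds nor unwinds it, and dynamic-wind
is not involved at all.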
From: William D Clinger
Subject: UNWIND-PROTECT in Scheme (was Re: Why seperate function namespaces?)
Date: 
Message-ID: <b84e9a9f.0305081412.3f3f52e0@posting.google.com>
This has nothing to do with separate environments for functions and
values, so I'm changing the subject line.  I apologize to people who
are not interested in Scheme.  It came up because Michael Livshin
referred to Common Lisp's restrictions on continuations as an example
of a desirable semantic restriction.  Pascal Costanza then cited Kent
Pitman's old web page on this subject, which Kent is now defending in
the face of the ease with which the semantics he appears to have been
advocating can be implemented in Scheme.  The URL is
http://www.nhplace.com/kent/PFAQ/unwind-protect-vs-continuations.html

Kent M Pitman quoting Joe Marshall:
> > It does not interfere with dynamic binding.
> 
> Yes, it does.

In other words, Kent, the essence of your argument is that you can
use UNWIND-PROTECT to write buggy programs.

Do you acknowledge that the two fixes that you proposed would have
exactly the same problem as the UNWIND-PROTECT macro that I wrote
for you?

> To be honest, I don't know what dynamic-wind is in Scheme for...

Okay.  There are lots of things in Common Lisp whose purpose I don't
understand either, but that doesn't imply they are broken.

> , but I
> think it is short-sighted.  The kinds of things that people implement
> using it are often conflicting with one another, and involve presumptions
> about protocol....

I think your next assumption is a good example of that.

> I also don't see any way that this is going to generalize to 
> non-single-threaded systems.  By contrast, CL's special variables and
> CL's unwind-protect _do_ generalize to multi-threading, because we have
> running implementations that do this without any special magic.

So do we.  Why do you accept the existence of multithreaded systems
as evidence for the viability of CL's semantics, but not for the
viability of Scheme's?

> The fact is that the contract of dynamic-wind is to allow a "re-entrant"
> unwind.  That's why there is a wind form.  Otherwise, you would just
> do:  (begin (wind) (dynamic-wind body unwind)) since there would only
> be one call.  The reason wind is given to dynamic-wind is to allow 
> you to re-enter with structured state.  (call/cc already gives you the
> ability to re-enter in an unstructured way.)  

That is indeed the purpose of DYNAMIC-WIND.  You seem to be arguing
that there is also a need for UNWIND-PROTECT, which does not allow
this "re-entrant" behavior.  That is certainly true.  We routinely
use something like UNWIND-PROTECT to prevent C continuations from
being invoked more than once, for example, so we have a fair amount
of experience with this.

But when we showed you how easy it is to implement what still seems
to me to be exactly the semantics that you have been saying you want,
you objected on the grounds that the semantics you want can be used
to write buggy programs.  That is not a reasonable objection.

If you were to point out some way in which the semantics you want is
different from the semantics we showed you how to implement, then we
would take you more seriously.

Will
From: William D Clinger
Subject: Re: UNWIND-PROTECT in Scheme (was Re: Why seperate function namespaces?)
Date: 
Message-ID: <b84e9a9f.0305121001.48a52e66@posting.google.com>
I haven't heard anything from Kent Pitman concerning Jonathan Rees's
comments, so I'll go ahead and post a new UNWIND-PROTECT macro that
allows any number of throws out of or into the UNWIND-PROTECT form,
but executes the cleanup forms at most once, when the UNWIND-PROTECT
is exited via the first normal return.

This is what I now understand Kent Pitman to want, and what I now
believe Kent is claiming cannot be done in portable Scheme.

Will

; An implementation of Common Lisp's UNWIND-PROTECT
; in Scheme.  The cleanup forms are executed only once,
; and only on a normal exit or throw to the normal exit.

(define-syntax unwind-protect
  (syntax-rules ()
    ((unwind-protect protected-form cleanup-form ...)
     (let ((normal-exit? #f)
           (exited? #f))
       (dynamic-wind
        (lambda () #t)
        (lambda ()
          (call-with-values
           (lambda () protected-form)
           (lambda results
             (set! normal-exit? #t)
             (apply values results))))
        (lambda ()
          (if (and normal-exit? (not exited?))
              (begin (set! exited? #t)
                     cleanup-form ...))))))))

(define (test-unwind-protect)
  (define (throw-out v) v)
  (define (throw-in v) v)
  (define normal-exits 0)
  (define n 5)
  (define throw-count 0)
  (call-with-current-continuation
   (lambda (k)
    (set! throw-out
          (lambda (v)
            (display "Throwing out of UNWIND-PROTECT") (newline)
            (k v)))
    (unwind-protect
     (begin (if (= throw-count 0)
                (call-with-current-continuation
                 (lambda (k)
                   (set! throw-in
                         (lambda (v)
                           (display "Throwing into UNWIND-PROTECT") (newline)
                           (k v))))))
           (display "Executing inside UNWIND-PROTECT") (newline)
           (set! throw-count (+ throw-count 1))
           (if (< throw-count n)
               (throw-out throw-count))
           (set! normal-exits (+ normal-exits 1))
           17)
     (display "*****Executing cleanup forms*****")
     (newline))))
  (if (< throw-count n)
      (throw-in throw-count))
  (if (= normal-exits 1)
      (begin (set! throw-count (- n 1))
             (display "Now for a second normal exit from the UNWIND-PROTECT")
             (newline)
             (throw-in 17))))

> (test-unwind-protect)
Executing inside UNWIND-PROTECT
Throwing out of UNWIND-PROTECT
Throwing into UNWIND-PROTECT
Executing inside UNWIND-PROTECT
Throwing out of UNWIND-PROTECT
Throwing into UNWIND-PROTECT
Executing inside UNWIND-PROTECT
Throwing out of UNWIND-PROTECT
Throwing into UNWIND-PROTECT
Executing inside UNWIND-PROTECT
Throwing out of UNWIND-PROTECT
Throwing into UNWIND-PROTECT
Executing inside UNWIND-PROTECT
*****Executing cleanup forms*****
Now for a second normal exit from the UNWIND-PROTECT
Throwing into UNWIND-PROTECT
Executing inside UNWIND-PROTECT

[end of test]
From: Dorai Sitaram
Subject: Re: UNWIND-PROTECT in Scheme (was Re: Why seperate function namespaces?)
Date: 
Message-ID: <b9orqa$hkp$1@news.gte.com>
In article <····························@posting.google.com>,
William D Clinger <······@qnci.net> wrote:
>I haven't heard anything from Kent Pitman concerning Jonathan Rees's
>comments, so I'll go ahead and post a new UNWIND-PROTECT macro that
>allows any number of throws out of or into the UNWIND-PROTECT form,
>but executes the cleanup forms at most once, when the UNWIND-PROTECT
>is exited via the first normal return.
>
>This is what I now understand Kent Pitman to want, and what I now
>believe Kent is claiming cannot be done in portable Scheme.
>
>Will
>
>; An implementation of Common Lisp's UNWIND-PROTECT
>; in Scheme.  The cleanup forms are executed only once,
>; and only on a normal exit or throw to the normal exit.
>
>(define-syntax unwind-protect
>  (syntax-rules ()
>    ((unwind-protect protected-form cleanup-form ...)
>     (let ((normal-exit? #f)
>           (exited? #f))
>       (dynamic-wind
>        (lambda () #t)
>        (lambda ()
>          (call-with-values
>           (lambda () protected-form)
>           (lambda results
>             (set! normal-exit? #t)
>             (apply values results))))
>        (lambda ()
>          (if (and normal-exit? (not exited?))
>              (begin (set! exited? #t)
>                     cleanup-form ...))))))))


This won't quite work, Will.  Indeed, an unwind-protect
that only does cleanup on normal exit is for all
practical purposes indistinguishable from progn!
(This is regardless of whether the language has full
continuations, escaping continuations, a mix of them or
none of them.)

Kent wanted cleanup to occur for normal exit, and for
_some_ non-normal exits.  Your and Joe's earlier
solution did cleanup for _all_ non-normal exits, and
your current solution does cleanup for _no_ non-normal
exits.  A middle way is required.

Last weekend, I wrote up an attempt at solving Kent's
proposal #1 using Friedman and Haynes's
constraining-control techniques (from 1985!).
Interested people may please take a look at

  http://www.ccs.neu.edu/~dorai/uwcallcc/uwcallcc.html

I will tackle Kent's proposal #2 when I get time next.

The intriguing thing is that F&H's paper already talks about
a Scheme-specific unwind-protect -- they also talk
about dynamic-wind, but they talk specifically and
separately about unwind-protect!  Like Kent, they note
that there is a choice of unwind-protect semantics for
Scheme, and they suggest four, implementing one as an
example.  Kent's two unwind-protects are different from
their four -- mainly because where F&H decide to
automatically deduce triggerable cleanups based on
relative locations in the continuation tree, Kent
favors user annotation --, but it's F&H's
technique that is the key, for it is flexible enough to
accommodate variations.
From: William D Clinger
Subject: Re: UNWIND-PROTECT in Scheme (was Re: Why seperate function namespaces?)
Date: 
Message-ID: <b84e9a9f.0305130520.2dbe1ce9@posting.google.com>
I'm not trying to argue that Kent's semantics for UNWIND-PROTECT
is useful.  I have two goals here:

    1.  To figure out whether Kent actually has a well-defined
        semantics in mind, and if so to understand what it is.

    2.  To argue against arguments of the form "I can't figure
        out how to do X in this language, so doing X in this
        language must be impossible."

Dorai Sitaram wrote:
> This won't quite work, Will.  Indeed, an unwind-protect
> that only does cleanup on normal exit is for all
> practical purposes indistinguishable from progn!

PROG1, anyway.  That's a quite amusing example of "psychological
set".

> Kent wanted cleanup to occur for normal exit, and for
> _some_ non-normal exits.

That is what I have suspected, and I've been trying to goad
Kent into saying precisely _which_ non-normal exits should
trigger the cleanup and which should not.  Once he attempts
that, he will see that the problem isn't with call/cc and
dynamic-wind at all.  I think there is a valid criticism of
Scheme to be made here, but it isn't that Scheme lacks
expressive power---it's that Scheme provides too much power
without enough constraints or guidance on how to use that
power.

Recall that this conversation started when someone used
call/cc as an example of excessive expressive power.  Kent
didn't accept that argument.  He tried to argue instead that
call/cc and dynamic-wind have insufficient expressive power.
That's nonsense: the theorists believe that, when combined
with side effects and the rest of Scheme, call/cc is powerful
enough to implement every deterministic sequential control
structure that has ever been proposed.
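
(For a flavor of what that means, here is a deliberately simplified
sketch of an escaping catch/throw built from call/cc plus a handler
stack.  The names try and throw are made up, and the sketch ignores
the dynamic-wind interactions this thread is about.)

(define *handlers* '())             ; stack of escape procedures

(define (throw val)
  (if (null? *handlers*)
      (begin (display "unhandled: ") (display val) (newline) #f)
      ((car *handlers*) val)))

(define (try thunk handler)
  (call-with-current-continuation
   (lambda (k)
     (set! *handlers*
           (cons (lambda (val)
                   (set! *handlers* (cdr *handlers*))
                   (k (handler val)))       ; escape to try's caller
                 *handlers*))
     (let ((result (thunk)))
       (set! *handlers* (cdr *handlers*))   ; normal return: pop handler
       result))))

(display (try (lambda () (+ 1 (throw 'oops)))
              (lambda (v) (list 'caught v))))
(newline)                           ; prints (caught oops)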

I think Kent, like a lot of people in the Lisp community,
has a hard time admitting that a language can have too much
power.  I think that's why he turned instead to the bogus
argument that, because he can't figure out how to do it, it
can't be done.

> Last weekend, I wrote up an attempt at solving Kent's
> proposal #1 using Friedman and Haynes's
> constraining-control techniques (from 1985!).
> Interested people may please take a look at
> 
>   http://www.ccs.neu.edu/~dorai/uwcallcc/uwcallcc.html

Thank you for doing this.

> Like Kent, they note
> that there is a choice of unwind-protect semantics for
> Scheme, and they suggest four, implementing one as an
> example.  Kent's two unwind-protects are different from
> their four -- mainly because where F&H decide to
> automatically deduce triggerable cleanups based on
> relative locations in the continuation tree, Kent
> favors user annotation --, but it's F&H's
> technique that is the key, for it is flexible enough to
> accommodate variations.

Well, you clearly have a better idea of what Kent wants
than I do.  I hope you two continue this conversation here
so I can eavesdrop.

Will
From: Michael Livshin
Subject: Re: UNWIND-PROTECT in Scheme
Date: 
Message-ID: <s3issfhrd4.fsf@laredo.verisity.com.cmm>
······@qnci.net (William D Clinger) writes:

> I think there is a valid criticism of Scheme to be made here, but it
> isn't that Scheme lacks expressive power---it's that Scheme provides
> too much power without enough constraints or guidance on how to use
> that power.

I fail to see the practical difference between "you can do that, but
it won't really work because there is no way to force the relevant
assumptions on the execution environment" and "you can't do that".

I'm not really disputing your claim, of course.  it makes perfect
sense to me, taken abstractly.

-- 
Perhaps it IS a good day to die; I say we ship it!
                                        -- Klingon Programmer
From: Erann Gat
Subject: Re: UNWIND-PROTECT in Scheme
Date: 
Message-ID: <gat-1305031114470001@k-137-79-50-101.jpl.nasa.gov>
······@qnci.net (William D Clinger) writes:

> [Kent Pitman] tried to argue instead that
> call/cc and dynamic-wind have insufficient expressive power.
> That's nonsense: the theorists believe that, when combined
> with side effects and the rest of Scheme, call/cc is powerful
> enough to implement every deterministic sequential control
> structure that has ever been proposed.

1.  What the theorists believe is irrelevant.  Theorists can be wrong just
like everyone else.

2.  Even if it is true that everything can be implemented in terms of
call/cc and dw it might be so hard to do it that it's a vacuous
observation.  Obviously everything can be implemented in terms of IF and
GOTO, but that doesn't mean we should be content with those control
structures.

In fact, it seems to me (from my very non-expert point of view) that
unwind-protect in the presence of cooperative multitasking points out a
deep deficiency in the call/cc+dw point of view: you need different kinds
of winding and unwinding depending on which continuation you are using to
enter and exit the dynamic context.  If you're calling the thread-switch
continuation you need to do one thing, but if you're calling an error
continuation you need to do something else.  There is no built-in way for
call/cc and dw to exchange this sort of information, so you have to add
this communications channel yourself.  That is apparently a non-trivial
thing to get right, as evidenced by the fact that so far no one has.

E.
From: Erann Gat
Subject: Re: UNWIND-PROTECT in Scheme
Date: 
Message-ID: <gat-1305031121170001@k-137-79-50-101.jpl.nasa.gov>
In article <····················@k-137-79-50-101.jpl.nasa.gov>,
···@jpl.nasa.gov (Erann Gat) wrote:

> That is apparently a non-trivial
> thing to get right, as evidenced by the fact that so far no one has.

Correction: it looks like Dorai Sitaram may have gotten it right, but even
so it actually supports my claim because it's a pretty hairy piece of
code:

http://www.ccs.neu.edu/home/dorai/uwcallcc/uwcallcc.html

E.
From: Joe Marshall
Subject: Re: UNWIND-PROTECT in Scheme
Date: 
Message-ID: <he7y63bc.fsf@ccs.neu.edu>
···@jpl.nasa.gov (Erann Gat) writes:

> In fact, it seems to me (from my very non-expert point of view) that
> unwind-protect in the presence of cooperative multitasking points out a
> deep deficiency in the call/cc+dw point of view: you need different kinds
> of winding and unwinding depending on which continuation you are using to
> enter and exit the dynamic context.  If you're calling the thread-switch
> continuation you need to do one thing, but if you're calling an error
> continuation you need to do something else.  There is no built-in way for
> call/cc and dw to exchange this sort of information, so you have to add
> this communications channel yourself.  

Again, this is making the assumption that there *are* thread-switches
and that they are implemented via call-with-current-continuation.

> That is apparently a non-trivial thing to get right, as evidenced by
> the fact that so far no one has. 

It is indeed not easy to get right, but MIT Scheme, PLT Scheme,
Larceny and others have got it right.
From: Kent M Pitman
Subject: Re: UNWIND-PROTECT in Scheme
Date: 
Message-ID: <sfwznlq38zi.fsf@shell01.TheWorld.com>
Joe Marshall <···@ccs.neu.edu> writes:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> > In fact, it seems to me (from my very non-expert point of view) that
> > unwind-protect in the presence of cooperative multitasking points out a
> > deep deficiency in the call/cc+dw point of view: you need different kinds
> > of winding and unwinding depending on which continuation you are using to
> > enter and exit the dynamic context.  If you're calling the thread-switch
> > continuation you need to do one thing, but if you're calling an error
> > continuation you need to do something else.  There is no built-in way for
> > call/cc and dw to exchange this sort of information, so you have to add
> > this communications channel yourself.  
> 
> Again, this is making the assumption that there *are* thread-switches
> and that they are implemented via call-with-current-continuation.

Since I said I agreed with Erann here, let me defend that position.

We're not assuming thread switches are implemented that way, we're just
not trying to assume that they definitely aren't.  If they are, then
this operator is not available for the other purpose.

If you're willing to come out and say that this operator must not be used
to implement multithreading, that's more than the Scheme standard was
willing to do.  And it seems to me, to the degree I've looked at it so far,
to matter.  Though maybe when I find more time to think about it, I'll
have another opinion on that.

I'm speaking at this point still on the basis of intuitions and
recollections of past thought on this, not on the basis of recent new
thought, which I haven't made time for yet.  So you may regard my remarks
as more tentative than usual since Will has challenged me on some of it
and I've not gotten to considering his challenges.

> > That is apparently a non-trivial thing to get right, as evidenced by
> > the fact that so far no one has. 
> 
> It is indeed not easy to get right, but MIT Scheme, PLT Scheme,
> Larceny and others have got it right.
From: Erann Gat
Subject: Re: UNWIND-PROTECT in Scheme
Date: 
Message-ID: <gat-1305031554570001@192.168.1.52>
In article <···············@shell01.TheWorld.com>, Kent M Pitman
<······@world.std.com> wrote:

> Joe Marshall <···@ccs.neu.edu> writes:
> 
> > ···@jpl.nasa.gov (Erann Gat) writes:
> > 
> > > In fact, it seems to me (from my very non-expert point of view) that
> > > unwind-protect in the presence of cooperative multitasking points out a
> > > deep deficiency in the call/cc+dw point of view: you need different kinds
> > > of winding and unwinding depending on which continuation you are using to
> > > enter and exit the dynamic context.  If you're calling the thread-switch
> > > continuation you need to do one thing, but if you're calling an error
> > > continuation you need to do something else.  There is no built-in way for
> > > call/cc and dw to exchange this sort of information, so you have to add
> > > this communications channel yourself.  
> > 
> > Again, this is making the assumption that there *are* thread-switches
> > and that they are implemented via call-with-current-continuation.
> 
> Since I said I agreed with Erann here, let me defend that position.
> 
> We're not assuming thread switches are implemented that way, we're just
> not trying to assume that they definitely aren't.

Yeah.  What he said.  ;-)

E.
From: Joe Marshall
Subject: Re: UNWIND-PROTECT in Scheme
Date: 
Message-ID: <znlp4enz.fsf@ccs.neu.edu>
Kent M Pitman <······@world.std.com> writes:

> We're not assuming thread switches are implemented that way, we're just
> not trying to assume that they definitely aren't.  If they are, then
> this operator is not available for the other purpose.
> 
> If you're willing to come out and say that this operator must not be used
> to implement multithreading, that's more than the Scheme standard was
> willing to do.  And it seems to me, to the degree I've looked at it so far,
> to matter.  Though maybe when I find more time to think about it, I'll
> have another opinion on that.

I'm certainly willing to come out and assert that
call-with-current-continuation should not be used to simulate
multithreading and that dynamic-wind should not be used to simulate
special variables.  

The Scheme standard doesn't say that you must not do this, but the
Scheme standard says nothing about multitasking or interrupts,
either.  The particular problem at hand, to wit, the semantics of
call-with-current-continuation and dynamic-wind in a shared-memory,
multitasking system, is outside the purview of the standard.

It is true that very early on it was thought that dynamic-wind and
call-with-current-continuation *were* the appropriate way to model or
implement multitasking and special variables (many people still think
so, and I did for a while, too).  But if you go to the rrrs-authors
mailing list archives and look at these messages:

  http://zurich.ai.mit.edu/pipermail/rrrs-authors/1993-May/001638.html
  http://zurich.ai.mit.edu/pipermail/rrrs-authors/1993-May/001641.html
  http://zurich.ai.mit.edu/pipermail/rrrs-authors/1993-May/001643.html
  http://zurich.ai.mit.edu/pipermail/rrrs-authors/1993-May/001647.html
  
there are some compelling arguments that this simply doesn't work.

I'm sure we all agree that a naive multitasking implementation built
from call-with-current-continuation and dynamic-wind will not
correctly interact with code that uses continuations and dynamic-wind
for other purposes.  My argument is this:  

  1) The Scheme standard presents a single-threaded language model.

  2) Code that uses continuations for the purposes of error handling,
     exceptions, non-local exits, etc., and uses dynamic-wind for
     cleanup/uncleanup, *is* well-defined under the Scheme standard.

  3) In this single-threaded language model, dynamic-wind is analogous
     to unwind-protect.  (And *exactly* the same if you never re-enter
     the dynamic context.)

I think we can all agree on this (or a variation of this).  The
question is how we extend this to a multitasking model.  I argue this:

  4) A primary desideratum of a multitasking system (be it Scheme,
     Common Lisp, Unix, etc.) is that programs that ran under a
     single-threading model ought to run identically under a
     multi-threading model provided they don't share data.

That is, if a program doesn't share any data (and the user doesn't
asynchronously intervene), it should not be able to tell if it is
running in a multitasked environment.  If it *can* tell the
difference, then the multitasking model is at fault, *not* the
program.

But if your multitasking system is written using the user-visible (*)
call-with-current-continuation and dynamic-wind, not only is it
possible for a program to tell that this is the case, it is likely the
program will *fail completely*.

Therefore, a multitasking system that is written using the
user-visible call-with-current-continuation and dynamic-wind doesn't
satisfy the transparency desideratum.  Thus you shouldn't write your
multitasking system in this way.

(*) I say user-visible because I can imagine a multitasking system
that used `hidden' continuations and `hidden' dynamic-state and
carefully preserved the user dynamic-state during context switches.  I
don't even have to imagine this; many implementations actually work
this way.  But they don't wind and unwind user state on each context
switch.
From: Kent M Pitman
Subject: Re: UNWIND-PROTECT in Scheme
Date: 
Message-ID: <sfw8yta4oae.fsf@shell01.TheWorld.com>
···@jpl.nasa.gov (Erann Gat) writes:

> ······@qnci.net (William D Clinger) writes:
> 
> > [Kent Pitman] tried to argue instead that
> > call/cc and dynamic-wind have insufficient expressive power.
> > That's nonsense: the theorists believe that, when combined
> > with side effects and the rest of Scheme, call/cc is powerful
> > enough to implement every deterministic sequential control
> > structure that has ever been proposed.
> 
> 1.  What the theorists believe is irrelevant.  Theorists can be wrong just
> like everyone else.
> 
> 2.  Even if it is true that everything can be implemented in terms of
> call/cc and dw it might be so hard to do it that it's a vacuous
> observation.  Obviously everything can be implemented in terms of IF and
> GOTO, but that doesn't mean we should be content with those control
> structures.
> 
> In fact, it seems to me (from my very non-expert point of view) that
> unwind-protect in the presence of cooperative multitasking points out a
> deep deficiency in the call/cc+dw point of view: you need different kinds
> of winding and unwinding depending on which continuation you are using to
> enter and exit the dynamic context.  If you're calling the thread-switch
> continuation you need to do one thing, but if you're calling an error
> continuation you need to do something else.  There is no built-in way for
> call/cc and dw to exchange this sort of information, so you have to add
> this communications channel yourself.  That is apparently a non-trivial
> thing to get right, as evidenced by the fact that so far no one has.

I know they say not to do "me too" posts, but...

I agree completely.

(It's just that it's so seldom I get to say this to Erann, I wanted him to
know I evaluate each of his posts on their own merits and it's nothing 
personal.)

- - - - - 

Oh, ok, while I'm here I might as well expand on his point #2: Not
having something as important as unwind-protect built in risks users
inventing it incorrectly. 

Consider, as evidence, the following bit of history: People used to
argue that LET shouldn't be built-in because we already had lambda
combinations, but that wasn't a good enough reason.  Even among the
"correct" implementations there were variations like these, which
hopefully turn your stomach enough to make you want something as
important as LET to be standard...

 (defmacro let (bindings &rest forms) ;for Maclisp, not CL
   `((lambda ,(mapcar #'car bindings) ,@forms) ,@(mapcar #'cadr bindings)))

 (defmacro let (bindings &rest forms) ;for Maclisp, not CL
   (setq forms (copy-list forms))
   ((lambda (vars vals)
      `(prog ,vars
             (setq ,@(mapcan #'list vars vals))
             ;; Yes, this 'implementation' allowed RETURN and GO
             ,@(prog1 forms
                 (cond (forms
                         ((lambda (lastcons)
                            (rplaca lastcons
                                    `(return ,(car lastcons))))
                          (last forms)))))))
    (mapcar #'car bindings)
    (mapcar #'cadr bindings)))

 (defmacro let (bindings &rest forms) ;for Maclisp, not CL
   (setq forms (copy-list forms))
   `(do ,bindings (t nil)
      ,@(prog1 forms
          (cond (forms
                  ((lambda (lastcons)
                     (rplaca lastcons
                             `(return ,(car lastcons))))
                   (last forms)))))))

Notes:

 I've used DEFMACRO and backquote here, even though those were emergent
 at the time and we used mostly (DEFUN name MACRO (formvar) ...) and
 lots of calls to CONS and LIST.  I tried not to use LET, though,
 just for grins so you could see what we were outgrowing.

 In Maclisp, we'd have said (APPEND x NIL) instead of
 (COPY-LIST x) but Maclisp append always copied, and CL's doesn't.

 Then again, in "real" Maclisp, we'd probably have destructively
 modified the source form ("displaced" it) so there was no macro
 to redundantly expand again later...

 Mostly no one used #'foo in Maclisp since 'foo did the same thing.
 There was no FLET nor MACROLET to interfere with local bindings.
 The whole language was dynamic anyway.  (Sort of.  The compiler was
 given license to take liberties with that in sometimes surprising
 ways.  Thank goodness CL cleaned up the issue of consistency between
 interpreter and compiler.  It was not unusual to have "runs only
 compiled" or "runs only interpreted" code in Maclisp.)


- - - - 

Incidentally, in the design of ISLISP (now available at
http://www.islisp.info if you missed my previous mention, though I'm
still previewing the site, so please don't re-echo this announcement
yet--I'll send a real announcement later), it was seriously proposed
to have only a subset of <, >, =, >=, <=, and /= because the user
could write the rest of them.  I likewise thought that was starting to
get nutty, but it wasn't until someone started to suggest we didn't
need TAN and that SIN and COS were enough that it became easy to
explain to people that they were going overboard and that these things
were best done in the language and not by amateurs...
From: Erann Gat
Subject: Re: UNWIND-PROTECT in Scheme
Date: 
Message-ID: <gat-1305031553390001@192.168.1.52>
In article <···············@shell01.TheWorld.com>, Kent M Pitman
<······@world.std.com> wrote:

> I know they say not to do "me too" posts, but...
> 
> I agree completely.
> 
> (It's just that it's so seldom I get to say this to Erann, I wanted him to
> know I evaluate each of his posts on their own merits and it's nothing 
> personal.)

I originally sent this to Kent as an email, but then I changed my mind and
decided I wanted to put this on the record.

The reason that Kent and I seem to disagree as much as we do is that I
think he is a very effective spokesman, and I don't often feel I have much
to add once he has had his say.  So I mostly sit back and let Kent do the
talking and try to refrain from saying "me too" too much.  It's only on
those relatively rare occasions when we disagree that I feel the need to
chime in.

E.
From: Joe Marshall
Subject: Re: UNWIND-PROTECT in Scheme
Date: 
Message-ID: <fzni7sdx.fsf@ccs.neu.edu>
Michael Livshin <······@cmm.kakpryg.net> writes:

> ······@qnci.net (William D Clinger) writes:
> 
> > I think there is a valid criticism of Scheme to be made here, but it
> > isn't that Scheme lacks expressive power---it's that Scheme provides
> > too much power without enough constraints or guidance on how to use
> > that power.
> 
> I fail to see the practical difference between "you can do that, but
> it won't really work because there is no way to force the relevant
> assumptions on the execution environment" and "you can't do that".

The difference may be that ``if you follow discipline then you can do
it''.  The environment may not enforce assumptions upon you, but it
may be reasonable to take on a restricted style of programming in
order to assure the assumptions.
From: Tim Bradshaw
Subject: Re: UNWIND-PROTECT in Scheme
Date: 
Message-ID: <ey3k7curcm3.fsf@cley.com>
* Joe Marshall wrote:

> The difference may be that ``if you follow discipline then you can do
> it''.  The environment may not enforce assumptions upon you, but it
> may be reasonable to take on a restricted style of programming in
> order to assure the assumptions.

yes, it may be that.  But sometimes you want the hardest promise the
system can make for you that some behaviour will happen, even in the
presence of more-or-less completely uncontrolled code.  For instance I
don't want an OS that says `if you follow discipline then I won't
reformat your hard disk every other day', and I *really* don't want my
bank to run a database that offers transactional integrity for my
account if I follow discipline when using the cash machine.

Most of this debate about UNWIND-PROTECT-like things is over my head,
actually, but could someone answer this: How hard is it to write a
form in Scheme which makes a really strong promise for the following
code (pardon my terrible scheme):

    (define g-called-count 0)
    (define (g)
      (set! g-called-count (+ g-called-count 1)))

    (define h-called-count 0)
    (define (h)
      (set! h-called-count (+ h-called-count 1)))


    (define (test-up f)
      ;; F is an arbitrary function
      (set! g-called-count 0)
      (set! h-called-count 0)
      (unwind-protect
        (f)
       (g))
      (h))

Now, after calling (test-up f) for *any* function f, g-called-count
should be 1 and h-called-count should be 0 or 1.

Obviously there are always caveats (like: the machine does not catch
fire &c &c), but how hard is that promise to make (and how hard is it
to make in CL?).

--tim
From: Kent M Pitman
Subject: Re: UNWIND-PROTECT in Scheme
Date: 
Message-ID: <sfw7k8uy97d.fsf@shell01.TheWorld.com>
I've said to Will offline that I'll reply to his messages, but that unlike
some of the stuff I do on comp.lang.lisp, typing in my sleep, this requires
careful thought so this discussion will run on a longer timeline, at least
for me.  I have to find time when I can do more than dash off the kind of
half-thought-out drivel I often do here...

But the two [or three, depending on how you count] things I want to say 
are these:

Michael Livshin <······@cmm.kakpryg.net> wrote:

> Joe Marshall <···@ccs.neu.edu> writes:
> 
> > Michael Livshin <······@cmm.kakpryg.net> writes:
> >
> >> I fail to see the practical difference between "you can do that, but
> >> it won't really work because there is no way to force the relevant
> >> assumptions on the execution environment" and "you can't do that".
> >
> > The difference may be that ``if you follow discipline then you can do
> > it''.  The environment may not enforce assumptions upon you, but it
> > may be reasonable to take on a restricted style of programming in
> > order to assure the assumptions.
> 
> there's also a difference between a feature that is guaranteed to
> work no matter what portable code you use in your program and a
> feature that requires you to make sure that any code you use shares
> your assumptions.
> 
> the latter is, of course, quite sufficient in many situations.  but
> not all.

This resonates with me as being exactly the nature of my concern,
although the form of Will's message seems to be "you haven't thought
it through enough" so I'll have to figure out whether the problem is
that (a) my intuitions have jumped me to a place that Will's careful
thought has not or (b) my intuitions have jumped me past the need for
some critical piece of careful thought that Will has not jumped past.
Intuitions are oddly double-edged in this way.

[Off topic: I was thinking recently about the problem we had discussed
earlier about whether the Internet approximates an infinite network of
Turing machines in any particular way such that it might be more
powerful than a Turing machine due to its interconnectivity.  One
thing that occurred to me about having such Turing machines is that they
would be "already running" and at unknown points on their clock.  A 
consequence is that any given one might have precomputed data that could
be accessed in finite time (spatially, by connectivity) rather than in
its ordinary big-O amount of time (computationally, by starting it when
requested), and as such some problems that seemed unsolvable on a Turing
machine at a certain speed might seem solvable in a network. But if there
was a lower bounding time at which the network were assembled [e.g.,
birth of the Internet], then most or all of the knowledge is no older
than the net [unless transcribed from older knowledge bases], and so
there is a theoretical computational barrier that makes the entire network
finite.  But if you aren't asking unsolvable questions but rather things
that just "usually take a long time", then the longer the network is online
and the more diverse it is, the more likely it is that someone will have
precomputed some critical fact that can surprisingly speed up the jump to
a possible answer.  I mention this because I think 'intuitions' are of
this nature--the right brain, pre-computing all kinds of statistics and
interconnection among things that are not computationally linked in the
symbolic left-brain.  And so it sometimes jumps to non sequiturs and
sometimes to interesting ways of reformulating problems.  But always it
seems mysterious because the web of interconnection it offers is 
idiosyncratic and opportunistic both by its origin (how it was built up)
and by its identity (the choice of a particular human's intuition to use).
The internet has only the former kind of mystery and not the latter, but I
don't know that that decreases its power; it might.  Or it might just 
decrease its level of obfuscation.  I don't know if those are the same.
If none of this seems relevant to the unbracketed text in my message,
that's ok.  I just wanted to write this down while I was thinking about
it, and since it was vaguely on-topic with some prior messages on this 
newsgroup.]

Tim Bradshaw <···@cley.com> writes:

> yes, it may be that.  But sometimes you want the hardest promise the
> system can make for you that some behaviour will happen, even in the
> presence of more-or-less completely uncontrolled code.

I've often said ``If I could ask only one question about a language to
determine whether I could use it in day-to-day use, it would be "does
it have unwind-protect?"''  This does not imply that I think I use
unwind-protect most often. Many of my programs don't use it at all.
But it represents a certain mindset to me that, when language designers
have it, I trust the rest of their design and when language designers
don't have it, I don't trust it.  Call it that same right-brain kind of
statistics taking you see me rambling about above, if you must.  I can't
explain it by hugely more scientific means anyway, though I'll try to 
summarize what I think it means...

Ordinary programming is imperative.  Do this.  Do that.  We've put some of
the do's and don't's into structured operations that hide the implementation,
but sometimes those get left out.  It's hard enough to teach people to
build those things as it is--people mostly learn from repetitive patterns.
Leaving out unwind-protect--leaving it as an exercise to the user--is like
saying "Learn to make your own operators. Oh, and learn to make your own
essential tools for making your own operators." when the community already
has many years experience understanding that this matters.  

I am as horrified at the omission of unwind-protect [a basic idiom] as
I perceive Scheme people to be at the requirement to use FUNCALL when calling
a functional parameter.

In adding dynamic-wind and offering it to the world as an apparent
super-generalized substitute for unwind-protect, the design community had
the same obligation as the CL community had when they offered TAGBODY and
BLOCK as a substitute for PROG.  (a) provide compatibility and (b) publish
the relationship so that it's clear that your users are carried along.

Now, the Scheme community can pretend no one previously needed
unwind-protect, since it was undocumented, and therefore not offer
such pieces of documentation.  To me, this would be to say they never
previously needed any call-with-open-file kind of support that
reliably opened and released files at known times rather than at the
gc's leisure.  I just don't believe the operator wasn't there before;
I think the community was in denial about the omission of some
operator in this family [the conversation we are now having is whether
that operator was dynamic-wind, unwind-protect, or something else].
But I think the operator was there and that it has to be explained, and
should have been explained as part of the operator's introduction, not just
theorized on the side by people after-the-fact trying to find uses for the
operator.

People are constantly trying to suggest that UNWIND-PROTECT can be
used to implement dynamic binding in CL and I have to keep telling
them that this is inappropriate.  It is agreed by the language
designers and all vendors that the operator was not intended by this,
even though a single-threaded, plausible-looking implementation is
commonly cited by whose who don't understand the multi-tasking
implications.  My sense is that the designers of Scheme are in
disarray as to whether DYNAMIC-WIND is appropriate for the
implementation of UNWIND-PROTECT so I get a difference in status.

(One partial test of whether Will's code is portable is this: if you
took that code and inserted it into a large system that did
multi-tasking and used files, and compiled it so that it did file opens
and closes with Will's operator, would it work?  That is, if you wrote
that definition of unwind-protect and then used that definition to
implement the file opening code and then compiled all that code for
multi-tasking, would the system work?  Some systems might not test all
the situations with continuations I'm thinking of, so this might not
be a complete test.  But it at least starts to push on the parts I
think should be pushed on.)
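
The file-opening code I have in mind would be something like this
sketch, written in terms of the unwind-protect macro Will posted
(call-with-open-input-file is a made-up name standing in for whatever
the system's with-open-file analogue is):

(define (call-with-open-input-file name proc)
  (let ((port (open-input-file name)))
    (unwind-protect
     (proc port)
     (close-input-port port))))

The test is then whether code built on this keeps working -- the port
closed exactly once, and only when the caller is really done with it --
once the surrounding system multi-tasks and throws continuations
around.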
From: Erann Gat
Subject: Re: UNWIND-PROTECT in Scheme
Date: 
Message-ID: <gat-1305031219330001@k-137-79-50-101.jpl.nasa.gov>
In article <···············@shell01.TheWorld.com>, Kent M Pitman
<······@world.std.com> wrote:

> I've often said ``If I could ask only one question about a language to
> determine whether I could use it in day-to-day use, it would be "does
> it have unwind-protect?"''

Out of curiosity, do you consider C++ to "have unwind-protect" via:

(unwind-protect (f) (g)) -->

class unwind {
  ~unwind() { g(); }
};

{
  unwind protect;
  f();
}

E.
From: Kent M Pitman
Subject: Re: UNWIND-PROTECT in Scheme
Date: 
Message-ID: <sfwvfwe38gh.fsf@shell01.TheWorld.com>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <···············@shell01.TheWorld.com>, Kent M Pitman
> <······@world.std.com> wrote:
> 
> > I've often said ``If I could ask only one question about a language to
> > determine whether I could use it in day-to-day use, it would be "does
> > it have unwind-protect?"''
> 
> Out of curiosity, do you consider C++ to "have unwind-protect" via:
> 
> (unwind-protect (f) (g)) -->
> 
> class unwind {
>   ~unwind() { g(); }
> };
> 
> {
>   unwind protect;
>   f();
> }

Heh.  Yeah, people have pointed this out.  And I guess the answer is
"sort of".  It's there as a point of power, as it may or may not be
in Scheme for exactly the same reason if dynamic-wind ends up working
to implement it.  But it's not there as a service to users writing
other-than-idiomatic code... 

Just as you can claim that Maclisp had COPY-LIST because in there,
doing (APPEND X NIL) copied X reliably.  

Or just as you can claim ITS Teco had "absolute value" because two
Control-@'s (the actual control-atsign character, twice, not caret
atsign caret atsign) did absolute value and people got used to writing
it as a single operator, even though it was really two operators.
Control-@ with a single numeric arg gave you back two values: . and .+n
if n was positive, or .+n and . if n was negative, where "." is the
current buffer position expressed as an integer.  Control-@ with two
arguments, x and y (which it got as return values from the first
Control-@), gave you back y-x.  So 5 Control-@ done at point 17 in the
buffer returned 17,22 and another Control-@ yielded back 22-17, that
is, 5.  But -5 Control-@ done at the same point would yield 12,17
instead, which after going through the second Control-@ would yield
17-12, or 5.  So two Control-@'s always got absolute value.

At least in Teco, the idiom was concise and the result was no more or less
readable than the rest of the language.  I'm not sure you can say the 
same about the C++ situation, though someone might argue with me about it.

Personally, when I ask the question "Does the language have
unwind-protect?", I am not asking about capability but intent.  ITS
Teco, by the way, does have unwind-protect.  It was called ..N, and
whenever ..N was popped, it was also executed.  So q1[..N pushed the
contents of q1 (presumably an unwind clause to execute) into ..N,
assuring that it would be executed on unwind.  That was the purpose of
..N, and so I claim Teco (at least ITS Teco) had unwind-protect by
intent.  That's good enough for me.
I don't think C++ meant to have unwind-protect, it just has the capability
by accident, as Teco has absolute value by accident.

The difference between "implementing" an idea and "expressing" an idea
matters to me.
From: Mario S. Mommer
Subject: Re: UNWIND-PROTECT in Scheme
Date: 
Message-ID: <fzr87255t9.fsf@cupid.igpm.rwth-aachen.de>
···@jpl.nasa.gov (Erann Gat) writes:
> > I've often said ``If I could ask only one question about a language to
> > determine whether I could use it in day-to-day use, it would be "does
> > it have unwind-protect?"''
> 
> Out of curiosity, do you consider C++ to "have unwind-protect" via:
> 
> (unwind-protect (f) (g)) -->
> 
> class unwind {
>   ~unwind() { g(); }
> };
> 
> {
>   unwind protect;
>   f();
> }
> 
> E.

This looks slightly allegoric, so I'm not so sure I got your idea. In
any case, I don't think it works. The following dies without ever
calling the "cleanup" form (at least with gcc version 2.95.3).

// C++ File starts
#include <iostream>

using namespace std;

void FOAD() {
        ++(*(int *)NULL);
}

class unwind {
public:
        ~unwind() {
                cerr << "Dying.\n";
                }
};

int main() {

        unwind protect;

        FOAD();

return 0;
}
// C++ File ends

Lisp's unwind-protect is real:

* (defun can-die (a b)
        (unwind-protect (/ a b)
                (return-from can-die 'not-quite-dead)))

CAN-DIE
* (can-die 1 0)

Arithmetic error DIVISION-BY-ZERO signalled.
Operation was KERNEL::DIVISION, operands (1 0).

Restarts:
  0: [ABORT] Return to Top-Level.

Debug  (type H for help)

(KERNEL::INTEGER-/-INTEGER 1 0)
Source: Error finding source: 
Error in function DEBUG::GET-FILE-TOP-LEVEL-FORM:
  Source file no longer exists:
  target:code/numbers.lisp.
0] q

NOT-QUITE-DEAD
* 

This works even in presence of classical seg-faults.

Regards,
        Mario.
From: Bob Bane
Subject: Re: UNWIND-PROTECT in Scheme
Date: 
Message-ID: <3EC2693A.9040600@removeme.gst.com>
Mario S. Mommer wrote:

> ···@jpl.nasa.gov (Erann Gat) writes:
> 
> This looks slightly allegoric, so I'm not so sure I got your idea. In
> any case, I don't think it works. The following dies without ever
> calling the "cleanup" form (at least with gcc version 2.95.3).
> 
> // C++ File starts
> #include <iostream>
> 
> using namespace std;
> 
> void FOAD() {
>         ++(*(int *)NULL);
> }
> 
> class unwind {
> public:
>         ~unwind() {
>                 cerr << "Dying.\n";
>                 }
> };
> 
> int main() {
> 
>         unwind protect;
> 
>         FOAD();
> 
> return 0;
> }
> // C++ File ends
> 



You can almost fix this in Unix by creating signal handlers that raise 
an exception for all the ways to lose.  I say "almost" because:

* It's not reliable - signals can happen in mid-statement when it's 
illegal to raise an exception

* You can't catch some signals, most notably KILL.


	- Bob Bane
From: Michael Livshin
Subject: Re: UNWIND-PROTECT in Scheme
Date: 
Message-ID: <s3ptmlsd7u.fsf@laredo.verisity.com.cmm>
Bob Bane <····@removeme.gst.com> writes:

> You can almost fix this in Unix by creating signal handlers that raise
> an exception for all the ways to lose.  I say "almost" because:

is that even well-defined?  (raising exception in sighandlers, that
is?)  if C++ exceptions happen to be implemented with
sigsetjmp/siglongjmp, it'll probably work, though.

> * It's not reliable - signals can happen in mid-statement when it's
>   illegal to raise an exception

if we are already drawn into thinking about specific implementation
details, we might as well mandate that our C++ compiler emits the
appropriate signal masking/unmasking code around such statements.

at least for safe code.  er. :)

>
> * You can't catch some signals, most notably KILL.

can't catch those in Lisp either.

-- 
REALITY is an illusion that stays put.
From: Jeff Caldwell
Subject: Re: UNWIND-PROTECT in Scheme
Date: 
Message-ID: <IZqwa.80$793.111411@news1.news.adelphia.net>
Isn't this a better C++ model for unwind protect?

$ ./sample
Catching.
Dying.

#include <iostream>

using namespace std;

void FOAD() {
         ++(*(int *)NULL);
}

class unwind {
public:
   ~unwind() {
     cerr << "Dying.\n";
   }
};

int main() {

   unwind protect;

   try {
     FOAD();
   }
   catch(...) {
     cerr << "Catching.\n";
   }

return 0;
}



Mario S. Mommer wrote:
...

> This looks slightly allegoric, so I'm not so sure I got your idea. In
> any case, I don't think it works. The following dies without ever
> calling the "cleanup" form (at least with gcc version 2.95.3).
> 
> // C++ File starts
> #include <iostream>
> 
> using namespace std;
> 
> void FOAD() {
>         ++(*(int *)NULL);
> }
> 
> class unwind {
> public:
>         ~unwind() {
>                 cerr << "Dying.\n";
>                 }
> };
> 
> int main() {
> 
>         unwind protect;
> 
>         FOAD();
> 
> return 0;
> }
> // C++ File ends
From: Kent M Pitman
Subject: Re: UNWIND-PROTECT in Scheme
Date: 
Message-ID: <sfwr871pr14.fsf@shell01.TheWorld.com>
Jeff Caldwell <·····@yahoo.com> writes:

> Isn't this a better C++ model for unwind protect? [...]

I didn't take the time to read the C++ code (since it's not my normal
language and I find it as much a pain to plod through as Scheme is),
but this underscores my remark about how if you don't provide
unwind-protect directly as 'a manner of expression' that offering it
merely as 'something you can implement' is less good.

I am conscious as I say this that I've recently said that lexical
variables can be implemented in CL, but are not expressed.  Please
don't assume that I mean that this is a good thing, just a survivable
thing.  I'd prefer we had lexical variables in CL, but it was at the
time we tried last, blocked for political reasons that I think are
best summarized by the sentence "The users who cared shirked their 
responsibility and stopped attending the meetings because it seemed
to them it was going fine without them, so they have no right to 
complain." 

Even so, at least CL has DEFMACRO and specific language manipulation
tools like DEFINE-SYMBOL-MACRO that were meant for the purpose of
implementing things like DEFLEXICAL, so there is also the issue of how
much stretch there is between the thing that is provided and the thing
that must yet be implemented.  The implementation of DEFLEXICAL is not
only short, but is not much of a conceptual stretch. In effect, you
are specializing a general capability.  In C++, to implement
UNWIND-PROTECT, you are generalizing a specialized capability, and
that's weirder and (I claim) less good.

I recall a situation in Maclisp decades ago when we didn't have 
WITHOUT-INTERRUPTS.  We did have (NOINTERRUPTS T) and (NOINTERRUPTS NIL),
though, and so I wrote a macro 

 (DEFMACRO PI (&BODY FORMS)
    ;; I thought "binding forms" should be greek letters, so I picked
    ;; "PI" for "Program Interrupts" binding
    (LET ((OLD-PI-STATUS (GENSYM)))
      `(LET ((,OLD-PI-STATUS (STATUS NOINTERRUPTS)))
         (UNWIND-PROTECT (PROGN (NOINTERRUPTS (NOT ,OLD-PI-STATUS))
                                ,@FORMS)
           (NOINTERRUPTS ,OLD-PI-STATUS)))))

but it didn't work because, in Maclisp, unwind-protect also disabled
interrupts.  (The Symbolics Lisp Machine also did, I think, but only the
A-, L-, and G- machines.  When they went to the Ivory technology, they
lost this feature, and weird unreproducible bugs would occasionally occur
that I suspected were due to the loss of this feature.  Nevertheless,
I think there was some hardware reason they didn't do it... or maybe they
just couldn't fix it when it was finally reported because it was in 
hardware.) Anyway, my PI macro failed because it re-enabled interrupts
while in the interrupt-bound context of the UNWIND-PROTECT cleanup handler.
("Bound context?" I asked, lighting up...)  I finally found that
I could implement it by doing

 (DEFMACRO PI (&BODY FORMS)
   `(UNWIND-PROTECT NIL ,@FORMS))

But this is an example of generalizing a specialized action, and is not
something I'd easily make the claim was there "on purpose" nor something
I'd claim a programmer should find obvious.

- - - - -

Footnote: I suppose because interrupts are beyond the scope of the CL spec,
 implementations might differ on whether an interrupt can blow away a
 cleanup clause.   I personally still prefer a cleanup be treated as a 
 [non-interruptible] critical section in some manner.

 I think the LispM, even Ivory, finally added some complicated error 
 handlers that warned you interactively when you were trying to transfer
 out of a cleanup [which is not quite the same as not interrupting it
 in the first place, but was a big improvement... it also had the feature
 of addressing error exits from a cleanup clause.  Dave Moon, one of the
 key LispM people, was particularly concerned that this must be left to
 implementations so they could keep the machine from locking up
 incomprehensibly. I think there's a cleanup issue on this.

 Also, I remember I once wrote:

   (defun macsyma ()
     (unwind-protect (macsyma-toplevel-loop)
       (macsyma)))

 hoping to keep someone from getting out "into Lisp" by aborting to toplevel
 from Macsyma's toplevel loop.  In Vaxlisp, this meant that Lisp didn't let
 the process exit without the use of extreme force--it couldn't exit on
 its own... in other Lisps the unwinds weren't run by the QUIT function.
 I think in a lot of implementations, it's an argument to QUIT whether or not
 to run the final unwinds...

 I finally found other ways to keep people from getting to toplevel.
 But this kind of problem lingers for people who are trying to use 
 portable control structures, and brings us back to some issues raised
 in this thread...

 I'll stop rambling now.
From: Joe Marshall
Subject: KMP rambling and Jrm ranting [was Re: UNWIND-PROTECT in Scheme]
Date: 
Message-ID: <issd4dkg.fsf_-_@ccs.neu.edu>
Kent M Pitman <······@world.std.com> writes:

> In Maclisp, unwind-protect also disabled interrupts.  (The Symbolics
> Lisp Machine also did, I think, but only the A-, L-, and G-
> machines.  

The LMI-Lambda did this, MIT-Scheme does this on the entry and exit
thunks in a dynamic-wind.  Lucid common lisp did this.

> When they went to the Ivory technology, they lost this
> feature, and weird unreproducible bugs would occasionally occur that
> I suspected were due to the loss of this feature.)

I would suspect this as well.

> - - - - -
> 
> Footnote: I suppose because interrupts are beyond the scope of the CL spec,
>  implementations might differ on whether an interrupt can blow away a
>  cleanup clause.   I personally still prefer a cleanup be treated as a 
>  [non-interruptible] critical section in some manner.

I disagree to the extent that I think that implementations that allow
you to blow away a cleanup clause are fundamentally broken.  If you
have the ability to asynchronously interrupt a cleanup, you simply
cannot write `bulletproof' code.  If you cannot interrupt a cleanup,
it is possible (not easy, but possible).

>  I think the LispM, even Ivory, finally added some complicated error 
>  handlers that warned you interactively when you were trying to transfer
>  out of a cleanup [which is not quite the same as not interrupting it
>  in the first place, but was a big improvement... it also had the feature
>  of addressing error exits from a cleanup clause.  Dave Moon, one of the
>  key LispM people, was particularly concerned that this must be left to
>  implementations so they could keep the machine from locking up
>  incomprehensibly.  I think there's a cleanup issue on this.

I disagree with Moon here (kind of).  A system cannot both be
interruptable and non-interruptable at the same time.  If you have a
mechanism by which you can bypass an impassable barrier, then you no
longer have an impassable barrier.

That being said, there is a lot of leeway in implementation.  For
instance, in Allegro CL, you *can* interrupt a `non-interruptable'
section of code, but only by hitting control-C five times.  One could
imagine a system that allowed you to interrupt an unwind-protect
cleanup, but established a new unwind-protect that would pick up where
the old one left off.

But blowing away the cleanup and returning to top-level (or throwing
to an intermediate handler) isn't useful.

>  I'll stop rambling now.

I'll stop ranting, then, too.
From: Janis Dzerins
Subject: Re: UNWIND-PROTECT in Scheme
Date: 
Message-ID: <twkfzniq8i7.fsf@gulbis.latnet.lv>
···@jpl.nasa.gov (Erann Gat) writes:

> In article <···············@shell01.TheWorld.com>, Kent M Pitman
> <······@world.std.com> wrote:
> 
> > I've often said ``If I could ask only one question about a language to
> > determine whether I could use it in day-to-day use, it would be "does
> > it have unwind-protect?"''
> 
> Out of curiosity, do you consider C++ to "have unwind-protect" via:
> 
> (unwind-protect (f) (g)) -->
> 
> class unwind {
>   ~foo() g();
> }

I think it should have been something like:

class unwind {
  ~unwind() { g(); }
}

> 
> {
>   unwind protect;
>   f();
> }

Almost.  It could work in this case.  But for unwind-protect to be
useful, the function g should have access to the lexical environment (in
lispspeak).  One could try to cross this with the closure kludge in C++ to
prove Greenspun's tenth once more.

-- 
Janis Dzerins

  If million people say a stupid thing, it's still a stupid thing.
From: William D Clinger
Subject: Re: UNWIND-PROTECT in Scheme
Date: 
Message-ID: <b84e9a9f.0305131406.43dff31d@posting.google.com>
Kent M Pitman wrote:
> (One partial test of whether Will's code is portable is whether if you
> took that code and inserted it into a large system that did
> multi-tasking and used files and compiled it so that it did file opens
> and closes with will's operator, would it work?....

That depends on what you mean by "work".

I suspect that you want the cleanup forms of an UNWIND-PROTECT
to be executed even if an unrelated error occurs, say (CAR 'A).
It is flatly impossible to guarantee this in portable Scheme,
because the Scheme standards explicitly allow systems to do
anything whatsoever in an error situation.  Thus there is no
guarantee that an error, even if detected, will perform a throw
that could be handled by an UNWIND-PROTECT.  It could instead
perform some kind of system reset that bypasses all of the
cleanup code that an UNWIND-PROTECT was supposed to execute.

That, IMO, is the real problem here: Scheme doesn't specify a
semantics for errors and/or exceptions.  The best we can do is
to write an UNWIND-PROTECT that does what you want provided no
errors occur, and that is not good enough IMO.
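
For concreteness, here is the sort of thing I mean -- a minimal sketch
built on DYNAMIC-WIND (the name CALL-WITH-INPUT-FILE* is made up here,
and this is not any of the macros posted elsewhere in the thread).  The
after thunk runs on any exit that actually unwinds through the winders,
which is precisely what an erring Scheme system is not obliged to do:

(define (call-with-input-file* filename proc)
  ;; Sketch only.  Closes the port on normal return and on any exit
  ;; that passes through the winders.  If an error "does anything
  ;; whatsoever" instead of throwing, the after thunk never runs.
  ;; (Re-entry via a saved continuation would see a closed port;
  ;; that case is ignored here.)
  (let ((port (open-input-file filename)))
    (dynamic-wind
      (lambda () #f)
      (lambda () (proc port))
      (lambda () (close-input-port port)))))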

But I don't believe this shortcoming of Scheme can be blamed on
call/cc or DYNAMIC-WIND, and I don't believe it can be fixed by
changing the semantics of those two procedures.  In short, I
agree that Scheme has major problems, but I disagree with your
diagnosis and with your proposed fixes.

Will
From: Joe Marshall
Subject: Re: UNWIND-PROTECT in Scheme
Date: 
Message-ID: <of254efc.fsf@ccs.neu.edu>
······@qnci.net (William D Clinger) writes:

> That, IMO, is the real problem here: Scheme doesn't specify a
> semantics for errors and/or exceptions.  The best we can do is
> to write an UNWIND-PROTECT that does what you want provided no
> errors occur, and that is not good enough IMO.
> 
> But I don't believe this shortcoming of Scheme can be blamed on
> call/cc or DYNAMIC-WIND, and I don't believe it can be fixed by
> changing the semantics of those two procedures.  In short, I
> agree that Scheme has major problems, but I disagree with your
> diagnosis and with your proposed fixes.

That's certainly one problem.  Another is that Scheme doesn't specify
what happens when you attempt a non-local exit in a cleanup form.
This is an issue similar to the Common Lisp issue of throwing out of
an unwind-protect cleanup while you are in the middle of a different
throw.  This behavior needs to be specified in addition to error and
exception semantics.  (I presume that interrupts will be covered in
that as well.)
From: Joe Marshall
Subject: Re: UNWIND-PROTECT in Scheme
Date: 
Message-ID: <n0hq63fm.fsf@ccs.neu.edu>
Kent M Pitman <······@world.std.com> writes:

> People are constantly trying to suggest that UNWIND-PROTECT can be
> used to implement dynamic binding in CL and I have to keep telling
> them that this is inappropriate.  It is agreed by the language
> designers and all vendors that the operator was not intended by this,
> even though a single-threaded, plausible-looking implementation is
> commonly cited by whose who don't understand the multi-tasking
> implications.  My sense is that the designers of Scheme are in
> disarray as to whether DYNAMIC-WIND is appropriate for the
> implementation of UNWIND-PROTECT so I get a difference in status.

This is clearly the case.  I'm constantly trying to persuade Schemers
that call-with-current-continuation is inappropriate for simulating
time-slice multiplexing (although you can sort-a-kind-a kludge it,
it doesn't really work), and that DYNAMIC-WIND cannot be used to
implement dynamic binding in Scheme any more than UNWIND-PROTECT can
be used for the same purpose in Common Lisp. 

The `serious' Scheme implementations (I'll include Larceny, PLT, MIT
Scheme, but this is not meant to be exhaustive or imply
`frivolousness' to other implementations) already understand this and
have understood this for years.
From: Dorai Sitaram
Subject: Re: UNWIND-PROTECT in Scheme
Date: 
Message-ID: <b9rdnp$r2h$1@news.gte.com>
In article <···············@cley.com>, Tim Bradshaw  <···@cley.com> wrote:
>
>could someone answer this: How hard is it to write a
>form in Scheme which makes a really strong promise for the following
>code (pardon my terrible scheme):
>
>    (define g-called-count 0)
>    (define (g)
>      (set! g-called-count (+ g-called-count 1)))
>
>    (define h-called-count 0)
>    (define (h)
>      (set! h-called-count (+ h-called-count 1)))
>
>
>    (define (test-up f)
>      ;; F is an arbitrary function
>      (set! g-called-count 0)
>      (set! h-called-count 0)
>      (unwind-protect
>        (f)
>       (g))
>      (h))
>
>Now, after calling (test-up f) for *any* function f, g-called-count
>should be 1 and h-called-count should be 0 or 1.
>
>Obviously there are always caveats (like: the machine does not catch
>fire &c &c), but how hard is that promise to make (and how hard is it
>to make in CL?).

Note: Your example isn't taut enough, because it is
possible for an arbitrary f to call g and h how many
ever times, so that g- and h-called-count could
both rack up values larger than 1.  

Allowing for that, if all you want is that the g in
test-up be called definitely, and the h in it be called
possibly, the solution is equally hard in CL and
Scheme.  In Scheme, you would use dynamic-wind.  

Another way of putting it is that your example doesn't
really require an unwind-protect in Scheme.  For this
kind of safety, dynamic-wind is "good enough".  All of
Friedman and Haynes's unwind-protects would work to the
same level of fault-tolerance too.  Note that Kent's
suggested Scheme unwind-protects could both fail,
because they will only perform cleanup for (a) an
escaping continuation in the first proposal, and (b) an
explicitly last-use continuation in the second proposal
-- other continuations would slip through.  This
is not a bug in Kent's specs -- they are just intended
to do something different, and it would be a mistake to
assume that they are precise generalizations of CL-type
unwind-protect to Scheme.  
From: Tim Bradshaw
Subject: Re: UNWIND-PROTECT in Scheme
Date: 
Message-ID: <ey3bry5rakw.fsf@cley.com>
* Dorai Sitaram wrote:
> Note: Your example isn't taut enough, because it is
> possible for an arbitrary f to call g and h how many
> ever times, so that g- and h-called-count could
> both rack up values larger than 1.  

Yes.

> Allowing for that, if all you want is that the g in
> test-up be called definitely, and the h in it be called
> possibly, the solution is equally hard in CL and
> Scheme.  In Scheme, you would use dynamic-wind.  

I also want that the G be called *exactly once* for each call to
TEST-UP.  In particular never any more than once.

--tim
From: Dorai Sitaram
Subject: Re: UNWIND-PROTECT in Scheme
Date: 
Message-ID: <b9tf5k$t92$1@news.gte.com>
In article <···············@cley.com>, Tim Bradshaw  <···@cley.com> wrote:
>* Dorai Sitaram wrote:
>> Note: Your example isn't taut enough, because it is
>> possible for an arbitrary f to call g and h how many
>> ever times, so that g- and h-called-count could
>> both rack up values larger than 1.  
>
>Yes.
>
>> Allowing for that, if all you want is that the g in
>> test-up be called definitely, and the h in it be called
>> possibly, the solution is equally hard in CL and
>> Scheme.  In Scheme, you would use dynamic-wind.  
>
>I also want that the G be called *exactly once* for each call to
>TEST-UP.  In particular never any more than once.

I assume you are referring to the fact that the pre-
and postludes of dynamic-winds can happen many times
because of non-escaping continuations?  Ordinary
conditionals (as illustrated in code presented by others
here) suffice to prevent them from happening more than
once.  

(define (test-up f)
  ...
  (dynamic-wind
    (lambda () 'nothing-to-do)
    f
    (let ((done? #f))
      (lambda ()
        (unless done?
          (set! done? #t)
          (g)))))
  ...)

A lot of your own context has been removed by you.  So,
I should probably add a disclaimer to other readers
that this pattern is not at all meant as a
general translation of unwind-protect.  This is in
answer to a very specific problem posed by Tim.  

The assurance that g will be called at all is not
certain of course, but subject to the same
imponderables as you described for the
corresponding unwind-protect code.

[BTW, one could write that postlude without
conditionals:

(letrec ((g1 (lambda ()
               (set! g1 (lambda () 'nothing-to-do))
               (g))))
  (lambda () (g1)))

Which is better is probably beholder-dependent.]

--d
From: Tim Bradshaw
Subject: Re: UNWIND-PROTECT in Scheme
Date: 
Message-ID: <ey37k8tr55f.fsf@cley.com>
* Dorai Sitaram wrote:

> I assume you are referring to the fact that the pre-
> and postludes of dynamic-winds can happen many times
> because of non-escaping continuations?  Ordinary
> conditionals (as illustrated in code presented by others
> here) suffice to prevent them from happening more than
> once.  

yes, that works of course.  Now I have to work out why I couldn't see
this...

--tim
From: Michael Livshin
Subject: Re: UNWIND-PROTECT in Scheme
Date: 
Message-ID: <s3el32j0g5.fsf@laredo.verisity.com.cmm>
Joe Marshall <···@ccs.neu.edu> writes:

> Michael Livshin <······@cmm.kakpryg.net> writes:
>
>> I fail to see the practical difference between "you can do that, but
>> it won't really work because there's no way to force the relevant
>> assumptions on the execution environment" and "you can't do that".
>
> The difference may be that ``if you follow discipline then you can do
> it''.  The environment may not enforce assumptions upon you, but it
> may be reasonable to take on a restricted style of programming in
> order to assure the assumptions.

there's also a difference between a feature that is guaranteed to
work no matter what portable code you use in your program and a
feature that requires you to make sure that any code you use shares
your assumptions.

the latter is, of course, quite sufficient in many situations.  but
not all.

-- 
Nobody can fix the economy.  Nobody can be trusted with their finger on the
button.  Nobody's perfect.  VOTE FOR NOBODY.
From: Thien-Thi Nguyen
Subject: Re: UNWIND-PROTECT in Scheme
Date: 
Message-ID: <7gr872vmrz.fsf@gnufans.net>
Michael Livshin <······@cmm.kakpryg.net> writes:

> there's also a difference between a feature that is guaranteed to
> work no matter what portable code you use in your program and a
> feature that requires you to make sure that any code you use shares
> your assumptions.

portability is just a set of such features (and related assumptions,
restrictions, foldings, spindlings and mutilations ;-).

thi
From: Michael Livshin
Subject: Re: UNWIND-PROTECT in Scheme
Date: 
Message-ID: <s3addqiz4d.fsf@laredo.verisity.com.cmm>
Thien-Thi Nguyen <···@glug.org> writes:

> Michael Livshin <······@cmm.kakpryg.net> writes:
>
>> there's also a difference between a feature that is guaranteed to
>> work no matter what portable code you use in your program and a
>> feature that requires you to make sure that any code you use shares
>> your assumptions.
>
> portability is just a set of such features (and related assumptions,
> restrictions, foldings, spindlings and mutilations ;-).

some sets of such features are called "standards", and can come in
form of paper volumes restricted by binding (or in the form of a bunch
of electrons, restricted by URL).  for some reason, those feature sets
have special significance, enough to have easily identifiable names. 
those names, due to their being locally obvious, are often omitted in
relevant contexts.

:)

-- 
It's tough being a bug: You screech in a tree all night, trying to get
laid, and then freeze to death.  It's kind of how teenagers look at
life.
                  -- Fred Reed (<URL:http://www.fredoneverything.net>)
From: Joe Marshall
Subject: Re: UNWIND-PROTECT in Scheme
Date: 
Message-ID: <y91a6at6.fsf@ccs.neu.edu>
Michael Livshin <······@cmm.kakpryg.net> writes:

> Joe Marshall <···@ccs.neu.edu> writes:
> 
> > Michael Livshin <······@cmm.kakpryg.net> writes:
> >
> >> I fail to see the practical difference between "you can do that, but
> >> it won't really work because there's no way to force the relevant
> >> assumptions on the execution environment" and "you can't do that".
> >
> > The difference may be that ``if you follow discipline then you can do
> > it''.  The environment may not enforce assumptions upon you, but it
> > may be reasonable to take on a restricted style of programming in
> > order to assure the assumptions.
> 
> there's also a difference between a feature that is guaranteed to
> work no matter what portable code you use in your program and a
> feature that requires you to make sure that any code you use shares
> your assumptions.
> 
> the latter is, of course, quite sufficient in many situations.  but
> not all.

Agreed.
From: William D Clinger
Subject: Re: UNWIND-PROTECT in Scheme
Date: 
Message-ID: <b84e9a9f.0305131126.30d4b3ba@posting.google.com>
Michael Livshin wrote:
> I fail to see the practical difference between "you can do that, but
> it won't really work because there's no way to force the relevant
> assumptions on the execution environment" and "you can't do that".

Agreed, but I think the situation we've been discussing is of the
form "you can do that, and it will work just fine, but first you
will have to make the relevant assumptions explicit so someone who
knows how to do it will understand what you're trying to do".

By the way, the second macro I posted isn't exactly PROG1.  It's
a PROG1 that executes the for-effect forms only once, no matter
how often the first form returns.  Think of it this way: the
cleanup forms are executed the first time a certain predicate is
true, and are not executed thereafter.  Apparently I haven't yet
figured out the predicate that Kent Pitman wants.  If he could
figure it out, though, it shouldn't be hard to modify my second
macro to accommodate the predicate he wants.
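
To make the shape of that concrete -- this is only a sketch of the
behaviour just described, with a made-up name, not the macro actually
posted -- a PROG1 whose for-effect forms run only on the first return
of the first form could look like:

(define-syntax prog1-once
  (syntax-rules ()
    ((_ form cleanup ...)
     (let ((done? #f))
       (let ((val form))
         ;; The flag lives in the captured continuation, so however
         ;; many times FORM's continuation returns, the cleanups run
         ;; only the first time.
         (if (not done?)
             (begin (set! done? #t)
                    cleanup ...))
         val)))))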

I think Kent is having trouble figuring out what predicate he wants.
I think the reason he is having trouble is that his mental model of
computation assumes proper nesting of "dynamic contours", and Scheme
doesn't enforce that except via DYNAMIC-WIND.  Furthermore Kent has
made some peculiar assumptions about the purpose of DYNAMIC-WIND, so
he is unwilling to use DYNAMIC-WIND to define the "dynamic contours"
that he needs in order to formalize his predicate.

Will
From: Kent M Pitman
Subject: Re: UNWIND-PROTECT in Scheme
Date: 
Message-ID: <sfw4r3y4nsd.fsf@shell01.TheWorld.com>
······@qnci.net (William D Clinger) writes:

> I think Kent is having trouble figuring out what predicate he wants.

When I have enough time to think about it, this may or may not be true.

For now, please don't speculate on my delay as "having trouble
figuring out" anything, since the only thing I'm having trouble with
right now is getting sufficient time to sit and think about it
enough to understand Will's question and to read his sample code.

Although I know that Will's code is pretty ordinary for Scheme programs,
and uses a bunch of weird pseudo-loopish idioms that are common in
conjunction with call/cc, I find that kind of code really hard to read.
So this is not going to be a quick discussion for me like so many other
discussions here which are often, for me, just memory exercises rather 
than requests for new thought.
From: Dorai Sitaram
Subject: Re: UNWIND-PROTECT in Scheme (was Re: Why seperate function namespaces?)
Date: 
Message-ID: <b9rbl2$qrp$1@news.gte.com>
In article <····························@posting.google.com>,
William D Clinger <······@qnci.net> wrote:
>Dorai Sitaram wrote:
>
>> Last weekend, I wrote up an attempt at solving Kent's
>> proposal #1 using Friedman and Haynes's
>> constraining-control techniques (from 1985!).
>> Interested people may please take a look at
>> 
>>   http://www.ccs.neu.edu/~dorai/uwcallcc/uwcallcc.html
>
>Thank you for doing this.

I have since updated my page to include an
implementation of Kent's proposal #2 also -- the one
where continuations trigger cleanups only on their
(user-specified) last use.  

>Well, you clearly have a better idea of what Kent wants
>than I do.  I hope you two continue this conversation here
>so I can eavesdrop.

I too hope Kent takes a look and checks if it's indeed
faithful to his informal spec.  I've only
performed smallish tests so far.

--d
From: William D Clinger
Subject: Re: UNWIND-PROTECT in Scheme (was Re: Why seperate function namespaces?)
Date: 
Message-ID: <b84e9a9f.0305141857.433ec53c@posting.google.com>
Dorai Sitaram wrote:
> Last weekend, I wrote up an attempt at solving Kent's
> proposal #1 using Friedman and Haynes's
> constraining-control techniques (from 1985!).
> Interested people may please take a look at
> 
>   http://www.ccs.neu.edu/~dorai/uwcallcc/uwcallcc.html

and

> I have since updated my page to include an
> implementation of Kent's proposal #2 also -- the one
> where continuations trigger cleanups only on their
> (user-specified) last use.  

Dorai's code is a little more complex than it needs to
be because, in effect, he reimplements DYNAMIC-WIND.
I have rewritten his implementation of Kent's proposal
#1 to use DYNAMIC-WIND, which should make it easier to
understand.  See

    http://www.ccs.neu.edu/home/will/Temp/uwesc.sch

Will
From: Dorai Sitaram
Subject: Re: UNWIND-PROTECT in Scheme (was Re: Why seperate function namespaces?)
Date: 
Message-ID: <ba05bk$8br$1@news.gte.com>
In article <····························@posting.google.com>,
William D Clinger <······@qnci.net> wrote:
>Dorai Sitaram wrote:
>> Last weekend, I wrote up an attempt at solving Kent's
>> proposal #1 using Friedman and Haynes's
>> constraining-control techniques (from 1985!).
>> Interested people may please take a look at
>> 
>>   http://www.ccs.neu.edu/~dorai/uwcallcc/uwcallcc.html
>
>and
>
>> I have since updated my page to include an
>> implementation of Kent's proposal #2 also -- the one
>> where continuations trigger cleanups only on their
>> (user-specified) last use.  
>
>Dorai's code is a little more complex than it needs to
>be because, in effect, he reimplements DYNAMIC-WIND.
>I have rewritten his implementation of Kent's proposal
>#1 to use DYNAMIC-WIND, which should make it easier to
>understand.  See
>
>    http://www.ccs.neu.edu/home/will/Temp/uwesc.sch
>
>Will

I've seen this and Will's code does the right thing
more concisely and clearly than my attempt.  I think
it's very neat indeed.  

--d
From: Pekka P. Pirinen
Subject: Re: UNWIND-PROTECT in Scheme (was Re: Why seperate function namespaces?)
Date: 
Message-ID: <ubry0v0lt.fsf@globalgraphics.com>
> Dorai's code is a little more complex than it needs to
> be because, in effect, he reimplements DYNAMIC-WIND.

So perhaps Scheme ought to take FLUID-LET as a primitive instead of
DYNAMIC-WIND, since FLUID-LET is simpler and can be used to implement
D-W.

>    http://www.ccs.neu.edu/home/will/Temp/uwesc.sch

So that's how it's done.  It does seem to me that you want to reset
full? when invoking an escaping continuation, but the approach will
clearly work.

Since it changes call/cc incompatibly, it doesn't provide a general
solution to the problem of cleaning up resources, though.  This is
effectively creating a new language that is almost like Scheme, but
where you can have an UNWIND-PROTECT.
-- 
Pekka P. Pirinen
The great problem with Lisp is that it is just good enough to keep us
from developing something really good.  - Alan Kay
From: Dorai Sitaram
Subject: Re: UNWIND-PROTECT in Scheme (was Re: Why seperate function namespaces?)
Date: 
Message-ID: <baalg9$fsp$1@news.gte.com>
In article <·············@globalgraphics.com>,
Pekka P. Pirinen <···············@globalgraphics.com> wrote:
>Will Clinger wrote:
>> Dorai's code is a little more complex than it needs to
>> be because, in effect, he reimplements DYNAMIC-WIND.
>
>So perhaps Scheme ought to take FLUID-LET as a primitive instead of
>DYNAMIC-WIND, since FLUID-LET is simpler and can be used to implement
>D-W.

The definition of dynamic-wind in terms of call/cc may
not be substantially altered by assuming a native
call/cc-correct fluid-let.  

E.g., see the (fairly concise) dynamic-wind definition
in Dybvig, http://www.scheme.com/tspl2d/control.html,
and see if a native fluid-let would have a
simplifying impact.  It will save having to explicitly
restore the `winders' variable, but the code for
deciding which elements in `winders' to perform -- the
most involved decision in the code -- will remain, I
think.  A new call/cc would still have to be defined
alongside the new dynamic-wind.

On the other hand, having a call/cc-correct fluid-let
available natively is an attractive idea.  Most
ordinary programming tasks (certainly mine) will use
fluid-let a lot more than they do dynamic-wind or even
call/cc, so a native fluid-let that has the bonus of
being call/cc-correct would find lots of use.   
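
For reference, the usual way to write a call/cc-correct fluid-let on
top of dynamic-wind (a sketch: one binding only, single-threaded, and
most Schemes already provide the real thing).  The swap is redone on
every exit and re-entry, which is exactly what "call/cc-correct" buys:

(define-syntax fluid-let
  (syntax-rules ()
    ((_ ((var val)) body ...)
     (let ((inside val) (outside #f))
       (dynamic-wind
         (lambda () (set! outside var) (set! var inside)) ; install inner value
         (lambda () body ...)
         (lambda () (set! inside var) (set! var outside))))))) ; restore outer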

>>    http://www.ccs.neu.edu/home/will/Temp/uwesc.sch
>
>So that's how it's done.  It does seem to me that you want to reset
>full? when invoking an escaping continuation, but the approach will
>clearly work.

I thought initially that resetting full? would be
redundant, since it is only ever true from the time of
invocation of a non-escaping continuation till control
arrives at its context.  But given that this stretch of
time when full? is true is also the time when all
the postludes are getting performed, a purely local
escaping continuation call within them could
erroneously reset full? by your method, and thus
disable some postludes.  Thus, Will's code is better as
is.

On the other hand, Will's call/cc-escaping should
disable the escaping continuation for normal return
also.   

>Since it changes call/cc incompatibly, it doesn't provide a general
>solution to the problem of cleaning up resources, though.  This is
>effectively creating a new language that is almost like Scheme, but
>where you can have an UNWIND-PROTECT.

This is inevitable.  The postludes must get a chance to
happen somehow, so the jumps of call/cc-continuations
must have actions attached to them that do something
extra than just the raw jump, which dictates modifying
call/cc...
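
A stripped-down illustration of that idea (the name and shape are
purely illustrative, not the modified call/cc in the code referenced
above): a call/cc whose continuations run an action before doing the
raw jump.

(define (call/cc-with-action action proc)
  ;; Invoking the captured continuation first runs ACTION -- the
  ;; "something extra than just the raw jump".
  (call-with-current-continuation
    (lambda (k)
      (proc (lambda vals
              (action)
              (apply k vals))))))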
From: Tim Bradshaw
Subject: Re: UNWIND-PROTECT in Scheme (was Re: Why seperate function namespaces?)
Date: 
Message-ID: <ey3y91c7zxo.fsf@cley.com>
* William D Clinger wrote:
> I haven't heard anything from Kent Pitman concerning Jonathan Rees's
> comments, so I'll go ahead and post a new UNWIND-PROTECT macro that
> allows any number of throws out of or into the UNWIND-PROTECT form,
> but executes the cleanup forms at most once, when the UNWIND-PROTECT
> is exited via the first normal return.

If this is what Kent wants it's basically useless as far as I can see.
What happens if there is an error exit?  Isn't the problem with this
whole thing that you need to know the *last* time you exit the form,
and run the cleanups then, and only then?

--tim
From: Dorai Sitaram
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <b9eg3h$1br$1@news.gte.com>
In article <············@ccs.neu.edu>, Joe Marshall  <···@ccs.neu.edu> wrote:
>
>> Joe Marshall <···@ccs.neu.edu> writes:
>> [unwind-protect implementation]
>
>Kent M Pitman <······@world.std.com> writes:
>
>> This will implement unwind-protect only at the expense of NOT allowing
>> you to do the things dynamic-wind is intended for, such as implementing
>> dynamic binding.
>
>It does not interfere with dynamic binding.

Even so, it is probably an interesting exercise to try
to implement a Scheme unwind-protect to the spec in
Kent's web article.  I.e., multiple exits and entries
allowed (like Kent's indecisive moviegoer bobbing to
and fro the theatre), and the only exit that performs
the cleanup is either normal exit; or the calling of a
continuation that was explicitly created as an escaping
continuation; or is explicitly annotated as
last-use.  After this exit, no more entries into the
unwind-protect body are allowed.

Both your and Will's code does cleanup for any
exit at all, which is against the spirit of Kent's
article, which perceives a useful user-specified
distinction between continuations that are allowed to
trigger unwind-protect cleanup, and those that aren't. 

(Yes, I'll try my hand at implementing it in standard
Scheme when I have time (and if it seems possible).
But it may not in real life be a usable distinction
that Kent is making.  It is not clear to me that in
large programs one can predict which
continuations can be allowed to skip cleanup, or what
should be the last use of a continuation.)  
From: William D Clinger
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <b84e9a9f.0305081748.18ac3b2e@posting.google.com>
Dorai Sitaram wrote:
> Even so, it is probably an interesting exercise to try
> to implement a Scheme unwind-protect to the spec in
> Kent's web article.  I.e., multiple exits and entries
> allowed (like Kent's indecisive moviegoer bobbing to
> and fro the theatre), and the only exit that performs
> the cleanup is either normal exit; or the calling of a
> continuation that was explicitly created as an escaping
> continuation; or is explicitly annotated as
> last-use.  After this exit, no more entries into the
> unwind-protect body are allowed.
> 
> Both your and Will's code does cleanup for any
> exit at all, which is against the spirit of Kent's
> article, which perceives a useful user-specified
> distinction between continuations that are allowed to
> trigger unwind-protect cleanup, and those that aren't. 

This semantics is certainly different from the one that I had
understood Kent to want.  He didn't complain that our macros
didn't do what he wanted, however; he just complained that he
could imagine ways in which they could be used to write buggy
code.

If Kent will confirm that Dorai's semantics above is the sort
of thing he wants, then we can try to make progress toward
understanding that semantics, and then see whether it is easy
or impossible to implement that semantics in portable Scheme.

Will
From: Kent M Pitman
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <sfw3cjodb0k.fsf@shell01.TheWorld.com>
······@qnci.net (William D Clinger) writes:

> Dorai Sitaram wrote:
> > Even so, it is probably an interesting exercise to try
> > to implement a Scheme unwind-protect to the spec in
> > Kent's web article.  I.e., multiple exits and entries
> > allowed (like Kent's indecisive moviegoer bobbing to
> > and fro the theatre), and the only exit that performs
> > the cleanup is either normal exit; or the calling of a
> > continuation that was explicitly created as an escaping
> > continuation; or is explicitly annotated as
> > last-use.  After this exit, no more entries into the
> > unwind-protect body are allowed.
> > 
> > Both your and Will's code does cleanup for any
> > exit at all, which is against the spirit of Kent's
> > article, which perceives a useful user-specified
> > distinction between continuations that are allowed to
> > trigger unwind-protect cleanup, and those that aren't. 
> 
> This semantics is certainly different from the one that I had
> understood Kent to want.  He didn't complain that our macros
> didn't do what he wanted, however; he just complained that he
> could imagine ways in which they could be used to write buggy
> code.

No, I complained that there was one too few operators because 
the capability I thought you had given me for dynamic-wind was taken
back when you said I had to use it to implement unwind-protect.  By
this I meant what Dorai says above--that there is value to being
able to repeatedly enter and exit the code with one operator in order
to implement time slices, and then still be able to call a final 
cleanup when you're really done.  The former can be used to implement
the effect of specials _even if_ specials themselves are not what you're
using it for--it's just a general purpose mechanism for managing 
dynamic state.  e.g., a database lock that you don't want to hold open
when something is not in use, or access to a printer or just having the
lights in the building turned on.  (which is not
to say some problems aren't potentially introduced by freeing and re-taking
the lock.)  This is different than what unwind-protect manages, which
is an overarching set of state that one holds continuously.  
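
Sketched in Scheme (ACQUIRE-LOCK and RELEASE-LOCK are hypothetical
names), the first kind of state is the dynamic-wind shape below --
given back on every exit, re-taken on every entry -- whereas the
unwind-protect kind would hold the lock continuously and release it
only once, at the very end:

(define (call-holding-lock thunk)
  (dynamic-wind
    acquire-lock    ; re-taken every time control enters the extent
    thunk
    release-lock))  ; given back every time control leaves, even temporarily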

Consider the question:

 "Are you working on X?"

If your boss says "Are you working on X?" he might mean "Are you on
the timeclock now?" or he might mean "Are you assigned to the
project?"  The timeclock is the state you can do and undo many times.
The project assignment is done once at the beginning and undone at
the end; in between, you are considered to be "virtually" doing the
thing even at times when you aren't.
 
Dorai is right that there is no way to get an escape procedure in the
presence of a dynamic-wind that does not trigger a change in dynamic 
state.  Hence you can't do any kind of process suspend.

> If Kent will confirm that Dorai's semantics above is the sort
> of thing he wants, then we can try to make progress toward
> understanding that semantics, and then see whether it is easy
> or impossible to implement that semantics in portable Scheme.

Yes, I think he's right.

I talked to Jonathan Rees about this, too.  I'll forward you the email
he sent me in reply for you to ponder.
From: William D Clinger
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <b84e9a9f.0305090618.4ade9109@posting.google.com>
Kent M Pitman wrote:
> No, I complained that there was one too few operators because 
> the capability I thought you had given me for dynamic-wind was taken
> back when you said I had to use it to implement unwind-protect.  By
> this I meant what Dorai says above--that there is value to being
> able to repeatedly enter and exit the code with one operator in order
> to implement time slices, and then still be able to call a final 
> cleanup when you're really done....

I appreciate your explanation.  I believe that what you want can be
implemented in portable Scheme, though not as simply as in the macro
I posted, but I may still be missing something.  I'll wait to see
what Jonathan said, and then get back to you next week on this.

Will
From: Joe Marshall
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <isskxld6.fsf@ccs.neu.edu>
> > Dorai Sitaram wrote:
> > > Even so, it is probably an interesting exercise to try
> > > to implement a Scheme unwind-protect to the spec in
> > > Kent's web article.  I.e., multiple exits and entries
> > > allowed (like Kent's indecisive moviegoer bobbing to
> > > and fro the theatre), and the only exit that performs
> > > the cleanup is either normal exit; or the calling of a
> > > continuation that was explicitly created as an escaping
> > > continuation; or is explicitly annotated as
> > > last-use.  After this exit, no more entries into the
> > > unwind-protect body are allowed.

Kent M Pitman <······@world.std.com> writes:
> No, I complained that there was one too few operators because 
> the capability I thought you had given me for dynamic-wind was taken
> back when you said I had to use it to implement unwind-protect.  By
> this I meant what Dorai says above --- that there is value to being
> able to repeatedly enter and exit the code with one operator in order
> to implement time slices, and then still be able to call a final 
> cleanup when you're really done.  The former can be used to implement
> the effect of specials _even if_ specials themselves are not what you're
> using it for --- it's just a general purpose mechanism for managing 
> dynamic state.
>
> Dorai is right that there is no way to get an escape procedure in the
> presence of a dynamic-wind that does not trigger a change in dynamic 
> state.  Hence you can't do any kind of process suspend.

Again, I don't think that user-level continuations are the appropriate
mechanism for implementing a time-division multiplexing model of
multiple tasks.  The existence of join-points implies
non-deterministic continuations.  However, I think I can come up with
a model that captures what you and Dorai are suggesting.

Let me propose a new primitive procedure IN-PARALLEL.  I feel free to
do this because we are talking about extending the Scheme language
with multiple threads and fluid variables.  This will necessarily
require new primitives. 

IN-PARALLEL takes two thunks and runs them `simultaneously' returning
the value of the last one to `finish'.  This creates a
non-deterministic continuation that is shared by both computations.
When in an IN-PARALLEL form, each computation may freely create and
invoke continuations with the following constraints.

  1)  An error is signalled should a computation attempt to invoke a
      continuation captured by the other computation.

  2)  Once leaving the dynamic state of an IN-PARALLEL form, it is
      illegal to return to a continuation captured within the
      in-parallel form.  An error is signalled.

  3)  Each computation maintains its own dynamic state.  Users must
      take care to avoid exposing that state to other computations (as
      is always the case in a multitasked environment).

  4)  Should either computation invoke a continuation captured
      *before* IN-PARALLEL was invoked (i.e., outside the dynamic
      state of IN-PARALLEL), the other computation is destroyed
      (note that if this other computation is within the
      entry or exit clause of a DYNAMIC-WIND, it must be allowed to
      continue until it is not).  The dynamic state of the other
      computation will be unwound to where it was when IN-PARALLEL was
      invoked. 

How does this interact with special variables?  By clause 3, you
probably should *not* implement specials by this trick:

(let ((outer-value)
      (inner-value))
 (dynamic-wind
    (lambda () (set! outer-value *foo*)
               (set! *foo* inner-value))
    body
    (lambda () (set! inner-value *foo*)
               (set! *foo* outer-value))))

because the other computation may attempt to do the same thing.
However, there are many other techniques that could be used:

(let ((outer-value)
      (inner-value))
 (dynamic-wind
    (lambda () (set! outer-value (thread-local-value *foo*))
               (set! (thread-local-value *foo*) inner-value))
    body
    (lambda () (set! inner-value (thread-local-value *foo*))
               (set! (thread-local-value *foo*) outer-value))))
From: Michael Livshin
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <s34r46ve1u.fsf@laredo.verisity.com.cmm>
Kent M Pitman <······@world.std.com> writes:

> Michael Livshin <······@cmm.kakpryg.net> writes:
>
>> > Kent Pitman's article on this issue seems to suggest that it
>> > is unsolvable (but I am not sure whether I understood this
>> > correctly).
>> 
>> that's my understanding of it, too.
>
> Btw, I just this minute made a change to the article 
>  http://www.nhplace.com/kent/PFAQ/unwind-protect-vs-continuations.html
> in order to add some examples to make it clear what the options
> for fixing it would look like in terms of code.
>
> If you read the version where the proposed changes had no code examples,
> you might want to take a second look.
>
> Hopefully this will avoid the sense that the problem is pragmatic
> and will make it more clear that the problem is political.

it is, of course, entirely political.

another indication is the fact that all (I think) serious Scheme
implementations have primitive support for at least one of the
following:

exception handling
upward-only continuations
one-shot continuations

no `unwind-protect', though.

-- 
All ITS machines now have hardware for a new machine instruction --
CIZ
Clear If Zero.
Please update your programs.
From: Joe Marshall
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <d6iubmc8.fsf@ccs.neu.edu>
Michael Livshin <······@cmm.kakpryg.net> writes:

> it is, of course, entirely political.
> 
> another indication is the fact that all (I think) serious Scheme
> implementations have primitive support for at least one of the
> following:
> 
> exception handling
> upward-only continuations
> one-shot continuations
> 
> no `unwind-protect', though.

upward-only + dynamic-wind = unwind-protect
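
Spelled out as a sketch: if continuations are upward-only, the body can
never be re-entered, so the after thunk of dynamic-wind runs exactly
once, on whatever exit occurs --

(define-syntax unwind-protect
  (syntax-rules ()
    ((_ protected cleanup ...)
     (dynamic-wind
       (lambda () #f)          ; no re-entry is possible, so nothing to redo
       (lambda () protected)
       (lambda () cleanup ...)))))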
From: Pascal Costanza
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <b9b6tj$o2g$1@f1node01.rhrz.uni-bonn.de>
Michael Livshin wrote:

> it's indeed important to finally decide what are you talking about,
> yes.

Hmm, I thought we were talking about how hard programming language
design is. (But we might also be talking about how hard newsgroup
discussions are. ;)

> let's see:
> 
> I said that sometimes it is a good idea to restrict a language,
> because well-chosen restrictions let you assume all kinds of useful
> things.  as an example, I brought up general pointers.
> 
> you said that you need pointers for FFI's and at best, given the
> existence of the so-called Real World, we can talk about
> compartmentalization and not outright omission of certain semantic
> features from the language.
> 
> I said "fine", and brought up call/cc as a possibly better example.  I
> thought this example to be better /exactly/ because there's no
> Real-World-related need to include it in CL.  I hoped this would help
> you see my point.
> 
> but it didn't help.  oh well.

OK, I seem to have misunderstood you. However, your wording is a little 
dangerous: "sometimes it is a good idea to restrict a language, because 
well-chosen restrictions let you assume all kinds of useful things". 
This sounds like the usefulness of the assumptions that follow from a 
restriction is a sufficient condition for making that restriction, and I 
have interpreted your postings as arguments that support this view.

Now I guess what you really wanted to say is that sometimes it is a good 
idea to restrict a language because well-chosen restrictions can result 
in properties that are more useful in practice than what has been taken 
away. (But you already hinted toward that by saying that it is always a 
trade-off. So probably I haven't read you thoroughly enough - sorry for that.)


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Nils Goesche
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <lywuh26e5b.fsf@cartan.de>
Pascal Costanza <········@web.de> writes:

> Michael Livshin wrote:
> 
> > the (forced) abstaining from using pointers, on the other hand,
> > obviously does buy you some beneficial language properties.  so I
> > wouldn't want to make grand sweeping claims about desirability of
> > semantic restrictions or lack thereof.  it's always a trade-off.
> 
> No, it's not. The lack of pointers means that it is relatively hard
> to implement a straightforward FFI.

The ``lack'' of pointers?  Lisp's semantics can be perfectly well
described using an ``everything is a pointer'' approach.  I personally
do not like this wording very much.  If /everything/ is a `foo', the
statement ``X is a foo'' contains zero information, after all.  But
nevertheless, using the ``everything is a pointer'' model can be very
helpful for explaining Lisp's semantics to newbies having
misconceptions about Lisp's semantics.  As long as they think ``X is a
pointer'' is a meaningful and interesting statement, they are still
working with a flawed model of Lisp's semantics.  Once they get it,
using the ``pointer'' word is no longer necessary.  Somewhat like
Wittgenstein's ladder :-)

No, FFI calls to C are non-trivial simply because C is a different
language having totally incompatible objects that have to be modelled
somehow in the calling language.  C's pointers are no different from
C's other objects in that regard.  Note that the other way around is
even harder: Just imagine trying to call a function FOO like this

(handler-case
    (foo 'bar)
  (error () 42))

from C!  Would you say that it is C's ``lack'' of symbols and
conditions that make this hard?  Even if C /had/ builtin symbols and
conditions, calling FOO this way would be /no easier/ because in all
likelihood they would be totally incompatible with Lisp's, anyway.

Speaking of a ``lack'' of pointers sounds as if you could easily add
them to Lisp and somebody simply decided not to do so, for whatever
reason.  But I don't think this is so.  Even if you added something to
Lisp and called it ``pointers'', what you'd get would be something
totally different and incompatible to C's pointers, anyway, because
the whole idea behind C's pointers is totally meaningless in a Lisp
context and hence inapplicable.  And, consequently, the FFI wouldn't
be any simpler, either.

Regards,
-- 
Nils Gösche
"Don't ask for whom the <CTRL-G> tolls."

PGP key ID 0x0655CFA0
From: Pascal Costanza
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <b9b5js$s16$1@f1node01.rhrz.uni-bonn.de>
Nils Goesche wrote:
> Pascal Costanza <········@web.de> writes:
> 
>>Michael Livshin wrote:
>>
>>>the (forced) abstaining from using pointers, on the other hand,
>>>obviously does buy you some beneficial language properties.  so I
>>>wouldn't want to make grand sweeping claims about desirability of
>>>semantic restrictions or lack thereof.  it's always a trade-off.
>>
>>No, it's not. The lack of pointers means that it is relatively hard
>>to implement a straightforward FFI.
> 
> The ``lack'' of pointers?  Lisp's semantics can be perfectly well
> described using an ``everything is a pointer'' approach. 

That's a misunderstanding. Michael talked about "abstaining from using 
pointers" - this is incompatible with "everything is a pointer".

The term pointer is used very ambiguously throughout computer science, 
and this causes some confusions. So sorry for not having been clear 
enough. What's missing in Common Lisp in order to make an FFI work is a 
notion of pointers _as understood in the C world_ (or better: 
machine-level addresses). So especially, you cannot pass addresses of 
Common Lisp objects or functions to C code.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Nils Goesche
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <lysmrq67pe.fsf@cartan.de>
Pascal Costanza <········@web.de> writes:

> Nils Goesche wrote:
> > Pascal Costanza <········@web.de> writes:

> >> The lack of pointers means that it is relatively hard to
> >> implement a straightforward FFI.

> > The ``lack'' of pointers?  Lisp's semantics can be perfectly well
> > described using an ``everything is a pointer'' approach.

> That's a misunderstanding. Michael talked about "abstaining from
> using pointers" - this is incompatible with "everything is a
> pointer".
> 
> The term pointer is used very ambiguously throughout computer
> science, and this causes some confusions. So sorry for not having
> been clear enough.  What's missing in Common Lisp in order to make
> an FFI work is a notion of pointers _as understood in the C world_
> (or better: machine-level addresses).

I guess I still don't get it.  First of all, until now I used to
believe that I'm using a ``working FFI'' all the time in the Lisp
implementation I use, and it is an implementation of Common Lisp, not
some other language with a notion of pointers.  Sure, I can do

  (fli:make-pointer :address #x04000800 :type :int)

but note that this thing is an ordinary Lisp struct, which is used to
model a C pointer by the FFI.  And ok, I can even do some black magic
like

(setf (fli:dereference (fli:make-pointer :address #x04000800
                                         :type :int))
      42)

although the consequences of this are undefined.  Something else I
could do is

CL-USER 1 > (defparameter *pointer*
                          (fli:allocate-foreign-object
                           :type :int
                           :initial-element 42))
*POINTER*

CL-USER 2 > (fli:dereference *pointer*)
42

CL-USER 3 > (setf (fli:dereference *pointer*) 17)
17

CL-USER 4 > (fli:dereference *pointer*)
17

CL-USER 5 > (fli:free-foreign-object *pointer*)
#<Pointer to type :INT = #x00000000>

So now I have a freaky new kind of container.  But it's not that
Lisp's semantics have been significantly changed by some new kind of
thing called ``pointer''.

> So especially, you cannot pass addresses of Common Lisp objects or
> functions to C code.

That wouldn't help you much, either.  Suppose there is a foreign C
function

struct blark {
       int x;
       char y;
       short z;
};

int foo(struct blark *);



Now, I /could/ call that thing in Lispworks like this:

(defstruct foo
  x
  y
  z)

(fli:define-foreign-function (foo "foo" :source)
    ((arg-1 (:pointer (:struct blark))))
  :result-type :int
  :language :ansi-c)

(let ((obj (sys:in-static-area (make-foo :x 42 :y 17 :z 13))))
  (foo (fli:make-pointer :type '(:struct blark)
                         :address (sys:object-address obj))))

but this is not going to do much good :-) About the only use for
SYS:OBJECT-ADDRESS is when a C library wants some opaque void *
pointer that it will pass back to me in some callback.  There is
absolutely nothing useful I can do with such an address from Lisp
(other than converting it back to a real Lisp object, hoping I didn't
forget to allocate it in the static area and keep a Lisp reference to
it around, so GC has neither moved nor removed it in the mean time).
The main reason being that in Lisp we, thank God, do not think of
objects as being an array of octets.  The garbage collector being
another one.  Unlike C, these ``addresses�� do not have any real
semantic meaning in Lisp.  (Actually, in these days of MMU's they are
just an abstraction in C, too).  In C, these addresses are part of the
very semantics of the language!  Have just a glance into

  http://citeseer.nj.nec.com/papaspyrou01denotational.html

to see what I mean.

Or simply consider

int bar(int *x);

and

(let ((x 42))
  (bar (what-now? x)))

What in the world is WHAT-NOW? supposed to do for this to make sense?

So, we have a working FFI in LispWorks, and yes, there is even a way
of obtaining a memory address of Lisp objects (although this is not
very significant, as what we really have to pass is some special kind
of object that bears little resemblance to a Lisp FOO struct, anyway),
but I wouldn't say that now we have ``pointers'' in LispWorks.  There
is not some particular new language construct called ``pointer'' in
the Lisp as implemented by LispWorks, which is integrated into the
language in a way that might justify saying that this language now
doesn't ``lack'' pointers.  When you say something like

> What's missing in Common Lisp in order to make an FFI work is a
> notion of pointers _as understood in the C world_ (or better:
> machine-level addresses).

I would expect that then it should be somehow possible to slightly
change Lisp's core semantics, yielding a language where calling C is
easy and ``pointers'' have some kind of semantic meaning in Lisp, too.
But I believe this is not true.  If you change Lisp enough for this to
be the case, what you'd get will not be a Lisp at all anymore.  And as
long as you don't, a ``pointer'', whatever that means, will always be
a totally alien concept in Lisp, thus making FFIs clumsy.

Regards,
-- 
Nils Gösche
"Don't ask for whom the <CTRL-G> tolls."

PGP key ID 0x0655CFA0
From: Pascal Costanza
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <b9bg2r$uhs$1@f1node01.rhrz.uni-bonn.de>
Nils Goesche wrote:

> I guess I still don't get it.  First of all, until now I used to
> believe that I'm using a ``working FFI'' all the time in the Lisp
> implementation I use, and it is an implementation of Common Lisp, not
> some other language with a notion of pointers.

OK, maybe I am missing something - how does LispWorks' FFI communicate 
with C and implement call backs, for example? The C code needs _some_ 
kind of handle to access Common Lisp objects, doesn't it? (and vice 
versa...)


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Nils Goesche
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <lyof2e62mj.fsf@cartan.de>
Pascal Costanza <········@web.de> writes:

> Nils Goesche wrote:
> 
> > I guess I still don't get it.  First of all, until now I used to
> > believe that I'm using a ``working FFI'' all the time in the Lisp
> > implementation I use, and it is an implementation of Common Lisp,
> > not some other language with a notion of pointers.
> 
> OK, maybe I am missing something - how does LispWorks' FFI
> communicate with C and implement call backs, for example? The C code
> needs _some_ kind of handle to access Common Lisp objects, doesn't
> it? (and vice versa...)

The C code cannot do anything with Common Lisp objects.  If a C
function wants to operate on a struct

struct blark {
       int x;
       char y;
       short z;
};

we cannot simply pass it a real Lisp object like a struct

(defstruct blark
  x
  y
  z)

because an object of this type will have a totally different memory
layout, anyway.  What gets ultimately passed by LispWorks is a /C/
struct, not a Lisp struct (under the hood, of course).  I tell
LispWorks about what kind of C function I want to call (its type
signature), then I have some macros that will let me construct some
special Lisp objects which /represent/ C objects for the internal FFI
code, and when I finally call the C function with such a
representation as argument, the FFI will, under the hood, convert that
thing into something C can operate on sensibly.  But the whole process
is rather opaque, and you do not have any new semantic Lisp constructs
you could use for anything /but calling C/.  The situation for
callbacks is similar: You can make some special kind of functions C
will be able to call.  You have to specify the C type signature of the
function you want, and C will call a function /with C objects/ again.
The FFI will convert or wrap these C objects into special kinds of
Lisp objects you can at least manipulate from Lisp.  Once there, you
can call ordinary Lisp functions with ordinary Lisp objects as
arguments.
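
To make that a little more concrete, calling the foo/blark function
from above looks roughly like this.  (A sketch only; the exact
operator names FLI:DEFINE-C-STRUCT and FLI:FOREIGN-SLOT-VALUE are from
memory and should be checked against the FLI manual.)

(fli:define-c-struct blark
  (x :int)
  (y :char)
  (z :short))

(fli:define-foreign-function (foo "foo" :source)
    ((arg-1 (:pointer (:struct blark))))
  :result-type :int
  :language :ansi-c)

(let ((obj (fli:allocate-foreign-object :type '(:struct blark))))
  (unwind-protect
      (progn
        ;; OBJ represents a C struct for the FFI; it is not a Lisp struct.
        (setf (fli:foreign-slot-value obj 'x) 42
              (fli:foreign-slot-value obj 'z) 13
              ;; depending on the :char mapping this slot may want a
              ;; character object rather than a small integer:
              (fli:foreign-slot-value obj 'y) 17)
        (foo obj))
    (fli:free-foreign-object obj)))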

The whole process is somewhat awkward and non-trivial to use, but this
is because Lisp objects and C objects are so hopelessly different.

Note that machine addresses of Lisp objects are not involved.
Moreover, it's not like you could use in ``LispWorks Lisp'' some
significant new language construct called ``pointer''.

When I hear ``Lisp lacks FOO'', or ``Lisp abstains from using FOO'',
this would mean, to me, that FOO is something that could be added to
the HyperSpec such that it would have some semantic meaning to Lisp
code operating with Lisp objects, not only for calling C.  Like
call/cc, for instance.  I do not believe that there is such a thing
that could sensibly be called ``pointer''.

And the ``everything is a pointer'' approach I mentioned is not
totally irrelevant to this, either.  The implementation /could/ simply
send the ordinary object descriptors it uses internally anyway, with
tag bits and all, down to C functions.  These object descriptors do
have some resemblance to pointers.  Of course, they are different from
C pointers, and libjpeg wouldn't like them at all, but this is not
surprising, as C structs are implemented differently from Lisp structs,
too.

Regards,
-- 
Nils Gösche
"Don't ask for whom the <CTRL-G> tolls."

PGP key ID 0x0655CFA0
From: Eric Smith
Subject: Foreign function libraries etc. (was Re: Why seperate function namespaces)
Date: 
Message-ID: <ceb68bd9.0305071602.570e23e1@posting.google.com>
Nils Goesche <······@cartan.de> wrote in message news:<··············@cartan.de>...

> The whole process is somewhat awkward and non-trivial to use, but this
> is because Lisp objects and C objects are so hopelessly different.

I would call it non-trivial to wrap, not non-trivial to use.
The end result of building FFI/FLI wrappers is a set of Lisp
functions and macros that neatly encapsulate the foreign
functionality desired, and are no more complicated to use than
other Lisp functions and macros.  And in fact they tend to be
easier to use than their equivalents in C/C++/etc.  The fastest
way to learn all the subtle details, bugs, and quirks of
Microsoft Windows XP programming might be to wrap all the
Microsoft libraries in Lispworks and proceed to experiment with
them, far faster using Lispworks than you could with a C++
compiler and debugger.

Or rather than calling it non-trivial to wrap I would call
it non-trivial to learn to wrap.  Once you get going and learn
how, you can proceed much faster, so wrapping all the libraries
might not take nearly as long as might be expected.

And this might be another good way to get Lisp in one's workplace
through the back door.  Using it as the fastest way to learn the
subtle details of a set of foreign libraries, rather than trying
to justify using it for direct development of the end product.
From: Pascal Costanza
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <costanza-7088E5.23415507052003@news.netcologne.de>
In article <··············@cartan.de>, Nils Goesche <······@cartan.de> 
wrote:

> The whole process is somewhat awkward and non-trivial to use, but this
> is because Lisp objects and C objects are so hopelessly different.
> 
> Note that machine addresses of Lisp objects are not involved.

You are talking about how you are using LispWorks' FFI. I was talking 
about how to implement an FFI. At some stage, the FFI needs to represent 
something in terms of C pointers, whether you see this as a user or not.

As far as I understand, the UFFI cannot be ported to all Common Lisp 
implementations because it depends heavily on what they respectively 
provide in terms of their own FFIs. This situation would be very 
different if Common Lisp had some limited support for C-style pointers 
and a way of telling the garbage collector to fix specific objects in 
memory.

Pascal
From: Nils Goesche
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <87issmpe2z.fsf@darkstar.cartan>
Pascal Costanza <········@web.de> writes:

> In article <··············@cartan.de>, Nils Goesche <······@cartan.de> 
> wrote:
> 
> > The whole process is somewhat awkward and non-trivial to use,
> > but this is because Lisp objects and C objects are so
> > hopelessly different.
> > 
> > Note that machine addresses of Lisp objects are not involved.
> 
> You are talking about how you are using LispWorks' FFI.

But I do not really want to :-) /You/ brought FFI into the
discussion, and I am saying that FFI is irrelevant to what we're
talking about.

> I was talking about how to implement an FFI. At some stage, the
> FFI needs to represent something in terms of C pointers,
> whether you see this as a user or not.

Sure.

> As far as I understand, the UFFI cannot be ported to all Common
> Lisp implementations because it depends heavily on what they
> respectively provide in terms of their own FFIs.

Undoubtedly.  Some Lisps do not provide a usable, documented way
of giving callbacks to be called from C libraries, for instance.
They probably should.  Then UFFI would have a chance of providing a
portable way of doing this, too.  But I do not think such a thing
belongs into the HyperSpec.  And pointers have little or nothing
to do with it.

> This situation would be very different if Common Lisp had some
> limited support for C-style pointers and a way of telling the
> garbage collector to fix specific objects in memory.

First, I have already mentioned that the ability to allocate Lisp
objects in static memory is not terribly important.  You could
simply pass integers to C and use a hash table to retrieve your
Lisp objects, for instance.  And C style pointers wouldn't help
you if you don't also standardize how to model C style objects
(which are, again, very different from Lisp style objects).
Second, I would not /want/ to see anything like this in the
HyperSpec.  As of now, the HyperSpec doesn't even /mention/ a
garbage collector.  And you want it to even talk about static
memory areas?  For the sole purpose of calling a totally
different language like C (which C, BTW?  K&R?  C90?  C99?)?
Such things do not /belong/ in a language definition!
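
For the record, the integer-handle trick is nothing but ordinary CL; a
minimal sketch (names made up), with no FFI operators involved:

(defvar *handles* (make-hash-table))
(defvar *handle-counter* 0)

(defun register-object (object)
  "Return an integer that C code can carry around as an opaque handle."
  (let ((handle (incf *handle-counter*)))
    (setf (gethash handle *handles*) object)
    handle))

(defun find-object (handle)
  "Recover the Lisp object when C hands the handle back, e.g. in a callback."
  (gethash handle *handles*))

(defun release-object (handle)
  (remhash handle *handles*))

Only the integer ever crosses the language boundary, so the garbage
collector remains free to move (or, once released, collect) the object.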

The right way to do this would be to write some layered standard
on top of CL that specifies, for instance, how you can get C to call
a function written in Lisp.  If any CL implementation cannot do
this now, they'd have to implement it if they want to support
UFFI.  What you seem to be asking for is not pointers, but the
addition of UFFI to the HyperSpec.  And I think it doesn't belong
there.



But my original point was totally unrelated to FFIs: I do not
like hearing people say things like ``Lisp lacks pointers'' or
``Lisp abstains from using pointers''.  Because saying this seems
to imply that it is so obvious what a ``pointer'' would be in
Lisp.  I think it is not, and that talking about ``Lisp
pointers'' will only confuse people.

Regards,
-- 
Nils Gösche
Ask not for whom the <CONTROL-G> tolls.

PGP key ID #xD26EF2A0
From: Pascal Costanza
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <costanza-E551CF.01202808052003@news.netcologne.de>
In article <··············@darkstar.cartan>,
 Nils Goesche <···@cartan.de> wrote:

> First, I have already mentioned that the ability to allocate Lisp
> objects in static memory is not terribly important.  You could
> simply pass integers to C and use a hash table to retrieve your
> Lisp objects, for instance.  And C style pointers wouldn't help
> you if you don't also standardize how to model C style objects
> (which are, again, very different from Lisp style objects).

I still don't get this. At some level, some bit fiddling has to take 
place in order to pass information between Lisp and C, so there has to 
be an agreement about what the data is supposed to look like. So both 
languages need to understand a common set of abstractions. Pointers are 
definitely part of this. Even when you serialize all your data and 
create IDs for all your objects, there needs to be at least one shared 
buffer, or something along these lines, and the languages need to agree 
on where it is, or how to access it. So you need at least one pointer. ;)

> Second, I would not /want/ to see anything like this in the
> HyperSpec.  As of now, the HyperSpec doesn't even /mention/ a
> garbage collector.  And you want it to even talk about static
> memory areas?  For the sole purpose of calling a totally
> different language like C (which C, BTW?  K&R?  C90?  C99?)?
> Such things do not /belong/ into a language definition!

OK, I understand that this would be problematic, especially under the 
additional requirement that the ANSI standard also needs to cover 
systems that are not based on C.

> The right way to do this would be to write some layered standard
> on top of CL that specifies, for instance, how you can get C call
> a function written in Lisp.  If any CL implementation cannot do
> this now, they'd have to implement it if they want to support
> UFFI.  What you seem to be asking for is not pointers, but the
> addition of UFFI to the HyperSpec.  And I think it doesn't belong
> there.

Yes, a layered standard would be good.

> But my original point was totally unrelated to FFIs: I do not
> like hearing people say things like ``Lisp lacks pointers'' or
> ``Lisp abstains from using pointers''.  Because saying this seems
> to imply that it is so obvious what a ``pointer'' would be in
> Lisp.  I think it is not, and that talking about ``Lisp
> pointers'' will only confuse people.

OK, I agree, you definitely have a point here.


Pascal
From: Adrian Kubala
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <Pine.LNX.4.44.0305072202530.17348-100000@gwen.sixfingeredman.net>
On 6 May 2003, Kent M Pitman wrote:
> Nikodemus Siivola <········@kekkonen.cs.hut.fi> writes:
> I hate to speak for another, so Adrian can correct me if I'm wrong, but
> I have to assume that what he means implies that languages allow one to
> express 'good' things and 'bad' things and that to the extent that one can
> make illegal the bad things that one can easily reach, one has reduced the
> amount of runtime grief.  This is sort of like childproofing your house.

Only it's better because you've increased the signal-to-noise ratio of your
language and so can express correct programs with less information. It's
like childproofing your house by designing a way to transmit electricity
through the air directly to the appliances which need it.

I should have clarified in my blatant generality, though, that this
"reduction of expressiveness" must be done relative to your problem domain
-- different languages for different problems, although many languages
still allow one to express things which aren't useful for any problem
domain at all.

By the way, I certainly didn't intend your extrapolation of the metaphor
to politics -- though if I had, I would have mentioned something about
newspeak.

> However, the basic notion that freedom and safety are in tension with
> one another does mostly carry over, and is worth giving serious heed to.

I disagree. A programming language is like a building, and your goal, the
answer to the problem you're solving, is a room far away from you. A
"safe" language has only one corridor, which leads straight to the goal.
An "unsafe" language is like a maze. If you put someone in a maze, you may
not have technically decreased their freedom to leave it, but practically
you have. In the same way, an unsafe programming language actually
decreases your freedom.
From: Eric Smith
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <ceb68bd9.0305080753.49b1ec1b@posting.google.com>
Adrian Kubala <······@sixfingeredman.net> wrote in message news:<········································@gwen.sixfingeredman.net>...

> I disagree. A programming language is like a building, and your goal, the
> answer to the problem you're solving, is a room far away from you. A
> "safe" language has only one corridor, which leads straight to the goal.
> An "unsafe" language is like a maze. If you put someone in a maze, you may

The room you want might be near, but with
a limited number of corridors, you might
have to travel a long distance overall to
get there.  A newbie might be safer, by
taking the simpler route at the higher
cost.  But one who already knows the maze
of smaller corridors would not want to
waste the time just for the privilege of
doing what newbies do.

So the real question is whether programming
languages should be designed for beginners
or experts.

The case for beginners is that if they don't
meet with enough early success they won't be
motivated to continue long enough to become
experts.  The case for experts is that they
spend a limited time becoming experts but a
much longer time remaining experts, so the
overall cost/benefit formula should favor that
much longer time when they are already experts.
From: Paul Wallich
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <pw-55CD97.09513608052003@reader1.panix.com>
In article 
<········································@gwen.sixfingeredman.net>,
 Adrian Kubala <······@sixfingeredman.net> wrote:
[snip]
>. A programming language is like a building, and your goal, the
> answer to the problem you're solving, is a room far away from you. A
> "safe" language has only one corridor, which leads straight to the goal.
> An "unsafe" language is like a maze. If you put someone in a maze, you may
> not have technically decreased their freedom to leave it, but practically
> you have. In the same way, an unsafe programming language actually
> decreases your freedom.

This analogy is fundamentally flawed. The "safe" language doesn't have a 
corridor that leads straight to the goal, it has a long and twisty 
passageway that goes down to the basement, up to the attic through a 
service shaft, back and forth multiple times from one end of the 
building to another, and eventually winds up at the goal. Along the way 
there are a myriad locked doors, through some of whose windows you can 
see the room with the goal. You are guaranteed to reach your goal if you 
follow the passage to the end, but if you go mad or die of hunger or 
thirst on the way, that's just too bad. The unsafe language unlocks all 
those doors and exposes many possible shorter routes, at the cost of 
more dead ends.

paul    ain't metaphors grand
From: Duane Rettig
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <4znlxbjew.fsf@beta.franz.com>
Paul Wallich <··@panix.com> writes:

> In article 
> <········································@gwen.sixfingeredman.net>,
>  Adrian Kubala <······@sixfingeredman.net> wrote:
> [snip]
> >. A programming language is like a building, and your goal, the
> > answer to the problem you're solving, is a room far away from you. A
> > "safe" language has only one corridor, which leads straight to the goal.
> > An "unsafe" language is like a maze. If you put someone in a maze, you may
> > not have technically decreased their freedom to leave it, but practically
> > you have. In the same way, an unsafe programming language actually
> > decreases your freedom.
> 
> This analogy is fundamentally flawed. The "safe" language doesn't have a 
> corridor that leads straight to the goal, it has a long and twisty 
> passageway that goes down to the basement, up to the attic through a 
> service shaft, back and forth multiple times from one end of the 
> building to another, and eventually winds up at the goal. Along the way 
> there are a myriad locked doors, through some of whose windows you can 
> see the room with the goal. You are guaranteed to reach your goal if you 
> follow the passage to the end,

Not if your goal is not at the end of the passage which was created for
you...

> but if you go mad or die of hunger or 
> thirst on the way, that's just too bad. The unsafe language unlocks all 
> those doors and exposes many possible shorter routes, at the cost of 
> more dead ends.

Only dead if they do not represent possible goals.

> paul    ain't metaphors grand

:-)

-- 
Duane Rettig    ·····@franz.com    Franz Inc.  http://www.franz.com/
555 12th St., Suite 1450               http://www.555citycenter.com/
Oakland, Ca. 94607        Phone: (510) 452-2000; Fax: (510) 452-0182   
From: Adrian Kubala
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <Pine.LNX.4.44.0305081052410.19960-100000@gwen.sixfingeredman.net>
On Thu, 8 May 2003, Paul Wallich wrote:
> Adrian Kubala <······@sixfingeredman.net> wrote:
> > A programming language is like a building, and your goal, the
> > answer to the problem you're solving, is a room far away from you. A
> > "safe" language has only one corridor, which leads straight to the goal.
> This analogy is fundamentally flawed. The "safe" language doesn't have a
> corridor that leads straight to the goal, it has a long and twisty
> passageway that goes down to the basement, up to the attic through a
> service shaft, back and forth multiple times from one end of the
> building to another, and eventually winds up at the goal.

Then you need to find a better-designed language -- I'm surprised at all
the resistance to this idea, I thought Lispers of all people understood
the value of domain languages.
From: Peter Seibel
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <m3n0hx8kfw.fsf@javamonkey.com>
Adrian Kubala <······@sixfingeredman.net> writes:

> On Thu, 8 May 2003, Paul Wallich wrote:
> > Adrian Kubala <······@sixfingeredman.net> wrote:
> > > A programming language is like a building, and your goal, the
> > > answer to the problem you're solving, is a room far away from you. A
> > > "safe" language has only one corridor, which leads straight to the goal.
> > This analogy is fundamentally flawed. The "safe" language doesn't have a
> > corridor that leads straight to the goal, it has a long and twisty
> > passageway that goes down to the basement, up to the attic through a
> > service shaft, back and forth multiple times from one end of the
> > building to another, and eventually winds up at the goal.
> 
> Then you need to find a better-designed language -- I'm surprised at
> all the resistance to this idea, I thought Lispers of all people
> understood the value of domain languages.

Well, *this* Lisper understands the value of having my cake and
eating it too. Give me a nice powerful, non-constraining language like
Common Lisp so *I* can write the domain language that *actually*
matches *my* domain. Yes, I have to take more responsibility for certain
things--for one I have to design a good (enough) domain language. But
at least I can. Maybe you work in a more constricted domain than I do
but there's no *one* path from where I am now to all the places I want
to go.

-Peter

-- 
Peter Seibel                                      ·····@javamonkey.com

  The intellectual level needed   for  system design is  in  general
  grossly  underestimated. I am  convinced  more than ever that this
  type of work is very difficult and that every effort to do it with
  other than the best people is doomed to either failure or moderate
  success at enormous expense. --Edsger Dijkstra
From: Nikodemus Siivola
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <b9e1b0$55pga$1@midnight.cs.hut.fi>
Adrian Kubala <······@sixfingeredman.net> wrote:

> Then you need to find a better-designed language -- I'm surprised at all
> the resistance to this idea, I thought Lispers of all people understood
> the value of domain languages.

To build a domain language on top of an existing language the base
language has to be flexible. Every subset except an empty one has a
superset...

Note: "on top of", not "with".

Cheers,

  -- Nikodemus
From: Adrian Kubala
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <Pine.LNX.4.44.0305081345510.20655-100000@gwen.sixfingeredman.net>
On 8 May 2003, Nikodemus Siivola wrote:
> To build a domain language on top of an existing language the base
> language has to be flexible. Every subset except an empty one has a
> superset...
> Note: "on top of", not "with".

I'm not sure I understand the distinction between the two -- if I write
some functions and data structures in lisp, that's "on top of", and if I
design a new language which conveniently uses sexprs and write an
interpreter/compiler in lisp, that's "with", right? Of course, the great
thing about lisp is that doing the second isn't much harder than doing the
first.
From: Nikodemus Siivola
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <b9eg2t$587id$1@midnight.cs.hut.fi>
Adrian Kubala <······@sixfingeredman.net> wrote:

> I'm not sure I understand the distinction between the two -- if I write
> some functions and data structures in lisp, that's "on top of", and if I
> design a new language which conveniently uses sexprs and write an
> interpreter/compiler in lisp, that's "with", right? Of course, the great
> thing about lisp is that doing the second isn't much harder than doing the
> first.

Yes, the line is blurry, but that's pretty much what I meant. Example:

 * You could trivially implement a subset of Scheme on top of CL.

 * You could easily implement a subset of Scheme with C.

Cheers,

  -- Nikodemus
From: Coby Beck
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <b9ermd$9ek$1@otis.netspace.net.au>
"Adrian Kubala" <······@sixfingeredman.net> wrote in message
·············································@gwen.sixfingeredman.net...
>
> I disagree. A programming language is like a building, and your goal, the
> answer to the problem you're solving, is a room far away from you. A
> "safe" language has only one corridor, which leads straight to the goal.

But I need fresh air, how do I get out of here! :)

-- 
Coby Beck
(remove #\Space "coby 101 @ bigpond . com")
From: Adrian Kubala
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <Pine.LNX.4.44.0305071035040.14943-100000@gwen.sixfingeredman.net>
On 7 May 2003, Nikodemus Siivola wrote:
> Adrian Kubala <······@sixfingeredman.net> wrote:
> > the purpose of a language is to limit the number of easily-expressible
> > programs
> What!? That's like saying "purpose of harmony is chaos", or any other
> oxymoron of your choice... Please explain.

Each new level of abstraction limits the number of possible choices, and
in so doing makes it possible to express the desired choice in fewer bits.
Ideally, you would like your language to offer you only one choice --
"DWIM" -- which represents the precise program you wish to write. I'll
just ignore turing equivalence, so when I say "expressible", I mean easily
expressible, without writing a turing machine emulator in the language.

Hardware offers you the freedom to draw doodles with solder or build
little transistor sculptures. Assembly language denies you that but
continues to offer you the ability to alter and jump to arbitrary pieces
of memory. C adds the concept of functions, which keep you from jumping
about willy-nilly and enforce stack convention. Then you add
bounds-checking. This limits your ability to access unallocated memory,
which hardly anybody misses. Then you get garbage collection, which
(mostly) keeps you from hanging onto resources longer than you need them,
which again not many people miss. Then you get objects, which enforce
(often useful) conventions for associating functions to data. And types,
which enforce that functions do what they say they do in a rough sense.

This is the whole principle behind abstraction -- you see yourself doing
the same thing many times, and you formalize it so that you do in fact do
the same thing, and not possible variations thereof. You do this until you
can't do it anymore, and you've built a domain language in which precisely
the things you need to do are expressible, and no more.
From: Nikodemus Siivola
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <b9e2co$55pga$2@midnight.cs.hut.fi>
Adrian Kubala <······@sixfingeredman.net> wrote:

> Each new level of abstraction limits the number of possible choices, and
> in so doing makes it possible to express the desired choice in fewer bits.

Not true, or then we mean very different things by abstraction.
What do you understand by abstraction?

Consider the CL function REDUCE and the following:

 (defun arithmetic-reduction (op list)
     (assert (member op '(+ - * /)))
     (apply op list))         
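
For instance, both compute the same sum, but only REDUCE takes an
arbitrary function of two arguments:

 (arithmetic-reduction '+ '(1 2 3))  ; => 6
 (reduce #'+ '(1 2 3))               ; => 6
 (reduce #'max '(3 1 4 1 5))         ; => 5, with no ARITHMETIC-REDUCTION equivalent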

ARITHMETIC-REDUCTION limits your choices, but REDUCE is more abstract.
Yet both are about the "same thing".

Granted, the example is silly, but in general abstraction doesn't limit
the number of choices, but quite the opposite. Or can you give (an equally
silly) counterexample?

Cheers,

  -- Nikodemus
From: Adrian Kubala
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <Pine.LNX.4.44.0305081519580.20956-100000@gwen.sixfingeredman.net>
On 8 May 2003, Nikodemus Siivola wrote:
> Adrian Kubala <······@sixfingeredman.net> wrote:
> > Each new level of abstraction limits the number of possible choices, and
> > in so doing makes it possible to express the desired choice in fewer bits.
> [...]
> ARITHMETIC-REDUCTION limits your choices, but REDUCE is more abstract.
> Yet both are about the "same thing".

When you say "more abstract", you mean "more general" -- REDUCE
generalizes ARITHMETIC-REDUCTION, but ARITHMETIC-REDUCTION abstracts
REDUCE, just as REDUCE abstracts the recursion in its body.

Generalizing isn't bad, but you want to keep it to the minimum
necessary for your domain. See also
http://c2.com/cgi/wiki?YouArentGonnaNeedIt
From: Joe Marshall
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <wuh2c273.fsf@ccs.neu.edu>
Adrian Kubala <······@sixfingeredman.net> writes:

> the purpose of a language is to limit the number of
> easily-expressible programs.

Perl and C++ fulfill that purpose admirably.


The purpose of a language is to get the computer to do what you want.

The main desiderata of any computer language are

   to make it easy to express what you want,

   to make it easy to modify the computer's behavior when you change
   your mind about what you want,

   to make it easy to determine what you wanted from what you
   expressed, 

   to make it obvious to both you and the computer when what you
   expressed is different from what you wanted.
From: Paul F. Dietz
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <rSednUMTvYWoGyqjXTWcrg@dls.net>
Adrian Kubala wrote:

> In other words, CL exports the complexity involved in making macros
> hygienic into the whole rest of the language, and it /still/ doesn't
> completely solve the problem of unintentionally-shadowed bindings.

Which is ok, since it's not a problem in practice.  Why screw up
the language to solve a nonproblem?

	Paul
From: Pascal Costanza
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <b97oig$na4$1@f1node01.rhrz.uni-bonn.de>
Adrian Kubala wrote:
> On 2 May 2003, Kent M Pitman wrote:
> 
>>[...]
>>This works in Scheme because they added infinite hair in their hygeinic
>>macro system in order to remember whose car FIRST expands into; but it works
>>in CL because our rules and customs for macro expansion and namespace
>>separation do not lead to any name conflict using a _far simpler_ macro
>>system.  Not just simpler to implement (which I don't care about at all),
>>but simpler to _use_, which I care about a lot.
> 
> In other words, CL exports the complexity involved in making macros
> hygienic into the whole rest of the language, and it /still/ doesn't
> completely solve the problem of unintentionally-shadowed bindings.

Hygienic macros make as much sense as static typing. They repel 
elephants. From http://c2.com/cgi/wiki?StaticTypingRepelsElephants :

     Salesman: "Want to buy some elephant repellent?"
     Customer: "But there aren't any elephants around here!"
     Salesman: "See how well it works?"

;-)

Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Adrian Kubala
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <Pine.LNX.4.44.0305061741390.8930-100000@gwen.sixfingeredman.net>
On Tue, 6 May 2003, Pascal Costanza wrote:
> Adrian Kubala wrote:
> > In other words, CL exports the complexity involved in making macros
> > hygienic into the whole rest of the language, and it /still/ doesn't
> > completely solve the problem of unintentionally-shadowed bindings.
>
> Hygienic macros make as much sense as static typing. They repel
> elephants. From http://c2.com/cgi/wiki?StaticTypingRepelsElephants :

I don't see the purpose of static typing as catching "type errors" (at
least in the common, limited definition of a type error) -- it's an
attempt at formalizing certain properties of your programs in a
computer-understandable way. In the same way, Scheme's macros formalize
rules about how bindings are captured (or not) by macros.
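
To be concrete, the rules in question are about cases like the classic
one below, which CL handles by convention (GENSYM) rather than by a
formalized system -- just a sketch:

;; Unintended capture: the introduced TMP shadows any TMP the caller uses.
(defmacro swap-careless (a b)
  `(let ((tmp ,a))
     (setf ,a ,b
           ,b tmp)))

;; (swap-careless tmp x) expands into a LET whose TMP shadows the
;; caller's, so nothing actually gets swapped.

;; The usual CL discipline: make the introduced binding unguessable.
(defmacro swap (a b)
  (let ((tmp (gensym)))
    `(let ((,tmp ,a))
       (setf ,a ,b
             ,b ,tmp))))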

It's probably true that both these ideas are still too immature and not
pragmatic in the short term.
From: Kent M Pitman
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <sfwaddzfwok.fsf@shell01.TheWorld.com>
Adrian Kubala <······@sixfingeredman.net> writes:

> On Tue, 6 May 2003, Pascal Costanza wrote:
> > Adrian Kubala wrote:
> > > In other words, CL exports the complexity involved in making macros
> > > hygienic into the whole rest of the language, and it /still/ doesn't
> > > completely solve the problem of unintentionally-shadowed bindings.
> >
> > Hygienic macros make as much sense as static typing. They repel
> > elephants. From http://c2.com/cgi/wiki?StaticTypingRepelsElephants :
> 
> I don't see the purpose of static typing as catching "type errors" (at
> least in the common, limited definition of a type error) -- it's an
> attempt at formalizing certain properties of your programs in a
> computer-understandable way. In the same way, Scheme's macros formalize
> rules about how bindings are captured (or not) by macros.
> 
> It's probably true that both these ideas are still too immature and not
> pragmatic in the short term.

It's probably also true that even when mature, some of us will not
want to bother with them.  Our resistance to them is not their
maturity.  It is the lack of demonstrated need.  They are solving a
problem we don't have.  It's fine for them to experiment this to
death, but we have our own problems to experiment with.  That's WHY
we are different languages.
From: Adrian Kubala
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <Pine.LNX.4.44.0305071053540.14943-100000@gwen.sixfingeredman.net>
On 6 May 2003, Kent M Pitman wrote:
> Adrian Kubala <······@sixfingeredman.net> writes:
> > It's probably true that both these ideas are still too immature and not
> > pragmatic in the short term.
> It's probably also true that even when mature, some of us will not
> want to bother with them.  Our resistance to them is not their
> maturity.  It is the lack of demonstrated need.

"Need" is so subjective -- none of us need computers, do we? But, IF it
were possible for your computer to PROVE that your program didn't have
a certain set of bugs, that would be a good thing, would it not? Static
typing is the only direction which will allow this, and the set of bugs it
can disprove grows larger with more research. Eventually, the cost will
decrease to the point that it's worthwhile no matter how rarely you make
mistakes. Anything which offloads work to the computer, no matter how
small, is a step in the right direction.
From: Kent M Pitman
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <sfwvfwm67gu.fsf@shell01.TheWorld.com>
Adrian Kubala <······@sixfingeredman.net> writes:

> On 6 May 2003, Kent M Pitman wrote:
> > Adrian Kubala <······@sixfingeredman.net> writes:
> > > It's probably true that both these ideas are still too immature and not
> > > pragmatic in the short term.
> > It's probably also true that even when mature, some of us will not
> > want to bother with them.  Our resistance to them is not their
> > maturity.  It is the lack of demonstrated need.
> 
> "Need" is so subjective -- none of us need computers, do we? But, IF it
> were possible for your computer to PROVE that your program didn't have
> a certain set of bugs, that would be a good thing, would it not?

I am leery of politicians hawking things as "free".

Do not use the one-place predicate "would x be good?" because it forces
an answer that is misleading.

Tell me the cost and ask me if it's worth the cost.

If the cost is restricting my programming style, I can already get dozens
of programming languages that do this.

> Static typing is the only direction which will allow this, and the
> set of bugs it can disprove grows larger with more research.

A human being can also disprove these by "reading code", even untyped code.
I'd rather see energy spent in that direction, leaving my code as it is,
than have the problem declared "solved" by giving up a critical freedom.

> Eventually, the cost will
> decrease to the point that it's worthwhile no matter how rarely you make
> mistakes.

The cost is already low enough and high enough.  Low enough in the sense that
it already finds plenty of bugs.  High enough in the sense that it restricts
my programming freedom more than I want.

I want it to infer the declarations I need.  Eventually the cost of that
will decrease to the point it will impress even you.  Meanwhile, I plan to
program in the futuristic style that this change in inference capability
will support, in anticipation of such cool compilation.
 
Programming in a restricted way that presumes there will be no advances
in inferencing and that I must make do with what can be inferenced now seems
shortsighted.

> Anything which offloads work to the computer, no matter how
> small, is a step in the right direction.

No, that is wrong.

There are several fallacies built into this.

 - Things that computers don't do right are included in what you say.
   For example, is offloading "art" to the computer a step in the
   right direction?

 - Things that involve judgment are included in what you say.
   For example, is the judge's or jury's role in a trial best
   offloaded to a computer?

 - Things that require more work to offload to the computer than the
   work they save are not best done by computer.  For example, is adding
   2+2 best done by punching it into a calculator?

 - Things where there is more than one solution are not necessarily
   best off put into a computer unless you ALSO do the work to assure
   that the program is as flexible as a non-program.  For example,
   is leaving your vacation planning up to a computer the best thing?
   It may do very well at finding fares to Rio, but if it is just going
   to say "Going to Rio is your only option for less than $200" are you
   going to believe it?  Sometimes I stay home on my vacations or go on
   a day-trip.  What if all the options are not in the computer?

 - Things that restrict freedom are not necessarily better.  Is it better
   to leave the planning of your trip across town to a guidance computer?
   I haven't seen MAPQUEST.COM or MAPS.YAHOO.COM ever plot me a best course
   on any path I knew personally.

 - Things that are potential hill-climbing problems are not best 
   programmed into a computer unless you're sure you're not climbing 
   a bad hill.  You didn't offer any caveats.

I'm sure this list is not exhaustive.
From: Adrian Kubala
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <Pine.LNX.4.44.0305072148250.17348-100000@gwen.sixfingeredman.net>
On 7 May 2003, Kent M Pitman wrote:
> I want it to infer the declarations I need.  Eventually the cost of that
> will decrease to the point it will impress even you.  Meanwhile, I plan to
> program in the futuristic style that this change in inference capability
> will support, in anticipation of such cool compilation.

Yes, I agree that there is a convergence between statically-typed and
dynamic languages, in the sense that expressing types statically will look
more and more like programming, and that computers will be able to
statically infer more and more things about non-explicitly-typed programs.

> > Anything which offloads work to the computer, no matter how
> > small, is a step in the right direction.
> No, that is wrong.
> There are several fallacies built into this.

You're right, so I'll qualify it -- trying to make computers solve our
problems is itself a hill-climbing problem, so approaches which are local
improvements may still take us away from the theoretical maximum. However,
aiming for the theoretical maximum is still an admirable goal.
From: Pascal Costanza
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <b9bfb8$uhq$1@f1node01.rhrz.uni-bonn.de>
Adrian Kubala wrote:

> "Need" is so subjective -- none of us need computers, do we? But, IF it
> were possible for your computer to PROVE that your program didn't have
> a certain set of bugs, that would be a good thing, would it not?

No, because it necessarily restricts the language so that I can do less 
in a statically typed language than in a dynamically typed one. The 
question is, if in doubt, do we prefer safety or flexibility?

One might not need total flexibility 80% of the time (make that 90%, or 
95%, it doesn't really matter). So this leads many people to believe 
that it is quite ok to restrict the language. However, the real counter 
argument by proponents of dynamically typed languages is that the kind 
of bugs that are caught by static type systems simply do not occur in 
practice. So a restriction of a language in order to make it safer 
becomes gratuitous. Why bother?

Here is a quote, as recently seen on the ll1 mailing list:

: I was reading Joe Armstrong's paper "The development of Erlang" from
: ICFP '97 and noticed this interesting point when talking about the
: new type system being applied to the standard libraries:
:
:   "The type system has uncovered no errors.  The kernel libraries
:    were written by Erlang "experts"--it seems that good programmers
:    don't make type errors.

Here is another recent article by Robert C. Martin who claims to have 
been a fan of static typing in the past: 
http://www.artima.com/weblogs/viewpost.jsp?thread=4639

Here is another article on a language construct that sucks in large 
programs as soon as it gets statically checked: 
http://www.mindview.net/Etc/Discussions/CheckedExceptions

Another example is that in the past, there have been languages that have 
statically checked array boundaries, which has been largely dismissed in 
almost all languages.

So at least you have to admit that this could be a pattern, and that 
dynamic checking could turn out as generally superior to static 
checking. (I am not sure this is the case, but it could be.)

The real problem I see in the usual arguments in favor of static 
checking is that they start by defining a set of bugs they want to get 
rid of, then define a static type system that can get rid of these bugs, 
and finally "prove" the usefulness of that type system by formally 
proving that the type system indeed helps to get rid of these bugs. This 
is a circular argument.

What is really needed is an empirical evaluation of static type systems: 
Under what settings do they help? Are there certain kinds of 
applications that require static type systems more than others? Is this 
perhaps just a matter of style? Does a higher criticality require more 
static checks, or rather a more complete test suite in practice? And so 
forth.

Such empirical studies are largely missing, and this is clearly a bad 
sign for the field of computer science in general.

Until we have some hard data on these issues, we are simply left alone 
with our own prejudices and experiences.

However, one thing should be clear: it's the job of the proponents of 
static type systems to make a good case for them, and not the job of the 
proponents of dynamic type systems to disprove their value. You can't 
come up with an arbitrary piece of technology, claim that it increases 
the quality of software, and then require other people to disprove this 
claim. Such an approach gives us RUP, UML, and the like...

> Static
> typing is the only direction which will allow this, and the set of bugs it
> can disprove grows larger with more research. Eventually, the cost will
> decrease to the point that it's worthwhile no matter how rarely you make
> mistakes. Anything which offloads work to the computer, no matter how
> small, is a step in the right direction.

This is what proponents of static type systems believe in. But you 
should not portray this as a given fact (or as "the only direction").

To the contrary, it can be shown that static proofs of program 
correctness are limited by the halting problem. This boundary means that 
you can only statically check a strict subset of all programs that might 
be dynamically safe. It is your hope that you will reach a subset that 
is complete enough to cover all relevant cases, and that's ok, but you 
shouldn't disguise hope as truth.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Michael Sullivan
Subject: Why dynamic typing?  (was Re: Why seperate function namespaces?
Date: 
Message-ID: <1fuwodc.1b51jfyz38o59N%michael@bcect.com>
Adrian Kubala <······@sixfingeredman.net> wrote:

> On 6 May 2003, Kent M Pitman wrote:
> > Adrian Kubala <······@sixfingeredman.net> writes:
> > > It's probably true that both these ideas are still too immature and not
> > > pragmatic in the short term.
> > It's probably also true that even when mature, some of us will not
> > want to bother with them.  Our resistance to them is not their
> > maturity.  It is the lack of demonstrated need.

> "Need" is so subjective -- none of us need computers, do we? But, IF it
> were possible for your computer to PROVE that your program didn't have
> a certain set of bugs, that would be a good thing, would it not? Static
> typing is the only direction which will allow this, and the set of bugs it
> can disprove grows larger with more research. Eventually, the cost will
> decrease to the point that it's worthwhile no matter how rarely you make
> mistakes. Anything which offloads work to the computer, no matter how
> small, is a step in the right direction.

Only if the work required to make it happen is less than the work saved.
Given that there is a positive amount of work required to maintain
static typing (ask anyone who has used both static and dynamic languages
fluently, which one is faster to develop), you need to demonstrate a
positive frequency of actual production bugs that this will save, not
just a theoretical existence.

I just looked through some of the static typing defenses in one of the
webspaces linked in this thread, and I am very amused.  
The biggest thing proponents talk about is the advantage of finding bugs
at compile time rather than testing/debug time.  But why is this such a
big advantage?  Because compiling takes so long on large projects.  Why
does compiling take so long on large projects?  Because of *static
typing*.  Run-time type checking/dispatch means that you don't have to
recompile everything when you change a function.

When I write in Lisp, or even applescript[*], I don't recompile the
whole world every time I change a function or class interface.  Going
from writing to testing is a matter of seconds in a good REPL.  Once
I've written the code, and the unit tests, there is no significant
"compilation time" delay for me, as there is in C++.  And I get to test
out the function by typing commands into the very same editor I'm using
to write the functions I'm testing.  It's a very quick cycle, and if I
feel I need to add another test on the fly, it's easy.  

In standard C++ environments, I have to make the computer do all the
work of building a deployment package every time I want to run a simple
unit test.  No wonder it's so important to catch errors at compile time.

But the key here is that all this work is only required *because* of the
insistence on static type checking at compile time, and the lack of an
IDE that takes care of all the bookkeeping necessary to know what's "out
there" in linked libraries so you don't have to recompile the world all
the time.

And remember, people who work in dynamically typed languages,
including a few who have previously been staunch proponents of static
type checking, say that, in practice, type errors invariably show up as
program logic errors that a reasonable set of unit tests will discover.
They rarely, if ever, discover an error *after* unit tests that
static type checking would have uncovered.  Since it's good to do unit
tests *anyway*, what exactly is the point of static type checking,
unless their experience is unusual?  

I agree with Pascal that the burden of proof is on those who support
static type checking to show that some of these bugs will *not* be found
by a reasonable test-driven process, or that a test-driven process is
unnecessary given a good enough static type system.  *Then* you still
have to show that those benefits are worth any costs.  But AFAICT, no
one has shown that the benefits are real in practice at all.


Michael

[*] assuming I'm using Smile as an editor which acts about as much like
a lisp REPL as you can manage with applescript.  Even with more
traditional IDEs, I don't have to recompile the world, just the file I'm
working on.
From: Kent M Pitman
Subject: Re: Why dynamic typing?  (was Re: Why seperate function namespaces?
Date: 
Message-ID: <sfw65odsflp.fsf@shell01.TheWorld.com>
·······@bcect.com (Michael Sullivan) writes:

> Adrian Kubala <······@sixfingeredman.net> wrote:
> 
> > "Need" is so subjective -- none of us need computers, do we? But, IF it
> > were possible for your computer to PROVE that your program didn't have
> > a certain set of bugs, that would be a good thing, would it not? Static
> > typing is the only direction which will allow this, and the set of bugs it
> > can disprove grows larger with more research. Eventually, the cost will
> > decrease to the point that it's worthwhile no matter how rarely you make
> > mistakes. Anything which offloads work to the computer, no matter how
> > small, is a step in the right direction.
> 
> Only if the work required to make it happen is less than the work saved.
> Given that there is a positive amount of work required to maintain
> static typing (ask anyone who has used both static and dynamic languages
> fluently, which one is faster to develop), you need to demonstrate a
> positive frequency of actual production bugs that this will save, not
> just a theoretical existence.

You also need to demonstrate that CL is actively working against static
typing.  The only thing it's working against is REQUIRING a programmer
to use static typing.  Any programmer who believes Adrian's pitch can
ALREADY declare the type of each and every argument to every function,
and any compiler writer who wants to heavily optimize based on types
(viz, CMU CL) is already welcome to do so.

So nothing about "Common Lisp" is keeping things from moving ahead in the
direction of static type checking.

What is being held back?

 - People who want to force programmers to do this over their own wishes
   are being kept from forcing those programmers.

 - People who want to force vendors to do this over their own desire to 
   spend their people and money resources on things that matter to them
   are being kept from forcing those vendors.  [By vendors I, as usual,
   mean to speak very generally, including makers of free implementations.
   I could call them 'implementors' but this would miss the sense that 
   even free software vendors claim to be motivated by a sense of their 
   market, and hence a desire to please SOME set of people.  If they fail
   to please that set of people, they will notice them leaving.  If they do
   please them, then we must accept the possibility that they are not 
   clamoring [enough] for static typing.]

Bottom line: There are plenty of languages experimenting with static
typing; we don't have to just line up and be another when we have
another legitimate path to pursue.  If anyone thinks static typing
would improve CL, _nothing_ keeps them from going and implementing
compilers that show this.  And at that point, maybe they'll corner the
market and every other vendor will rush to match the power.  That's
how the free market works.  Government intervention (i.e., the
requirement that this be considered important, notwithstanding lack of
market pressure for it) seems inappropriate unless there is a
demonstration that this is somehow specially an issue that free
markets cannot resolve without government intervention.  I've seen
such arguments made about certain features in rare cases, but this
does not look like one of those features.

At the meta level, the argument made by Adrian above (and others
elsewhere) is that languages should be designed to suit capability.
That's a possible point of view.  I prefer to design languages to suit
goals.  That means in the future, as capability grows, my language
will yield better and better results for existing programs.  Other
languages, because they targeted year 1980 or 1990 or 2000 capability,
will grow more and more impoverished with time because they cannot
accommodate upgrades in proof technology without redesign.
(Deja vu.  I just made a similar point over in Slashdot.)
From: Ray Blaak
Subject: Re: Why dynamic typing?  (was Re: Why seperate function namespaces?
Date: 
Message-ID: <u8yt9gxb9.fsf@STRIPCAPStelus.net>
·······@bcect.com (Michael Sullivan) writes:
> I just looked through some of the static typing defenses in one of the
> webspaces linked in this thread, and I am very amused.  
> The biggest thing proponents talk about is the advantage of finding bugs
> at compile time rather than testing/debug time.  But why is this such a
> big advantage?  Because compiling takes so long on large projects.  

Uh, no. It is because it is found more "automatically". 

By relying on a runtime error, you may not encounter the bug at all, since
testing might not have properly covered it.

No need to point out that not all typing problems can be caught at compile
time, everyone knows that. 

The point is that static typing believers feel a useful set of errors are
caught automatically, enough to be worth the pain of declaring the type
information.

> And remember, people who work in dynamically typed languages,
> including a few who have previously been staunch proponents of static
> type checking, say that, in practice, type errors invariably show up as
> program logic errors that a reasonable set of unit tests will discover.

Not in my experience. Type errors are usually brain farts. 

> They rarely, if ever, discover an error *after* unit tests that
> static type checking would have uncovered.  Since it's good to do unit
> tests *anyway*, what exactly is the point of static type checking,
> unless their experience is unusual?  

I prefer automated support for catching certain classes of errors, freeing me
to concentrate on more important ones. 

I don't want to have to write unit tests to show that yes indeed you can't
pass a string where a number was expected. That's tedious. I want instead to
know that such a thing is impossible by construction.

Note, also, I prefer languages where one can choose the level of strictness
that is needed, as appropriate to a situation. 

The static vs dynamic debate is not a matter of extremes. Instead (and as
usual) there is a set of considered tradeoffs that one can/should make for
different situations.

The important thing is let your slave computer do as much work for you as
possible, letting you concentrate on the fun stuff. 

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Michael Sullivan
Subject: Re: Why dynamic typing?
Date: 
Message-ID: <1fuyibu.1indl9k145imtcN%michael@bcect.com>
Ray Blaak <········@STRIPCAPStelus.net> wrote:

> ·······@bcect.com (Michael Sullivan) writes:
> > I just looked through some of the static typing defenses in one of the
> > webspaces linked in this thread, and I am very amused.  
> > The biggest thing proponents talk about is the advantage of finding bugs
> > at compile time rather than testing/debug time.  But why is this such a
> > big advantage?  Because compiling takes so long on large projects.
> 
> Uh, no. It is because it is found more "automatically". 

I dunno.  Finding runtime errors in obvious unit tests feels pretty
automatic to me when I'm working in a REPL environment.

> By relying on a runtime error, you may not encounter the bug at all, since
> testing might not have properly covered it.

dynamic typing != weak typing.

If the compiler would catch a type error in C++, then there are a few
possible things that might happen in a dynamically typed language:

1. As soon as a function is called with an argument it cannot handle,
there will be a run-time error.  This kind of error will be flagged by
*any* reasonable unit test.  It's not going to sit hidden in your code
for weeks/months/years, unless you don't test at all.  (See the sketch
after this list.)

2. There is a standard way to convert types or operate on the given
(unexpected) type that gives correct results in all needed cases.  In
this case, the type error isn't really an error, won't become an error,
and why worry?  C++ will, of course, make you worry.

3.   There is a standard way to convert types or operate on the given
(unexpected) type that gives *incorrect* results in all needed cases.
This will fail with any reasonable unit test:  see category 1.

4.  There is a standard way to convert types or operate on the given
(unexpected) type that gives correct results in *some*, but not all
cases.   This last is the only dangerous case, and it's only really
dangerous for the subset where it gives correct results often enough to
pass a reasonable set of unit tests.  
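
To make category 1 concrete, here's a throwaway sketch (PERIMETER and
TEST-PERIMETER are made-up names, not code from anything real): the brain
fart lives in the code under test, and any test that exercises it trips
over the run-time error immediately.

(defun perimeter (side-length)
  ;; Brain fart: "4" was meant to be 4.
  (* "4" side-length))

(defun test-perimeter ()
  ;; Any reasonable unit test exercises the multiplication and
  ;; signals a TYPE-ERROR on the spot -- the bug cannot hide.
  (assert (= (perimeter 3) 12)))

;; (test-perimeter) => error: "4" is not of type NUMBER (wording varies
;; by implementation).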

What experience I have tells me that typos generally fall into category
1 or category 3, that there are a fair number of useful perversions that
fall into category 2 (i.e. impossible or painful in strong statically
typed languages, no problem in dynamically typed languages), and that
category 4 errors are relatively rare unless I'm specifically wading in
dangerous waters (a la "reinterpret cast" in C++).  These are waters
that almost don't exist in languages where the innards are better
abstracted, unless you are specifically pushing the envelope.

Admittedly, my preference is for a language (like CL) where you can be
type strict when necessary (probable category 4 situations), but most of
the time not worry about it.

> No need to point out that not all typing problems can be caught at compile
> time, everyone knows that. 

> The point is that static typing believers feel a useful set of errors are
> caught automatically, enough to be worth the pain of declaring the type
> information.

Of course they (you?) believe that.  They'd be going to an awful lot of
trouble for nothing if they didn't.  

I don't think they are generally right, unless they are comparing only
to type *unsafe* languages.  And not surprisingly, some of those
believers, having worked with some dynamically typed languages that are
not especially type unsafe, are changing their opinion.  

That said, I've heard really interesting things about working in static
typed languages that use type inference.  But I've not looked at any of
those languages enough to comment intelligently.  It's possible that
they do help catch errors faster/easier.  But I just don't find these
kinds of errors to be a big enough deal to worry about it.  

> > And remember what people who work in dynamically typed languages,
> > including a few who have previously been staunch proponents of static
> > type checking, say: that, in practice, type errors invariably show up as
> > program logic errors that a reasonable set of unit tests will discover.

> Not in my experience. Type errors are usually brain farts. 

Yes, they are brain farts.  My point is that they are brain farts which
will generally make unit tests fail.  You don't have to test *for* type
errors to catch them.

> > That they rarely, if ever, discover an error *after* unit tests, that
> > static type checking would have uncovered.  Since it's good to do unit
> > tests *anyway*, what exactly is the point of static type checking,
> > unless their experience is unusual?  

> I prefer automated support for catching certain classes of errors, freeing me
> to concentrate on more important ones. 
 
> I don't want to have to write unit tests to show that yes indeed you can't
> pass a string where a number was expected. That's tedious. I want instead to
> know that such a thing is impossible by construction.

I want to write code where if I pass a string with the value "98" it can
get treated as the number 98, and if I pass a string with the value
"foo" it gets an error.


> Note, also, I prefer languages where one can choose the level of strictness
> that is needed, as appropriate to a situation. 
> 
> The static vs dynamic debate is not a matter of extremes. Instead (and as
> usual) there is a set of considered tradeoffs that one can/should make for
> different situations.
> 
> The important thing is to let your slave computer do as much work for you as
> possible, letting you concentrate on the fun stuff. 
From: Ray Blaak
Subject: Re: Why dynamic typing?
Date: 
Message-ID: <u1xz1w36l.fsf@STRIPCAPStelus.net>
·······@bcect.com (Michael Sullivan) writes:
> If the compiler would catch a type error in C++, then there are a few
> possible things that might happen in a dynamically typed language:
> 
> 1. As soon as a function is called with an argument it cannot handle,
> there will be a run-time error.  This kind of error will be flagged by
> *any* reasonable unit test.  It's not going to sit hidden in your code
> for weeks/months/years, unless you don't test at all.

This is the heart of what I dispute. I believe that it is quite common to have
code paths that are not executed by even reasonable unit tests. Such uncovered
paths are prime candidates for undiscovered type errors.

Furthermore, I don't want to even write the tests that exercise type
errors. That's boring. Computers can find that stuff.

Unit testing is fundamentally a manual process, and thus is prone to error.
Those tests that can be readily automatically generated correspond to type
errors anyway. 

To me this is static typing in another form, or rather "computer verified type
checking". The only difference is whether it is done now or later. Either way,
as long as the computer does the tedious stuff, I'm happy.

> I want to write code where if I pass a string with the value "98" it can
> get treated as the number 98, and if I pass a string with the value
> "foo" it gets an error.

Nothing wrong with that. I just want to be able to control that process as
precisely as I need to, if at all, if not at all.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Matthew Danish
Subject: Re: Why dynamic typing?
Date: 
Message-ID: <20030515025717.L22493@mapcar.org>
On Wed, May 14, 2003 at 10:57:32PM +0000, Ray Blaak wrote:
> Furthermore, I don't want to even write the tests that exercise type
> errors. That's boring. Computers can find that stuff.

The impression I was under is that one doesn't write tests to exercise type
errors, but rather you pick them up naturally as you run your other tests.

-- 
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
From: Tim Bradshaw
Subject: Re: Why dynamic typing?
Date: 
Message-ID: <ey3y918y2v5.fsf@cley.com>
* Ray Blaak wrote:

> This is the heart of what I dispute. I believe that it is quite
> common to have code paths that are not executed by even reasonable
> unit tests. Such uncovered paths are prime candidates for
> undiscovered type errors.

This kind of statement is at the core of the problem with all these
arguments.  They all come down to `I believe x': in other words,
they're all religious, or at best pre-scientific arguments.  No one
actually *knows* anything: one group of people simply has faith in
something, and another has faith in something else.  We are, in fact,
living in the middle ages.

--tim
From: Kimmo T Takkunen
Subject: Re: Why dynamic typing?
Date: 
Message-ID: <slrnbc72tf.o7.ktakkune@sirppi.helsinki.fi>
In article <···············@cley.com>, Tim Bradshaw wrote:
> * Ray Blaak wrote:
> 
>> This is the heart of what I dispute. I believe that it is quite
>> common to have code paths that are not executed by even reasonable
>> unit tests. Such uncovered paths are prime candidates for
>> undiscovered type errors.
> 
> This kind of statement is at the core of the problem with all these
> arguments.  They all come down to `I believe x': in other words,
> they're all religious, or at best pre-scientific arguments.  No one
> actually *knows* anything: one group of people simply has faith in
> something, and another has faith in something else.  We are, in fact,
> living in the middle ages.
> 
> --tim

Here is one data point.

Years ago I worked on a virtual machine implementation (written in
C). We did extensive unit testing and had thousands of test cases
with small snippets of VM code. We were quite proud of all the testing
that was done.

After all this work we evaluated a new unit test tool [1] that
instrumented our code and gave us code path execution statistics.  I
can't remember the exact numbers, but there were code paths (<5%) we did
not cover (unit tests + integration tests). After getting all those
execution statistics, covering the remaining paths was easier. We also
found bugs in test and production code using this tool.

My experience is that covering all code paths with unit tests is hard
without tools. Mostly not because it's intellectually hard but
because of small human errors. With good tools everything is different.

[1] http://www.testwell.fi/
 
 Kimmo
-- 
((lambda (integer) ;; http://www.iki.fi/kt/ 
   (coerce (loop for i upfrom 0 by 8 below (integer-length integer)
                 collect (code-char (ldb (byte 8 i) integer))) 'string))
 100291759904362517251920937783274743691485481194069255743433035)
From: Ray Blaak
Subject: Re: Why dynamic typing?
Date: 
Message-ID: <uel30m7md.fsf@STRIPCAPStelus.net>
········@cc.helsinki.fi (Kimmo T Takkunen) writes:
> My experience is that covering all code paths with unit tests is hard
> without tools. Mostly not because it's intellectually hard but
> because of small human errors. With good tools everything is different.

This has been my experience also. With tool support one can easily find the
uncovered paths. Unfortunately, these kinds of tools can be expensive to run,
and such analysis is time-consuming.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Ray Blaak
Subject: Re: Why dynamic typing?
Date: 
Message-ID: <uhe7wm7qh.fsf@STRIPCAPStelus.net>
Tim Bradshaw <···@cley.com> writes:
> * Ray Blaak wrote:
> 
> > This is the heart of what I dispute. I believe that it is quite
> > common to have code paths that are not executed by even reasonable
> > unit tests. Such uncovered paths are prime candidates for
> > undiscovered type errors.
> 
> This kind of statement is at the core of the problem with all these
> arguments.  They all come down to `I believe x': in other words,
> they're all religious, or at best pre-scientific arguments.  No one
> actually *knows* anything: one group of people simply has faith in
> something, and another has faith in something else.  We are, in fact,
> living in the middle ages.

Fair enough. 

I base my belief, however, on my direct experience with software
development. We always had unit tests, and they were never as exhaustive as
they theoretically should be, either due to ignorance or because it was
simply impractical.

But you are right, it is still a belief. The best thing we can do is not to
get too fanatical about our beliefs and be open to alternative ways of
thinking. That's the value I get out of these discussions.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Pascal Costanza
Subject: Re: Why dynamic typing?
Date: 
Message-ID: <b9vmc8$13q2$1@f1node01.rhrz.uni-bonn.de>
Ray Blaak wrote:

> Unit testing is fundamentally a manual process, and thus is prone to error.

Implementing type checkers is also a manual process, and thus also prone 
to error.

Now think about the factors that make you trust a type checker more than 
unit tests nonetheless. What are those factors? Can they be integrated 
in a test-driven approach?


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Tim Bradshaw
Subject: Re: Why dynamic typing?
Date: 
Message-ID: <ey3ptmky0zy.fsf@cley.com>
* Pascal Costanza wrote:
> Implementing type checkers is also a manual process, and thus also
> prone to error.

This has a fundamental misunderstanding which I've seen before.  To
de-emotionalise it somewhat I'll give another example of the same
misunderstanding (which I've seen in real life).

    Claim: bounds-checked implementations of languages often offer
    better safety than non-bounds-checked ones.

    Counter claim: but if you use a bounds-checked implementation then
    its runtime, and in particular the bounds-checker will (often) be
    implemented in a non-bounds-checked system and thus is prone to
    error.  Thus the implementation isn't actually any safer than
    using a non-bounds-checked one, and doing bounds-checking
    yourself.

Well, there's something wrong with this isn't there?  What's wrong is
to do with scaling issues.  If you use a bounds-checked
implementation, then the bounds-checker (and the rest of the runtime)
is a chunk of code which can be very carefully tested, (or even,
perhaps, proved correct).  Once it's done, it will then happily bounds
check *all* programs written in that implementation for no extra
implementational work.  If, instead, you check bounds yourself, the
amount of implementational work you need to do scales roughly as the
size of your program, and the amount of work you need to do to check
it is correct probably scales the same way (and perhaps worse).  The
only way out of this is to write a bounds-checker, and then write your
application in an idiom which uses that checker.  But that's actually
just using a bounds-checked implementation.

The same goes for a type-checker.  Yes they are no doubt hard to write
and potentially buggy, but once you have one working then you don't
have to do more work, whereas with unit tests you do.

(Nothing in this article should be taken to imply that I like
statically-typed languages.  I don't, and I like their advocates even
less.  I'm just trying to explain a misconception.)

--tim
From: Pascal Costanza
Subject: Re: Why dynamic typing?
Date: 
Message-ID: <ba00nv$tak$1@f1node01.rhrz.uni-bonn.de>
Tim Bradshaw wrote:
> * Pascal Costanza wrote:
> 
>>Implementing type checkers is also a manual process, and thus also
>>prone to error.
> 
> This has a fundamental misunderstanding which I've seen before.  To
> de-emotionalise it somewhat I'll give another example of the same
> misunderstanding (which I've seen in real life).

Yes, it has the misunderstanding that you have pointed out. That's the 
point of my argument. People don't trust unit tests because of the same 
misunderstanding.

[excellent example]

> The same goes for a type-checker.  Yes they are no doubt hard to write
> and potentially buggy, but once you have one working then you don't
> have to do more work, whereas with unit tests you do.

Your argument is mainly based on the assumption that type checkers are 
tested more rigorously - which essentially means more often IMHO. 
Advocates of unit tests don't say that occasional unit testing is good. 
They say that you need to do it regularly, on every change of your code. 
This is what makes them stable.

So the central question is: Does static typing offer a qualitative 
improvement over unit tests, or is it just a matter of quantity? If it 
is just a matter of quantity then you can achieve more or less the same 
rigor with unit tests as with static type checks. Just provide more unit 
tests, and execute them more often.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Matthias
Subject: Re: Why dynamic typing?
Date: 
Message-ID: <ba0aai$hob$1@trumpet.uni-mannheim.de>
Pascal Costanza wrote:
> Tim Bradshaw wrote:
>> The same goes for a type-checker.  Yes they are no doubt hard to write
>> and potentially buggy, but once you have one working then you don't
>> have to do more work, whereas with unit tests you do.
> 
> Your argument is mainly based on the assumption that type checkers are
> tested more rigorously - which essentially means more often IMHO.
> Advocates of unit tests don't say that occasional unit testing is good.
> They say that you need to do it regularly, on every change of your code.
> This is what makes them stable.

If I understood Tim correctly, his argument was that it is more economical 
to check the finite amount of code necessary to implement a type checker 
than the potentially infinite amount of code which might be written in any 
programming language.

> So the central question is: Does static typing offer a qualitative
> improvement over unit tests, or is it just a matter of quantity. If it
> is just a matter of quantity then you can achieve more or less the same
> rigor with unit tests as with static type checks. Just provide more unit
> tests, and execute them more often.

It is a matter of quantities: Of finite and infinite ones. :-)

Matthias
From: Tim Bradshaw
Subject: Re: Why dynamic typing?
Date: 
Message-ID: <ey3isscxkh8.fsf@cley.com>
* Matthias  wrote:

> If I understood Tim correctly, his argument was that it is more
> economical to check the finite amount of code necessary to
> implement a type checker than the potentially infinite amount of
> code which might be written in any programming language.

Yes, you understood me correctly.  The type checker is a finite thing
which gets written, debugged and tested (including unit tested).  Then
it gets used repeatedly as part of writing other programs.

To take it to the obvious extreme: the *unit test* framework is
something which can itself have bugs.  Does that mean we don't use
unit tests?

--tim
From: Pascal Costanza
Subject: Re: Why dynamic typing?
Date: 
Message-ID: <ba0fub$147k$1@f1node01.rhrz.uni-bonn.de>
Tim Bradshaw wrote:
> * Matthias  wrote:
> 
>>If I understood Tim correctly, his argument was that it is more
>>economical to check the finite amount of code necessary to
>>implement a type checker than the potentially infinite amount of
>>code which might be written in any programming language.
> 
> Yes, you understood me correctly.  The type checker is a finite thing
> which gets written, debugged and tested (including unit tested).  Then
> it gets used repeatedly as part of writing other programs.
> 
> To take it to the obvious extreme: the *unit test* framework is
> something which can itself have bugs.  Does that mean we don't use
> unit tests?

No, I don't think so. I think that both unit testing and static type 
checking only work when they take on a substantially different view on 
the code than the code itself.

This might be the reason why both unit testing and static type checking 
by inference work better than, say, design by contract and explicit 
typing. In unit testing you deliberately try to break your own code and 
with type inferencing, things get checked that you haven't made explicit 
in your code. On the other hand, design by contract (and certain kinds 
of assertions) only restate what the programmer already thought anyway, 
and the same holds for explicit typing.

Maybe the camps are divided along the wrong borders.

Yes, I think that's the best way to summarize my view so far: Any 
approach that allows you to get a substantially new perspective on your 
code is potentially useful, whereas any approach that just redundantly 
repeats and affirms your old perspective(s) is probably useless.


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Peter Seibel
Subject: Re: Why dynamic typing?
Date: 
Message-ID: <m3u1bwvwtg.fsf@javamonkey.com>
Pascal Costanza <········@web.de> writes:


> In unit testing you deliberately try to break your own code 

[snip stuff about static typing]

> On the other hand, design by contract (and certain kinds of
> assertions) only restate what the programmer already thought anyway,

[more snippage]

> Maybe the camps are divided along the wrong borders.
> 
> Yes, I think that's the best way to summarize my view so far: Any
> approach that allows you to get a substantially new perspective on
> your code is potentially useful, whereas any approach that just
> redundantly repeats and affirms your old perspective(s) is probably
> useless.

Why do you expect that if a programmer writes unit tests they will
automatically get a "substantially new perspective" but that when they
write assertions they won't?

I'm actually in favor of unit testing and do think the programmer will
uncover assumptions they made when they wrote the code[1]. But by the
same token, I think programmers who make good use of assertions, also
use them as another way to think different thoughts about their code
and are likely to discover assumptions that they might not if they
didn't think about assertions.

Also, assertions are quite powerful for documenting and checking what
are supposed to be shared assumptions when multiple people work on a
project.

-Peter

[1] Assuming they're not actually writing the tests first in which
case the dynamic is yet another thing.

-- 
Peter Seibel                                      ·····@javamonkey.com

  The intellectual level needed   for  system design is  in  general
  grossly  underestimated. I am  convinced  more than ever that this
  type of work is very difficult and that every effort to do it with
  other than the best people is doomed to either failure or moderate
  success at enormous expense. --Edsger Dijkstra
From: Pascal Costanza
Subject: Re: Why dynamic typing?
Date: 
Message-ID: <costanza-F3CDCB.00592916052003@news.netcologne.de>
In article <··············@javamonkey.com>,
 Peter Seibel <·····@javamonkey.com> wrote:

> Why do you expect that if a programmer writes unit tests they will
> automatically get a "substantially new perspective" but that when they
> write assertions they won't?

No, not automatically. But it's part of the mindset as proposed by XP.

Trying to break code is the same as trying to think about ways to use 
the code as it wasn't supposed to be used. Stating invariants is "just" 
stating what you suppose to be true about a program.

Note that I am currently in brainstorming mode wrt this thread. I might 
be completely wrong.

> I'm actually in favor of unit testing and do think the programmer will
> uncover assumptions they made when they wrote the code[1]. But by the
> same token, I think programmers who make good use of assertions, also
> use them as another way to think different thoughts about their code
> and are likely to discover assumptions that they might not if they
> didn't think about assertions.

Hmm, maybe I have only seen poor examples until now in this regard.

> Also, assertions are quite powerful for documenting and checking what
> are supposed to be shared assumptions when multiple people work on a
> project.

Yes, that's true.


Pascal
From: Peter Seibel
Subject: Re: Why dynamic typing?
Date: 
Message-ID: <m3u1bvu2vj.fsf@javamonkey.com>
Pascal Costanza <········@web.de> writes:

> In article <··············@javamonkey.com>,
>  Peter Seibel <·····@javamonkey.com> wrote:
> 
> > Why do you expect that if a programmer writes unit tests they will
> > automatically get a "substantially new perspective" but that when
> > they write assertions they won't?
> 
> No, not automatically. But it's part of the mindset as proposed by
> XP.

Yes. I agree--unit testing combined with a particular mindset is what
makes it powerful.

> 
> Trying to break code is the same as trying to think about ways to
> use the code as it wasn't supposed to be used. Stating invariants is
> "just" stating what you suppose to be true about a program.

This is the part where I want to challenge your thinking--I think that
stating invariants is *another* powerful way to think clearly about
what a piece of code is supposed to do. It may be that the nature of
testing makes it easier for people to break out of their normal mode
of thought but I think, with practice, learning to state invariants
clearly can be equally powerful. For example, I may have just written
some code that does whatever. But when I ask myself, okay, what's
*always* true about this code or what *must be true* for this code to
work, that's a mental discipline that's useful in a way similar to
asking *how can I break this* is.

> Note that I am currently in brainstorming mode wrt this thread. I
> might be completely wrong.

No worries. Me too.

> > I'm actually in favor of unit testing and do think the programmer
> > will uncover assumptions they made when they wrote the code[1].
> > But by the same token, I think programmers who make good use of
> > assertions, also use them as another way to think different
> > thoughts about their code and are likely to discover assumptions
> > that they might not if they didn't think about assertions.
> 
> Hmm, maybe I have only seen poor examples until now in this regard.

I'm afraid I don't have any I can just toss off. In fact, my best
experiences with assertions/invariants have been when I'm working on
large systems. When there are a lot of moving pieces, the key is to be
very clear about how they fit together. Every assertion is a little
stake in the ground on the way toward getting that understanding clear
in one's head.

One situation that may show how assertions can be a powerful tool for
understanding is when you have a bit of code and an assertion that you
think you'd like to put in but you can't even figure out if it should
be true or not, i.e. is it actually an invariant. This is not a
theoretical argument--when working on complex systems I have found
myself in that situation more times than I probably care to admit.

In fact the cases I'm thinking of were on a system that I largely pair
programmed. Neither I *nor* my partner could figure out if a
particular assertion we wanted to put in was a true invariant. Not
only did the question of whether that particular assertion was a true
invariant give us a framework to discuss the problem, in each case,
once we figured it out, our understanding of the whole system had
increased in an important way. Often we ended up simplifying the code
because the reason we couldn't figure out whether the proposed
assertion was an invariant was because the code was muddled--it was
handling multiple cases that should have been distinct.

> > Also, assertions are quite powerful for documenting and checking
> > what are supposed to be shared assumptions when multiple people
> > work on a project.
> 
> Yes, that's true.

And, they also work across time--one programmer I need to make sure
shares my assumption is *me*, yesterday or last week or a few minutes
ago. (As a friend of mine once said, "When you read your own code,
always remember it was written by a dumber programmer than you.")

-Peter

-- 
Peter Seibel                                      ·····@javamonkey.com

  The intellectual level needed   for  system design is  in  general
  grossly  underestimated. I am  convinced  more than ever that this
  type of work is very difficult and that every effort to do it with
  other than the best people is doomed to either failure or moderate
  success at enormous expense. --Edsger Dijkstra
From: Pascal Costanza
Subject: Re: Why dynamic typing?
Date: 
Message-ID: <costanza-21CE9B.22444916052003@news.netcologne.de>
In article <··············@javamonkey.com>,
 Peter Seibel <·····@javamonkey.com> wrote:

> In fact the cases I'm thinking of were on a system that I largely pair
> programmed. Neither I *nor* my partner could figure out if a
> particular assertion we wanted to put in was a true invariant. Not
> only did the question of whether that particular assertion was a true
> invariant give us a framework to discuss the problem, in each case,
> once we figured it out, our understanding of the whole system had
> increased in an important way. Often we ended up simplifying the code
> because the reason we couldn't figure out whether the proposed
> assertion was an invariant was because the code was muddled--it was
> handling multiple cases that should have been distinct.

This is a convincing illustration and I like this a lot. I have learned 
something from your post. Thanks a lot for that.


Pascal
From: Ray Blaak
Subject: Re: Why dynamic typing?
Date: 
Message-ID: <un0hn7553.fsf@STRIPCAPStelus.net>
Pascal Costanza <········@web.de> writes:
> I think that both unit testing and static type checking only work when they
> take on a substantially different view on the code than the code itself.

Not true in the case of static type checking. The code itself is describing
(some of) the programmer's intended invariants, allowing the language
environment to detect (a useful amount of) mistaken deviations from such
intentions.

> with type inferencing, things get checked that you haven't made explicit 
> in your code.

Not true in general. Type inferencing is ultimately about guessing, and often
needs help from the programmer. Consider this:

(let ((v 1))
  (setq v 'something-else))

An error or intentional? The type inferencer can only warn. To silence such
warnings the programmer would have to provide the information as to what is
allowed, e.g., something like:

(let ((v 1))
  (declare (type atom v))
  (setq v 'something-else))

or in a dylan-like language:

(let ((v <atom> 1))
  (setq v 'something-else))

> On the other hand, design by contract (and certain kinds of assertions) only
> restate what the programmer already thought anyway, and the same holds for
> explicit typing.

With the very important property that the language environment now has
sufficient information to verify that things are consistent with what the
programmer already thought of and so stated.

> Yes, I think that's the best way to summarize my view so far: Any 
> approach that allows you to get a substantially new perspective on your 
> code is potentially useful, 

No real problem there.

> whereas any approach that just redundantly repeats and affirms your old
> perspective(s) is probably useless.

As baldly stated, that's silly. Programming is about precisely describing your
intentions to the computer, such that it will do what you want.

The whole point is to get what you write to very precisely repeat and affirm
your perspective. Tool support that can verify that what you intended is in
fact happening is a bonus. Shit, *humans* who can review your code and
reaffirm your intentions are critical for developing quality software.

Unless you mean something else, perhaps something like "don't get stuck in a
rut, open your mind to new possibilities...", which is fine.

--
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Pascal Costanza
Subject: Re: Why dynamic typing?
Date: 
Message-ID: <ba0bfd$t94$1@f1node01.rhrz.uni-bonn.de>
Matthias wrote:
> Pascal Costanza wrote:
> 
>>Tim Bradshaw wrote:
>>
>>>The same goes for a type-checker.  Yes they are no doubt hard to write
>>>and potentially buggy, but once you have one working then you don't
>>>have to do more work, whereas with unit tests you do.
>>
>>Your argument is mainly based on the assumption that type checkers are
>>tested more rigorously - which essentially means more often IMHO.
>>Advocates of unit tests don't say that occasional unit testing is good.
>>They say that you need to do it regularly, on every change of your code.
>>This is what makes them stable.
> 
> If I understood Tim correctly, his argument was that it is more economical 
> to check the finite amount of code neccessary to implement a type checker 
> than the potentially infinite amount of code which might be written in any 
> programming language.

There is no way that we will ever have an infinite amount of code. The 
amount of code in the universe will always be finite, and it always has 
the potential to grow. So it doesn't make sense at all to speak about a 
"potentially infinite amount".

>>So the central question is: Does static typing offer a qualitative
>>improvement over unit tests, or is it just a matter of quantity. If it
>>is just a matter of quantity then you can achieve more or less the same
>>rigor with unit tests as with static type checks. Just provide more unit
>>tests, and execute them more often.
> 
> It is a matter of quantities: Of finite and infinite ones. :-)

No, it is not a matter of finite and infinite quantities because we can 
neither have an infinite amount of code nor a potentially infinite 
amount of code.

We can only have "lots of code". But then again, unit testing as 
proposed by its advocates implies that the amount of unit tests grows 
with the same speed as the amount of the code that needs to be tested, 
and that the amount of unit tests is always ahead of the amount of code 
("first write the tests, then write the code to fulfil the tests").


Pascal

-- 
Pascal Costanza               University of Bonn
···············@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)
From: Tim Bradshaw
Subject: Re: Why dynamic typing?
Date: 
Message-ID: <ey3el30xkea.fsf@cley.com>
* Pascal Costanza wrote:
> We can only have "lots of code". But then again, unit testing as
> proposed by its advocates implies that the amount of unit tests grows
> with the same speed as the amount of the code that needs to be tested,
> and that the amount of unit tests is always ahead of the amount of
> code ("first write the tests, then write the code to fulfil the
> tests").

Right.  Unit tests are proportional (at least) to lines of code, while the
type checker / bounds checker is a constant cost.  That's a big difference.

--tim
From: Ray Blaak
Subject: Re: Why dynamic typing?
Date: 
Message-ID: <uk7csm81c.fsf@STRIPCAPStelus.net>
Pascal Costanza <········@web.de> writes:
> Ray Blaak wrote:
> 
> > Unit testing is fundamentally a manual process, and thus is prone to error.
> 
> Implementing type checkers is also a manual process, and thus also prone 
> to error.

Sure, but it is written "once" (at least, it can evolve to be relatively
error-free). Unit tests, on the other hand, are written again and again for
each application.

> Now think about the factors that make you trust a type checker more than 
> unit tests nonetheless. What are those factors? Can they be integrated 
> in a test-driven approach?

Sure. If the type checker runs as part of the test suite, that's fine with me.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Michael Sullivan
Subject: Re: Why dynamic typing?
Date: 
Message-ID: <1fv01rk.13cx6ku1xewgfcN%michael@bcect.com>
Ray Blaak <········@STRIPCAPStelus.net> wrote:

> ·······@bcect.com (Michael Sullivan) writes:
> > If the compiler would catch a type error in C++, then there are a few
> > possible things that might happen in a dynamically typed language:
> > 
> > 1. As soon as a function is called with an argument it cannot handle,
> > there will be a run-time error.  This kind of error will be flagged by
> > *any* reasonable unit test.  It's not going to sit hidden in your code
> > for weeks/months/years, unless you don't test at all.
> 
> This is the heart of what I dispute. I believe that it is quite common to have
> code paths that are not executed by even reasonable unit tests. Such uncovered
> paths are prime candidates for undiscovered type errors.

I'd say they are prime candidates for undiscovered errors of all kinds.
Any non-tested code path is as likely as not to be a broken code path
(IME).  I guess I feel that type errors (at least those which could
possibly elude basic unit tests) are a small enough portion of all
errors (note that many type errors are a result of typos *in the type
declarations*) as to be not worth the hassle in most cases.  

IME, 90% of the time, typing is mostly extra hassle that gains you
little.  I definitely agree that it's a good thing to have as an option,
just not as a forced option.

> Furthermore, I don't want to even write the tests that exercise type
> errors. That's boring. Computers can find that stuff.

You keep saying that.  But test driven design writes the tests as a
specification.  If I want a function to do X, I write the set of tests
that function X must pass in cooperation with or even before I write the
actual code to do X.  Once the function passes all the tests, it's good.
I don't touch it unless it is demonstrated (by a bug or feature request)
that the tests were flawed, insufficient or incomplete.

I'm saying this kind of testing will find 90%+ of common type errors
without ever testing specifically for type, and that much of the
remaining 10% are not *practical* errors -- they are only errors if you
decide to be anal about type strictness.

> > I want to write code where if I pass a string with the value "98" it can
> > get treated as the number 98, and if I pass a string with the value
> > "foo" it gets an error.
 
> Nothing wrong with that. I just want to be able to control that process as
> precisely as I need to, if at all, if not at all.

Well, this I agree with.  I hate working with purely statically typed
languages not for the control options they give you, but for the freedom
they take away.


Michael
From: Kaz Kylheku
Subject: Re: Why dynamic typing?
Date: 
Message-ID: <cf333042.0305160938.5854de44@posting.google.com>
Ray Blaak <········@STRIPCAPStelus.net> wrote in message news:<·············@STRIPCAPStelus.net>...
> ·······@bcect.com (Michael Sullivan) writes:
> > If the compiler would catch a type error in C++, then there are a few
> > possible things that might happen in a dynamically typed language:
> > 
> > 1. As soon as a function is called with an argument it cannot handle,
> > there will be a run-time error.  This kind of error will be flagged by
> > *any* reasonable unit test.  It's not going to sit hidden in your code
> > for weeks/months/years, unless you don't test at all.
> 
> This is the heart of what I dispute. I believe that it is quite common to have
> code paths that are not executed by even reasonable unit tests. Such uncovered
> paths are prime candidates for undiscovered type errors.

Uncovered paths are prime candidates for undiscovered *errors*,
period!

In general, type errors are not any worse than any other kind of
error.

They are worse in static languages. People who understand only static
languages naturally regard run-time type errors as some kind of
supreme evil, because a run-time type error in a language whose
objects are devoid of any type information means that the machine
language translation is blindly mis-using the bitwise representation
of an object. Heck, the machine language program doesn't even know how
big the object is! If a small type is mistaken for a big one, an
out-of-bounds memory reference will result, possibly clobbering an
adjacent object or triggering a machine exception, the recovery from
which will range from nonexistent to unsatisfactory.

In a dynamically typed language, a type error simply means that some
object, whose structure is perfectly well understood to the runtime,
has the wrong property for an operation.

Such errors exist in static programs too: such programs imitate
dynamic typing by giving objects dynamic properties, and inspect those
properties at run time.

For example, in BSD Unix, if you open a regular file and then apply
the readdir() operation, the kernel will see that the struct vnode
object does not have the v_type field value of VDIR, and return an
error.

When you are compiling the kernel, it will tell you that you have
mistakenly used a ``struct mount *'' pointer where a ``struct vnode
*'' pointer is needed, but it won't tell you where a VREG type vnode
is being used as if it were a VDIR.

Such checks had to be painstakingly coded everywhere the struct vnode
type is manipulated in the kernel. Miss one check and the results
could be disastrous: a gaping security hole could be opened, or the
system could be vulnerable to an exploitable crash.

In a dynamic language, they are part of the substrate shared by all
programs. The concern for these checks is concentrated in the language
implementation, where it is handled in an optimal way by code written
by expert programmers who have been honing that language
implementation for years, if not decades.
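
A throwaway Lisp sketch of the contrast (FIRST-LINE is a made-up name; the
exact condition and wording vary by implementation): the caller contains no
hand-written check at all, yet misuse is still caught, because the check
lives in the substrate.

(defun first-line (stream)
  ;; No v_type-style check written here by hand.
  (read-line stream))

;; (first-line 42) => in a safe implementation the runtime itself signals
;; a TYPE-ERROR along the lines of "42 is not a STREAM".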

> Furthermore, I don't want to even write the tests that exercise type
> errors. That's boring. Computers can find that stuff.

Computers cannot find all instances of type errors, unless we
constrain the programming language such that we eliminate all programs
that can contain interesting, deep type errors.

Once you change the programming language such that programs have to be
changed, it's hard to conclude anything. You haven't succeeded in
finding any errors, because you threw away your original program, and
had to write a different program that satisfies the static constraints
in the dumbed-down language, yet which meets the same high level
functional requirements as the original program. To do that, you may
have had to write ten to a hundred times more lines of code.

What is overwhelmingly clear is that whenever human beings instruct
computers in dynamic languages, they are almost without exception
found to be vastly more productive than other human beings who
instruct computers in static languages. More productive means
developing software faster, with fewer errors, fewer lines of code,
and faster response to changing requirements.

> Unit testing is fundamentally a manual process, and thus is prone to error.
> Those tests that can be readily automatically generated correspond to type
> errors anyway. 
> 
> To me this is static typing in another form, or rather "computer verified type
> checking". The only difference is whether it is done now or later. Either way,
> as long as the computer does the tedious stuff, I'm happy.

``Static typing'' and ``computer verified type checking'' are not the
same thing; the former is an instance of the latter. The former relies
on simple information stuffed into symbol tables from declarations
that the human had to write. Type checking in general can involve much
deeper *inferences* about the program behavior, rather than simple
matching.

Static typing in fact pushes the ``tedious stuff'' onto the
programmer, who must fit the program to the constraints which make it
possible by writing reams of declarations, duplicated code and so on.
The human did all the hard inference work to write the program under
the painful constraints of the static type system; the machine has
very little reasoning to do at all, other than just perform some
trivial comparisons.

In the end, static typing does not banish dynamic types from appearing
in the code; it merely leaves many a programming language and
programmer without any tools to handle these second-order, ad-hoc
dynamic types.
From: Ray Blaak
Subject: Re: Why dynamic typing?
Date: 
Message-ID: <ufzne3ghu.fsf@STRIPCAPStelus.net>
···@ashi.footprints.net (Kaz Kylheku) writes:
[ good points]

> Static typing in fact pushes the ``tedious stuff'' onto the programmer, who
> must fit the program to the constraints which make it possible by writing
> reams of declarations, duplicated code and so on.  The human did all the
> hard inference work to write the program under the painful constraints of
> the static type system; the machine has very little reasoning to do at all,
> other than just perform some trivial comparisons.

If your static typing is tedious then you are working with a shitty statically
typed language, especially if you have to duplicate code. Declarations should
be "powerful", allowing an economy of expression. Declaring types should be no
more tedious than writing the equivalent unit test.
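
Even in CL terms, for instance, a whole function signature can be declared
in one line (F here is just a placeholder name):

(declaim (ftype (function (number) number) f))

which is hardly more typing than the corresponding assert in a unit test.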

One should not have to "fight" the type system. Instead, one chooses what
abstractions to have and declares *their* abstractions to the language
environment, which then enforces them. 

Ideally, if you want full dynamic behaviour, just work with the "any" root
type everywhere instead of specific, restrictive types.

Or, and of course, just use a fully dynamic language directly.

> In the end, static typing does not banish dynamic types from appearing
> in the code; it merely leaves many a programming language and
> programmer without any tools to handle these second-order, ad-hoc
> dynamic types.

"Real" (as in practically useful) static languages always have some aspect of
dynamism to them. Consider object oriented programming with polymorphic method
calls. Consider subtyping, which all useful static languages have.

Also, it should be known by all, especially static typing freaks, that it is
impossible to remove runtime errors completely (languages like ML push this to
the limit by forcing the programmer to define their own runtime errors, but I
digress).

I have said this before here and I'll say it again: the static vs dynamic
typing debate is not a matter of two opposing philosophies. In reality there is
a single continuum of typing from completely static to completely dynamic.

Different programming languages live on different points along this continuum.
The best programming languages (at least in terms of typing) let the programmer
choose where along this line they prefer to be.

At this point, I like Dylan's flexibility, where one can be as strict or as
dynamic as they prefer. My ideal language would be Dylan's type abilities with
Lisp's syntax.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Kent M Pitman
Subject: Re: Why dynamic typing?
Date: 
Message-ID: <sfwof22yak5.fsf@shell01.TheWorld.com>
Ray Blaak <········@STRIPCAPStelus.net> writes:

> If your static typing is tedious then you are working with a shitty
> statically typed language, especially if you have to duplicate
> code. Declarations should be "powerful", allowing an economy of
> expression. Declaring types should be no more tedious than writing
> the equivalent unit test.

What if I don't want to write the equivalent unit test?

Seriously.

What if I only want to write tests for code that I'm going to release.
And what if that code is only one in one thousand of every programs I
write?

I often write programs merely to help me think and then to throw away.
I'd rather spend the time writing tests for things I do plan to use
on an ongoing basis.  I'd rather not be made to write "tests" for
mere passing thoughts.

I addressed this issue recently at Slashdot.  Rather than repeat myself,
I'll just paste a pointer:

http://slashdot.org/comments.pl?sid=64101&cid=5954537
From: Peter Seibel
Subject: Re: Why dynamic typing?
Date: 
Message-ID: <m3u1busmo8.fsf@javamonkey.com>
Kent M Pitman <······@world.std.com> writes:

> I addressed this issue recently at Slashdot. Rather than repeat
> myself, I'll just paste a pointer:
> 
> http://slashdot.org/comments.pl?sid=64101&cid=5954537

Did you mean "optionally weak" in the first sentence, rather than
"optionally dynamic"?

-Peter

-- 
Peter Seibel                                      ·····@javamonkey.com

         Lisp is the red pill. -- John Fraser, comp.lang.lisp
From: Kent M Pitman
Subject: Re: Why dynamic typing?
Date: 
Message-ID: <sfwhe7u3c3g.fsf@shell01.TheWorld.com>
Peter Seibel <·····@javamonkey.com> writes:

> Kent M Pitman <······@world.std.com> writes:
> 
> > I addressed this issue recently at Slashdot. Rather than repeat
> > myself, I'll just paste a pointer:
> > 
> > http://slashdot.org/comments.pl?sid=64101&cid=5954537
> 
> Did you mean "optionally weak" in the first sentence, rather than
> "optionally dynamic"?

Probably.
From: Ray Blaak
Subject: Re: Why dynamic typing?
Date: 
Message-ID: <ud6ii3atp.fsf@STRIPCAPStelus.net>
Kent M Pitman <······@world.std.com> writes:
> Ray Blaak <········@STRIPCAPStelus.net> writes:
> > If your static typing is tedious then you are working with a shitty
> > statically typed language, especially if you have to duplicate
> > code. Declarations should be "powerful", allowing an economy of
> > expression. Declaring types should be no more tedious than writing
> > the equivalent unit test.
> 
> What if I don't want to write the equivalent unit test?
> Seriously.

Well, then don't, of course. I am not trying to convert anyone to static
typing. I am only trying to counteract the idea that it has to be as onerous
as many people seem to think it is.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Pascal Costanza
Subject: Re: Why dynamic typing?
Date: 
Message-ID: <costanza-7C7BE2.22400916052003@news.netcologne.de>
In article <·············@STRIPCAPStelus.net>,
 Ray Blaak <········@STRIPCAPStelus.net> wrote:

> If your static typing is tedious then you are working with a shitty statically
> typed language, especially if you have to duplicate code. Declarations should
> be "powerful", allowing an economy of expression. Declaring types should be no
> more tedious than writing the equivalent unit test.

In a test-driven approach you don't write tests that are "equivalent" to 
declaring types.

Assume you have a function f. Your test cases look something like this.

(defun test-f ()
  (assert (eql (f 5) 105))
  (assert (eql (f 6) 234))
  ...)

You wouldn't add (assert (typep (f 5) 'number)) because this is implied 
in the tests above.

It's also easier to add something like (assert (eql (f -1) 'undefined)) 
- you don't need to adapt any type declarations.

> I have said this before here and I'll say it again: the static vs dynamic
> typing debate is not a matter of two opposing philosophies. In reality there is
> a single continuum of typing from completely static to completely dynamic.
> 
> Different programming languages live on different points along this continuum.
> The best programming languages (at least in terms of typing) let the programmer
> choose where along this line they prefer to be.

I agree.


Pascal
From: Ray Blaak
Subject: Re: Why dynamic typing?
Date: 
Message-ID: <u4r3uawxf.fsf@STRIPCAPStelus.net>
Pascal Costanza <········@web.de> writes:
> In a test-driven approach you don't write tests that are "equivalent" to 
> declaring types.
> 
> Assume you have a function f. Your test cases look something like this.
> 
> (defun test-f ()
>   (assert (eql (f 5) 105))
>   (assert (eql (f 6) 234))
>   ...)
> 
> You wouldn't add (assert (typep (f 5) 'number)) because this is implied 
> in the tests above.

True, but your testing is not complete until you do things like:

  (assert-error (f 'fee-fi-fo-fum) "not a number: fee-fi-fo-fum")

or, depending on the semantics:

  (assert (eql (f "5") 105))
  (assert-error (f "silly") "not a number: silly")

In theory, unit testing has to at least cover boundary cases. In practice, as
Ken pointed out, you might not care.

In a static language, assuming the appropriate restricted types, such tests
are not needed since those cases are not even possible.

> It's also easier to add something like (assert (eql (f -1) 'undefined)) 
> - you don't need to adapt any type declarations.

This is a non-issue. If the function can return such a value, then it should
be tested as such regardless if the language is static or dynamic.

Note also, that being able to return a number or a symbol simultaneously is
not exclusively a dynamic language thing -- it all depends on the return type
defined for the function. Maybe it was <any>. Maybe it was
<number-or-undefined>, declared as:

(define-type <number-or-undefined> 
  (union-type <number> (singleton-type 'undefined)))

This is actually a bad example, however. I myself would never define such a
type. "Error" return values tend to be a bad programming practice since it
allows errors to be ignored or improperly handled. I would instead make the
return type the appropriate natural type for normal execution (numbers, in
this case) and use an exception or error mechanism to handle bad inputs.
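
For what it's worth, in CL itself such a union type would be spelled roughly
as a DEFTYPE (NUMBER-OR-UNDEFINED is of course a made-up name):

(deftype number-or-undefined ()
  '(or number (eql undefined)))

;; (typep 42 'number-or-undefined)         => T
;; (typep 'undefined 'number-or-undefined) => T
;; (typep "silly" 'number-or-undefined)    => NIL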

But don't get me wrong here. I am not actually dissing pure dynamic typing per
se. It has its advantages, and indeed some dynamism is inherently necessary
anyway. My points are:

a) static typing is not as hard as some people think it is
b) many programmers don't do *any* unit testing
c) we need all the help we can get
d) it's all about tradeoffs and what you're willing to live with, and what
   your goals are

--
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Nikodemus Siivola
Subject: Re: Why dynamic typing?
Date: 
Message-ID: <b9vss8$5ggsu$1@midnight.cs.hut.fi>
Michael Sullivan <·······@bcect.com> wrote:

>> Not in my experience. Type errors are usually brain farts. 

> Yes, they are brain farts.  My point is that they are brain farts which
> will generally make unit tests fail.  You don't have to test *for* type
> errors to catch them.

Or they are the kinds of brain farts that simply do not exist in
dynamically typed languages: typos in type names, int instead of long,
etc.

Cheers,

  -- Nikodemus
From: Ray Blaak
Subject: Re: Why dynamic typing?
Date: 
Message-ID: <ubry4m7gl.fsf@STRIPCAPStelus.net>
Nikodemus Siivola <········@kekkonen.cs.hut.fi> writes:
> Michael Sullivan <·······@bcect.com> wrote:
> 
> >> Not in my experience. Type errors are usually brain farts. 
> 
> > Yes, they are brain farts.  My point is that they are brain farts which
> > will generally make unit tests fail.  You don't have to test *for* type
> > errors to catch them.
> 
> Or they are the kinds of brain farts that simply do not exist in
> dynamically typed languages: typos in type names, int instead of long,
> etc.

But there are analogous errors: typos in variable/function names. Would you
prefer to be warned about them immediately (as almost all Lisp systems I have
encountered do -- even ELisp), or would you rather wait till runtime for
things to fail, to rely on unit tests to catch such problems?
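
A trivial sketch of what I mean (ADD-ONE and AD-ONE are made-up names; the
exact warning text varies between implementations):

(defun add-one (x)
  (1+ x))

(defun use-it (x)
  (ad-one x))  ; typo for ADD-ONE

;; Compiling USE-IT immediately gets you something like
;; "Warning: undefined function AD-ONE" -- no unit test required.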

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
········@STRIPCAPStelus.net                    The Rhythm has my soul.
From: Nikodemus Siivola
Subject: Re: Why dynamic typing?
Date: 
Message-ID: <ba0mcl$5efft$1@midnight.cs.hut.fi>
Ray Blaak <········@stripcapstelus.net> wrote:

> But there are analogous errors: typos in variable/function names. Would you
> prefer to be warned about them immediately (as almost all Lisp systems I have
> encountered do -- even ELisp), or would you rather wait till runtime for
> things to fail, to rely on unit tests to catch such problems?

Sure, any error that the system discovers without me lifting a finger is
nice. But as you say, these analogous errors are typically found by
dynamic systems as well as static ones before runtime. 

And it certainly doesn't change the fact that a whole class of errors
(e.g. typos in manifest types, along the lines of 'imt' instead of
'int') ceases to exist when you lose manifest typing. Of course, this
applies to both dynamic typing and static typing with inference.

Cheers,

  -- Nikodemus
From: Pekka P. Pirinen
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <ullxjudwo.fsf@globalgraphics.com>
Tim Bradshaw <···@cley.com> writes:
> The normal definition, I think, makes CL a Lisp7:
> 
> - Functions & macros
> - lexical variables
> - special variables
> - types and classes
> - labels (for GO)
> - block names
> - symbols in quoted expressions (such as tags for THROW).

I count four more than Norvig, discounting symbols (details in
<http://groups.google.com/groups?selm=ix6734k5a0.fsf%40gaspode.cam.harlequin.co.uk>):
 - structures
 - setf methods
 - compiler macros
 - methods
 - method combinations
plus, as you say, any number of user-defined namespaces.  Lisp11+.

Some of those could not be squeezed into a single namespace without
ugly hacks, such as C++ name mangling for methods.
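
As a quick sketch, a single symbol can legally inhabit several of these at
once:

(defclass point () ())                      ; class (and type) namespace
(defun point (x y) (cons x y))              ; function namespace
(defvar point (point 0 0))                  ; variable namespace
(define-compiler-macro point (x y)          ; compiler-macro namespace
  `(cons ,x ,y))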
-- 
Pekka P. Pirinen
The worst book in a trilogy is the fourth.
From: Pascal Costanza
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <costanza-B8AC34.03372601052003@news.netcologne.de>
Tom,

Thanks for replying - I am really interested in these things!

In article <··············@corp.supernews.com>,
 ····@emf.emf.net (Tom Lord) wrote:

> 	Pascal:
> 
>         > But I don't see the practical advantages of a 
> 	> Lisp-1,
> 
> It's hard to point to _absolute_ advantages for either lisp-1 or
> lisp-2 because they aren't very different -- just different in
> emphasis and syntax optimizations.

OK.

> In lisp-2, I could program in a style where I always use funcall
> and then I'm programming in lisp-1.
> 
> In lisp-1, I could program in a style where I always use something
> like (get 'symbol 'function) in the CAR of expressions, and then 
> I'm programming in lisp-2.
> 
> It's a case of "Anything you can do, I can do .... well, pretty much
> the same way."

No, not quite. As Erann said before in this thread, a programming 
language is a user interface - so it is not just a matter of whether you 
can do something at all, but rather how easy it is, how well it fits 
your mental model, how well it is balanced against other features of the 
language, and so on.

So, as I said before, a Lisp-1 is probably better when you want/need to 
program in a functional style, say, 90% of the time.
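
(For concreteness, Tom's "always use funcall" style looks like this -- a
sketch:

;; Lisp-1 style within CL: treat the function as an ordinary value
(let ((square (lambda (n) (* n n))))
  (funcall square 4))          ; => 16

;; the usual Lisp-2 spelling of the same thing
(flet ((square (n) (* n n)))
  (square 4))                  ; => 16
)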

> A few non-absolute pragmatic advantages of lisp-1:
> 
> * Um... lexical scoping?
> 
>   My CL (as compared to my Scheme) is certainly rusty.  Perhaps
>   I am full of s here and I'll humbly accept the fish-slap if so:
> 
>   Aren't function binding slots per-symbol, and thus not lexically
>   scoped?  In lisp-1, for example, I can take a function body, and
>   wrap it in a `let' that shadows some globally defined function, and
>   it all just works.  I don't need to go through the body adding
>   FUNCALL or taking FUNCALL out should I happen to remove the shadow
>   binding.

Nope, you are wrong in this regard - functions are by default lexically 
scoped, and the ANSI standard even explicitly states that local function 
definitions cannot be declared to be special (dynamically scoped).

The point is not whether you can redefine a function locally, but what 
extent the redefinition has. With regard to local function 
definitions, Common Lisp behaves just like Scheme.
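
For example (a sketch):

(defun greet () 'global)
(defun caller () (greet))

(flet ((greet () 'local))
  (list (greet) (caller)))    ; => (LOCAL GLOBAL)

The FLET binding shadows GREET only within its lexical body; CALLER, which
refers to the global definition, is unaffected.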

> * macros
> 
>   Especially macros that implement binding constructs or have
>   side-effects on bindings, and macros that have free variables.  In
>   lisp-1, to choose a trivial example, I can have a single macro for
>   `swap-values' where in lisp-2 I'd need `swap-values' and
>   `swap-function-bindings' or worse (see "exploratory programming",
>   below).

No, I think you wouldn't do it like this in Common Lisp. CL defines 
rotatef, which acts like a swap when passed two arguments. You can do 
the following:

(defun f (x) (print x))

(defun g (x) (print (1+ x)))

(rotatef (symbol-function 'f) (symbol-function 'g))

(f 5)
=> 6

So this means that you have a single macro for both cases, but you need 
to pass the right arguments.
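
For example, the same macro swaps value cells just as well:

(defvar *a* 1)
(defvar *b* 2)
(rotatef (symbol-value '*a*) (symbol-value '*b*))
;; or, for variables in general, simply: (rotatef *a* *b*)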

> * pedagogy
> 
>   I'm not a college professor, but I'd bet a quarter that lisp-1 is
>   easier to teach, simply because there's less to learn.  Sure,
>   everybody messes up with `(let ((list ....)) ...)' -- once.  (After
>   which their intuitive understanding of both the evaluation rule and
>   lexical scoping is improved.)

I think students should be taught about both options. It's not up to the 
professor to teach students what he/she perceives as The Right Thing, 
but he/she should give them sufficient information so that they can make 
up their own minds.

> * automatic code transforms
> 
>   A generalization of macros.   The fewer primitive constructs
>   in your language, the easier it is to write high-level transforms.
>   No need for `(cond ((eq (car foo) 'funcall) ...) ...)'.

I don't really understand what you mean here, probably just because I 
don't have enough experience in this regard. Perhaps someone else can 
comment on this.

> * simpler implementation
> 
>   Consider a simple meta-circular interpreter.  The lisp-1 version is
>   smaller and simpler.  I gather people don't _really_ transform
>   everything-binding-related to lambda in the radical manner of
>   RABBIT.SCM in production compilers, but I'm not so sure it's really
>   a dead technique.

Well, "smaller" and "simpler" are aesthetical categories IMHO. ;)

> And the big one, I think, though the most abstract:
> 
> * exploratory programming
> 
>   In lisp-2, I have to decide whether a given function should be
>   treated as the value of a variable or the function slot binding
>   of a symbol.    My decision then becomes spread throughout 
>   the code in the form of the presence or absence of FUNCALL.
>   Then I change my mind about that decision.
> 
>   Worse: I have a package that assumes that decision goes one way, 
>   and applications that use that package that way.   Then I want to 
>   use the same package in an application that makes the decision the
>   other way.

Hmm, I am not quite sure if I understand you correctly. The default in 
Common Lisp is of course to always store functions in function cells. 
Functions are only stored in variables when you want to parameterize a 
function with another function, so the cases are actually very clear cut.
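
That parameterization case looks like this, for instance:

(defun apply-twice (f x)
  (funcall f (funcall f x)))

(apply-twice #'1+ 3)                  ; => 5
(apply-twice (lambda (n) (* n n)) 3)  ; => 81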

Well, OK: when a function from an outer scope was called directly in one 
version of a program, but needs to be passed in a variable in the next 
version, you need to refactor your program accordingly. Is this what you 
mean? If so, I still don't fully understand the second paragraph (about 
packages).

> And the "general principles" one, though this is less directly a
> pragmatic issue:
> 
> * why stop at 2?
> 
>   Maybe, like C, I want a third binding for, say, structure types.
>   The step from 1 to 2 is almost always wrong.   Either stop at 1,
>   or make it N.   But that's just a rule of thumb.
> 
>   Binding is binding is binding.   Why do we need 2?

Common Lisp has more than two namespaces, and you can always create more 
by storing new bindings in hash tables or a-lists.
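
For example (a sketch; define-structure-type is made up):

(defvar *structure-types* (make-hash-table))

(defmacro define-structure-type (name &rest slots)
  `(setf (gethash ',name *structure-types*) ',slots))

(define-structure-type point x y)
(gethash 'point *structure-types*)    ; => (X Y), T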


Pascal

-- 
"If I could explain it, I wouldn't be able to do it."
A.M.McKenzie
From: Joe Marshall
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <r87iehq3.fsf@ccs.neu.edu>
Pascal Costanza <········@web.de> writes:

> I have the following definitions in my startup files:
> 
> (defun open-curl-macro-char (stream char)
>   (declare (ignore char))
>   (let ((forms (read-delimited-list #\} stream t)))
>     `(funcall ,@forms)))
> 
> (set-macro-character #\{ #'open-curl-macro-char)
> (set-macro-character #\} (get-macro-character #\)))
> 
> 
> Now I can write {f args} instead of (funcall f args). 

Wow.  I bet it runs faster, too.
From: Pascal Costanza
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <costanza-1C54FB.19591701052003@news.netcologne.de>
In article <············@ccs.neu.edu>, Joe Marshall <···@ccs.neu.edu> 
wrote:

> > Now I can write {f args} instead of (funcall f args). 
> 
> Wow.  I bet it runs faster, too.

You have forgotten the smiley. ;)


Pascal

-- 
"If I could explain it, I wouldn't be able to do it."
A.M.McKenzie
From: Joe Marshall
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <fznyebhl.fsf@ccs.neu.edu>
Pascal Costanza <········@web.de> writes:

> In article <············@ccs.neu.edu>, Joe Marshall <···@ccs.neu.edu> 
> wrote:
> 
> > > Now I can write {f args} instead of (funcall f args). 
> > 
> > Wow.  I bet it runs faster, too.
> 
> You have forgotten the smiley. ;)

No I didn't.  I was being completely serious.
From: Pascal Costanza
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <costanza-20C600.02284502052003@news.netcologne.de>
In article <············@ccs.neu.edu>, Joe Marshall <···@ccs.neu.edu> 
wrote:

> > > > Now I can write {f args} instead of (funcall f args). 
> > > 
> > > Wow.  I bet it runs faster, too.
> > 
> > You have forgotten the smiley. ;)
> 
> No I didn't.  I was being completely serious.

Hmm, so why do you think it runs faster? {f args} just gets translated 
to (funcall f args) at read time...
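
(With that reader macro installed, you can see the translation at the REPL:

(read-from-string "{f args}")   ; => (FUNCALL F ARGS), 8
)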

?!?

Pascal
From: Michael Israel
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <1aFsa.2972$153.562@news01.roc.ny.frontiernet.net>
"Joe Marshall" <···@ccs.neu.edu> wrote in message
·················@ccs.neu.edu...
> Pascal Costanza <········@web.de> writes:
>
> > In article <············@ccs.neu.edu>, Joe Marshall <···@ccs.neu.edu>
> > wrote:
> >
> > > > Now I can write {f args} instead of (funcall f args).
> > >
> > > Wow.  I bet it runs faster, too.
> >
> > You have forgotten the smiley. ;)
>
> No I didn't.  I was being completely serious.

You can even write:

{format t "~%C Sucks."} ;; ;)===>:)
{format t "~%&so doesWindows."} ;; 2tru2B_funny.
From: Adam Warner
Subject: Re: Why separate function namespaces?
Date: 
Message-ID: <pan.2003.04.30.12.09.27.142430@consulting.net.nz>
Hi Simon H.,

> Of all the functional languages I've used (Scheme, OCaml, etc) I
> notice that Lisp is the only one with a separate namespace for
> functions.  It just recently struck me that it seems to be a rather
> odd and counter-intuitive way of doing things, especially since I
> can't think of a single other language that does it, however I assume
> there's some reason for it...  Or at least an excuse.  ;-)

Ooh, a nice Google Groups+Hotmail troll who's a little short of
imagination. So let's be pathological:

(let ((print "variable namespace"))
  (print
   (block print
     (tagbody
        (go print)
        (print "unreachable code")
      print (print "tagbody namespace")
        (return-from print "block namespace")
      nil)))
  (print "we can still print a string")
  (print print))

Will print:

"tagbody namespace" 
"block namespace" 
"we can still print a string" 
"variable namespace"

(Was this functional enough?)

In a nutshell it's all about expressiveness and allowing the easy
avoidance of namespace collisions without having to resort to less
powerful hygienic macros. I recently learned about another reason from
Christian Queinnec's "Lisp In Small Pieces": Separating the world of
functions from the rest of computations "is a profitable distinction that
all good Scheme compilers exploit, according to [S�n89].... A user of
Lisp2 has to do much of the work of the compiler in that way and thus
understands better what it's worth." [page 41]

Code walking is more difficult. And the book also explains that macros
that expand into lambda forms are "highly problematic".
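
To make the collision point above concrete (WITH-COLLECTED is made up for
illustration): in a Lisp-2 a macro can expand into a binding for a variable
named LIST without breaking calls to the function LIST in its body, no
hygiene required:

(defmacro with-collected (var &body body)
  `(let ((,var '()))
     ,@body
     (nreverse ,var)))

(with-collected list
  (push 1 list)
  (push (first (list 2 3)) list))   ; => (1 2); the function LIST still works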

Regards,
Adam
From: Simon H.
Subject: Re: Why separate function namespaces?
Date: 
Message-ID: <e9904ec5.0304301211.3991cc3f@posting.google.com>
"Adam Warner" <······@consulting.net.nz> wrote in message news:<······························@consulting.net.nz>...

> (let ((print "variable namespace"))
>   (print
>    (block print
>      (tagbody
>         (go print)
>         (print "unreachable code")
>       print (print "tagbody namespace")
>         (return-from print "block namespace")
>       nil)))
>   (print "we can still print a string")
>   (print print))

*laughs*  Why grandmother, what pretty spaghetti code you have.  You
may consider the clue-by-four delivered.

Pax,
S
From: Patrick O'Donnell
Subject: Re: Why seperate function namespaces?
Date: 
Message-ID: <rtfznypw6j.fsf@ascent.com>
······@mail.com (Simon H.) writes:
> Of all the functional languages I've used (Scheme, OCaml, etc) I
> notice that Lisp is the only one with a seperate namespace for
> functions.

And, yet, it functions quite well.
	      
>	      It just recently struck me that it seems to be a rather
> odd and counter-intuitive way of doing things,

I do hope the strike was not painful.

> especially since I can't think of a single other language that does
> it,

Perhaps you have another think coming.  I can immediately think of at
least one.
						       
> however I assume there's some reason for it...

Reason further on the topic, and it may come to you.
						  
> Or at least an excuse.

You're excused.


My objection is to "counter-intuitive".  "Unaccustomed" is perhaps
more accurate.  If you find linguistic distinction between action and
naming counter-intuitive, it may be that your intuitions have been
reshaped to an unnatural degree by considerations of functional
languages.  Consider the natural world, and you'll find many examples
of multiple "namespaces".  The human mind is much more flexible than
your complaint gives it credit for.

		- Pat