From: Charles K Johnson
Subject: Q: Side-effects with blocks and Q 6(a) ACL pp. 80
Date: 
Message-ID: <d216310f.0109132316.159b629e@posting.google.com>
Thanks for helping me out with my last question.  I just
have two more questions. Ok, I'm not in school;
these are just a few things that are bothering me.

1.  On p. 81 of ANSI Common Lisp, it says,

"We have seen progn already.  The expressions within its
body are evaluated in order, and the value of the last is
returned:

> (progn
    (format t "a")
    (format t "b")
    (+ 11 12))
ab
23

Since only the value of the last expression is returned,
the use of progn (or any block) implies side-effects."

Why is the last statement true? A side-effect is when,
in addition to returning a value, the environment is
changed.  Is the statement true because if the forms
are executed, the only way to tell if anything happened
(because no return value is saved) is by changes in
the environment?

But why should that be true in all cases that make use
of blocks? Is a side-effect defined in terms of input/
output, or will a statement such as (defun ...) be
considered a side-effect?

I know that this is something basic, but I'm not really
certain about what's going on.

I wonder if someone can recommend an online paper (or
something from ACM's Digital Library) that discusses
this sort of terminology from a Lisp perspective.   


2.  Question 6(a), ACL p. 80.

The problem asks you to define a function that takes an
assoc list whose elements are (key . value) pairs and returns
the corresponding hash table.

CL-USER 51 > assoc
((! . "exclamation") ($ . "dollar sign") (% . "percentage"))

CL-USER 52 >
(defun assoc-to-hash (assoc)
  (and (consp assoc)
       (let ((hash (make-hash-table)))
         (dolist (pair assoc)
           (setf (gethash (car pair) hash) (cdr pair)))
         hash)))
ASSOC-TO-HASH

CL-USER 53 > 
(maphash #'(lambda (key value)
             (format t "~A ~A~%" key value))
         (assoc-to-hash assoc))
$ dollar sign
! exclamation
% percentage
NIL


Which is what I need.  I was wondering why, when I substitute
push for setf, I get different output.

CL-USER 54 > 
(defun assoc-to-hash (assoc)
  (and (consp assoc)
       (let ((hash (make-hash-table)))
         (dolist (pair assoc)
           (push (cdr pair) (gethash (car pair) hash)))
         hash)))
ASSOC-TO-HASH


CL-USER 55 > 
(maphash #'(lambda (key value)
             (format t "~A ~A~%" key value))
         (assoc-to-hash assoc))
$ (dollar sign)
! (exclamation)
% (percentage)
NIL


Why do I get these extra parentheses?  Shouldn't it be the same
thing, because, after all:

CL-USER 57 > (cdr '(! . "exclamation"))
"exclamation"

Ok, thanks - I really appreciate any pointers,

Charles
(using Lispworks Personal 4.1.20)

From: Frode Vatvedt Fjeld
Subject: Re: Q: Side-effects with blocks and Q 6(a) ACL pp. 80
Date: 
Message-ID: <2h1ylarygm.fsf@dslab7.cs.uit.no>
·················@hotmail.com (Charles K Johnson) writes:

> > (progn
>     (format t "a")
>     (format t "b")
>     (+ 11 12))
> ab
> 23
> 
> Since only the value of the last expression is returned,
> the use of progn (or any block) implies side-effects."
> 
> Why is the last statement true?

It is true in the sense that unless those forms which are "hidden"
inside the block have side-effects, there is (by definition) no reason
to keep them there, and they can/should be removed.

  (progn 1 (+ 2 3) 'a 'b)   == 'b
  (prog1 1 (+ 2 3) 'a 'b)   == 1
  (prog2 1 (+ 2 3) 'a 'b)   == (+ 2 3) == 5
  (tagbody 1 (+ 2 3) 'a 'b) == nil

Note that == here doesn't mean "evaluates to" but rather "is the same
as". For example, an optimizing compiler might make these
substitutions in order to avoid the pointless evaluations.

> But why should that be true in all cases that make use of blocks? Is
> a side-effect defined in terms of input/ output or will a statement
> such as (defun....  be considered a side-effect?

A defun statement is obviously evaluated for its
side-effects. Functionally, defun is defined such that

  (defun name (..) ...) == 'name

But this is not exactly the primary characteristic of DEFUN. Also note
that DEFUN is not a block in the same sense as PROGN etc. are. DEFUN
evaluates none of its arguments. It does, however, wrap its body forms
in a BLOCK form. So the following is true:

  (defun f () 1 (+ 2 3) 'a 'b) == (defun f () 'b)

(And by disassembling this function F in my lisp (Allegro CL) I see
that this compiler indeed does make this substitution.)

> Which is what I need.  I was wondering why, when I substitute push
> for setf, I get different output.

Why would you expect PUSH to do the same thing as SETF? PUSH is
supposed to insert an item at the head of a list, hence the result
will be a list.

   (push <item> <place>)

is about the same as

   (setf <place> (cons <item> <place>))
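
In ASSOC-TO-HASH the <place> is (gethash (car pair) hash), and GETHASH
returns its default -- NIL -- for a key that is not yet present, so the
first PUSH conses the value onto NIL.  A minimal sketch:

  (let ((hash (make-hash-table)))
    ;; no entry for KEY yet, so (gethash 'key hash) => NIL
    (push "value" (gethash 'key hash))
    (gethash 'key hash))
  => ("value")   ; and T as the second value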

-- 
Frode Vatvedt Fjeld
From: Frode Vatvedt Fjeld
Subject: Re: Q: Side-effects with blocks and Q 6(a) ACL pp. 80
Date: 
Message-ID: <2hsndqqfg0.fsf@dslab7.cs.uit.no>
A side-note to my previous posting:

Frode Vatvedt Fjeld <······@acm.org> writes:

> [..] So the following is true:
> 
>   (defun f () 1 (+ 2 3) 'a 'b) == (defun f () 'b)
> 
> (And by disassembling this function F in my lisp (Allegro CL) I see
> that this compiler indeed does make this substitution.)

To my surprise, I found that the Allegro compiler also makes the
following substitution:

  (defun f () (declare (special x)) x 'b) == (defun f () 'b)

The issue is this: Can a dynamic variable lookup be considered to be
side-effect free? What if the variable is unbound, which normally
would cause an unbound-variable condition to be signalled?

-- 
Frode Vatvedt Fjeld
From: Barry Margolin
Subject: Re: Q: Side-effects with blocks and Q 6(a) ACL pp. 80
Date: 
Message-ID: <2too7.29$7o6.347@burlma1-snr2>
In article <··············@dslab7.cs.uit.no>,
Frode Vatvedt Fjeld  <······@acm.org> wrote:
>To my surprise, I found that the Allegro compiler also makes the
>following substitution:
>
>  (defun f () (declare (special x)) x 'b) == (defun f () 'b)
>
>The issue is this: Can a dynamic variable lookup be considered to be
>side-effect free? What if the variable is unbound, which normally
>would cause an unbound-variable condition to be signalled?

Signalling UNBOUND-VARIABLE is only required with the highest safety
setting; otherwise, the consequences are undefined.  When you performed the
above test, did you (declaim (optimize (safety 3)))?
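
Spelled out, the test would be something like this sketch (F is just an
illustrative name; at lower safety settings the consequences are
undefined):

  (declaim (optimize (safety 3)))

  (defun f ()
    (declare (special x))
    x       ; with X unbound, fully safe code should signal UNBOUND-VARIABLE
    'b)

  ;; If the compiler still elides the reference to X, (f) returns B
  ;; without signalling -- the substitution under discussion.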

-- 
Barry Margolin, ······@genuity.net
Genuity, Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Frode Vatvedt Fjeld
Subject: Re: Q: Side-effects with blocks and Q 6(a) ACL pp. 80
Date: 
Message-ID: <2hofodr99h.fsf@dslab7.cs.uit.no>
Barry Margolin <······@genuity.net> writes:

> Signalling UNBOUND-VARIABLE is only required with the highest safety
> setting; otherwise, the consequences are undefined.  When you
> performed the above test, did you (declaim (optimize (safety 3)))?

No, but I tried it now and the dynamic variable reference was still
removed by the compiler. A minor compiler bug, I suppose.

-- 
Frode Vatvedt Fjeld
From: Barry Margolin
Subject: Re: Q: Side-effects with blocks and Q 6(a) ACL pp. 80
Date: 
Message-ID: <JBro7.36$7o6.309@burlma1-snr2>
In article <··············@dslab7.cs.uit.no>,
Frode Vatvedt Fjeld  <······@acm.org> wrote:
>Barry Margolin <······@genuity.net> writes:
>
>> Signalling UNBOUND-VARIABLE is only required with the highest safety
>> setting; otherwise, the consequences are undefined.  When you
>> performed the above test, did you (declaim (optimize (safety 3)))?
>
>No, but I tried it now and the dynamic variable reference was still
>removed by the compiler. A minor compiler bug, I suppose.

Does it also remove explicit calls to SYMBOL-VALUE?

(defun test ()
  (symbol-value 'foo)
  nil)

I wouldn't be surprised if it does.  When implementors designate functions
and expression types as being side-effect-free, which allows the optimizer
to elide them when they're not in a value-returning position, they probably
don't consider error signalling.

-- 
Barry Margolin, ······@genuity.net
Genuity, Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: Kent M Pitman
Subject: Re: Q: Side-effects with blocks and Q 6(a) ACL pp. 80
Date: 
Message-ID: <sfw1yl9a79r.fsf@world.std.com>
Frode Vatvedt Fjeld <······@acm.org> writes:

> A side-note to my previous posting:
> 
> Frode Vatvedt Fjeld <······@acm.org> writes:
> 
> > [..] So the following is true:
> > 
> >   (defun f () 1 (+ 2 3) 'a 'b) == (defun f () 'b)
> > 
> > (And by disassembling this function F in my lisp (Allegro CL) I see
> > that this compiler indeed does make this substitution.)
> 
> To my surprise, I found that the Allegro compiler also makes the
> following substitution:
> 
>   (defun f () (declare (special x)) x 'b) == (defun f () 'b)
> 
> The issue is this: Can a dynamic variable lookup be considered to be
> side-effect free? What if the variable is unbound, which normally
> > would cause an unbound-variable condition to be signalled?

Read my paper 
 http://world.std.com/~pitman/Papers/Exceptional-Situations-1990.html
for a treatment of this general concern--particularly the first couple of
sections.  The subsection named "Efficiency" in section II is particularly
apropos to the above.
From: Janis Dzerins
Subject: Re: Q: Side-effects with blocks and Q 6(a) ACL pp. 80
Date: 
Message-ID: <871ylab4fl.fsf@asaka.latnet.lv>
·················@hotmail.com (Charles K Johnson) writes:

> 1.  On p. 81 of ANSI Common Lisp, it says,
> 
> "We have seen progn already.  The expressions within its
> body are evaluated in order, and the value of the last is
> returned:
> 
> > (progn
>     (format t "a")
>     (format t "b")
>     (+ 11 12))
> ab
> 23
> 
> Since only the value of the last expression is returned,
> the use of progn (or any block) implies side-effects."
> 
> Why is the last statement true? A side-effect is when,
> in addition to returning a value, the environment is
> changed.  Is the statement true because if the forms
> are executed, the only way to tell if anything happened
> (because no return value is saved) is by changes in
> the environment?

If the statements do not have any impact on the environment and their
return values cannot be accessed -- the statements are no-ops and could
as easily not be there for the program to function.

> CL-USER 51 > assoc
> ((! . "exclamation") ($ . "dollar sign") (% . "percentage"))
> 
> CL-USER 52 >
> (defun assoc-to-hash (assoc)
>   (and (consp assoc)
>        (let ((hash (make-hash-table)))
>          (dolist (pair assoc)
>            (setf (gethash (car pair) hash) (cdr pair)))
>          hash)))
> ASSOC-TO-HASH
> 
> CL-USER 53 > 
> (maphash #'(lambda (key value)
>              (format t "~A ~A~%" key value))
>          (assoc-to-hash assoc))
> $ dollar sign
> ! exclamation
> % percentage
> NIL
> 
> 
> Which is what I need.  I was wondering why, when I substitute
> push for setf, I get different output.
> 
> CL-USER 54 > 
> (defun assoc-to-hash (assoc)
>   (and (consp assoc)
>        (let ((hash (make-hash-table)))
>          (dolist (pair assoc)
>            (push (cdr pair) (gethash (car pair) hash)))
>          hash)))
> ASSOC-TO-HASH
> 
> 
> CL-USER 55 > 
> (maphash #'(lambda (key value)
>              (format t "~A ~A~%" key value))
>          (assoc-to-hash assoc))
> $ (dollar sign)
> ! (exclamation)
> % (percentage)
> NIL
> 
> 
> Why do I get these extra parentheses?

Read the CLHS entry on PUSH.
(In short: PUSH acts on a list and returns a list.)

> Shouldn't it be the same thing, because, after all :
> 
> CL-USER 57 > (cdr '(! . "exclamation"))
> "exclamation"

PUSH and SETF are different beasts, and the difference you observe is
just because of that -- the value is stored into the hash table in a
different way.

-- 
Janis Dzerins

  If million people say a stupid thing it's still a stupid thing.
From: ···@itasoftware.com
Subject: Re: Q: Side-effects with blocks and Q 6(a) ACL pp. 80
Date: 
Message-ID: <bskdzq81.fsf@itasoftware.com>
Janis Dzerins <·····@latnet.lv> writes:

> ·················@hotmail.com (Charles K Johnson) writes:
> 
> > Since only the value of the last expression is returned,
> > the use of progn (or any block) implies side-effects."
> > 
> > Why is the last statement true? A side-effect is when,
> > in addition to returning a value, the environment is
> > changed.  Is the statement true because if the forms
> > are executed, the only way to tell if anything happened
> > (because no return value is saved) is by changes in
> > the environment?
> 
> If the statements do not have any impact on the environment and their
> return values cannot be accessed -- the statements are no-ops and could
> as easily not be there for the program to function.

There is a question about this, though:
(progn
  (loop)
  (print x))

The loop statement has no `impact' on the environment, and the return
value cannot be accessed.  Is it legit to throw it out?  I think so,
but others disagree.  My argument is as follows:

1.  The entire form *would* return a value under normal order
    semantics. 

2.  Normal order semantics always produces the same value as
    applicative order when applicative order produces a value.
    (Ignoring such issues as timing of side effects.)  Thus normal
    order semantics can *never* produce the `wrong' value.

3.  The compiler is generally expected to produce code that runs `just
    like' the source code on legal input.  `Just like' meaning that it
    returns the same value and has the same side effects as the
    original code.  It is generally not expected that the compiled
    code exactly reproduce the effect of the source code on illegal
    input.

So I think that semantically the compiler ought to be able to discard
useless looping constructs.  But there are further issues:

1.  Halting is undecidable in general, therefore requiring that the
    compiler preserve the termination characteristics of a piece of
    code means requiring it to preserve a property it cannot
    completely model.

2.  There are optimizations that maintain the invariants of return
    value and side effects, but do not maintain termination.  Among
    these is beta reduction, which is too important to give up.

3.  Writing code that depends on maintaining divergence under compiler
    optimizations is rare and poor design methodology anyway.

So I think that from a practical point of view you ought to throw out
side-effect free code when the return value is not used.

 
From: Kent M Pitman
Subject: Re: Q: Side-effects with blocks and Q 6(a) ACL pp. 80
Date: 
Message-ID: <sfw3d5pa7rw.fsf@world.std.com>
···@itasoftware.com writes:

> Janis Dzerins <·····@latnet.lv> writes:
> 
> > ·················@hotmail.com (Charles K Johnson) writes:
> > 
> > > Since only the value of the last expression is returned,
> > > the use of progn (or any block) implies side-effects."
> > > 
> > > Why is the last statement true? A side-effect is when,
> > > in addition to returning a value, the environment is
> > > changed.  Is the statement true because if the forms
> > > are executed, the only way to tell if anything happened
> > > (because no return value is saved) is by changes in
> > > the environment?
> > 
> > If the statements do not have any impact on the environment and their
> > return values cannot be accessed -- the statements are no-ops and could
> > as easily not be there for the program to function.
> 
> There is a question about this, though:
> (progn
>   (loop)
>   (print x))
> 
> The loop statement has no `impact' on the environment, and the return
> value cannot be accessed.

There is no return value.  The (print x) is not reachable and is what can
be removed.  The (LOOP) cannot be removed.

> ... So I think that from a practical point of view you ought to throw out
> side-effect free code when the return value is not used.

You must take into account reachability.
From: ···@itasoftware.com
Subject: Re: Q: Side-effects with blocks and Q 6(a) ACL pp. 80
Date: 
Message-ID: <7kuxzzv0.fsf@itasoftware.com>
Kent M Pitman <······@world.std.com> writes:

> ···@itasoftware.com writes:
> 
> > Janis Dzerins <·····@latnet.lv> writes:
> > 
> > > ·················@hotmail.com (Charles K Johnson) writes:
> > > 
> > > > Since only the value of the last expression is returned,
> > > > the use of progn (or any block) implies side-effects."
> > > > 
> > > > Why is the last statement true? A side-effect is when,
> > > > in addition to returning a value, the environment is
> > > > changed.  Is the statement true because if the forms
> > > > are executed, the only way to tell if anything happened
> > > > (because no return value is saved) is by changes in
> > > > the environment?
> > > 
> > > If the statements do not have any impact on the environment and their
> > > return values cannot be accessed -- the statements are no-ops and could
> > > as easily not be there for the program to function.
> > 
> > There is a question about this, though:
> > (progn
> >   (loop)
> >   (print x))
> > 
> > The loop statement has no `impact' on the environment, and the return
> > value cannot be accessed.
> 
> There is no return value.  The (print x) is not reachable and is what can
> be removed.  The (LOOP) cannot be removed.

I find this viewpoint weird.  Suppose that it were written as follows:

(defun bar (x)
  (flet ((foo (x)
           (do ((i x (if (oddp i) (1+ (* i 3)) (/ i 2))))
               ((= i 1) nil))))
    (foo x)
    (print x)))

(assume the compiler knows x is a positive integer)

Should the compiler be able to remove the call to the internal
function FOO?  It is not known whether there is a positive integer
value for X which makes the print statement unreachable, so if you
want to preserve `reachability', you have to call foo and wait for it
to finish.

On the other hand, the return value of FOO can never be anything but
the value NIL, and this value is not used anyway.  It is also clear
that no invocation of FOO will cause a side effect in the
environment.  FOO does not signal any errors, either.

It seems to me that the compiler is justified in removing the call to
FOO, even though there may be values of X that cause FOO to not
terminate.

> > ... So I think that from a practical point of view you ought to throw out
> > side-effect free code when the return value is not used.
> 
> You must take into account reachability.

I'm not convinced.
From: Kent M Pitman
Subject: Re: Q: Side-effects with blocks and Q 6(a) ACL pp. 80
Date: 
Message-ID: <sfw4rq1ii6i.fsf@world.std.com>
···@itasoftware.com writes:

> Kent M Pitman <······@world.std.com> writes:
> 
> > There is no return value.  The (print x) is not reachable and is what can
> > be removed.  The (LOOP) cannot be removed.
> 
> I find this viewpoint weird.  Suppose that it were written as follows:
> 
> (defun bar (x)
>   (flet ((foo (x)
>            (do ((i x (if (oddp i) (1+ (* i 3)) (/ i 2))))
>                ((= i 1) nil))))
>     (foo x)
>     (print x)))
> 
> (assume the compiler knows x is a positive integer)
> 
> Should the compiler be able to remove the call to the internal
> function FOO?  It is not known whether there is a positive integer
> value for X which makes the print statement unreachable, so if you
> want to preserve `reachability', you have to call foo and wait for it
> to finish.
> 
> On the other hand, the return value of FOO can never be anything but
> the value NIL, and this value is not used anyway.  It is also clear
> that no invocation of FOO will cause a side effect in the
> environment.  FOO does not signal any errors, either.
> 
> It seems to me that the compiler is justified in removing the call to
> FOO, even though there may be values of X that cause FOO to not
> terminate.

The compiler can't make an observable change to the guaranteed semantics.
If the program might not stop, that has to be tested.  It might be getting
called for the purpose of keeping you from getting to something else.
 
> > > ... So I think that from a practical point of view you ought to throw out
> > > side-effect free code when the return value is not used.
> > 
> > You must take into account reachability.
> 
> I'm not convinced.

The burden is on you to show by what right you remove it.
The language semantics are quite clear about left to right 
order of evaluation.  Compilation may optimize that, but not
change the meaning.  And the meaning is plain.
From: ···@itasoftware.com
Subject: Re: Q: Side-effects with blocks and Q 6(a) ACL pp. 80
Date: 
Message-ID: <lmjck0hb.fsf@itasoftware.com>
Kent M Pitman <······@world.std.com> writes:

> ···@itasoftware.com writes:
>
> > It seems to me that the compiler is justified in removing the call to
> > FOO, even though there may be values of X that cause FOO to not
> > terminate.
> 
> The compiler can't make an observable change to the guaranteed semantics.

Optimizing compilers often make observable changes to semantics (not
the `guaranteed' ones, of course, because then they wouldn't be
`guaranteed').  There are several components to the semantics, 

  1.  The value returned.
  2.  The observable side effects.
  3.  The order in which externally observable side effects take place.
  4.  The resource use of the program (time and space).
  5.  Whether the program halts.

and another 5 for invalid input.

When you compile the program, you probably want the compiled program
to return the same value for valid input, you probably want the same
observable side effects, and you probably want same order of side
effects.  You probably want the compiled program to halt on all values
for which the interpreted program does.

On the other hand, you probably *don't* want the program to take
exactly the same time and space when compiled.

If safety is not a priority, you probably don't care if the
interpreted program and the compiled program exhibit identical
behavior when that behavior is undefined.

> If the program might not stop, that has to be tested.  It might be getting
> called for the purpose of keeping you from getting to something else.

That would be piss-poor design.  This would be similar to writing a
program that references an undefined free variable with the intent of
causing an unbound variable error (and thus preventing you from
getting to something else).  Or putting in a call to
(double-recursive-fibonacci 30) as a delay tactic.

> > > > ... So I think that from a practical point of view you ought to throw out
> > > > side-effect free code when the return value is not used.
> > > 
> > > You must take into account reachability.
> > 
> > I'm not convinced.
> 
> The burden is on you to show by what right you remove it.

Here is my argument.  Consider these programs:

(defun foo (x)
  (when (integerp x)
    (do ((i x (if (evenp i) (/ i 2) (1+ (* i 3)))))
        ((< i 2) nil))))

(defun foo-prime (x)
  nil)

It is easy to show that FOO cannot return a value other than NIL.  It
is also easy to show that there is no X for which 

(eq (foo x) (foo-prime x)) => false

It is also easy to show that both FOO and FOO-PRIME are identical with
respect to externally observable side effects.

This is my justification for replacing FOO with FOO-PRIME.

I have a further pragmatic justification:  many compiler optimizations
such as inlining work by beta-reducing the form at compile time.  This
is a normal-order reduction.  When the arguments are side-effect free
(pure), then it can be shown that normal-order reduction produces the
exact same result as applicative order reduction, when applicative
order returns a result.  But in general, you cannot determine whether
applicative order evaluation will return a result.  Many optimizations
will become impossible if you require the compiler to prove that
argument evaluation terminates. 
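
The difference is easy to simulate in plain CL: thunking an argument
turns an applicative-order call into (roughly) a normal-order one, so a
divergent argument is simply never run.  A sketch:

  ;; Applicative order evaluates the argument first -- this never returns:
  ;;   (funcall (lambda (x) (declare (ignore x)) 'b) (loop))
  ;; Thunking the argument simulates normal order; X is never forced,
  ;; so the divergent computation is never entered:
  (funcall (lambda (x) (declare (ignore x)) 'b)
           (lambda () (loop)))
  => B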

> The language semantics are quite clear about left to right 
> order of evaluation.  Compilation may optimize that, but not
> change the meaning.  And the meaning is plain.

The meaning is less plain than you think.  What is the return value of
function FOO above?
From: Kent M Pitman
Subject: Re: Q: Side-effects with blocks and Q 6(a) ACL pp. 80
Date: 
Message-ID: <sfwadzsjrx6.fsf@world.std.com>
···@itasoftware.com writes:

> Kent M Pitman <······@world.std.com> writes:
> 
> > ···@itasoftware.com writes:
> >
> > > It seems to me that the compiler is justified in removing the call to
> > > FOO, even though there may be values of X that cause FOO to not
> > > terminate.
> > 
> > The compiler can't make an observable change to the guaranteed semantics.
> 
> Optimizing compilers often make observable changes to semantics (not
> the `guaranteed' ones, of course, because then they wouldn't be
> `guaranteed').

But execution of the next form moving left to right IS guaranteed.
I don't see how you can possibly justify this as optional.

> There are several components to the semantics, 
> 
>   1.  The value returned.
>   2.  The observable side effects.
>   3.  The order in which externally observable side effects take place.
>   4.  The resource use of the program (time and space).
>   5.  Whether the program halts.
> 
> and another 5 for invalid input.
> 
> When you compile the program, you probably want the compiled program
> to return the same value for valid input, you probably want the same
> observable side effects, and you probably want same order of side
> effects.  You probably want the compiled program to halt on all values
> for which the interpreted program does.
> 
> On the other hand, you probably *don't* want the program to take
> exactly the same time and space when compiled.

This is not the issue.  Infinite time is not the same as "more time".
Infinite time is the same as "not getting to the next statement".
Even Turing Machines, the "Patient Saint" (so to speak) of infinite 
computability, will not work past infinite loops.
 
> If safety is not a priority, you probably don't care if the
> interpreted program and the compiled program exhibit identical
> behavior when that behavior is undefined.

This use of "probably" is the key.  You simply have no basis for saying
what I probably do or probably don't mean.  The language semantics
say what must happen, and I think that's the end of it unless you ask for
some nonstandard optimization to occur, in which case we are not
debating language semantics.  Any time you opt to breach the language
semantics, you can do anything you like, and "caveat emptor" applies.

> > If the program might not stop, that has to be tested.  It might be getting
> > called for the purpose of keeping you from getting to something else.
> 
> That would be piss-poor design.  This would be similar to writing a
> program that references an undefined free variable with the intent of
> causing an unbound variable error (and thus preventing you from
> getting to something else).  Or putting in a call to
> (double-recursive-fibonacci 30) as a delay tactic.
> 
> > > > > ... So I think that from a practical point of view you ought to throw out
> > > > > side-effect free code when the return value is not used.
> > > > 
> > > > You must take into acount reachability.
> > > 
> > > I'm not convinced.
> > 
> > The burden is on you to show by what right you remove it.
> 
> Here is my argument.  Consider these programs:
> 
> (defun foo (x)
>   (when (integerp x)
>     (do ((i x (if (evenp i) (/ i 2) (1+ (* i 3)))))
>         ((< i 2) nil))))
> 
> (defun foo-prime (x)
>   nil)
> 
> It is easy to show that FOO cannot return a value other than NIL.  It
> is also easy to show that there is no X for which 
> 
> (eq (foo x) (foo-prime x)) => false
> 
> It is also easy to show that both FOO and FOO-PRIME are identical with
> respect to externally observable side effects.
> 
> This is my justification for replacing FOO with FOO-PRIME.
> 
> I have a further pragmatic justification:  many compiler optimizations
> such as inlining work by beta-reducing the form at compile time.  This
> is a normal-order reduction.  When the arguments are side-effect free
> (pure), then it can be shown that normal-order reduction produces the
> exact same result as applicative order reduction, when applicative
> order returns a result.  But in general, you cannot determine whether
> applicative order evaluation will return a result.  Many optimizations
> will become impossible if you require the compiler to prove that
> argument evaluation terminates. 

Sorry, I'm not sold.

It is critical to the interchangeability of interpreted and compiled
code that you not do such optimizations.

Optimizations are not something we bend language semantics to accommodate.

> > The language semantics are quite clear about left to right 
> > order of evaluation.  Compilation may optimize that, but not
> > change the meaning.  And the meaning is plain.
> 
> The meaning is less plain than you think.  What is the return value of
> function FOO above?

If it doesn't terminate, and I'm taking your word on that, then it has
no return value.

A simpler example is (loop (if nil (return nil))) or even just (loop).
The answer is that it doesn't return and has no return value.  End of
story.  Don't be removing it.

Just my opinion.  But I'm firm on this one, at least until I hear a far
more compelling argument than "aw, you're spoiling some of my favorite
optimizations" for which I have zero sympathy.

I invite others who've thought hard about this issue to opine.
From: ···@itasoftware.com
Subject: Re: Q: Side-effects with blocks and Q 6(a) ACL pp. 80
Date: 
Message-ID: <vgigb6h2.fsf@itasoftware.com>
Kent M Pitman <······@world.std.com> writes:

> > If safety is not a priority, you probably don't care if the
> > interpreted program and the compiled program exhibit identical
> > behavior when that behavior is undefined.
> 
> This use of "probably" is the key.  You simply have no basis for saying
> what I probably do or probably don't mean.  The language semantics
> say what must happen, and I think that's the end of it unless you ask for
> some nonstandard optimization to occur, in which case we are not
> debating language semantics.  Any time you opt to breach the language
> semantics, you can do anything you like, and "caveat emptor" applies.

If the behavior is `undefined', it is `undefined', and the compiler
can do whatever it wishes.

> > I have a further pragmatic justification:  many compiler optimizations
> > such as inlining work by beta-reducing the form at compile time.  This
> > is a normal-order reduction.  When the arguments are side-effect free
> > (pure), then it can be shown that normal-order reduction produces the
> > exact same result as applicative order reduction, when applicative
> > order returns a result.  But in general, you cannot determine whether
> > applicative order evaluation will return a result.  Many optimizations
> > will become impossible if you require the compiler to prove that
> > argument evaluation terminates. 
> 
> Sorry, I'm not sold.
> 
> It is critical to the interchangeability of interpreted and compiled
> code that you not do such optimizations.
> 
> Optimizations are not something we bend language semantics to accommodate.

Some things, like the speed of processing, are not considered to be
part of the `language semantics'; other things, like the value
returned for valid input, *are*.  Why do you think halting is one of
these?

> > > The language semantics are quite clear about left to right 
> > > order of evaluation.  Compilation may optimize that, but not
> > > change the meaning.  And the meaning is plain.
> > 
> > The meaning is less plain than you think.  What is the return value of
> > function FOO above?
> 
> If it doesn't terminate, and I'm taking your word on that, then it has
> no return value.

No one knows if it always terminates.  I suspect it does.

(BTW, there is a difference between having no return value (as in
(values)) and not returning.)

But if I were to assert that the call to FOO returns 42, you'd say I
was out of my mind.  The call simply cannot return anything but NIL.

> Just my opinion.  But I'm firm on this one, at least until I hear a far
> more compelling argument than "aw, you're spoiling some of my favorite
> optimizations" for which I have zero sympathy.

Well, of course that is the reason I'm arguing this.  But I suspect that
this optimization may be far more common than you think.

> I invite others who've thought hard about this issue to opine.

I'd be interested as well.
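
The inlining point above can be made concrete with a minimal sketch
(CALL-IT is a made-up name, used only for illustration):

```lisp
(defun call-it (x)
  (declare (ignore x))
  nil)

;; Applicative order (ordinary evaluation): the argument is evaluated
;; before the call, so this form never returns:
;;
;;   (call-it (loop))
;;
;; Normal order (what beta-reducing/inlining performs): substitute the
;; argument expression into the body, where it is never used, and the
;; call reduces to NIL immediately.  The two orders agree whenever the
;; argument terminates; they can differ only when it does not.
```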
From: Kent M Pitman
Subject: Re: Q: Side-effects with blocks and Q 6(a) ACL pp. 80
Date: 
Message-ID: <sfwitegxkul.fsf@world.std.com>
···@itasoftware.com writes:

> Kent M Pitman <······@world.std.com> writes:
> 
> > > If safety is not a priority, you probably don't care if the
> > > interpreted program and the compiled program exhibit identical
> > > behavior when that behavior is undefined.
> > 
> > This use of "probably" is the key.  You simply have no basis for saying
> > what I probably do or probably don't mean.  The language semantics
> > say what must happen, and I think that's the end of it unless you ask for
> > some nonstandard optimization to occur, in which case we were not
> > debating language semantics.  Any time you opt to breach the language
> > semantics, you can do anything you like, and "caveat emptor" applies.
> 
> If the behavior is `undefined', it is `undefined', and the compiler
> can do whatever it wishes.

The behavior of a loop that does not stop (or might not stop) is 
not undefined.  It is extremely well-defined.
 
> > > I have a further pragmatic justification:  many compiler optimizations
> > > such as inlining work by beta-reducing the form at compile time.  This
> > > is a normal-order reduction.  When the arguments are side-effect free
> > > (pure), then it can be shown that normal-order reduction produces the
> > > exact same result as applicative order reduction, when applicative
> > > order returns a result.  But in general, you cannot determine whether
> > > applicative order evaluation will return a result.  Many optimizations
> > > will become impossible if you require the compiler to prove that
> > > argument evaluation terminates. 
> > 
> > Sorry, I'm not sold.
> > 
> > It is critical to the interchangeability of interpreted and compiled
> > code that you not do such optimizations.
> > 
> > Optimizations are not something we bend language semantics to accommodate.
> 
> Some things, like the speed of processing, are not considered to be
> part of the `language semantics', other things, like the value
> returned for valid input *are*.  Why do you think halting is one of
> these?

Control is part of the semantics.

The speed of control is not the same as the fact of control.
 
> > > > The language semantics are quite clear about left to right 
> > > > order of evaluation.  Compilation may optimize that, but not
> > > > change the meaning.  And the meaning is plain.
> > > 
> > > The meaning is less plain than you think.  What is the return value of
> > > function FOO above?
> > 
> > If it doesn't terminate, and I'm taking your word on that, then it has
> > no return value.
> 
> No one knows if it always terminates.  I suspect it does.

Then the only way to find out is to run it.
The language provides for no substitute.

> (BTW, there is a difference between having no return value (as in
> (values)) and not returning.)

You're certainly right, but I didn't really think this was at issue and
I let my terminology get sloppy.  I accept your correction here.
 
> But if I were to assert that the call to FOO returns 42, you'd say I
> was out of my mind.  The call simply cannot return anything but NIL.

I'm sorry. I perhaps should have said "potential return value".

IF you knew with certainty that it would return, I agree it could
return quickly.  But if you only suspect it might return, then its job
is to search.  IMO, you can optimize any finite amount of computation,
but not an infinite amount.
 
> > Just my opinion.  But I'm firm on this one, at least until I hear a far
> > more compelling argument than "aw, you're spoiling some of my favorite
> > optimizations" for which I have zero sympathy.
> 
> Well, of course that is the reason I'm arguing this.  But I suspect that
> this optimization may be far more common than you think.

I'd certainly like to know.
I'd call it a bug unless there were a true proof of termination.

> > I invite others who've thought hard about this issue to opine.
> 
> I'd be interested as well.
From: ···@itasoftware.com
Subject: Re: Q: Side-effects with blocks and Q 6(a) ACL pp. 80
Date: 
Message-ID: <lmjbax7j.fsf@itasoftware.com>
Kent M Pitman <······@world.std.com> writes:

> ···@itasoftware.com writes:
> 
> > Kent M Pitman <······@world.std.com> writes:
> > 
> > > > If safety is not a priority, you probably don't care if the
> > > > interpreted program and the compiled program exhibit identical
> > > > behavior when that behavior is undefined.
> > > 
> > > This use of "probably" is the key.  You simply have no basis for saying
> > > what I probably do or probably don't mean.  The language semantics
> > > say what must happen, and I think that's the end of it unless you ask for
> > > some nonstandard optimization to occur, in which case we were not
> > > debating language semantics.  Any time you opt to breach the language
> > > semantics, you can do anything you like, and "caveat emptor" applies.
> > 
> > If the behavior is `undefined', it is `undefined', and the compiler
> > can do whatever it wishes.
> 
> The behavior of a loop that does not stop (or might not stop) is 
> not undefined.  It is extremely well-defined.

I was speaking in the general case:  Where the semantics are
undefined, it is not necessary for the behavior to be identical
between interpreted and compiled code (the specific question about
LOOP notwithstanding).  For instance, suppose you supplied both :test
and :test-not values to a sequence function.  The spec says that the
behavior of the function is undefined.  I wouldn't expect the effect
or result to be consistent between compiled and interpreted code even
in the same implementation.
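
As a concrete instance of that kind of undefinedness (the actual
consequences vary by implementation):

```lisp
;; Supplying both :test and :test-not has undefined consequences per
;; the spec; an implementation might signal an error, honor one
;; keyword, or do something else entirely -- and the compiled and
;; interpreted behavior need not agree:
(member 3 '(1 2 3 4)
        :test #'eql
        :test-not #'eql)
```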

> > > > I have a further pragmatic justification:  many compiler optimizations
> > > > such as inlining work by beta-reducing the form at compile time.  This
> > > > is a normal-order reduction.  When the arguments are side-effect free
> > > > (pure), then it can be shown that normal-order reduction produces the
> > > > exact same result as applicative order reduction, when applicative
> > > > order returns a result.  But in general, you cannot determine whether
> > > > applicative order evaluation will return a result.  Many optimizations
> > > > will become impossible if you require the compiler to prove that
> > > > argument evaluation terminates. 
> > > 
> > > Sorry, I'm not sold.
> > > 
> > > It is critical to the interchangeability of interpreted and compiled
> > > code that you not do such optimizations.
> > > 
> > > Optimizations are not something we bend language semantics to accommodate.
> > 
> > Some things, like the speed of processing, are not considered to be
> > part of the `language semantics', other things, like the value
> > returned for valid input *are*.  Why do you think halting is one of
> > these?
> 
> Control is part of the semantics.

I'm not suggesting a change in control; the order of execution remains
exactly the same.

> The speed of control is not the same as the fact of control.

This is part of the crux of the issue.  Why do you assert that?  What
is wrong with considering this to be an `infinite' speedup?

> > Well, of course that is the reason I'm arguing this.  But I suspect that
> > this optimization may be far more common than you think.
> 
> I'd certainly like to know.
> I'd call it a bug unless there were a true proof of termination.

You aren't likely to see a compiler attempting a proof of
termination (or non-termination) except in the most trivial of
circumstances.
From: Kent M Pitman
Subject: Re: Q: Side-effects with blocks and Q 6(a) ACL pp. 80
Date: 
Message-ID: <sfwn13r58nf.fsf@world.std.com>
···@itasoftware.com writes:

> Kent M Pitman <······@world.std.com> writes:
> 
> > ···@itasoftware.com writes:
> > 
> > > Kent M Pitman <······@world.std.com> writes:
> > > 
> > > > > If safety is not a priority, you probably don't care if the
> > > > > interpreted program and the compiled program exhibit identical
> > > > > behavior when that behavior is undefined.
> > > > 
> > > > This use of "probably" is the key.  You simply have no basis for saying
> > > > what I probably do or probably don't mean.  The language semantics
> > > > say what must happen, and I think that's the end of it unless you ask for
> > > > some nonstandard optimization to occur, in which case we were not
> > > > debating language semantics.  Any time you opt to breach the language
> > > > semantics, you can do anything you like, and "caveat emptor" applies.
> > > 
> > > If the behavior is `undefined', it is `undefined', and the compiler
> > > can do whatever it wishes.
> > 
> > The behavior of a loop that does not stop (or might not stop) is 
> > not undefined.  It is extremely well-defined.
> 
> I was speaking in the general case:  Where the semantics are
> undefined, it is not necessary for the behavior to be identical
> between interpreted and compiled code (the specific question about
> LOOP notwithstanding).  For instance, suppose you supplied both :test
> and :test-not values to a sequence function.  The spec says that the
> behavior of the function is undefined.  I wouldn't expect the effect
> or result to be consistent between compiled and interpreted code even
> in the same implementation.

But it is.  The effect is "undefined" in both cases.
 
That IS the semantics.

The case of LOOP is not undefined.

> > > > > I have a further pragmatic justification:  many compiler optimizations
> > > > > such as inlining work by beta-reducing the form at compile time.  This
> > > > > is a normal-order reduction.  When the arguments are side-effect free
> > > > > (pure), then it can be shown that normal-order reduction produces the
> > > > > exact same result as applicative order reduction, when applicative
> > > > > order returns a result.  But in general, you cannot determine whether
> > > > > applicative order evaluation will return a result.  Many optimizations
> > > > > will become impossible if you require the compiler to prove that
> > > > > argument evaluation terminates. 
> > > > 
> > > > Sorry, I'm not sold.
> > > > 
> > > > It is critical to the interchangeability of interpreted and compiled
> > > > code that you not do such optimizations.
> > > > 
> > > > Optimizations are not something we bend language semantics to accommodate.
> > > 
> > > Some things, like the speed of processing, are not considered to be
> > > part of the `language semantics', other things, like the value
> > > returned for valid input *are*.  Why do you think halting is one of
> > > these?
> > 
> > Control is part of the semantics.
> 
> I'm not suggesting a change in control, the order of execution remains
> exactly the same.

Control is more than order.  It includes reachability.  Making the
optimization implies a truth about the universe.  If that truth is true,
you have made a correct optimization.  But if you cannot show that truth
true, you are in error.  The middle ground goes to not removing it, not
to removing it.  The burden is not on the person alleging the bug to
show that the optimization was wrong; the burden is on the person making
the optimization to show it's legitimate.
 
> > The speed of control is not the same as the fact of control.
> 
> This is part of the crux of the issue.  Why do you assert that?  What
> is wrong with considering this to be an `infinite' speedup?

Because it affects the statement "is the next statement reachable".

Consider the program:

 (defun example (x)  ; no function name appeared here; EXAMPLE supplied
   (seek-possibly-non-existent-solution)
   (format t "~&Eureka! A solution exists.  Oops, I forgot to write it down.~%")
   (format t "~&Here's the proof:~%")
   (seek-possibly-non-existent-solution :verbose t))
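
If the compiler dropped the first call on the grounds that its value is
unused and the search is "pure", the function would reduce to this --
announcing a solution whether or not one exists (EXAMPLE is an
illustrative name):

```lisp
;; The disputed optimization makes the Eureka message unconditionally
;; reachable, even when no solution exists:
(defun example (x)
  (declare (ignore x))
  (format t "~&Eureka! A solution exists.  Oops, I forgot to write it down.~%")
  (format t "~&Here's the proof:~%")
  (seek-possibly-non-existent-solution :verbose t))
```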

> > > Well, of course that is the reason I'm arguing this.  But I suspect that
> > > this optimization may be far more common than you think.
> > 
> > I'd certainly like to know.
> > I'd call it a bug unless there were a true proof of termination.
> 
> You aren't likely to see a compiler attempting a proof of
> termination (or non-termination) except in the most trivial of
> circumstances.

Right.  Nor should it.  It simply should not do such optimizations.
It can certainly optimize
 (loop (+ x 3))
into 
 (loop)
but it can't take out the transfer of control.
From: ···@itasoftware.com
Subject: Re: Q: Side-effects with blocks and Q 6(a) ACL pp. 80
Date: 
Message-ID: <d74mc3zi.fsf@itasoftware.com>
Kent M Pitman <······@world.std.com> writes:

> ···@itasoftware.com writes:
> 
> > Kent M Pitman <······@world.std.com> writes:
> > 
> > > ···@itasoftware.com writes:
> > > 
> > > > Kent M Pitman <······@world.std.com> writes:
> > > > 
> > > > > > If safety is not a priority, you probably don't care if the
> > > > > > interpreted program and the compiled program exhibit identical
> > > > > > behavior when that behavior is undefined.
> > > > > 
> > > > > This use of "probably" is the key.  You simply have no basis for saying
> > > > > what I probably do or probably don't mean.  The language semantics
> > > > > say what must happen, and I think that's the end of it unless you ask for
> > > > > some nonstandard optimization to occur, in which case we were not
> > > > > debating language semantics.  Any time you opt to breach the language
> > > > > semantics, you can do anything you like, and "caveat emptor" applies.
> > > > 
> > > > If the behavior is `undefined', it is `undefined', and the compiler
> > > > can do whatever it wishes.
> > > 
> > > The behavior of a loop that does not stop (or might not stop) is 
> > > not undefined.  It is extremely well-defined.
> > 
> > I was speaking in the general case:  Where the semantics are
> > undefined, it is not necessary for the behavior to be identical
> > between interpreted and compiled code (the specific question about
> > LOOP notwithstanding).  For instance, suppose you supplied both :test
> > and :test-not values to a sequence function.  The spec says that the
> > behavior of the function is undefined.  I wouldn't expect the effect
> > or result to be consistent between compiled and interpreted code even
> > in the same implementation.
> 
> But it is.  The effect is "undefined" in both cases.

The effect may be `undefined' but it is certainly not `unknowable',
and I expect you would have little sympathy for someone who discovered
that the effect was to return NIL when running interpreted code on
system `x', but to segfault when running compiled code on the same
machine. 

> > > > Well, of course that is the reason I'm arguing this.  But I suspect that
> > > > this optimization may be far more common than you think.
> > > 
> > > I'd certainly like to know.
> > > I'd call it a bug unless there were a true proof of termination.
> > 
> > You aren't likely to see a compiler attempting a proof of
> > termination (or non-termination) except in the most trivial of
> > circumstances.
> 
> Right.  Nor should it.  

Nor could it.

> It simply should not do such optimizations.
> It can certainly optimize
>  (loop (+ x 3))
> into 
>  (loop)
> but it can't take out the transfer of control.

Assuming, of course, that + terminates, right?  This may not be easy
for the compiler to prove (sure, it's probably an axiom for the
compiler, but it isn't necessarily).
From: Kent M Pitman
Subject: Re: Q: Side-effects with blocks and Q 6(a) ACL pp. 80
Date: 
Message-ID: <sfwiteestny.fsf@world.std.com>
I've said what I have to say on this, at least for now.
Anything more I would say at this point would just repeat what I've said.
From: Erik Haugan
Subject: Re: Q: Side-effects with blocks and Q 6(a) ACL pp. 80
Date: 
Message-ID: <873d5js9xq.fsf@kometknut.neitileu.no>
* ···@itasoftware.com
> Here is my argument.  Consider these programs:
> 
> (defun foo (x)
>   (when (integerp x)
>     (do ((i x (if (evenp i) (/ i 2) (1+ (* i 3)))))
>         ((< i 2) nil))))
> 
> (defun foo-prime (x)
>   nil)
> 
> It is easy to show that FOO cannot return a value other than NIL.  It
> is also easy to show that there is no X for which 
> 
> (eq (foo x) (foo-prime x)) => false
> 
> It is also easy to show that both FOO and FOO-PRIME are identical with
> respect to externally observable side effects.
> 
> This is my justification for replacing FOO with FOO-PRIME.

Consider this program:

(defun foo (x)
  (when (integerp x)
    (do ((i x (if (evenp i) (/ i 2) (1+ (* i 3)))))
        ((< i 2) nil))))

(defun test-foo (&optional (stream t))
  (format stream "~&Foo terminates for:~%")
  (loop for x from 1
        do (foo x)
           (format stream "~S~%" x)))

Erik
From: ···@itasoftware.com
Subject: Re: Q: Side-effects with blocks and Q 6(a) ACL pp. 80
Date: 
Message-ID: <r8t3axve.fsf@itasoftware.com>
Erik Haugan <····@haugan.no> writes:

> Consider this program:
> 
> (defun foo (x)
>   (when (integerp x)
>     (do ((i x (if (evenp i) (/ i 2) (1+ (* i 3)))))
>         ((< i 2) nil))))
> 
> (defun test-foo (&optional (stream t))
>   (format stream "~&Foo terminates for:~%")
>   (loop for x from 1
>         do (foo x)
>            (format stream "~S~%" x)))
> 
> Erik

Amusing, but it begs the question.
From: Paolo Amoroso
Subject: Re: Q: Side-effects with blocks and Q 6(a) ACL pp. 80
Date: 
Message-ID: <7DOiO=EBpTAasG7wlKF7ODrlvgNU@4ax.com>
On 14 Sep 2001 00:16:04 -0700, ·················@hotmail.com (Charles K
Johnson) wrote:

> I wonder if someone can recommend an online paper (or
> something from ACM's Digital Library) that discusses
> this sort of terminology from a Lisp perspective.   

This is unrelated to your questions, but if you have access to the ACM
Digital Library I suggest that you check the CACM special issue on Lisp
(September 1991).

Have (de)fun,


Paolo
-- 
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://web.mclink.it/amoroso/ency/README
[http://cvs2.cons.org:8000/cmucl/doc/EncyCMUCLopedia/]