From: Maciej Katafiasz
Subject: The origins of CL conditions system
Date: 
Message-ID: <fif4lp$dh7$1@news.net.uni-c.dk>
Hi,

there's something that's had me curious for some time now. Given who 
wrote the relevant CLtL2 chapter, it seems like a question directed 
particularly at Kent Pitman, but of course any feedback is welcome.

So, I'm interested in knowing the history and background behind the CL 
conditions and restarts system. I know it drew directly from the dialects 
it was standardising; in particular, I seem to remember reading another 
post that mentioned Symbolics had a particularly comprehensive collection 
of standard conditions. However, I'd like to know how it came to 
Symbolics. Was Lisp the first to have conditions? Was it influenced by 
something else? Was it very much ahead of other languages by having a 
conditions system?

Even more interesting are restarts. Whereas most serious languages today 
offer exceptions, ones that include built-in support for restarting 
computation afterwards are exceptionally rare; in fact, I haven't 
encountered any that aren't Lisps (and Dylan is a Lisp for the purposes 
of this classification. From what I understand, Smalltalk also has 
restarts, but it's arguably in the Lisp family as well). Casual googling 
around doesn't suggest I missed any very prominent languages, either. 
There's a considerable mental gap between having a system that supports 
non-local transfer of control in the form of exceptions, and a system that 
includes a formalised protocol for recovering and restarting computation, 
as evidenced by the scarcity of languages offering such a facility.

I'm still amazed by how woefully incomplete an exception system without 
structured means of recovery is, yet how I never felt that prior to 
learning CL. Therefore I'm really interested in learning the origins of 
the concept. I suspect the strong tradition of interactive development 
and built-in debuggers naturally leads to the development of a complete 
error handling system, whereas in environments where successful and 
failed execution alike lead to process termination, one might never 
consciously realise such a need. But that's just my guess, which may 
or may not have historical evidence to support it. I hope the village 
elders will enlighten me in this regard :)

Cheers,
Maciej

From: Rainer Joswig
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <joswig-B31D1F.21063226112007@news-europe.giganews.com>
In article <············@news.net.uni-c.dk>,
 Maciej Katafiasz <········@gmail.com> wrote:

> Hi,
> 
> there's something that's had me curious for some time now. Given who 
> wrote the relevant CLtL2 chapter, it seems like a question directed 
> particularly at Kent Pitman, but of course any feedback is welcome.
> 
> So, I'm interested in knowing the history and background behind the CL 
> conditions and restarts system. I know it drew directly from the dialects 
> it was standardising, in particular, I seem to remember reading another 
> post that mentioned Symbolics had a particularly comprehensive collection 
> of standard conditions. However, I'd like to know where it came from to 
> Symbolics. Was Lisp the first to have conditions? Was it influenced by 
> something else? Was it very much ahead of other languages by having a 
> conditions system?
> 
> Even more interesting are restarts. Whereas most serious languages today 
> offer exceptions, ones to include a built-in support for restarting 
> computation afterwards are exceptionally rare; in fact, I haven't 
> encountered any that wouldn't be Lisps (and Dylan is a Lisp for purpose 
> of this classification. From what I understand, Smalltalk also has 
> restarts, but it's arguably in the Lisp family as well). Casual googling 
> around doesn't suggest I missed any very prominent languages, either. 
> There's a considerable mental gap between having a system that supports 
> non-local transfer control in the form of exceptions, and a system that 
> includes a formalised protocol for recovering and restarting computation, 
> as evidenced by the lack of languages to offer such a facility.
> 
> I'm still amazed by how woefully incomplete an exception system without 
> structured means of recovery is, yet how I never felt that prior to 
> learning CL. Therefore I'm really interested in learning the origins of 
> the concept. I suspect the strong tradition of interactive development 
> and built-in debugger naturally leads to the development of a complete 
> error handling system, whereas in environments where successful and 
> failed execution alike leads to process termination, one might never 
> consciously realise such a need. But that's just my guess, which might, 
> or might not have historical evidence to support it. I hope the village 
> elders will enlighten me in this regard :)
> 
> Cheers,
> Maciej

Explained here:

http://www.nhplace.com/kent/Papers/Condition-Handling-2001.html
From: Maciej Katafiasz
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <fifa4l$dh7$3@news.net.uni-c.dk>
On Mon, 26 Nov 2007 21:06:33 +0100, Rainer Joswig wrote:

> Explained here:
> 
> http://www.nhplace.com/kent/Papers/Condition-Handling-2001.html

Jolly good. Thanks!

Cheers,
Maciej
From: John Thingstad
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <op.t2e61yegut4oq5@pandora.alfanett.no>
On Mon, 26 Nov 2007 21:06:33 +0100, Rainer Joswig <······@lisp.de> wrote:

>
> Explained here:
>
> http://www.nhplace.com/kent/Papers/Condition-Handling-2001.html

Interesting article.
Maybe I'm a bit dense, but what exactly is the difference between  
handler-bind and handler-case and where would you use one or the other?


--------------
John Thingstad
From: Maciej Katafiasz
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <fifhof$dh7$4@news.net.uni-c.dk>
On Mon, 26 Nov 2007 23:21:24 +0100, John Thingstad wrote:

> Interesting article.
> Maybe I'm a bit dense, but what exactly is the difference between
> handler-bind and handler-case and where would you use one or the other?

No, it's actually confusing and not obvious at all if you don't already 
know. The crux of the issue is that the spec for HANDLER-CASE says "In this 
case, the dynamic state is unwound appropriately". Which, in particular, 
means no going back to the signalling site. Thus HANDLER-CASE is pretty 
much like a try..catch block.

HANDLER-BIND, on the other hand, does not unwind the stack before 
invoking your handlers. So if the signalling site has established any 
restarts, you can invoke them.
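To make that concrete, here's a minimal sketch (PARSE-NUM and
RETRY-WITH-DEFAULT are made-up names, just for illustration):

```lisp
;; A function that signals an error but offers a restart at the
;; signalling site.
(defun parse-num (string)
  (restart-case
      (if (every #'digit-char-p string)
          (parse-integer string)
          (error "~S is not a number" string))
    ;; Recovery point: callers may supply a replacement value.
    (retry-with-default (default)
      default)))

;; HANDLER-BIND runs the handler *before* unwinding, so the restart
;; established inside PARSE-NUM is still active and can be invoked:
(handler-bind ((error (lambda (c)
                        (declare (ignore c))
                        (invoke-restart 'retry-with-default 42))))
  (parse-num "oops"))          ; => 42

;; HANDLER-CASE unwinds first; by the time its clause runs, the
;; RETRY-WITH-DEFAULT restart is gone, so all we can do is give up:
(handler-case (parse-num "oops")
  (error () :gave-up))         ; => :GAVE-UP
```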

Cheers,
Maciej
From: Edi Weitz
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <uve7ovdn6.fsf@agharta.de>
On Mon, 26 Nov 2007 23:21:24 +0100, "John Thingstad" <·······@online.no> wrote:

> Maybe I'm a bit dense, but what exactly is the difference between
> handler-bind and handler-case and where would you use one or the
> other?

Peter's book

  http://gigamonkeys.com/book/beyond-exception-handling-conditions-and-restarts.html

has examples for when HANDLER-CASE won't cut it and you'll need
HANDLER-BIND instead.

-- 

Lisp is not dead, it just smells funny.

Real email: (replace (subseq ·········@agharta.de" 5) "edi")
From: Dan Muller
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <vZM2j.20764$4V6.10069@newssvr14.news.prodigy.net>
John Thingstad writes:
> Maybe I'm a bit dense, but what exactly is the difference between
> handler-bind and handler-case and where would you use one or the
> other?

Funny thing you should ask that today. I was just messing with these
two for the first time two nights ago, and completely missed the
critical difference when reading about them in the Hyperspec. I then
proceeded to waste nearly two hours puzzling over why I couldn't get
something simple working, and came this close to posting a question
here before I figured it out. Maciej explained the critical difference
exactly. I'd just add that handler-case is analogous to a simple
case form.

       -- Dan Muller
From: Kent M Pitman
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <uoddfu81i.fsf@nhplace.com>
Maciej Katafiasz <········@gmail.com> writes:

> So, I'm interested in knowing the history and background behind the CL 
> conditions and restarts system. I know it drew directly from the dialects 
> it was standardising, in particular, I seem to remember reading another 
> post that mentioned Symbolics had a particularly comprehensive collection 
> of standard conditions. However, I'd like to know where it came from to 
> Symbolics. Was Lisp the first to have conditions? Was it influenced by 
> something else? Was it very much ahead of other languages by having a 
> conditions system?

Some additional source material is in 
  http://www.nhplace.com/kent/CL/Revision-18.txt
if you haven't found that. If you look in 
  http://www.nhplace.com/kent/CL/
you'll also find the original sample implementation.

> Even more interesting are restarts. Whereas most serious languages today 
> offer exceptions, ones to include a built-in support for restarting 
> computation afterwards are exceptionally rare; in fact, I haven't 
> encountered any that wouldn't be Lisps (and Dylan is a Lisp for purpose 
> of this classification. From what I understand, Smalltalk also has 
> restarts, but it's arguably in the Lisp family as well).

It's been difficult getting these ideas out into the mainstream.  I
had a terrible time getting the CL community into them.  I didn't
invent them, but I was worried they would not ever leave the Lisp
Machine, and had numerous discussions with vendors EVEN AFTER
producing the sample implementation above and handing it to them and
telling them they could use it where they told me the ridiculous claim
that a fancy condition system like the Lisp Machine had required
special hardware.

Months or even years later, some of them called me back and said kind
of sheepishly, "uh, we tried that thing and gee, it seems to work. can
we use it?"  And then gradually one started to see marketing
literature at conferences saying they implemented Kent Pitman's
condition system, which was kind of flattering, but pointed to the
fact that it didn't have a catchy name.  

(But at least my case was better than what happened to David Gray at
TI, where his streams proposal came to be called Gray Streams and then
people on the other side of the pond started re-spelling it Grey
because they thought that it was a color ... or at least a colour.)

Anyway, the pollination of an idea from one language to another is
a tricky business that doesn't happen easily.

Just look at the similar difficulty we've had getting pretty printing
into the world in general.  Dick Waters wrote papers back in the early
80's showing that the techniques we use in CL are perfectly usable for
pretty printing code in block structured languages like Ada and PL/1.
Yet neither those languages nor their successors picked up on the option
to add such a facility to their programmatic repertoire, perhaps because
they were already too busy not picking up on the idea of having a 
homoiconic language... 

> Casual googling 
> around doesn't suggest I missed any very prominent languages, either. 

You're looking for PL/1 for Honeywell Multics, and particularly the
MIT variant, which may not have been precisely what Honeywell had, I'm
not sure.  See http://www.multicians.org/multics.html which may say.

Several of the guys who designed the Lisp Machine had previously been
using that system, which amounted to a "PL/1 machine" (not in quite
the sense that a LispM was a "Lisp Machine", but certainly in the
sense that the operating system was written in a high level language,
PL/1, and there was a unifying execution model on the stack which was
best described by PL/1).

The restart facility was not so linguistic as practical. I think it used
some sort of 'on signal' construct (my PL/1 was never advanced, though I
used it in school, and is quite rusty now, so at this point accept that
this is all blurry and was second-hand to start with).  

And remember, too, that the world was in a kind of proto-state where the
ideas were coming fast and furious ahead of platforms to implement them on.
So they had the abstractions, but they needed a place to deploy those ideas
in a new context. So they leapt to the Lisp Machine, and started to 
use that platform as a forum to evolve new ideas.

Dave Moon (David A. Moon in the literature) had done Maclisp for PL/1
and was later a chief architect of the LispM system.

Bernie Greenberg was one of the most prominent Multicians, and
implemented Emacs for Multics in Lisp (no, not GNU Lisp, nor GNU
Emacs.  This was long before those days. I think he used Multics
Maclisp.)  Bernie later did the LMFS file system, among other things,
for Genera.

Others are enumerated in the Revision-18 paper mentioned above.

Anyway, if you follow the people, not the languages, you get a better
sense of the flow of things.  These people hopped from one implementation
and system to another, carrying ideas with them.

Likewise you can see that UNWIND-PROTECT is present in things like ITS
TECO [as the ..N q-register, which was executed upon unwind], but
again the unifying characteristic is the community at MIT in which
this arose.  I'm not sure who created ..N, maybe Steele, but certainly
Steele was present, and was present for UNWIND-PROTECT in CL, and later
for "finally" in Java.

Actually, what was strange and frustrating was that Steele was so
involved in Java and didn't put restarts into it, when he surely must
have known of them... and I think they mesh so nicely with
continuations, etc. that it's a shame.  

For myself, I used Lisp Machines, and liked them, but there were not
many of them and many people couldn't use them. I was amused and
saddened by how many of their ideas were regarded as "needing special
hardware", and I made it a personal mission to show that many LispM
ideas could be rehosted on stock hardware (i.e., off-the-shelf, or
non-special-purpose hardware) just fine.  That was how my ZBABYL 
mail reader came about for TECO-based Emacs, and it's why I took an
interest in getting the Zetalisp condition system into CL.  (Early CL
didn't even have ERRSET or IGNORE-ERRORS, if memory serves me, so any
program that ever signaled ERROR under CLTL was dead in the water as
far as guarantees of portability.  But the idea of adding just that
one operator as a way of "fixing" the problem was horrifying to me,
so I wanted as much of the LispM New Error System  as I could get.)

> There's a considerable mental gap between having a system that supports 
> non-local transfer control in the form of exceptions, and a system that 
> includes a formalised protocol for recovering and restarting computation, 
> as evidenced by the lack of languages to offer such a facility.

Yeah, weird, huh?  But I think the thing to understand is that people judge
the goodness of what they have in contrast to what they had before.  So most
users of languages don't see themselves as impoverished because they have
previously used languages without try/catch and now they have try/catch, so
they've moved up.  We've had our eyes opened, so it's hard to go back.
But convincing someone they are missing out is hard.

> I'm still amazed by how woefully incomplete an exception system without 
> structured means of recovery is, yet how I never felt that prior to 
> learning CL.

"A mind once stretched by a new idea never regains its original dimension."
-- Oliver Wendell Holmes

And, incidentally, without a practice of defining the recovery points.
While we got restarts into CL, we were unable to get many specific
restarts in.  And that's an aggravation.  But again it was lack of
experience with the system, not to mention considerable political
disagreement over the nature of the inheritance model (I've spoken on
this in other forums, but ask me if the relevance of this in this
context is not clear and I'll expand), that interfered with getting
some specific conditions and many specific restarts added.

But my theory was that if we could get at least a few in, people would
come to understand the impoverished nature and would naturally want
more.  I have a meta-theory of design that says that if you don't
offer a feature, no one will send bug reports, but if you offer a stub
of a feature, everyone will complain wildly for extensions to it.  So
the real hurdle, IMO, is to get the proverbial "foot in the door".
Having ABORT, CONTINUE, USE-VALUE and STORE-VALUE and having them
defined in only a few places is very weak... but it illustrates an
idea that now people know they need fleshed out.  So it was enough.
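A minimal sketch of that protocol, using the standard USE-VALUE restart
(LOOKUP is a hypothetical function; USE-VALUE, both the restart name and
the convenience function, is standard CL):

```lisp
;; The signalling side advertises a recovery point with RESTART-CASE;
;; a USE-VALUE restart means "proceed with this value instead".
(defun lookup (key table)
  (restart-case
      (or (gethash key table)
          (error "No entry for ~S" key))
    (use-value (v) v)))

;; The handling side can then recover without abandoning the
;; computation: the USE-VALUE function finds and invokes the nearest
;; active USE-VALUE restart.
(let ((table (make-hash-table)))
  (handler-bind ((error (lambda (c)
                          (declare (ignore c))
                          (use-value :missing))))
    (lookup 'no-such-key table)))   ; => :MISSING
```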

I draw your attention to 
 http://www.nhplace.com/kent/Papers/Condition-Handling-2001.html#footnote
where I remarked on this phenomenon from a similar but slightly different
angle.  

> Therefore I'm really interested in learning the origins of 
> the concept. I suspect the strong tradition of interactive development 
> and built-in debugger naturally leads to the development of a complete 
> error handling system,

This is certainly relevant, though it's a feedback loop since what leads
to liking the debugger is the good facility.

I think the homoiconicity reference above is also relevant, since the 
usefulness of the debugger is also helped by the ability to use a familiar
language to operate in it.  Visual Studio finally has ways of doing this 
in clumsy ways with Watch and Immediate panes, but for years this was 
really hard.  And even now it's marginal and unpleasant by comparison to
Lisp in my opinion--though some might prefer it (at least it's finally to
the point where there can be a debate on the matter).

> whereas in environments where successful and 
> failed execution alike leads to process termination, one might never 
> consciously realise such a need.

Indeed, the mere step up from debugging binary dumps was important.
The Macintosh "bomb" icon used to pass for error handling.

> But that's just my guess, which might, 
> or might not have historical evidence to support it. I hope the village 
> elders will enlighten me in this regard :)

I hope something in here is useful to your search.
From: Rainer Joswig
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <joswig-F25827.15061027112007@news-europe.giganews.com>
In article <·············@nhplace.com>,
 Kent M Pitman <······@nhplace.com> wrote:

> Maciej Katafiasz <········@gmail.com> writes:
> 
> > So, I'm interested in knowing the history and background behind the CL 
> > conditions and restarts system. I know it drew directly from the dialects 
> > it was standardising, in particular, I seem to remember reading another 
> > post that mentioned Symbolics had a particularly comprehensive collection 
> > of standard conditions. However, I'd like to know where it came from to 
> > Symbolics. Was Lisp the first to have conditions? Was it influenced by 
> > something else? Was it very much ahead of other languages by having a 
> > conditions system?
> 
> Some additional source material is in 
>   http://www.nhplace.com/kent/CL/Revision-18.txt
> if you haven't found that. If you look in 
>   http://www.nhplace.com/kent/CL/
> you'll also find the original sample implementation.
> 
> > Even more interesting are restarts. Whereas most serious languages today 
> > offer exceptions, ones to include a built-in support for restarting 
> > computation afterwards are exceptionally rare; in fact, I haven't 
> > encountered any that wouldn't be Lisps (and Dylan is a Lisp for purpose 
> > of this classification. From what I understand, Smalltalk also has 
> > restarts, but it's arguably in the Lisp family as well).
> 
> It's been difficult getting these ideas out into the mainstream.  I
> had a terrible time getting the CL community into them.  I didn't
> invent them, but I was worried they would not ever leave the Lisp
> Machine, and had numerous discussions with vendors EVEN AFTER
> producing the sample implementation above and handing it to them and
> telling them they could use it where they told me the ridiculous claim
> that a fancy condition system like the Lisp Machine had required
> special hardware.

One of the best examples was that the feature that you can
continue from an error (and the handler is called
without unwinding the stack) didn't make it into the C++ standard.
Though Stroustrup (Design and Evolution of C++) reports
that they had people from Texas Instruments with Lisp
Machine experience explaining the Lisp Machine error handling.
According to the TI guys (can't remember the exact wording, don't have
the book handy), this feature was not used much. So it did
not make it into C++. 

I think the CL error handling facility is extremely useful, and
I always see it as a good sign if a Common Lisp implementation
not just provides the functionality but actually uses it.
It helps me, especially if the development environment
uses it (system facility, REPL, ...) and there are
many useful restarts provided.

So, Kent, I thank you for your work on getting it into ANSI CL,
and I thank the implementors who chose not to ignore it. ;-)
From: bob_bane
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <f0781004-ab99-4837-9122-7ab0124be1b6@s36g2000prg.googlegroups.com>
On Nov 27, 9:06 am, Rainer Joswig <······@lisp.de> wrote:
> One of the best examples was that the feature that you can
> continue from an error (and the handler is called
> without unwinding the stack) didn't make it into the C++ standard.
> Though Stroustrup (Design and Evolution of C++) reports
> that they had people from Texas Instruments with Lisp
> Machine experience explaining the Lisp Machine error handling.
> According to the TI guys (can't remember the exact wording, don't have
> the book handy), this feature was not used much. So it did
> not make it into C++.
>

I refer to that book as Stroustrup's "Design Rationalization and
Mutation of C++".  That's section 16.6, "Resumption vs. Termination".
At a meeting in 1991, several people who had had long experience with
languages with continuable exceptions (including Mary Fontana from TI
and Jim Mitchell from Sun/PARC) basically said "continuable exceptions
are not particularly useful except for debugging, and they allowed us
to write bad code."

Personally, I *like* having the debugger available all the time, and
have never been persuaded by "this feature is dangerous when used by
amateurs" arguments (which were routinely used by the C++ designers to
limit C++).
From: Andreas Davour
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <cs9abozprys.fsf@Psilocybe.Update.UU.SE>
bob_bane <········@gst.com> writes:

> On Nov 27, 9:06 am, Rainer Joswig <······@lisp.de> wrote:
>> One of the best examples was that the feature that you can
>> continue from an error (and the handler is called
>> without unwinding the stack) didn't make it into the C++ standard.
>> Though Stroustrup (Design and Evolution of C++) reports
>> that they had people from Texas Instruments with Lisp
>> Machine experience explaining the Lisp Machine error handling.
>> According to the TI guys (can't remember the exact wording, don't have
>> the book handy), this feature was not used much. So it did
>> not make it into C++.
>>
>
> I refer to that book as Stroustrup's "Design Rationalization and
> Mutation of C++".  That's section 16.6, "Resumption vs. Termination".
> At a meeting in 1991, several people who had had long experience with
> languages with continuable exceptions (including Mary Fontana from TI
> and Jim Mitchell from Sun/PARC) basically said "continuable exceptions
> are not particularly useful except for debugging, and they allowed us
> to write bad code."
>
> Personally, I *like* having the debugger available all the time, and
> have never been persuaded by "this feature is dangerous when used by
> amateurs" arguments (which were routinely used by the C++ designers to
> limit C++).

That was the most stupid thing I've read this year! They actually said
things like that? 

If you should ban features that can be dangerous when used by amateurs,
then you should ban C altogether. 

/andreas

-- 
A: Because it fouls the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
From: Maciej Katafiasz
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <fihl7n$pcc$2@news.net.uni-c.dk>
On Tue, 27 Nov 2007 08:26:34 -0800, bob_bane wrote:

> Personally, I *like* having the debugger available all the time, and
> have never been persuaded by "this feature is dangerous when used by
> amateurs" arguments (which were routinely used by the C++ designers to
> limit C++).

Right, instead C++ chose to have features that are dangerous when used by 
anyone, not just amateurs.

Cheers,
Maciej
From: Andreas Davour
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <cs9d4tul5f1.fsf@Psilocybe.Update.UU.SE>
Maciej Katafiasz <········@gmail.com> writes:

> On Tue, 27 Nov 2007 08:26:34 -0800, bob_bane wrote:
>
>> Personally, I *like* having the debugger available all the time, and
>> have never been persuaded by "this feature is dangerous when used by
>> amateurs" arguments (which were routinely used by the C++ designers to
>> limit C++).
>
> Right, instead C++ chose to have features that are dangerous when used by 
> anyone, not just amateurs.

:-D

Take a laugh point, if you collect those. 

/Andreas

-- 
A: Because it fouls the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
From: Damien Kick
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <13kv1ujis5df7ae@corp.supernews.com>
bob_bane wrote:
> On Nov 27, 9:06 am, Rainer Joswig <······@lisp.de> wrote:
>> One of the best examples was that the feature that you can
>> continue from an error (and the handler is called
>> without unwinding the stack) didn't make it into the C++ standard.
>> Though Stroustrup (Design and Evolution of C++) reports
>> that they had people from Texas Instruments with Lisp
>> Machine experience explaining the Lisp Machine error handling.
>> According to the TI guys (can't remember the exact wording, don't have
>> the book handy), this feature was not used much. So it did
>> not make it into C++.
> 
> I refer to that book as Stroustrup's "Design Rationalization and
> Mutation of C++".  That's section 16.6, "Resumption vs. Termination".
> At a meeting in 1991, several people who had had long experience with
> languages with continuable exceptions (including Mary Fontana from TI
> and Jim Mitchell from Sun/PARC) basically said "continuable exceptions
> are not particularly useful except for debugging, and they allowed us
> to write bad code."

Here is a more complete quotation:

<blockquote cite="http://www.research.att.com/~bs/bs_faq2.html#resume">
Why can't I resume after catching an exception?

In other words, why doesn't C++ provide a primitive for returning to the 
point from which an exception was thrown and continuing execution from 
there?

Basically, someone resuming from an exception handler can never be sure 
that the code after the point of throw was written to deal with the 
execution just continuing as if nothing had happened. An exception 
handler cannot know how much context to "get right" before resuming. To 
get such code right, the writer of the throw and the writer of the catch 
need intimate knowledge of each others code and context. This creates a 
complicated mutual dependency that wherever it has been allowed has led 
to serious maintenance problems.

I seriously considered the possibility of allowing resumption when I 
designed the C++ exception handling mechanism and this issue was 
discussed in quite some detail during standardization. See the exception 
handling chapter of The Design and Evolution of C++.

If you want to check to see if you can fix a problem before throwing an 
exception, call a function that checks and then throws only if the 
problem cannot be dealt with locally. A new_handler is an example of this.
</blockquote>

Agree with this point of view or not, I do respect the approach of 
having actually asked people with experience using languages which 
supported "resumption" to try and help inform the decision.  Kent, if 
you have the time and/or energy, I would be interested in reading a 
response to some of the views expressed by Stroustrup in this quote.  Of 
course, I should probably go back and read your paper(s) on the subject, 
at which I had only glanced in the past, before asking you to possibly 
repeat yourself.  I have Peter Seibel to thank for introducing them to 
me in _Practical Common Lisp_ in such a way that I started to understand 
how they are different from C++/Java style exceptions.  I do not really 
have any practical experience with Lisp conditions.
From: Kent M Pitman
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <uk5o0b2c7.fsf@nhplace.com>
Damien Kick <·····@earthlink.net> writes:

> bob_bane wrote:
> > On Nov 27, 9:06 am, Rainer Joswig <······@lisp.de> wrote:
> >> One of the best examples was that the feature that you can
> >> continue from an error (and the handler is called
> >> without unwinding the stack) didn't make it into the C++ standard.
> >> Though Stroustrup (Design and Evolution of C++) reports
> >> that they had people from Texas Instruments with Lisp
> >> Machine experience explaining the Lisp Machine error handling.
> >> According to the TI guys (can't remember the exact wording, don't have
> >> the book handy), this feature was not used much. So it did
> >> not make it into C++.
> > I refer to that book as Stroustrup's "Design Rationalization and
> > Mutation of C++".  That's section 16.6, "Resumption vs. Termination".
> > At a meeting in 1991, several people who had had long experience with
> > languages with continuable exceptions (including Mary Fontana from TI
> > and Jim Mitchell from Sun/PARC) basically said "continuable exceptions
> > are not particularly useful except for debugging, and they allowed us
> > to write bad code."
> 
> Here is a bit more complete a quotation:
> 
> <blockquote cite="http://www.research.att.com/~bs/bs_faq2.html#resume">
> Why can't I resume after catching an exception?
> 
> In other words, why doesn't C++ provide a primitive for returning to
> the point from which an exception was thrown and continuing execution
> from there?
> 
> Basically, someone resuming from an exception handler can never be
> sure that the code after the point of throw was written to deal with
> the execution just continuing as if nothing had happened. An
> exception handler cannot know how much context to "get right" before
> resuming. To get such code right, the writer of the throw and the
> writer of the catch need intimate knowledge of each others code and
> context. This creates a complicated mutual dependency that wherever it
> has been allowed has led to serious maintenance problems.
> 
> I seriously considered the possibility of allowing resumption when I
> designed the C++ exception handling mechanism and this issue was
> discussed in quite some detail during standardization. See the
> exception handling chapter of The Design and Evolution of C++.
> 
> If you want to check to see if you can fix a problem before throwing
> an exception, call a function that checks and then throws only if the
> problem cannot be dealt with locally. A new_handler is an example of
> this.
> </blockquote>
> 
> Agree with this point of view or not, I do respect the approach of
> having actually asked people with experience using languages which
> supported "resumption" to try and help inform the decision.  Kent, if
> you have the time and/or energy, I would be interested in reading a
> response to some of the views expressed by Stroustrup in this quote.
> Of course, I should probably go back and read your paper(s) on the
> subject, at which I had only glanced in the past, before asking you to
> possibly repeat yourself.  I have Peter Seibel to thank for
> introducing them to me in _Practical Common Lisp_ in such a way that I
> started to understand how they are different from C++/Java style
> exceptions.  I do not really have any practical experience with Lisp
> conditions.

Well, I think he got bad advice or made some bad decisions, but anyway,
for whatever reason, I just disagree with the choice he made.  (Though I
applaud his willingness to document his reasons so we can all discuss
them.  One reason I have a lot of thoughts on things is the number of
decisions of my own and others that I've seen gone awry, so don't take
anything I say to be a criticism of him.  I just don't like the choice in
this case.)  But here are some somewhat hastily tapped out remarks, probably
not as well thought through as what he wrote even.  As usual, sorry for
typos, etc.  Ask me if I left something unclear.

If I write 

 { f(x); g(x); }

in a traditional language, you could get all alarmist and insist that
f must never return  because there's no guarantee that g will do the
right thing .. perhaps it doesn't expect f to return.

I think the great insight of the New Error System (on the Lisp Machine)
was that there was nothing special about returning from an error that
distinguishes it from any other kind of return.

Part of the documentation of a function is that it either is or is not
expected to return.  So error does not return and signal does, and people
choose which they want to call based on their willingness to fall through.
But certainly the creation of a restart point is a proof that there's 
a willingness to return, so I just don't see why that's an issue at all.
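That contract can be made concrete with a minimal sketch (the condition
and the two function names here are hypothetical, purely for
illustration):

```lisp
;; Sketch of the documented contracts: SIGNAL returns normally if no
;; handler transfers control, while ERROR never returns.
(define-condition progress-noted (condition) ())

(defun notice-progress (n)
  (signal 'progress-noted)          ; falls through if nobody handles it,
  (format t "still going: ~D~%" n)) ; so callers must expect a return

(defun require-positive (n)
  (when (minusp n)
    (error "~D is not positive" n)) ; documented never to return
  (sqrt n))
```

Whether code after the signalling form is reachable is thus part of the
function's documented interface, exactly like any other return.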

Another possible way of restating what Stroustrup is saying, and I don't
mean to put words in his mouth--I'm just speculating trying to figure
out what he meant--is that "people shouldn't program dynamically twisty
combinations of function calls and closures".  That would again be
ridiculous, so I hope he's not saying that.  If you look at the sample
code for the condition system you'll be struck by how it's nothing more
than a bunch of very straightforward function calls and macros.  So the
question is how that could be dangerous without, basically, all function
calls and/or macros being dangerous.  I just don't get it.

I have for a long time described restarts as being just
continuations+reflection.  They happen at a point in a program when
the choice point is found but you don't know which continuation to
invoke.  But surely there's nothing wrong with invoking a continuation
any more than returning or any more than invoking a delegate.

And it would work fine with strong typing, too, since under each
protocol there is a specific set of arguments that things take.
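A concrete miniature of that "continuations + reflection" view, using
only standard operators (the fallback value 0 is just an arbitrary
choice for the example):

```lisp
;; RESTART-CASE establishes a named continuation with a known argument
;; list; a handler reflects on the available restarts with FIND-RESTART
;; and invokes one with the arguments its protocol calls for.
(defun parse-entry (string)
  (restart-case (parse-integer string)
    (use-value (v)
      :report "Supply a value to use instead."
      v)))

(handler-bind ((parse-error
                 (lambda (c)
                   (let ((r (find-restart 'use-value c)))
                     (when r (invoke-restart r 0))))))
  (parse-entry "oops"))
;; => 0, because the handler invoked the USE-VALUE continuation
```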

What he may be referring to is that C++ made a mess of the unwind-protect
operation, which you have to kludge up (as boost does) through destructors
on stack-allocated objects.  That may mean implicitly that people who were
not maintaining decent stack discipline were rudely exposed by injecting
restarts too easily into code that was not prepared for that.  But I don't
think it's fair to lay the blame on restarts for that, since the real blame
if that's the case goes to the language for having created a non-modular
case.

I'd liken it to the practice I've seen at some point in the past of
vendors saying "we poll for interrupts, and only in certain specific
and predictable places, so it's ok to not write without-interrupts 
if you're in what wants to be a critical section because you're implicitly
in one just because the implementation promises".  That's just broken,
since the implementation should make you write the without-interrupts
(or without-preemption or whatever) and then it should optimize cases
where it knows that the rule allows it to be removed.  Doing anything else
means that if someone goes to change the compiler, all code will break.
And yet, it's not fair to say "that means it's dangerous to poll in other
places than were originally thought of".  What's dangerous is, instead,
to tell people they can rely on contingent truths [as per Kripke].

It is completely ridiculous to claim that any materially useful
protocol can be built on his workaround suggestion:
 | If you want to check to see if you can fix a problem before 
 | throwing an exception, call a function that checks and then 
 | throws only if the problem cannot be dealt with locally. 
since the WHOLE POINT of the error system is not to add functionality
but to add protocol.  And protocol is not about calling your own functions,
it's about calling those supplied by someone else, someone who didn't
conspire with you in the design, but rather plugged into a framework
on the promise there would be someone at the "other end". By not providing
a way to register something that could do this check, and by not providing
a way to search the registered checks, he has forced each user into doing
precisely what each user is not empowered to do: to write their own
condition system.

When I wrote the sample implementation [1] of Revision 18 of the CL
condition system ages ago there was no part I couldn't write in terms
of the actions I wanted to happen.. the only part I couldn't write was
"making everyone call my functions".  That part has to be given by the
system.  And if it is not, that's the end of the discussion.

There may also have been an issue for C++ that passing arguments over the
control return was hard.  That was before delegates, and it is such a 
pain in the neck to use parameterized functions that I can easily imagine
someone not wanting to bother (though I assume Stroustrup could have 
managed it).  And macro abstraction in C++ is a pain, which could have
contributed, too.  So maybe again he was really complaining about how 
darned cumbersome and inflexible C++ is syntactically.  I'd agree with 
that if he wanted to say that.  

C# could have, and should have, done better.  (I'm not a big fan of
static languages, but I do use them, and among that space of
languages, I really like the design of C# as a local optimum in a
design space I wish I didn't have to spend much time in).  It's a bit
disappointing that C# didn't get this right because it did make some
good decisions and really thoughtful decisions on a number of other
things.  Maybe it's because the CLR has no support for it.  Ditto with
the JVM and Java.  Alas.

But nothing that any of these languages couldn't just suddenly fix if
they get a mind to.

Meanwhile it underscores my point about why I stubbornly stick to
Lisp.  It's not that I wouldn't use something that came along if it
was just as good.  It's that people seem so far intent on not making
something that's just as good.  I still find many useful things in
Lisp, and particularly Common Lisp, that are not picked up elsewhere.
When there's a legitimately better Lisp dialect, maybe I'll use that.
When there's a better language, why wouldn't I use that?  One reason I
wrote the 2001 condition paper [2] was that it was for a forum that
would not just be about Lisp, and I thought maybe it would help some
of the ideas get out.  And maybe ultimately they will.  It sometimes
takes time.  Meanwhile, I continue to use and like CL.

[1] http://www.nhplace.com/kent/CL/Revision-18.lisp.txt

[2] http://www.nhplace.com/kent/Papers/Condition-Handling-2001.html
From: Maciej Katafiasz
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <fip0f8$rov$1@news.net.uni-c.dk>
Den Fri, 30 Nov 2007 02:48:56 -0500 skrev Kent M Pitman:

> Well, I think he got bad advice or made some bad decisions, but anyway,
> for whatever reason, I just disagree with the choice he made.  (Though I
> applaud his willingness to document his reasons so we can all discuss
> them.  One reason I have a lot of thoughts on things is the number of
> decisions of my own and others that I've seen gone awry, so don't take
> anything I say to be a criticism of him.  I just don't like the choice
> in this case.)  But here are some somewhat hastily tapped out remarks,
> probably not as well thought through as what he wrote even.  As usual,
> sorry for typos, etc.  Ask me if I left something unclear.
> 
> If I write
> 
>  { f(x); g(x); }
> 
> in a traditional language, you could get all alarmist and insist that f
> must never return  because there's no guarantee that g will do the right
> thing .. perhaps it doesn't expect f to return.
> 
> I think the great insight of the New Error System (on the Lisp Machine)
> was that there was nothing special about returning from an error that
> distinguishes it from any other kind of return.
> 
> Part of the documentation of a function is that it either is or is not
> expected to return.  So error does not return and signal does, and
> people choose which they want to call based on their willingness to fall
> through. But certainly the creation of a restart point is a proof that
> there's a willingness to return, so I just don't see why that's an issue
> at all.

I think there is some fundamental confusion about what restarts are and 
aren't, and it's too easy to fall into the trap of thinking about them as 
some kind of magic, *without regard for the bundled protocol*. In fact, I 
did that very thing when I was going through the condition system 
interface and wrapping my head around all the implications. There are two 
things to understanding restarts properly:

1. Conditions and restarts are just functions + some syntactic sugar 
around them to aid with common cases.

2. There is an associated *protocol*. The language "support" for restarts 
is actually a tiny addition that allows expressing the protocol in a way 
that would be impossible to add in user code. Everything else is just a 
small matter of programming. In particular, the only "magic" primitive is 
SIGNAL; ERROR and friends are _not_.

The problems arise when you take restarts for more than they are and 
associate some magical properties with them (as I did). Consider this: 
the standard says a restart, as established by RESTART-BIND, can either 
transfer control non-locally, or return. Therefore, you could try to 
write the following:

(defun save-data ()
  nil)

(defun can-error ()
  (restart-bind ((ignore (lambda ()
                           ;; This should continue after ERROR and DO-MORE-STUFF
                           nil)
                   :report-function (lambda (s)
                                      (write "Ignore error and continue anyway"
                                             :stream s))))
    (unless (save-data)
      (error "Could not save data")
      (do-more-stuff))))

(defun try-and-continue ()
  (handler-bind ((error #'(lambda (c)
                            (declare (ignore c))
                            (invoke-restart 'ignore))))
    (can-error)))

I was initially very confused when it invoked the debugger, and choosing 
IGNORE there only produced the message "Restart returned NIL". Doing away 
with that confusion takes two steps:

1. Understanding that signalling functions are not magic. The fact that 
(error 'foo) breaks into the debugger does not come from the condition, 
but from the fact that ERROR is essentially defined as[1]:

(defun error (datum)
  (signal datum)
  (invoke-debugger datum))

2. The debugger break is just a function call, invoking a restart is 
also a function call, and the restart itself is no magic but a function, 
so all it can do is whatever a function can do. The only odd thing about 
restarts is that their dynamic environment is adjusted slightly at the 
time they are run. Therefore, if a restart returns a value, what will 
happen is what always happens, i.e. its caller will receive that value. 
And indeed, the debugger receives it. What *won't* happen is a magic 
return to the point where ERROR was called, because that's not part of 
the protocol specified by ERROR.
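To get behaviour resembling "continue past the ERROR", the signalling
site has to advertise it as part of the protocol, e.g. with CERROR (or
an explicit RESTART-CASE wrapped around the ERROR call). A minimal
sketch, reusing the SAVE-DATA and hypothetical DO-MORE-STUFF from above:

```lisp
;; CERROR establishes a CONTINUE restart around its own call site, so
;; invoking that restart really does resume after the error form.
(defun can-error ()
  (unless (save-data)
    (cerror "Ignore the error and continue anyway."
            "Could not save data")
    (do-more-stuff)))  ; reached when the CONTINUE restart is invoked

(defun try-and-continue ()
  (handler-bind ((error (lambda (c) (continue c))))
    (can-error)))
```

The difference is entirely in the protocol: CERROR promises a
continuation point; plain ERROR does not.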

I believe Stroustrup's confusion stems from thinking along the above 
code's lines, which gives rise to the fear that once you introduce 
restarts, the callers will somehow be able to override each and every 
exception and force continuing anyway.

> C# could have, and should have, done better.  (I'm not a big fan of
> static languages, but I do use them, and among that space of languages,
> I really like the design of C# as a local optimum in a design space I
> wish I didn't have to spend much time in).

It's a very apt observation, and very much aligned with my own view on 
C#. It's "just a better Java", but as far as Javas go, it's a good one. 
Perhaps going beyond a certain point of improving upon Java was just too 
much unlike it, so it met with natural resistance of the language being 
designed to *be* Java.

Cheers,
Maciej

[1] All the hair of what "datum" can be aside.
From: Pascal Costanza
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <5r9uetF130lmfU1@mid.individual.net>
Damien Kick wrote:
> bob_bane wrote:
>> On Nov 27, 9:06 am, Rainer Joswig <······@lisp.de> wrote:
>>> One of the best examples was that the feature that you can
>>> continue from an error (and the handler is called
>>> without unwinding the stack) didn't make it into the C++ standard.
>>> Though Stroustrup (Design and Evolution of C++) reports
>>> that they had people from Texas Instruments with Lisp
>>> Machine experience explaining the Lisp Machine error handling.
>>> According to the TI guys (can't remember the exact wording, don't have
>>> the book handy), this feature was not used much. So it did
>>> not make it into C++.
>>
>> I refer to that book as Stroustrup's "Design Rationalization and
>> Mutation of C++".  That's section 16.6, "Resumption vs. Termination".
>> At a meeting in 1991, several people who had had long experience with
>> languages with continuable exceptions (including Mary Fontana from TI
>> and Jim Mitchell from Sun/PARC) basically said "continuable exceptions
>> are not particularly useful except for debugging, and they allowed us
>> to write bad code."
> 
> Here is a bit more complete a quotation:
> 
> <blockquote cite="http://www.research.att.com/~bs/bs_faq2.html#resume">
> Why can't I resume after catching an exception?
> 
> In other words, why doesn't C++ provide a primitive for returning to the 
> point from which an exception was thrown and continuing execution from 
> there?
> 
> Basically, someone resuming from an exception handler can never be sure 
> that the code after the point of throw was written to deal with the 
> execution just continuing as if nothing had happened. An exception 
> handler cannot know how much context to "get right" before resuming. To 
> get such code right, the writer of the throw and the writer of the catch 
> need intimate knowledge of each other's code and context. This creates a 
> complicated mutual dependency that wherever it has been allowed has led 
> to serious maintenance problems.
> 
> I seriously considered the possibility of allowing resumption when I 
> designed the C++ exception handling mechanism and this issue was 
> discussed in quite some detail during standardization. See the exception 
> handling chapter of The Design and Evolution of C++.
> 
> If you want to check to see if you can fix a problem before throwing an 
> exception, call a function that checks and then throws only if the 
> problem cannot be dealt with locally. A new_handler is an example of this.
> </blockquote>

I think resumable exceptions make a lot less sense in a static language 
(where the goal is that the program is 'correct' and shouldn't require 
interactive 'fixing' at any point in time).

The problem is that whenever there is an exception, there are generally 
multiple ways to get out of the exceptional situation and resume. In a 
dynamic language, this is pretty straightforward: You present the 
different possibilities to the programmer / user and let them decide 
what to do. (This could, for example, be presented in user-friendly 
dialogs.)

In a static language, you would have to rely on automagically picking 
the 'correct' restart - and I guess that's where the mental model of the 
average static thinker doesn't cope anymore and they just bail out.


Pascal

-- 
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Barry Margolin
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <barmar-14BF11.05392030112007@localhost>
In article <···············@mid.individual.net>,
 Pascal Costanza <··@p-cos.net> wrote:

> I think resumable exceptions make a lot less sense in a static language 
> (where the goal is that the program is 'correct' and shouldn't require 
> interactive 'fixing' at any point in time).

Many (most?) exceptions have little to do with program "correctness", 
they're generally related to dynamic input data.  Is "file not found" a 
bug in the program?

The ones that are due to program bugs are hardware exceptions like bus 
error or segmentation violation, which are caused by improper use of 
pointers.

I suppose dynamically-typed languages do allow for some exceptions -- 
TYPE-ERROR, NO-APPLICABLE-METHOD, WRONG-NUMBER-OF-ARGUMENTS, etc. -- 
that would be prevented by compile-time type checking.  But these aren't 
the most interesting uses of the condition system.
-- 
Barry Margolin
Arlington, MA
From: Pascal Costanza
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <5raaonF13b1huU1@mid.individual.net>
Barry Margolin wrote:
> In article <···············@mid.individual.net>,
>  Pascal Costanza <··@p-cos.net> wrote:
> 
>> I think resumable exceptions make a lot less sense in a static language 
>> (where the goal is that the program is 'correct' and shouldn't require 
>> interactive 'fixing' at any point in time).
> 
> Many (most?) exceptions have little to do with program "correctness", 
> they're generally related to dynamic input data.  Is "file not found" a 
> bug in the program?
> 
> The ones that are due to program bugs are hardware exceptions like bus 
> error or segmentation violation, which are caused by improper use of 
> pointers.
> 
> I suppose dynamically-typed languages do allow for some exceptions -- 
> TYPE-ERROR, NO-APPLICABLE-METHOD, WRONG-NUMBER-OF-ARGUMENTS, etc. -- 
> that would be prevented by compile-time type checking.  But these aren't 
> the most interesting uses of the condition system.

I don't know in general, but for example many of the Java exceptions 
don't sound as if they were related to dynamic input data.

See http://java.sun.com/j2se/1.5.0/docs/api/java/lang/Exception.html and 
http://java.sun.com/j2se/1.5.0/docs/api/java/lang/RuntimeException.html

BTW, my favorite Java exception is this: 
http://java.sun.com/j2se/1.5.0/docs/api/java/lang/UnsupportedOperationException.html

;)



Pascal

-- 
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Pascal Costanza
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <5raaqsF13b1huU2@mid.individual.net>
Pascal Costanza wrote:
> Barry Margolin wrote:
>> In article <···············@mid.individual.net>,
>>  Pascal Costanza <··@p-cos.net> wrote:
>>
>>> I think resumable exceptions make a lot less sense in a static 
>>> language (where the goal is that the program is 'correct' and 
>>> shouldn't require interactive 'fixing' at any point in time).
>>
>> Many (most?) exceptions have little to do with program "correctness", 
>> they're generally related to dynamic input data.  Is "file not found" 
>> a bug in the program?
>>
>> The ones that are due to program bugs are hardware exceptions like bus 
>> error or segmentation violation, which are caused by improper use of 
>> pointers.
>>
>> I suppose dynamically-typed languages do allow for some exceptions -- 
>> TYPE-ERROR, NO-APPLICABLE-METHOD, WRONG-NUMBER-OF-ARGUMENTS, etc. -- 
>> that would be prevented by compile-time type checking.  But these 
>> aren't the most interesting uses of the condition system.
> 
> I don't know in general, but for example many of the Java exceptions 
> don't sound as if they were related to dynamic input data.

...which could of course be because of premature overengineering of the 
Java exception hierarchy, which is not one of the best ones anyway...


Pascal

-- 
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Rainer Joswig
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <joswig-3571B4.16021130112007@news-europe.giganews.com>
In article <····························@localhost>,
 Barry Margolin <······@alum.mit.edu> wrote:

> In article <···············@mid.individual.net>,
>  Pascal Costanza <··@p-cos.net> wrote:
> 
> > I think resumable exceptions make a lot less sense in a static language 
> > (where the goal is that the program is 'correct' and shouldn't require 
> > interactive 'fixing' at any point in time).
> 
> Many (most?) exceptions have little to do with program "correctness", 
> they're generally related to dynamic input data.  Is "file not found" a 
> bug in the program?
> 
> The ones that are due to program bugs are hardware exceptions like bus 
> error or segmentation violation, which are caused by improper use of 
> pointers.
> 
> I suppose dynamically-typed languages do allow for some exceptions -- 
> TYPE-ERROR, NO-APPLICABLE-METHOD, WRONG-NUMBER-OF-ARGUMENTS, etc. -- 
> that would be prevented by compile-time type checking.  But these aren't 
> the most interesting uses of the condition system.

Right, the most interesting uses of the condition system are
those in the development environment, the application
or even the operating system.

-- 
http://lispm.dyndns.org/
From: Kent M Pitman
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <ulk8faezy.fsf@nhplace.com>
Pascal Costanza <··@p-cos.net> writes:

> The problem is that whenever there is an exception, there are
> generally multiple ways to get out of the exceptional situation and
> resume. In a dynamic language, this is pretty straightforward: You
> present the different possibilities to the programmer / user and let
> them decide what to do. (This could, for example, be presented in
> user-friendly dialogs.)
> 
> In a static language, you would have to rely on automagically picking
> the 'correct' restart - and I guess that's where the mental model of
> the average static thinker doesn't cope anymore and they just bail out.

I don't see any reason why a static language is impoverished in the
slightest in its set of options.  Can you elaborate on that?

Also, while there are many things affecting static language use (some
of which are coercion, some of which are lack of training otherwise,
and some of which are perhaps actual manner of thinking), I do worry
that some of the issue of static language use is based on an implicit
premise that things do not change, almost out of a desire to minimize
cognitive dissonance, because if you think that's true you will be
more likely to think it's ok to program in a static language.  And if
you get used to things not changing, you are more confident that what
you used to think was true is still true, and less likely to explore
alternatives or reexamine alleged lemmas.  I've sometimes shortened
this to say that people who use static languages are going to think
statically, or vice versa, though that's plainly an oversimplification
mostly for the purpose of highlighting an idea, not for the purpose of
pigeon-holing people... 

But however you look at it, it all comes down to this question I
periodically harp on (we hit on it with the gethash-randomly debate
quite recently) about whether people are content with the problem
description they receive and merely try to implement it, or whether
they explore the space of nearby problem descriptions and try to
adjust the world so there is a solution, where under a literal reading
none might have appeared.  Maciej made this point on a companion
message in this thread, so I won't reiterate the power of things like
"protocol" to help one feel less comfortable that there is a Plan in
the space of concerns such as those Stroustrup raises.  And as people
have rightly pointed out on many occasions, if worry that someone
could program the wrong thing if they weren't careful were a strong
design principle, we certainly wouldn't have C nor probably any Turing
powerful programming language.
From: D Herring
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <3rOdneS9CaHcJc3anZ2dnUVZ_tGonZ2d@comcast.com>
Kent M Pitman wrote:
> I don't see any reason why a static language is impoverished in the
> slightest in its set of options.  Can you elaborate on that?


I think he meant the people and their thought model; not the language 
proper.  Aren't most lisps implemented using such static languages as 
C and assembly?  Assembly being the epitome of a static language -- 
adding new instructions generally requires buying a new processor.

Now try telling average Joe programmer that self-modifying code (i.e. 
a dynamic language) is a good thing.  They glimpse the power, and run 
in fear and socially-instilled ignorance (e.g. "isn't that what 
viruses do?").  The key to harnessing such power is a "dynamic" 
language which structures the modifications just as structured 
programming harnessed the goto.

My $0.02US (now more than Canadian!)
- Daniel

[xe.com, 2007.12.01 01:22:50 UTC -- 1 USD = 1.00005 CAD]
From: Rob Warnock
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <L8mdna02B64zfc3anZ2dnUVZ_oCvnZ2d@speakeasy.net>
D Herring  <········@at.tentpost.dot.com> wrote:
+---------------
| Aren't most lisps implemented using such static languages as 
| C and assembly?
+---------------

No, that's another ancient "Lisp myth". Most Lisps with decent
compilers are written in Lisp, with perhaps a *tiny* amount
of C or assembler on the side to "get the world started" in
a C-dominated O/S. For just one example, CMUCL's executable
is ~350 KB of C-compiled binary machine code, which then
loads ~20 MB of Lisp-compiled binary machine code. So the C
code is only ~1.5% of the total binary code size.


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: D Herring
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <F5ydneXUL8TIiMzanZ2dnUVZ_uHinZ2d@comcast.com>
Rob Warnock wrote:
> D Herring  <········@at.tentpost.dot.com> wrote:
> +---------------
> | Aren't most lisps implemented using such static languages as 
> | C and assembly?
> +---------------
> 
> No, that's another ancient "Lisp myth". Most Lisps with decent
> compilers are written in Lisp, with perhaps a *tiny* amount
> of C or assembler on the side to "get the world started" in
> a C-dominated O/S. For just one example, CMUCL's executable
> is ~350 KB of C-compiled binary machine code, which then
> loads ~20 MB of Lisp-compiled binary machine code. So the C
> code is only ~1.5% of the total binary code size.

I wasn't trying to propagate the myth, but your clarification still
supports what I was trying to say.  Nothing in the static language
prevents the construction of an interactive, dynamic language sitting
on top of it.  Many Lisps dynamically create native machine
code/assembly; others (GCL) compile and link C code; but at the end of
the day, they are all an abstraction layer on top of that static 
substrate.

Now whether this line of thinking leads to a "Turing tarpit" of sorts
(that static languages are superior; the dynamic features can always
be built on top) is another story.

- Daniel
From: Maciej Katafiasz
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <fir4pj$vl0$3@news.net.uni-c.dk>
Den Sat, 01 Dec 2007 03:03:33 -0500 skrev D Herring:

> Now whether this line of thinking leads to a "Turing tarpit" of sorts
> (that static languages are superior; the dynamic features can always be
> built on top) is another story.

This is not a useful argument at all; static languages can be built on 
top of dynamic ones just as easily (or with just as much difficulty, as 
the case may be).

Cheers,
Maciej
From: Thomas Lindgren
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <87fxymejhv.fsf@dev.null>
D Herring <········@at.tentpost.dot.com> writes:

> Kent M Pitman wrote:
> > I don't see any reason why a static language is impoverished in the
> > slightest in its set of options.  Can you elaborate on that?
> 
> I think he meant the people and their thought model; not the
> language proper.  Aren't most lisps implemented using such static
> languages as C and assembly?  Assembly being the epitome of a static
> language -- adding new instructions generally requires buying a new
> processor.

Assembly 'the epitome of ... static'? Do you mean the usual untyped
almost-machine language in which one can write self-modifying code?
Huh.

Best,
                        Thomas
-- 
Thomas Lindgren			

"Man at last knows that he is alone in the unfeeling immensity of the
universe, out of which he emerged only by chance." -- Jacques Monod
From: Pascal Costanza
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <5rd5h6F13bvn3U1@mid.individual.net>
Kent M Pitman wrote:
> Pascal Costanza <··@p-cos.net> writes:
> 
>> The problem is that whenever there is an exception, there are
>> generally multiple ways to get out of the exceptional situation and
>> resume. In a dynamic language, this is pretty straightforward: You
>> present the different possibilities to the programmer / user and let
>> them decide what to do. (This could, for example, be presented in
>> user-friendly dialogs.)
>>
>> In a static language, you would have to rely on automagically picking
>> the 'correct' restart - and I guess that's where the mental model of
>> the average static thinker doesn't cope anymore and they just bail out.
> 
> I don't see any reason why a static language is impoverished in the
> slightest in its set of options.  Can you elaborate on that?

I wasn't precise enough in my wording, but it is indeed the case that I 
am talking more about the mindset of static-language proponents than 
about static languages per se.

It is the case that with resumable exceptions, there are always many 
options for resuming, and with the design of the Common Lisp condition 
system, for example, several options may even be available under the 
same name. So there is no 'guarantee' that the 'right thing' happens 
when you try to programmatically resume execution of the program.

Proponents of static languages typically have a hard time accepting the 
idea that it may be good to let the user steer the behavior of the program. 
(What if the user makes the wrong choice? Aren't our 'correctness 
guarantees' potentially violated then?) So when they hear about 
resumable exceptions they first think about the programmatic option, not 
about the interactive options. (And if they happen to accept the 
interactive options, then they only accept it 'for debugging purposes', 
not unlike the idea that redefining functions, classes and other 
entities at runtime should also only be allowed 'for debugging purposes'.)

Maybe it's not so much a philosophical question, but rather more 
profane: Exception handling mechanisms are by nature dynamically scoped, 
and static thinkers typically feel uncomfortable with dynamic scoping, 
whereas for dynamic thinkers it's one of many possible options.

I am probably strongly oversimplifying here, but I think that's the 
general direction.


Pascal

-- 
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Kent M Pitman
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <uhcj2b7s1.fsf@nhplace.com>
Pascal Costanza <··@p-cos.net> writes:

> It is the case that with resumable exceptions, there are always many
> options to resume, and with the design of the Common Lisp condition
> system, for example, several options may even be available under the
> same name. So there is no 'guarantee' that the 'right thing' happens
> when you try to programmatically resume execution of the program.

That's why you can associate a restart with a continuation, btw.

Sometimes the handling of errors triggers recursive errors. (By far
the most common case of this is when you're in the thing I sometimes
call the "interactive handler", that is, the debugger, where the
code that's doing the handling is a human being typing to a REPL.
But it can happen programmatically, too.)  

For example: When a handler gets a problem, let's say a TYPE-ERROR due
to a wrong argument type, it can happen that you land in a recursive
error if you get another such error while trying to handle it.  Now
you're in a handler with probably two USE-VALUE and STORE-VALUE
restarts to choose from.  How do you decide?  The first part of the
answer is that you poke at the data to see if it's data you like
recovering from, and then you look around for options.  You should
generally do something like:
  #'(lambda (condition)
      (let ((r (find-restart 'use-value condition)))
        (when r (invoke-restart r whatever))))
and not the other form:
  #'(lambda (condition)
      (declare (ignore condition))
      (use-value whatever))
because in the latter case you are selecting the innermost restart
of kind use-value, while in the former you are selecting the innermost
RELEVANT restart of kind use-value [where relevance is defined as
"not having been specifically associated with a condition other than 
the given condition"].
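To make the difference concrete, here is a minimal Common Lisp sketch
(CAREFUL-SQRT is my invented example, not anything from the thread).
Because the form inside RESTART-CASE is a call to ERROR, the USE-VALUE
restart is automatically associated with the condition being signaled,
which is exactly what lets the FIND-RESTART form select the relevant
restart:

```lisp
;; A function that signals a TYPE-ERROR but offers a USE-VALUE restart.
;; Since the form inside RESTART-CASE is a call to ERROR, the restart
;; is associated with the signaled condition, so FIND-RESTART can
;; distinguish it from unrelated USE-VALUE restarts further out.
(defun careful-sqrt (x)
  (if (realp x)
      (sqrt x)
      (restart-case (error 'type-error :datum x :expected-type 'real)
        (use-value (v) (sqrt v)))))

;; The recommended handler: find the innermost USE-VALUE restart
;; relevant to THIS condition, then invoke it.  Calling (USE-VALUE 4)
;; directly would instead pick the innermost USE-VALUE restart of any
;; provenance.
(handler-bind ((type-error
                 (lambda (condition)
                   (let ((r (find-restart 'use-value condition)))
                     (when r (invoke-restart r 4))))))
  (careful-sqrt :not-a-number))   ; => 2.0
```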

> Proponents of static languages typically have a hard time accepting the
> idea that it may be good to let the user steer the behavior of the
> program. (What if the user makes the wrong choice? Aren't our
> 'correctness guarantees' potentially violated then?)

Yes, and the hard part is to see that this steering issue applies
everywhere... to all kinds of notations, not just conditions.
Conditions, I believe, are singled out disproportionately not because
of their nature but because they missed the boat in the metaphorical
Noah's Ark that separated the proto world of programming languages
that were initially conceived in early CS and the relatively small zoo
of animals that are present in the modern marketplace.  The world
reified around the Algol/Pascal/C lineage of languages, and they
didn't have those characteristics, so people have come to think of the
characteristics as strange and exotic and ill-formed, when really
their only crime was to be omitted and therefore not to have a careful
history of established practice.  People fear the unknown, whatever its
form.

It will presumably be almost inconceivable to people when I say this,
but there was HUGE opposition in the key players of the Maclisp
community to using base 10 by default.  It was base 8 originally and a
great many Lisp luminaries stubbornly insisted that was the right
base--a much better base than base 10. I wanted base 10 and found
almost no one to support my desire.  I recall Dave Touretzky wanted
base 10, too, but he was geographically elsewhere and even though we
had the ARPANET and email, a lot of things were still done in person
back then.  A lot of it was that I think people feared change.

Probably better documented (or at least documentable) was the
resistance to lexical scoping.  A LOT of people were VERY worried that
dynamic scoping had been used a long time and was tested (and, geez,
it didn't work that great since it compiled away if you didn't use
SPECIAL declarations, making compiled code and interpreted code work
way different from one another ... but people were used to that, even
as icky and rough-edged as it was, and were leery of this lexical
thing).  And yet, the people who had used lexical variables were
convinced that the fear was over nothing.  (I'm glad to say I didn't
resist the lexicality--it made a lot of sense and seemed to work well
in Scheme ... and Algol for that matter, so it had a fairly long
history.)

In fact, one of the strongest arguments in favor of lexical scoping
was that it was clear that with dynamic variables you couldn't as
easily write (even informal) proofs about certain program behaviors
people wanted to use all the time--people were getting routine program
bugs with it.  Discussion of it even showed up in papers [my 1980
paper on Special Forms mentions the accidental capture problem, though
I don't recall what terminology I used for describing it, and
other papers did as well].  And with lexical scoping it was the
opposite--people knew they could prove what they needed, by which
again I don't always mean formal proofs, I usually mean more practical
things like they could compile correct and predictable code from user
source code (... the practical result of proof technique matters more
to me than the texty form of it).  

And so my point is only that people in general have a history of
ignoring actual logic and actual proof technique when making the claim
that they're worried about correctness guarantees.  I've seen too many
smart people claim to me they're worried about correctness when they
can't show the correctness of what they're afraid to trade away and
when you can show them the situation would be improved otherwise.  And
this is just one more case.  So you have to learn to sometimes laugh
and say that people can be just emotional about it... and that's what
I think is in play here. Just mostly emotion. Fear of the unknown.
Lack of certainty about how to formulate things.

Remember, too, that some mathematicians [I don't know enough math to
know all the little corners of it, so I'll stop short of making overly
sweeping claims here] formulate errors by appeals to things like
bottom, which has a kind of viral/infectious feel and they get very
ill at ease reasoning about things when bottom shows up.  But the
whole point of the condition system was to transform the situation of
detected errors into the domain of "just regular programs" where the
memory system is still intact, the operators still apply as normal,
and logical consistency is still in play.  It's a leap of faith to
think this can be done, but having done it, it makes things easier.

> So when they hear
> about resumable exceptions they first think about the programmatic
> option, not about the interactive options. (And if they happen to
> accept the interactive options, then they only accept it 'for
> debugging purposes', not unlike the idea that redefining functions,
> classes and other entities at runtime should also only be allowed 'for
> debugging purposes'.)

Hmmm.  Interesting.  I've often thought they ignore the programmatic
angle entirely and are just afraid that people will try to meddle with
program internals from the debugger.

In my background, for example, the ITS operating system (Incompatible
Timesharing System) for the PDP-10, one worked at a debugger (DDT) in
which you could both issue commands and examine your program memory
all in one unified command set.  I imagine this is a metaphor for what
people have traditionally imagined Lisp must be if they didn't get close
up and learn it.  At DDT, you could type ":lisp" and it would start lisp,
but you could also type "5000/" and it would open memory location 5000
and show you its contents (in various formats).  If at that moment you
typed a number and pressed RETURN, it would store that number in that
location.  That was very scary to the uninitiated, and felt like at any
instant you could break something.  The words "is in an error state and
lets you choose what to do", when spoken to someone who thinks "error state"
= "core dump" (or, at least, the program is in a screwed-up state), make
people queasy from the start.  And if they think the only operators you
might have to
offer are just "calls to functions in a manner uncoordinated with the
running program", that might sound scary.

I think the subtlety they are not seeing is that we aren't suggesting
they get the C++ equivalent of a REPL and start depositing data in 
random places in an uncontrolled way ... rather, we're suggesting that
the designer of the program specifically choose a set of options that
are appropriate to use in an error that might occur in a dynamic part
of a program's execution, and so the invocation of those recovery
operations is NOT fundamentally uncoordinated with the program, and so
is not unpredictable exactly because of that.

I think about the debugger as a form of discovery, where I might learn
about things my program should be doing.  Over time, things I do
manually in the debugger tend to migrate into programs so I don't have
to spend time there.

I've thought for decades now, ever since this came along, that it
would be cool to have something that "watched" what people clicked on
in the debugger and said things like "I notice you just went into the
debugger and asked to see the value of FOO, then returned that value.
Do you want to attach a handler to some function on the stack which
will always do this action?"  Or perhaps it would allow you to do it
conditionally.  Or maybe it would be a paper clip thing that comes out
the next time you see the same error, saying "last time you did..."
There is a rich opportunity for research here that I've always been
surprised to see no one explore.  Perhaps a good paper topic for
someone who's sitting around trying to figure out how to write about
Lisp for a conference that isn't Lisp-centric but is looking for
"Advances in..." or "Recent topics in...", to include anything from AI
to User Interfaces.  [If anyone does write such a paper, they should
send me a copy so I can know to stop worrying it's not being done.]

> Maybe it's not so much a philosophical question, but rather more
> profane: Exception handling mechanisms are by nature dynamically
> scoped, and static thinkers typically feel uncomfortable with dynamic
> scoping, whereas for dynamic thinkers it's one of many possible
> options.

Heh.  Could be.  Oddly, I don't know that a lot of people understand
exception systems well enough to realize this.  For example, I
actually worked for a while with some people trying to design a
lexical errors system and finally came to the conclusion it can't
work. It's like trying to lexically compile your web server and client
together so you don't have to talk to anyone who dynamically
approaches.  Protocols are about disjoint pieces.  And yeah, that makes
it harder, but more interesting.  

But I think it's a sophisticated understanding of condition systems even
to get far enough to fear that aspect of them. Possible I guess, and worth
mentioning, but my personal take is that this probably isn't the dominant
factor.

I bet a simple, general fear of the unknown is closer.
 
> I am probably strongly oversimplifying here, but I think that's the
> general direction.

What is abstraction but oversimplification? :)
To be less simple would often be to get lost in the details...

And, for myself, I am probably UNDERsimplifying here, which is why I
run on for so much longer than others do. Sorry for rambling.  I never
know what parts people will care about and what they won't...
From: Paul Wallich
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <fisid4$sur$1@reader1.panix.com>
Kent M Pitman wrote:
> Pascal Costanza <··@p-cos.net> writes:
> 
>> It is the case that with resumable exceptions, there are always many
>> options to resume, and with the design of the Common Lisp condition
>> system, for example, several options may even be available under the
>> same name. So there is no 'guarantee' that the 'right thing' happens
>> when you try to programmatically resume execution of the program.
[...]
> I think the subtlety they are not seeing is that we aren't suggesting
> they get the C++ equivalent of a REPL and start depositing data in 
> random places in an uncontrolled way ... rather, we're suggesting that
> the designer of the program specifically choose a set of options that
> are appropriate to use in an error that might occur in a dynamic part
> of a program's execution, and so the invocation of those recovery
> operations is NOT fundamentally uncoordinated with the program, and so
> is not unpredictable exactly because of that.

I think this might be a very important insight. If you're used to the 
"fact" that an error will take you off the end of an array into garbage 
data, or stuff some arbitrary value into your program counter or your 
code space, then it will be hard to grasp that error conditions can 
almost always be cleanly repaired and/or unwound within the same 
programming framework. Interactive debugging in other contexts always 
seems to involve lying to the computer about what's going on.

paul
From: Geoff Wozniak
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <056877f5-544c-404b-953d-1d04e3b15aa9@d27g2000prf.googlegroups.com>
On Dec 1, 1:15 pm, Kent M Pitman <······@nhplace.com> wrote:
> I've thought for decades now, ever since this came along, that it
> would be cool to have something that "watched" what people clicked on
> in the debugger and...

It felt weird reading this because I'm working on something very much
like you described, except it watches what you use in your code and
makes suggestions about it (like type declarations for names, and type
definitions for classes and structures).  Still a long way from
finished, though (but I make liberal use of restarts).

The feeling of weirdness comes from my inability to find anyone
talking about such things (either colloquially or in the literature)
and that the way you phrased it is almost exactly how I phrased it to
myself. :)
From: Maciej Katafiasz
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <fiscea$nbs$1@news.net.uni-c.dk>
Den Sat, 01 Dec 2007 13:15:58 -0500 skrev Kent M Pitman:

>> Proponents of static languages typically have a hard time accepting the idea
>> that it may be good to let the user steer the behavior of the program.
>> (What if the user makes the wrong choice? Aren't our 'correctness
>> guarantees' potentially violated then?)
> 
> Yes, and the hard part is to see that this steering issue applies
> everywhere... to all kinds of notations, not just conditions. Conditions, I
> believe, are singled out disproportionately not because of their nature
> but because they missed the boat in the metaphorical Noah's Ark that
> separated the proto world of programming languages that were initially
> conceived in early CS and the relatively small zoo of animals that are
> present in the modern marketplace.  The world reified around the
> Algol/Pascal/C lineage of languages, and they didn't have those
> characteristics, so people have come to think of the characteristics as
> strange and exotic and ill-formed, when really their only crime was to
> be omitted and therefore not to have a careful history of established
> practice.  People fear the unknown, whatever its form.

I wonder if some sort of insight couldn't be gained from the final form 
C++ exceptions took, to try and reconstruct what might've been going 
through its designers' minds at the time. C++ exceptions are a constant 
source of problems, their odd suspension between static and dynamic 
causing them to get the worst of both worlds. Problems and hacks like 
"throw;" to work around the exception object being truncated to the base 
class declared in catch(), oddities of catching by pointer and by 
reference, the possibility of throwing destroyed objects, and the 
incredibly shabby introspection capabilities RTTI offers; they all hint 
at what might've been the idea of a condition system the C++ committee got. 
Presumably in this world introducing restarts was indeed a scary idea.

> Probably better documented (or at least documentable) was the resistance
> to lexical scoping.  A LOT of people were VERY worried that dynamic
> scoping had been used a long time and was tested (and, geez, it didn't
> work that great since it compiled away if you didn't use SPECIAL
> declarations, making compiled code and interpreted code work way
> different from one another ... but people were used to that, even as
> icky and rough-edged as it was, and were leery of this lexical thing). 

I insist that you explain this in greater detail.

> Heh.  Could be.  Oddly, I don't know that a lot of people understand
> exception systems well enough to realize this.  For example, I actually
> worked for a while with some people trying to design a lexical errors
> system and finally came to the conclusion it can't work. It's like
> trying to lexically compile your web server and client together so you
> don't have to talk to anyone who dynamically approaches.  Protocols are
> about disjoint pieces.  And yeah, that makes it harder, but more
> interesting.

As one of my favourite quotes states, "Functions are usually invoked at 
runtime...". I always found C++'s pretense of having a real static type 
system, when the starting point was portable PDP-10 assembly, laughable 
at best.

> And, for myself, I am probably UNDERsimplifying here, which is why I run
> on for so much longer than others do. Sorry for rambling.  I never know
> what parts people will care about and what they won't...

No, no, why do you think I started this thread, if not to get you to 
share some of the stories from when the Earth was still green?

Cheers,
Maciej
From: Kent M Pitman
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <umysu9kb2.fsf@nhplace.com>
Maciej Katafiasz <········@gmail.com> writes:

> > Probably better documented (or at least documentable) was the resistance
> > to lexical scoping.  A LOT of people were VERY worried that dynamic
> > scoping had been used a long time and was tested (and, geez, it didn't
> > work that great since it compiled away if you didn't use SPECIAL
> > declarations, making compiled code and interpreted code work way
> > different from one another ... but people were used to that, even as
> > icky and rough-edged as it was, and were leery of this lexical thing). 
> 
> I insist that you explain this in greater detail.

Well, approximately, when you wrote 
  (defun f (x) (g))
  (defun g () x)
in interpreted code, f would be the identity operation.
In compiled code, the non-free names would "compile away" (so F would
see an unused variable and call G without G being able to see its X).
G would see a free X and make it special, so any free X assigned by
global SETQ would become visible in compiled code.  This was made worse
by FEXPRs which were functions that didn't evaluate their arguments.
They were for implementing control structure, but had the problem that
people typically called EVAL in them, and so there was a risk that when
run interpreted they'd see their own variables. A FEXPR is kind of like
it had implicit &REST [there was no &REST concept primitively in MACLISP,
so that's an anachronistic description], so
 (defun foo fexpr (x) x)
 (foo a b c) => (A B C)
But
 (defun my-progn fexpr (forms)
   ;; inefficient but perhaps illustrative
   (car (last (mapcar #'eval forms))))
would work fine when MY-PROGN was compiled, but not when it was
interpreted, unless the forms contained no free reference to FORMS
(which would see the wrong value).
See http://www.nhplace.com/kent/Papers/Special-Forms.html
for more context and explanation of the problems with fexprs, including
the fact that the callers of a fexpr are hard to compile.
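(An aside of my own, not Kent's: in modern Common Lisp the fexpr's role
is played by macros, which dodge the capture problem entirely, because
the body forms are spliced into the caller's own code at expansion time
instead of being EVALed inside a function that has its own FORMS
variable. A sketch:)

```lisp
;; MY-PROGN as a macro rather than a fexpr: the body forms are spliced
;; into the caller's code at expansion time, so they are evaluated in
;; the caller's lexical environment, compiled or interpreted alike.
(defmacro my-progn (&rest forms)
  `(progn ,@forms))

;; A free reference to FORMS in the body now sees the caller's own
;; binding, never a variable internal to MY-PROGN's implementation.
(let ((forms '(a b c)))
  (my-progn 1 2 forms))  ; => (A B C)
```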

But basically, code was often annotated as "this only works interpreted"
or "this only works compiled".  Some things, like tricks with CAR and CDR
worked better if you did declarations to "compile out" the error checking.
Others, like things involving specials, worked better interpreted.  It was
messy, and mostly the goal was to get working code at all.  We all had
a million ideas and the systems were fighting us in various ways to keep
us from testing them, so we were just delighted to get something to work
some way or another... 

I have the impression this was even somewhat true of Interlisp, which
I never used, but did once read the manual for. I remember being surprised
by the fact that it had "compiler macros" but no corresponding interpreted
facility; rather, according to the doc, DWIM ["do what I mean", its heuristic
error correction facility] would "correct the error" of using a compiler
macro in interpreted code by expanding the interpreted code as if it should
be compiled and trying that.  Of course, DWIM was supposed to be an optional
facility, so that meant you couldn't turn off DWIM. And many really HATED
dwim--there were many horror stories about the "helpful" things it would
do that didn't turn out so benign.  It had both fans and foes.

But CL was interesting to me when it came along because it said that
the compiled code and interpreted code would work the same.  You'd think
earlier languages would have had this, too, but they didn't seem to. At
least not all of them.  There was a kind of evolution of style, mostly
through doing things badly, seeing that it was a problem, and trying to
get better.

> As one of my favourite quotes states, "Functions are usually invoked at 
> runtime...". I always found C++'s pretense of having a real static type 
> system, when the starting point was portable PDP-10 assembly, laughable 
> at best.

I am not familiar with the origins of C++, but I'd be very surprised
if it started on the PDP-10. I think it was the PDP-11, since that's
where C started.  The 10 and the 11 were VERY different machines,
not part of a smooth series.  The easiest to see difference was that
the 10 was a 36-bit machine with 18 bit addresses, while the 11 was a
32-bit machine with 16 bit addresses, but the instruction sets were
wildly different, too.
From: Maciej Katafiasz
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <fislp5$nbs$2@news.net.uni-c.dk>
Den Sat, 01 Dec 2007 16:28:17 -0500 skrev Kent M Pitman:

>> I insist that you explain this in greater detail.
> 
> Well, approximately, when you wrote
>   (defun f (x) (g))
>   (defun g () x)
> in interpreted code, f would be the identity operation. In compiled
> code, the non-free names would "compile away" (so F would see an unused
> variable and call G without G being able to see its X). G would see a
> free X and make it special, so any free X assigned by global SETQ would
> become visible in compiled code.  This was made worse by FEXPRs which
> were functions that didn't evaluate their arguments. They were for
> implementing control structure, but had the problem that people
> typically called EVAL in them, and so there was a risk that when run
> interpreted they'd see their own variables. A FEXPR is kind of like it
> had implicit &REST [there was no &REST concept primitively in MACLISP,
> so that's an anachronistic description], so
>  (defun foo fexpr (x) x)
>  (foo a b c) => (A B C)
> But
>  (defun my-progn fexpr (forms)
>    ;; inefficient but perhaps illustrative
>    (car (last (mapcar #'eval forms))))
> would work fine when my-progn was compiled but not when it was
> interpreted unless forms did not include a free reference to FORMS,
> which would see the wrong value. See
> http://www.nhplace.com/kent/Papers/Special-Forms.html for more context
> and explanation of the problems with fexprs, including the fact that the
> callers of a fexpr are hard to compile.

That is incredibly odd. The change of semantics between compiled and 
interpreted code seems like something INTERCAL would have, not a serious 
language intended for actual use...

> Of course, DWIM was supposed to be an optional facility, so that meant 
> you couldn't turn off DWIM.

Heh.

>> As one of my favourite quotes states, "Functions are usually invoked at
>> runtime...". I always found C++'s pretense of having a real static type
>> system, when the starting point was portable PDP-10 assembly, laughable
>> at best.
> 
> I am not familiar with the origins of C++, but I'd be very surprised if
> it started on the PDP-10. I think it was the PDP-11, since that's where
> C started.  The 10 and the 11 were VERY different machines, not part of
> a smooth series.  The easiest to see difference was that the 10 was a
> 36-bit machine with 18 bit addresses, while the 11 was a 32-bit machine
> with 16 bit addresses, but the instruction sets were wildly different,
> too.

Yes, I always confuse the two.

Cheers,
Maciej
From: Lars Brinkhoff
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <85ve6vhtge.fsf@junk.nocrew.org>
Kent M Pitman <······@nhplace.com> writes:
> the 10 was a 36-bit machine with 18 bit addresses, while the 11 was
> a 32-bit machine with 16 bit addresses, but the instruction sets
> were wildly different, too.

I believe the PDP-11 is a 16-bit machine: addresses, instructions, and
registers are all 16 bits.
From: Rob Warnock
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <M8OdnSr1BIVHavXanZ2dnUVZ_gednZ2d@speakeasy.net>
Lars Brinkhoff  <·········@nocrew.org> wrote:
+---------------
| Kent M Pitman <······@nhplace.com> writes:
| > the 10 was a 36-bit machine with 18 bit addresses, while the 11 was
| > a 32-bit machine with 16 bit addresses, but the instruction sets
| > were wildly different, too.
| 
| I believe the PDP-11 is a 16-bit machine: addresses, instructions,
| and registers are all 16 bits.
+---------------

That's correct. Though in some models there was a larger physical
address space on the bus, e.g., PDP-11/20 Unibus had 18 bits of
physical address, the Q-Bus had 22. These extra bits were accessible
only when using the 8-segment MMU to map virtual (16-bit) addresses
to physical (18, 22, whatever).


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: Kent M Pitman
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <u4peen0at.fsf@nhplace.com>
····@rpw3.org (Rob Warnock) writes:

> Lars Brinkhoff  <·········@nocrew.org> wrote:
> +---------------
> | Kent M Pitman <······@nhplace.com> writes:
> | > the 10 was a 36-bit machine with 18 bit addresses, while the 11 was
> | > a 32-bit machine with 16 bit addresses, but the instruction sets
> | > were wildly different, too.
> | 
> | I believe the PDP-11 is a 16-bit machine: addresses, instructions,
> | and registers are all 16 bits.
> +---------------
> 
> That's correct. Though in some models there was a larger physical
> address space on the bus, e.g., PDP-11/20 Unibus had 18 bits of
> physical address, the Q-Bus had 22. These extra bits were accessible
> only when using the 8-segment MMU to map virtual (16-bit) addresses
> to physical (18, 22, whatever).

Thanks for all the corrections, btw.

As you can probably tell, I didn't program the 11 (at that level); I
did write Algol and some Lisp (delphi lisp) on it, but I never used it
at the machine level except through some weird emulator they used for
teaching.
From: Damien Kick
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <13l5m89frrscp5e@corp.supernews.com>
Pascal Costanza wrote:
> Kent M Pitman wrote:
>> Pascal Costanza <··@p-cos.net> writes:
>>
>>> The problem is that whenever there is an exception, there are
>>> generally multiple ways to get out of the exceptional situation and
>>> resume. In a dynamic language, this is pretty straightforward: You
>>> present the different possibilities to the programmer / user and let
>>> them decide what to do.
>>>
>>> In a static language, you would have to rely on automagically picking
>>> the 'correct' restart - and I guess that's where the mental model of
>>> the average static thinker doesn't cope anymore and they just bail out.
>>
>> I don't see any reason why a static language is impoverished in the
>> slightest in its set of options.  Can you elaborate on that?
> 
> I wasn't precise enough in my wording, but it is indeed the case that I 
> am more talking about the mindset of a static languages proponent, 
> rather than about static languages per se.

I don't think that is a fair characterization of the issues. 
Apparently, Stroustrup was asking for the input from people who had 
hands-on experience with systems in which resumption was supported.  It 
wasn't that these people had "the mental model of the average static 
thinker [which couldn't] cope anymore and they just [bailed] out."  They 
could cope just fine.  They were using these semantics.  Some had even 
implemented these semantics.  Though we at c.l.lisp do appreciate your 
contribution to the image of the smug lisp weenie, Pascal.  Please 
excuse the length of this quotation (any typos are mine), but from _The 
Design and Evolution of C++_, page 392, Stroustrup is writing about the 
debate to design C++ to support either resumption or termination semantics:

<blockquote>
After a couple of years of discussion, I was left with the impression 
that one could concoct a convincing logical argument for either 
position.  Even the original paper on exception handling had done so. 
We were in the position of the ancient Greek philosophers debating the 
nature of the universe with such intensity and subtlety that they forgot 
to study it.  Consequently, I kept asking anyone with genuine experience 
with large systems to come forward with data.  On the side of 
resumption, Martin O'Riordan reported that "Microsoft had several years 
of positive experience with resumable exception handling," but the 
absence of specific examples weakened his case.  Experiences with PL/I's 
ON-conditions were mentioned as arguments both for and against resumption.

Then, at the Palo Alto meeting in November 1991, we heard a brilliant 
summary of the arguments for termination semantics backed with both 
personal experience and data from Jim Mitchell (from Sun, formerly from 
Xerox PARC).  Jim had used exception handling in half a dozen languages 
over a period of 20 years and was an early proponent of resumption 
semantics as one of the main designers and implementers of Xerox's 
Cedar/Mesa system.  His message was "termination is preferred over 
resumption; this is not a matter of opinion but a matter of years of 
experience.  Resumption is seductive, but not valid."  He backed this 
statement with experience from several operating systems.  The key 
example was Cedar/Mesa: It was written by people who liked and used 
resumption, but after ten years of use, there was only one use of 
resumption left in the half million line system - and that was a context 
inquiry.  Because resumption wasn't actually necessary for such a 
context inquiry, they removed it and found a significant speed increase 
in that part of the system.  In each and every case where resumption had 
been used it had - over the ten years - become a problem and a more 
appropriate design had replaced it.  Basically, every use of resumption 
had represented a failure to keep separate levels of abstraction disjoint.

Mary Fontana presented similar data from the TI Explorer system where 
resumption was found to be used for debugging only, Aron Insinga 
presented evidence of the very limited and nonessential use of resumption 
in DEC's VMS, and Kim Knuttilla related exactly the same story as Jim 
Mitchell for two large and long-lived projects inside IBM.  To this we 
added a strong opinion in favor of termination based on experience at 
L.M.Ericsson relayed to us by Dag Bruck.
</blockquote>

Again, regardless of whether or not one agrees with the conclusion of 
these people, it seems to me to clearly be an unfair characterization to 
dismiss their conclusions as the result of the failed mental model of 
the average static thinker which just couldn't cope anymore and so they 
decided to bail out.  Of course, I find it interesting to read the 
citation from Jim Mitchell such that his view was "not a matter of 
opinion but a matter of years of experience."  As if the result of years 
of experience is somehow not an opinion which has been informed by that 
experience.  With this as in most things, one can find people of similar 
levels of experience, intelligence, and wisdom who will come to form 
differing opinions.  I suppose that this is the place from which 
political parties, religious denominations, philosophical schools of 
thought, etc. come to be.  Lambda, the Ultimate Political Party.

I also find it interesting to read that Martin O'Riordan had reported 
several years of positive experience but that a lack of specific 
examples was seen as weakening his case.  I wonder how one could go 
about supplying specific examples of the positives of the use of 
resumption or, for that matter, any programming language feature.  This 
seems to me to lead down the path to the Turing Tar Pit.  As surely as 
one can use a certain language feature to implement certain 
functionality, one can nearly always achieve the same effect by avoiding 
that particular language feature and reworking the implementation to use 
other language features.  The existence of a system which does or does 
not use a particular feature is not, in my opinion, a convincing 
argument, in and of itself, for either the presence or lack of a 
particular programming language feature.  As is surely a comp.lang.lisp 
mantra, the success or failure of a technology can be decided by factors 
external to that technology itself and it would be a mistake to draw a 
causal relation from the use or disuse of any technology and the 
commercial or social (in so much as geeks are social) success of that 
technology.  In other words, sometimes crap makes a lot of money, 
quality goes bankrupt, and the most popular is not always the best.

Kent, apparently, had had positive experiences with the use of 
conditions which support/ed resumption semantics.  I think it's 
interesting that Kent mentions, in "Condition Handling in the Lisp 
Language Family", Multics PL/I as being an influence on ZetaLisp and, 
therefore, on Common Lisp conditions because Stroustrup also mentioned 
PL/I as being a source of both positive and negative experiences with 
restarts, to use the Lisp terminology.  I would be rather interested to 
learn more about what the C++ committee had thought was good about 
PL/I's ON-Conditions to see if it corresponds to what the Common Lisp 
committee thought was good about it, in so much as it was an influence 
on the New Error System and, thus, Common Lisp conditions.  When 
Stroustrup wrote that the Cedar/Mesa team had taken the sole remaining 
use of resumption, "removed it and found a significant speed increase in 
that part of the system," was this perhaps simply a matter of a poor 
implementation of resumption?  Most C++ gurus (all of those with which 
I'm familiar) recommend avoiding the use of exceptions for any code path 
for which run-time performance is important.  Is this also true of Lisp 
conditions or does the performance of most Lisp implementations of 
conditions make this not as much of an issue?

Anyway, to try and wrap up this post, I am asking anyone with genuine 
experience using conditions and restarts with large systems to come 
forward with data.  <laugh> I'm not exactly sure what kind of data or 
how one should analyze any data that might be provided, but if not 
exactly data in the sense in which it is used in the physical sciences, 
I'd even settle for anecdote.
From: Pascal Costanza
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <5rg4c0F14j4k6U1@mid.individual.net>
Damien Kick wrote:

> Again, regardless of whether or not one agrees with the conclusion of 
> these people, it seems to me to clearly be an unfair characterization to 
> dismiss their conclusions as the result of the failed mental model of 
> the average static thinker who just couldn't cope anymore and so they 
> decided to bail out.

I wasn't aware of this historical background, so thanks for pointing it 
out. Sorry for my mischaracterization.


Pascal

-- 
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
From: Alan Crowe
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <86bq97tvro.fsf@cawtech.freeserve.co.uk>
Damien Kick <·····@earthlink.net> takes the time and trouble
to copy-type a long quote from page 392 of Stroustrup.

I've picked out two passages that particularly caught my
attention

> His message was "termination is preferred over 
> resumption; this is not a matter of opinion but a matter of years of 
> experience.  Resumption is seductive, but not valid."  He backed this 
> statement with experience from several operating systems.  The key 
> example was Cedar/Mesa: It was written by people who liked and used 
> resumption, but after ten years of use, there was only one use of 
> resumption left in the half million line system - and that was a context 
> inquiry.  

> Mary Fontana presented similar data from the TI Explorer system where 
> resumption was found to be used for debugging only, Aron Insinga 
> presented evidence of the very limited and nonessential use of resumption 
> in DEC's VMS, and Kim Knuttilla related exactly the same story as Jim 
> Mitchell for two large and long-lived projects inside IBM. 

What strikes me is that debugging is rather
important. Drawing up your business plan you note that the
cost of writing the software in the first place has a
substantial allocation of money to debugging. You also
intend to sell support. Costing that part of the plan
involves thinking about the cost of finding bugs and
offering customers workarounds.

The Cedar/Mesa example is consistent with the idea of a
software product life cycle in which resumable exceptions
earn their keep in the first five years of intensive
debugging, but lose their relevance as the software matures.

I'm left with the impression that Stroustrup didn't ask the
right questions. He missed "Will they help with debugging
during the first five years?" and decided the issue on "Will 
a mature project still be using them?"

> Anyway, to try and wrap up this post, I am asking anyone with genuine 
> experience using conditions and restarts with large systems to come 
> forward with data.  <laugh> I'm not exactly sure what kind of data or 
> how one should analyze any data that might be provided, but if not 
> exactly data in the sense in which it is used in the physical sciences, 
> I'd even settle for anecdote.

I found the examples on which Stroustrup based his decision
somewhat double edged.

Think of programming as automation. Early in a project
routine stuff is automated, but tricky corner cases raise
exceptions and require manual intervention. Part of the
learning experience of a long lived project is discovering
the corner cases and working out how to automate them. The
natural progress of a project is from partial automation to
full automation.

There is something quite odd about citing VMS in this
context. VMS is an operating system for minicomputers. What
possible role is there for a resumable exception in such an
operating system? I can imagine that in the early days of
mainframes, with an operator manning the console, ready to
choose a restart, a resumable exception might have a
place. By the time VMS came along the expectation was that
the operating system itself would be stable. (Microsoft
reversed that expectation for a while.)

Numerical Recipes is full of iterative solvers that are
supposed to return an answer within a modest number of
iterations. The loops count the iterations and contain error
exits. For example qsimp and qrom are both limited to 20
iterations. It would be very nice to be able to fire up
another Lisp image, solve the particular instance some other
way (solve for the location of a mild singularity, integrate
piecewise around it, something like that) and then resume
the main computation. But I've never actually done this, so
I cannot even offer an anecdote. Sorry.
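For what it's worth, the plumbing for that scenario is short. A toy sketch in Common Lisp (all names here are illustrative, not from Numerical Recipes):

```lisp
;; Budget-limited solver, Lisp-style: when the budget runs out,
;; signal a condition but leave named restarts behind, so the
;; caller (or a person at the debugger) can choose how to go on.
(define-condition iteration-limit-exceeded (error)
  ((limit :initarg :limit :reader iteration-limit)))

(defun solve (f guess &key (max-iterations 20))
  "Toy fixed-point iteration on F with an escape hatch."
  (loop repeat max-iterations
        for x = guess then (funcall f x)
        when (< (abs (- x (funcall f x))) 1d-9)
          do (return-from solve x))
  (restart-case (error 'iteration-limit-exceeded :limit max-iterations)
    (keep-going (n)
      :report "Permit N more iterations."
      (solve f guess :max-iterations (+ max-iterations n)))
    (use-value (v)
      :report "Resume the main computation with a value found some other way."
      v)))
```

With no handler installed you land in the interactive debugger with both choices on the menu; a handler can pick KEEP-GOING or USE-VALUE programmatically, which is exactly the "solve it some other way and resume" move described above.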

Alan Crowe
Edinburgh
Scotland
From: Daniel Weinreb
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <47554FAE.4070500@alum.mit.edu>
I think your posting is insightful, but I'd add one thing.
Saying that they lost relevance as the software matures
is mainly true if you consider "mature" to be quite late
in the life cycle of the software, a point where there
are very few changes in requirements and very little update.
But after the "first five years of intensive debugging",
there is often another ten years of adding features,
speeding things up, closing security loopholes, etc.,
during which any debugging aids are quite valuable.

In a separate point, I do not think that an exception
is necessarily only used for an error (a bug).  To
my mind, there are lots of appropriate places to
use exceptions as an expected and proper part of the
running of a program.  Exceptions are how a function
or method reports an "unusual result", such as an
"open file" reporting "file not found".  Very often
a program knows perfectly well that a file might
not be found and is designed to behave a certain way
when that happens (e.g. prompt the user for a filename,
if this is an interactive application).  There's nothing
at all wrong with coding this as an exception; in fact
there's a lot to be said for it.
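In Common Lisp terms that style sketches out like this (OPEN-CONFIG and PROMPT-FOR-FILENAME are hypothetical names invented for the example; RESTART-CASE and HANDLER-BIND are the real protocol):

```lisp
;; The callee reports "file not found" as an unusual result,
;; leaving a named restart for callers that expect the situation.
(defun open-config (path)
  (restart-case (or (probe-file path)
                    (error "Configuration file ~A not found." path))
    (use-other-file (new-path)
      :report "Try a different file."
      (open-config new-path))))

;; An interactive caller that knows a missing file is not a bug:
;; it prompts (PROMPT-FOR-FILENAME is hypothetical) and resumes
;; via the restart instead of treating the condition as fatal.
(defun open-config-interactively (path)
  (handler-bind ((error (lambda (c)
                          (declare (ignore c))
                          (invoke-restart 'use-other-file
                                          (prompt-for-filename)))))
    (open-config path)))
```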

One of these days I will write an essay on my view of exceptions
and post it on my blog at dlweinreb.wordpress.com.  Thanks
for giving me the idea to do this.


Alan Crowe wrote:
> Damien Kick <·····@earthlink.net> takes the time and trouble
> to copy-type a long quote from page 392 of Stroustrup.
> 
> I've picked out two passages that particularly caught my
> attention
> 
>> His message was "termination is preferred over 
>> resumption; this is not a matter of opinion but a matter of years of 
>> experience.  Resumption is seductive, but not valid."  He backed this 
>> statement with experience from several operating systems.  The key 
>> example was Cedar/Mesa: It was written by people who liked and used 
>> resumption, but after ten years of use, there was only one use of 
>> resumption left in the half million line system - and that was a context 
>> inquiry.  
> 
>> Mary Fontana presented similar data from the TI Explorer system where 
>> resumption was found to be used for debugging only, Aron Insinga 
>> presented evidence of the very limited and nonessential use of resumption 
>> in DEC's VMS, and Kim Knuttilla related exactly the same story as Jim 
>> Mitchell for two large and long-lived projects inside IBM. 
> 
> What strikes me is that debugging is rather
> important. Drawing up your business plan you note that the
> cost of writing the software in the first place has a
> substantial allocation of money to debugging. You also
> intend to sell support. Costing that part of the plan
> involves thinking about the cost of finding bugs and
> offering customers workarounds.
> 
> The Cedar/Mesa example is consistent with the idea of a
> software product life cycle in which resumable exceptions
> earn their keep in the first five years of intensive
> debugging, but lose their relevance as the software matures.
> 
> I'm left with the impression that Stroustrup didn't ask the
> right questions. He missed "Will they help with debugging
> during the first five years?" and decided the issue on "Will 
> a mature project still be using them?"
> 
>> Anyway, to try and wrap up this post, I am asking anyone with genuine 
>> experience using conditions and restarts with large systems to come 
>> forward with data.  <laugh> I'm not exactly sure what kind of data or 
>> how one should analyze any data that might be provided, but if not 
>> exactly data in the sense in which it is used in the physical sciences, 
>> I'd even settle for anecdote.
> 
> I found the examples on which Stroustrup based his decision
> somewhat double edged.
> 
> Think of programming as automation. Early in a project
> routine stuff is automated, but tricky corner cases raise
> exceptions and require manual intervention. Part of the
> learning experience of a long lived project is discovering
> the corner cases and working out how to automate them. The
> natural progress of a project is from partial automation to
> full automation.
> 
> There is something quite odd about citing VMS in this
> context. VMS is an operating system for minicomputers. What
> possible role is there for a resumable exception in such an
> operating system? I can imagine that in the early days of
> mainframes, with an operator manning the console, ready to
> choose a restart, a resumable exception might have a
> place. By the time VMS came along the expectation was that
> the operating system itself would be stable. (Microsoft
> reversed that expectation for a while.)
> 
> Numerical Recipes is full of iterative solvers that are
> supposed to return an answer within a modest number of
> iterations. The loops count the iterations and contain error
> exits. For example qsimp and qrom are both limited to 20
> iterations. It would be very nice to be able to fire up
> another Lisp image, solve the particular instance some other
> way (solve for the location of a mild singularity, integrate
> piecewise around it, something like that) and then resume
> the main computation. But I've never actually done this, so
> I cannot even offer an anecdote. Sorry.
> 
> Alan Crowe
> Edinburgh
> Scotland
From: Geoff Wozniak
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <7396dfaf-9394-4742-a221-964a0d20bf81@e23g2000prf.googlegroups.com>
On Dec 4, 8:01 am, Daniel Weinreb <····@alum.mit.edu> wrote:
> One of these days I will write an essay on my view of exceptions
> and post it on my blog at dlweinreb.wordpress.com.  Thanks
> for giving me the idea to do this.
>

I don't know about anyone else, but I would be extremely interested in
reading that.
From: Rob Warnock
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <0dudnfoyYfsNu8vanZ2dnUVZ_sKqnZ2d@speakeasy.net>
Daniel Weinreb  <···@alum.mit.edu> wrote:
+---------------
| In a separate point, I do not think that an exception
| is necessarily only used for an error (a bug).  To
| my mind, there are lots of appropriate places to
| use exceptions as an expected and proper part of the
| running of a program.  Exceptions are how a function
| or method reports an "unusual result", such as an
| "open file" reporting "file not found".  Very often
| a program knows perfectly well that a file might
| not be found and is designed to behave a certain way
| when that happens (e.g. prompt the user for a filename,
| if this is an interactive application).  There's nothing
| at all wrong with coding this as an exception; in fact
| there's a lot to be said for it.
+---------------

Kent Pitman's 2001 paper makes the same point:

    http://nhplace.com/kent/Papers/Condition-Handling-2001.html
    Condition Handling in the Lisp Language Family
    ...
    Condition Systems vs Error Systems
    The Common Lisp community typically prefers to speak about its
    condition system rather than its error system to emphasize that
    there are not just fatal but also non-fatal situations in which
    the capabilities provided by this system are useful.

+---------------
| One of these days I will write an essay on my view of
| exceptions and post it on my blog at dlweinreb.wordpress.com.
+---------------

Oops! Then maybe I shouldn't have mentioned Kent's paper!  ;-}


-Rob

-----
Rob Warnock			<····@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607
From: D Herring
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <m6ednQNUx8GSucvanZ2dnUVZ_rqlnZ2d@comcast.com>
Daniel Weinreb wrote:
> In a separate point, I do not think that an exception
> is necessarily only used for an error (a bug).  To
> my mind, there are lots of appropriate places to
> use exceptions as an expected and proper part of the
> running of a program.  Exceptions are how a function
> or method reports an "unusual result", such as an
> "open file" reporting "file not found".  Very often
> a program knows perfectly well that a file might
> not be found and is designed to behave a certain way
> when that happens (e.g. prompt the user for a filename,
> if this is an interactive application).  There's nothing
> at all wrong with coding this as an exception; in fact
> there's a lot to be said for it.

ISTR that even Stroustrup's "The C++ Programming Language" makes that 
general point.  All the standard C++ exceptions cover things which are 
expected to fail occasionally.  Exceptions don't throw themselves...

I never understood how the C++ committee could see this yet find it 
acceptable to unwind the stack until someone knew what to do *at the 
exception site*.  Instead, you have to repeatedly implement your own 
callback framework for continuable exception handling.
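For contrast, this is exactly what HANDLER-BIND gives you in Common Lisp: the handler runs at the signal site, before any unwinding, and declining simply resumes. A minimal illustration:

```lisp
;; A condition signalled with SIGNAL does not unwind anything.
;; The handler runs in the dynamic context of the signaller; if
;; it declines (returns normally), execution continues at the
;; signal site as if nothing had happened.
(defun risky ()
  (signal 'simple-condition :format-control "heads up")
  :finished)   ; reached whenever no handler transfers control

(handler-bind ((condition (lambda (c)
                            (declare (ignore c))
                            ;; Runs with the signalling frame still
                            ;; live: we could inspect state, invoke
                            ;; a restart, or just decline.
                            (format t "handled, stack intact~%"))))
  (risky))
;; prints "handled, stack intact" and returns :FINISHED
```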

- Daniel
From: Kent M Pitman
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <ud4traepg.fsf@nhplace.com>
Pascal Costanza <··@p-cos.net> writes:

> I think resumable exceptions make a lot less sense in a static language
> (where the goal is that the program is 'correct' and shouldn't require
> interactive 'fixing' at any point in time).

Btw, I 100% disagree with this.  Resumable exceptions are a luxury in
Lisp, but they are an essential in static languages.  Heavily reflective
and relinkable languages can imagine more ways around the problem than 
can static ones, hence static ones need built-in support more because it
cannot be supplied on the fly.

In a static language, the information about the variables is compiled
out, so it makes MORE sense, not less.  In a dynamic system, you can
imagine redefining a function and restarting things.  In a static
system, all you have is what the compiler has compiled.  So if you
don't have a continuation you can call that will fix the error, you
have nothing, and that's why you desperately need the continuation.
Restarts are nothing more than continuations that have been put on a
shelf and labeled with a tag so they can be found later by that name
if/when they are needed.
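That "shelf" metaphor can be made concrete in a few lines of Common Lisp (the function names are illustrative; USE-VALUE is the standard restart name):

```lisp
;; USE-VALUE below is a labeled continuation left "on the shelf";
;; a handler further out finds it by name and calls it, resuming
;; PARSE-ENTRY instead of unwinding past it.
(defun parse-entry (string)
  (restart-case (if (every #'digit-char-p string)
                    (parse-integer string)
                    (error "Not a number: ~S" string))
    (use-value (v)
      :report "Supply a value to use instead."
      v)))

(defun parse-all (strings)
  ;; The handler runs before the stack unwinds; it picks the
  ;; USE-VALUE continuation by name and resumes with 0.
  (handler-bind ((error (lambda (c)
                          (declare (ignore c))
                          (invoke-restart 'use-value 0))))
    (mapcar #'parse-entry strings)))

;; (parse-all '("1" "oops" "3")) => (1 0 3)
```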
From: Alan Crowe
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <86oddf4bin.fsf@cawtech.freeserve.co.uk>
Rainer Joswig <······@lisp.de> writes:

> According to the TI guys (can't remember the exact wording, don't have
> the book handy), this feature was not used much. So it did
> not make it into C++. 

I claim that you can benefit from features you don't
use. I've put up a web page detailing my argument

http://www.cawtech.demon.co.uk/lisp/ghost-benefits.html

and have submitted it to reddit

http://programming.reddit.com/info/61kgl/comments/

The idea that a feature can be valuable beyond the number of
times it is used is of interest beyond the confines of
c.l.l. I wonder what redditors will make of it?


Do features you do not use confer benefits?
===========================================
A baroque language
------------------
Common Lisp has some abstruse features that are rarely
used. DEFINE-METHOD-COMBINATION allows the programmer to
create his own method combinations. The condition system has
RESTART-BIND in addition to RESTART-CASE.

This leads to the criticism that Common Lisp is a baroque
language with too many features. Part of the logic of the
critique is that programmers gain no benefit from the
features that they do not use. This is too glib. Large
programs need planning, which introduces subtle
complications.  

Why plan?
---------
Why not plunge right in and get on with coding? At the heart
of planning computer programs lies the idea of a contrast
between some bits of code that are routine and other bits
that are tricky. Coding the routine bits always goes
smoothly. Coding the tricky bits sometimes ends in tears;
the code cannot be written as originally designed and the
work done on the tricky bit and on many routine parts of the
program feeding in to it must be scrapped.

When we plan, we lightly sketch in the routine parts of the
program, concentrating our efforts on anticipating the
problems in the tricky bits. The benefit we obtain from
planning is a reduction in the costs imposed upon us by the
discovery of show-stopping problems in the tricky
bits. We lose the work we did implementing the tricky
bit. That is unavoidable. We lose the work we did on the
routine parts of the program feeding into the tricky bit,
but we only sketched those parts, so the loss is greatly
reduced compared to plunging into coding without looking
ahead.  

The effectiveness of planning
-----------------------------
The benefit of planning flows from the routine parts of the
code being mere sketches. The more that we can classify as
routine, the more effective the planning process will be
at eliminating wasted effort.

When we plan we make two kinds of mistake.

   1. Sometimes we think that a part of the program is
      tricky when it is actually routine. This causes us to
      spend effort filling in the details earlier than we
      should. If these details survive into the final
      program, the mistake costs us nothing. If this
detailed work is discarded later in the planning
      process we pay a price for the mistake. Had we seen
      that the work was routine we would only have sketched
      and only discarded a sketch.

   2. Sometimes we think that a part of the program is
      routine when it is actually tricky. When we come to
      code it we may succeed, in which case our mistake has
      cost us nothing, or we may encounter a show-stopping
      problem and fail. Now we must discard much detailed
      work before trying again. This kind of mistake can be
      very expensive, emerging late in the project and
      impacting much detailed work.

Calling a tricky part routine is potentially very
expensive. This creates a painful tension. When we are
conservative and call a routine part tricky, just to be on
the safe side, we are reducing the effectiveness of the
planning process. On the other hand, when we strain to say
that a part of the program is routine, hoping to maximize
the benefit from planning, we had better be right.

Language features and planning
------------------------------
Now that we are clear about how planning delivers its
benefits we can look at how this interacts with the features
present in or absent from our programming language.

For example, we may be called upon to decide whether a
mildly ambitious plan, based around an unusual method
combination is routine or tricky. We need to think ahead. A
bug may show us that the method combination we had planned
on is not in fact quite suitable. If the project is coded in
a language with custom method combinations we may be able to
anticipate that the code is routine. If problems arise, we
can code our way out of trouble. If the project is coded in
a language with a fixed set of method combinations we may
call the code tricky and feel obliged to nail down the
details early on.

The interest arises from the fact that planning needs to be
fairly conservative. As we explained earlier the costs of
the two kinds of error are different. We only call code
routine if we have some notion of Plan B, which we have in
reserve if things don't go smoothly. We are only justified
in calling it Plan B if Plan A usually works. Sometimes Plan
B is to use the advanced version of a language feature. In
this case we gain an advantage, in the planning stage, from
a feature that we don't actually get round to using.

That seems paradoxical. What are the ingredients that make
this happen? There seem to be two.

   1. The feature has to be "never used" only in the everyday
      sense - the sense in which Fred actually used it a couple
      of months ago, but that doesn't count because it was a
      special case. If it is genuinely never used it can hardly
      be counted on as Plan B and doesn't help at the planning
      stage.

   2. It seems most likely to happen with features that
      offer a more general version of a commonly used
      built-in feature or some way of brute forcing past
      rare problems.

For example, Common Lisp has macros PUSH, POP, INCF for
modifying places. You can define your own with
DEFINE-MODIFY-MACRO. The interest is that you can plan with
confidence on the basis of using DEFINE-MODIFY-MACRO.

DEFINE-MODIFY-MACRO has limitations. As Graham explains on
page 169 of ANSI Common Lisp neither PUSH nor POP can be
defined as modify-macros. The reason that you can plan with
confidence is that you have DEFINE-SETF-EXPANDER available
as Plan B. You might never use DEFINE-SETF-EXPANDER, but you
benefit from it anyway because you can plan on the basis
that code that is going to use DEFINE-MODIFY-MACRO is
routine and leave the details until other parts of your plan
are firmed up.  
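For concreteness, Plan A is a one-liner like this (MAXF is an illustrative example, not a standard macro):

```lisp
;; Plan A: DEFINE-MODIFY-MACRO covers the common case in one line.
(define-modify-macro maxf (&rest others) max
  "Set PLACE to the maximum of its old value and OTHERS.")

(let ((best 3))
  (maxf best 7 5)   ; expands to (setf best (max best 7 5))
  best)             ; => 7

;; Plan B, kept in reserve: DEFINE-SETF-EXPANDER.  PUSH and POP
;; are out of reach for DEFINE-MODIFY-MACRO because PUSH needs the
;; old value of the place as the *second* argument to CONS, and POP
;; must return the removed element rather than the updated place.
```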

Ghost benefits are real!
------------------------
Software projects require planning. Planning gains in
effectiveness the more often you can declare "This bit of
code is routine, we can fill in the details later." The
right kind of advanced programming language feature helps
with this, allowing you to plan with confidence. In this way
programmers benefit from features that they don't use.

Alan Crowe
Edinburgh
Scotland
From: Raffael Cavallaro
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <2007112717135216807-raffaelcavallaro@pasdespamsilvousplaitmaccom>
On 2007-11-27 16:35:44 -0500, Alan Crowe <····@cawtech.freeserve.co.uk> said:

> Do features you do not use confer benefits?

Graham reached the same conclusion from a slightly different perspective:

"Why do we plan before implementing? The big danger in plunging right 
into a project is the possibility that we will paint ourselves into a 
corner. If we had a more flexible language, could this worry be 
lessened? We do, and it is. The flexibility of Lisp has spawned a whole 
new style of programming. In Lisp, you can do much of your planning as 
you write the program.

Why wait for hindsight? As Montaigne found, nothing clarifies your 
ideas like trying to write them down. Once you're freed from the worry 
that you'll paint yourself into a corner, you can take full advantage 
of this possibility."

from <http://www.bookshelf.jp//texi/onlisp/onlisp_2.html#SEC7>
From: Rainer Joswig
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <joswig-F1E94B.00410228112007@news-europe.giganews.com>
In article <··············@cawtech.freeserve.co.uk>,
 Alan Crowe <····@cawtech.freeserve.co.uk> wrote:

> Rainer Joswig <······@lisp.de> writes:
> 
> > According to the TI guys (can't remember the exact wording, don't have
> > the book handy), this feature was not used much. So it did
> > not make it into C++. 
> 
> I claim that you can benefit from features you don't
> use. I've put up a web page detailing my argument
> 
> http://www.cawtech.demon.co.uk/lisp/ghost-benefits.html
> 
> and have submitted it to reddit
> 
> http://programming.reddit.com/info/61kgl/comments/
> 
> The idea that a feature can be valuable beyond the number of
> times it is used is of interest beyond the confines of
> c.l.l. I wonder what redditors will make of it?
> 
> 
> Do features you do not use confer benefits?
> ===========================================
> A baroque language
> ------------------
> Common Lisp has some abstruse features that are rarely
> used. DEFINE-METHOD-COMBINATION allows the programmer to
> create his own method combinations. The condition system has
> RESTART-BIND in addition to RESTART-CASE.

Actually many of the features came from practical
use. The condition system was in use already.
The Lisp Machine OS was one application
where much was used: conditions, objects,
pathnames, streams as objects, ... you name it.
Common Lisp was taking a lot from earlier experience.
For much of it, the battle was over what to incorporate in
the new language and what to leave out
(from existing dialects and implementations).
These companies had commercial interests, because a
feature not in the new language would mean more
porting effort for existing code to the new language.
On the other hand a new feature in the language would mean
work for the implementors. Users wanted some compatibility
with a migration path. Implementors wanted to
avoid the cost of new constructs. Especially those
on 'stock' hardware/software tried to avoid
the cost of 'performance hungry' new features
or added 'bloat'. If the product was only running
on a single type of platform, why have a pathname
system that has classes for different pathname
types (vms, ufs, hfs, FAT, lmfs, ...)? Why
add more dynamic features (like conditions and streams
as classes/objects/methods) when runtime dispatch
was slow?
From: ····@domain.invalid
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <YUl4j.4614$gs.2834@trndny08>
Rainer Joswig wrote:
> In article <·············@nhplace.com>,

> One of the best examples was that the feature that you can
> continue from an error (and the handler is called
> without unwinding the stack) didn't make it into the C++ standard.
> Though Stroustrup (Design and Evolution of C++) reports
> that they had people from Texas Instruments with Lisp
> Machine experience explaining the Lisp Machine error handling.
>

That would be me.

During one of the OOPSLA conferences, probably 1989, I sat
down with Stroustrup to talk about exceptions in C++.
At that time, exceptions had to be statically-allocated.

I argued that exceptions should be objects that were
created at the time that they were signalled, so that
they could have values in their data members (i.e.
instance variables, slots) that had particular
information about the exception.  For example,
a file not found exception could have a data
member containing the name of the file that was
not found.

Stroustrup resisted the idea because
he was worried about the cost of creating the object.
We worked out that it would be easy enough to define
C++ so that you could do it either way, the existing
way, or the new way that I proposed.  He agreed to
that.

I was hardly about to get into advanced issues
like restartable exceptions!  There's no way
he would have been interested in that, and if
I had started talking that way I could have
easily lost my credibility and influence with him.

-- Dan
From: ····@domain.invalid
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <qXl4j.4616$gs.4003@trndny08>
There seems to be a problem with my newsgroup software
(Thunderbird 2.0.0.9).  The previous posting was from
Dan Weinreb, ···@alum.mit.edu.

····@domain.invalid wrote:
> Rainer Joswig wrote:
>> In article <·············@nhplace.com>,
> 
>> One of the best examples was that the feature that you can
>> continue from an error (and the handler is called
>> without unwinding the stack) didn't make it into the C++ standard.
>> Though Stroustrup (Design and Evolution of C++) reports
>> that they had people from Texas Instruments with Lisp
>> Machine experience explaining the Lisp Machine error handling.
>>
> 
> That would be me.
> 
> During one of the OOPSLA conferences, probably 1989, I sat
> down with Stroustrup to talk about exceptions in C++.
> At that time, exceptions had to be statically-allocated.
> 
> I argued that exceptions should be objects that were
> created at the time that they were signalled, so that
> they could have values in their data members (i.e.
> instance variables, slots) that had particular
> information about the exception.  For example,
> a file not found exception could have a data
> member containing the name of the file that was
> not found.
> 
> Stroustrup resisted the idea because
> he was worried about the cost of creating the object.
> We worked out that it would be easy enough to define
> C++ so that you could do it either way, the existing
> way, or the new way that I proposed.  He agreed to
> that.
> 
> I was hardly about to get into advanced issues
> like restartable exceptions!  There's no way
> he would have been interested in that, and if
> I had started talking that way I could have
> easily lost my credibility and influence with him.
> 
> -- Dan
> 
From: Kent M Pitman
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <u63zhafkl.fsf@nhplace.com>
····@domain.invalid writes:

> There seems to be a problem with my newsgroup software
> (Thunderbird 2.0.0.9).  The previous posting was from
> Dan Weinreb, ···@alum.mit.edu.

Hi Dan.  Gee, my first thought had just been that you'd done a fine
job of privacy protection there...

> > I argued that exceptions should be objects that were created at
> > the time that they were signalled, so that they could have values
> > in their data members (i.e.  instance variables, slots) that had
> > particular information about the exception.  For example, a file
> > not found exception could have a data member containing the name
> > of the file that was not found.

Of curiosity, how was this done in PL/1?  The on statement didn't
allow for that, did it?

> > Stroustrup resisted the idea because he was worried about the cost
> > of creating the object.

Funny how quickly people can go from making statements like "How can I
keep from having my application die and dump core?" to making
statements like "Gee, you mean this highly reliable and
well-structured tool for allowing me to gracefully recover from errors
is going to cons three extra words or take an extra millisecond?"  :)

But I don't think he was the only one who worried a lot about that
kind of thing back then (and perhaps even now).  The issue of making
the signaling process be as fast and as cons-free as possible is one I
remember people asked about a lot when designing the CL condition
system.

> > We worked out that it would be easy enough to define C++ so that
> > you could do it either way, the existing way, or the new way that
> > I proposed.  He agreed to that.  I was hardly about to get into
> > advanced issues like restartable exceptions!  There's no way he
> > would have been interested in that, and if I had started talking
> > that way I could have easily lost my credibility and influence
> > with him.  -- Dan

Sounds like a painful but wise judgment.  Alas.
From: Daniel Weinreb
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <4752DFA6.3070504@alum.mit.edu>
Kent M Pitman wrote:
> ····@domain.invalid writes:
> 
>> There seems to be a problem with my newsgroup software
>> (Thunderbird 2.0.0.9).  The previous posting was from
>> Dan Weinreb, ···@alum.mit.edu.
> 
> Hi Dan.  Gee, my first thought had just been that you'd done a fine
> job of privacy protection there...
> 
>>> I argued that exceptions should be objects that were created at
>>> the time that they were signalled, so that they could have values
>>> in their data members (i.e.  instance variables, slots) that had
>>> particular information about the exception.  For example, a file
>>> not found exception could have a data member containing the name
>>> of the file that was not found.
> 
> Of curiosity, how was this done in PL/1?  The on statement didn't
> allow for that, did it?

No, I don't think this could be done in PL/I.

> 
>>> Stroustrup resisted the idea because he was worried about the cost
>>> of creating the object.
> 
> Funny how quickly people can go from making statements like "How can I
> keep from having my application die and dump core?" to making
> statements like "Gee, you mean this highly reliable and
> well-structured tool for allowing me to gracefully recover from errors
> is going to cons three extra words or take an extra millisecond?"  :)

Yes, that was Stroustrup's design criterion.  Amazing, isn't it?
Usually his justification was that he didn't want costs incurred
that the programmer would not see.  In the end, he didn't really
achieve this (C++ does a lot of by-value object copying that's not
obvious to a non-expert), anyway, but in this particular case, you'd
clearly see the "new" statement.

> 
> But I don't think he was the only one who worried a lot about that
> kind of thing back then (and perhaps even now).  The issue of making
> the signaling process be as fast and as cons-free as possible is one I
> remember people asked about a lot when designing the CL condition
> system.
> 
>>> We worked out that it would be easy enough to define C++ so that
>>> you could do it either way, the existing way, or the new way that
>>> I proposed.  He agreed to that.  I was hardly about to get into
>>> advanced issues like restartable exceptions!  There's no way he
>>> would have been interested in that, and if I had started talking
>>> that way I could have easily lost my credibility and influence
>>> with him.  -- Dan
> 
> Sounds like a painful but wise judgment.  Alas.
> 
From: Rainer Joswig
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <joswig-E97CB9.00454102122007@news-europe.giganews.com>
In article <··················@trndny08>, ····@domain.invalid wrote:

> There seems to be a problem with my newsgroup software
> (Thunderbird 2.0.0.9).  The previous posting was from
> Dan Weinreb, ···@alum.mit.edu.
> 
> ····@domain.invalid wrote:
> > Rainer Joswig wrote:
> >> In article <·············@nhplace.com>,
> > 
> >> One of the best examples was that the feature that you can
> >> continue from an error (and the handler is called
> >> without unwinding the stack) didn't make it into the C++ standard.
> >> Though Stroustrup (Design and Evolution of C++) reports
> >> that they had people from Texas Instruments with Lisp
> >> Machine experience explaining the Lisp Machine error handling.
> >>
> > 
> > That would be me.
> > 
> > ...snip...
> > 
> > I was hardly about to get into advanced issues
> > like restartable exceptions!  There's no way
> > he would have been interested in that, and if
> > I had started talking that way I could have
> > easily lost my credibility and influence with him.

I have the book (The Design and Evolution of C++)
now here. Some more quotes on the topic of Resumption
vs. Termination:

 "My personal starting point was: 'Why not? That seems to be
 a useful feature. I can see quite a few situations where
 I would use resumption.' Over the next four years, I learned
 otherwise, and thus the C++ exception handling mechanism embodies
 the opposite view, often called the termination model."

He then describes that there was a lot of discussion in
the ANSI C++ committee for quite some time (Dec 1989 - Nov 1990).

Then he summarizes the arguments for resumption:

 - More general (powerful, includes termination)
 - Unifies similar concepts/implementations
 - Essential for very complex, very dynamic systems (that is, OS/2)
 - Not significantly more complex/expensive to implement
 - If you don't have it, you must fake it
 - Provides simple solutions for resource exhaustion problems.

Next the arguments for termination:

 - Simpler, cleaner, cheaper
 - Leads to more manageable systems
 - Powerful enough for everything
 - Avoids horrendous coding tricks
 - Significant negative experience with resumption

He mentions that the discussion was more technical than the
list above shows.

He then describes the experience of Jim Mitchell (SUN, Xerox PARC)
using resumption in Xerox's Cedar/MESA system.

He also mentions Mary Fontana presenting data from
the TI Explorer System where resumption was found to be used
for debugging only.

So, they looked into it and decided
against resumption based on technical issues and
prior experience.  Which is fine; still, I'd say it is
the wrong decision - though, in my limited experience,
C++ is not a favorite language design anyway.
I guess the more interactive nature of Lisp as a language and
Lisp applications makes resumption much more
attractive.


> > 
> > -- Dan
> >

-- 
http://lispm.dyndns.org/
From: Kent M Pitman
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <ufxylq53o.fsf@nhplace.com>
Rainer Joswig <······@lisp.de> writes:

> He also mentions Mary Fontana presenting data from
> the TI Explorer System where resumption was found to be used
> for debugging only.

After some time, Symbolics used to do a pop-up menu that would just
show you restarts and not put you in the debugger--you could press
something to get to the debugger.  This had a nicer feel, but even
then had some drawbacks for some users because some restarts were more
technical in nature than others.  I'm pretty sure we eventually
concluded that in some situations, some options should be hidden from
users by default to make these things more friendly, and we decided to
redesign the way in which it presented the initial set of options to
show you only certain of the options and not others.  Things that
talked about resuming individual instructions got overlooked, I think
(you could get a breakpoint mid-instruction--which really just trapped
out into code again, you weren't really running in hardware at that
point--and it confused/scared people) ... but you could break
into a debugger that showed you more options.  Today that would have
manifested as a [More Options]/[Fewer Options] switch or as an
[Advanced...] button.  I remember hardcopying some examples of the
pages documenting the subtle transition and filing them somewhere,
since it's the kind of thing people wouldn't have necessarily noticed
if it wasn't pointed out to them, but I'll have to look around to see
if I still find it and maybe scan it if I do... (don't hold your
breath, it could be filed deep somewhere).

I do agree with them that understanding this distinction is important,
and I'd say it was a useful area to think about how to annotate
restarts further, allowing them to say what level of expertise or
product abstraction they were related to.
From: Alan Crowe
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <86d4tpys7g.fsf@cawtech.freeserve.co.uk>
Kent M Pitman <······@nhplace.com> writes:

> I'm pretty sure we eventually
> concluded that some situations, some options should be hidden from
> users by default to make these things more friendly and we decided to
> redesign the way in which it presented the initial set of options to
> show you only certain of the options and not others. 
...snip... 
> I do agree with them that understanding this distinction is important,
> and I'd say it was a useful area to think about how to annotate
> restarts further, allowing them to say what level of expertise or
> product abstraction they were related to.

[18]> (defun err ()
           (restart-bind ((more #'(lambda()
                                    (restart-case
                                        (error "Need more options.")
                                      (extra ()
                                        (return-from err 'extra))
                                      (advanced ()
                                        (return-from err 'advanced))))))
             (restart-case
                 (error "Whoops!")
               (basic ()
                 (return-from err 'basic)))))
ERR
[19]> (err)
*** - Whoops!
The following restarts are available:
R1 = BASIC
R2 = MORE
1. Break [20]> R1

=> BASIC

[21]> (err)
*** - Whoops!
The following restarts are available:
R1 = BASIC
R2 = MORE

1. Break [22]> R2
*** - Need more options.
The following restarts are available:
R1 = EXTRA
R2 = ADVANCED
R3 = BASIC
R4 = MORE

2. Break [23]> R2

=> ADVANCED

You are not confined to *thinking* "about how to annotate
restarts further". The Common Lisp condition system is
sufficiently powerful that you can experiment with these
ideas from within the standard. Somebody did an awesome job
coming up with this brilliant design :-)

Alan Crowe
Edinburgh
Scotland
From: Daniel Weinreb
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <K9B4j.6318$Lg.5677@trndny09>
Kent M Pitman wrote:
> I'm pretty sure we eventually
> concluded that some situations, some options should be hidden from
> users by default to make these things more friendly and we decided to
> redesign the way in which it presented the initial set of options to
> show you only certain of the options and not others. 

Yes, I think it only makes sense to show users the options
that can be interpreted at a high enough level of abstraction
that they mean something to the user, as opposed to something
that you can only understand if you know all about the
implementation.  That might mean that what you show depends
on who the user is, although, as you say, the "more options"
option is probably better.

In software I've been working on lately, there are some
restart handlers that only make sense to be called
by a program, never from an API, and I've wished I
had a way to censor them from the listing provided
by SLIME. Instead I just give them names saying
"don't use this".  Only developers ever see this,
not end users, so it's not so bad.
From: Tobias C. Rittweiler
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <87prxnzlzc.fsf@freebits.de>
Daniel Weinreb <···@alum.mit.edu> writes:

> In software I've been working on lately, there are some
> restart handlers that only make sense to be called
> by a program, never from an API, and I've wished I
> had a way to censor them from the listing provided
> by SLIME. Instead I just give them names saying
> "don't use this".  Only developers ever see this,
> not end users, so it's not so bad.

I haven't followed this thread too closely, but I think you want the
:test-function option for RESTART-BIND, or the :test option for
RESTART-CASE respectively. See 9.1.4.2.3.

  -T.
From: Alan Crowe
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <868x4btt1f.fsf@cawtech.freeserve.co.uk>
Daniel Weinreb <···@alum.mit.edu> writes:

> In software I've been working on lately, there are some
> restart handlers that only make sense to be called
> by a program, never from an API, and I've wished I
> had a way to censor them from the listing provided
> by SLIME. Instead I just give them names saying
> "don't use this".  Only developers ever see this,
> not end users, so it's not so bad.

Can you use the test option to restart-case to check a
global variable that gets rebound in a debugger hook
function that wraps the invocation of the debugger?

I've got it working talking to CMUCL via EMACS'
inferior-lisp mode

* (defvar *hide-programmatic-restarts* nil)
; 
*HIDE-PROGRAMMATIC-RESTARTS*

;; a function that signals a condition
* (defun screw-up ()
           (restart-case (error "Whoops!")
             (visible ()
               :test (lambda (condition)
                       (declare (ignore condition))
                       t)
               'manual)
             (hidden ()
               :test (lambda(condition)
                       (declare (ignore condition))
                       (not *hide-programmatic-restarts*))
               'automatic)))

SCREW-UP

* (screw-up)
Error in function SCREW-UP:  Whoops!
   [Condition of type SIMPLE-ERROR]

Restarts:
  0: [VISIBLE] VISIBLE
  1: [HIDDEN ] HIDDEN
  2: [ABORT  ] Return to Top-Level.


0] 1

AUTOMATIC

* (setf *debugger-hook*
        (lambda (condition self)
          (declare (ignore self))
          (let ((*hide-programmatic-restarts* t))
            (invoke-debugger condition))))

* (screw-up)
Error in function SCREW-UP:  Whoops!
   [Condition of type SIMPLE-ERROR]

Restarts:
  0: [VISIBLE] VISIBLE
  1: [ABORT  ] Return to Top-Level.

Debug  (type H for help)

0] 0

MANUAL

* (handler-bind ((error (lambda(condition)
                        (invoke-restart 'hidden))))
                (screw-up))

AUTOMATIC


I've not got this to work with SWANK. The version I'm
using seems to overwrite my *debugger-hook*, but I'm running
a really old version.



Alan Crowe
Edinburgh
Scotland
From: Madhu
Subject: INVOKE-RESTART (was: The origins of CL conditions system)
Date: 
Message-ID: <m3ir3ept0z.fsf_-_@robolove.meer.net>
* Alan Crowe in <··············@cawtech.freeserve.co.uk> :
| Daniel Weinreb <···@alum.mit.edu> writes:
|> In software I've been working on lately, there are some restart
|> handlers that only make sense to be called by a program, never from
|> an API, and I've wished I had a way to censor them from the listing
|> provided by SLIME. Instead I just give them names saying "don't use
|> this".  Only developers ever see this, not end users, so it's not so
|> bad.
| Can you use the test option to restart-case to check a global variable
| that gets rebound in a debugger hook function that wraps the
| invocation of the debugger?
|
| I've got it working talking to CMUCL via EMACS' inferior-lisp mode
[...]
| I've not got this to work with SWANK. The version I'm using seems to
| overwrite my *debugger-hook*, but I'm running a really old version.

[SLIME _has_ to override *debugger-hook* with
#'SWANK:SWANK-DEBUGGER-HOOK for sldb.]


But with CMUCL there is another as-yet-unresolved issue[1] in using the
TEST option, which involves an interpretation of the CL spec.

If your function is modified to use the CONDITION in the restart TEST:

* (defun screw-up ()
  (restart-case (error "Whoops!")
    (visible ()
      :test (lambda (condition)
              (typep condition 'condition))
      'manual)
    (hidden ()
      :test (lambda(condition)
              (declare (ignore condition))
              (not *hide-programmatic-restarts*))
      'automatic)))

You will notice that invoking the VISIBLE restart[2] via SLIME/sldb results
in an error:

Restart #<RESTART {58177C5D}> is not active.
   (Condition of type KERNEL:SIMPLE-CONTROL-ERROR)

This error also results from doing this at the REPL:

* (handler-bind ((error (lambda (c)
                  (invoke-restart (car (compute-restarts c))))))
  (screw-up))

The problem is that CMUCL's implementation of INVOKE-RESTART works by
checking if the restart is active, i.e. whether it can be found via
  (COMPUTE-RESTARTS NIL)
before invoking it. [This check is bypassed in the debugger, so you
won't see a problem if invoking a restart in CMUCL's own debugger.]

(COMPUTE-RESTARTS NIL) does not return VISIBLE as an active restart
because the test fails.

So the question is: is INVOKE-RESTART justified by the spec in
invoking only active restarts?

--
Madhu

[1] For details see the thread from cmucl-imp list in 2005, starting
with Helmut Eller's post <··············@stud3.tuwien.ac.at>, archived
at <URL:http://article.gmane.org/gmane.lisp.cmucl.devel/7677>

[2] Without modifying *debugger-hook*
From: Alan Crowe
Subject: Re: INVOKE-RESTART (was: The origins of CL conditions system)
Date: 
Message-ID: <86lk8apq9n.fsf@cawtech.freeserve.co.uk>
Madhu <·······@meer.net> writes:

> * Alan Crowe in <··············@cawtech.freeserve.co.uk> :
> | I've got it working talking to CMUCL via EMACS' inferior-lisp mode
> [...]
> | I've not got this to work with SWANK. The version I'm using seems to
> | overwrite my *debugger-hook*, but I'm running a really old version.
> 
> [SLIME _has_ to override *debugger-hook* with
> #'SWANK:SWANK-DEBUGGER-HOOK for sldb.]

I don't understand the issue here. In my first attempt I
crafted a wrapper-maker

(defun make-debugger-hook (old-hook)
  (lambda(condition me-or-my-wrapper)
    (let ((*hide-programmatic-restarts* t))
      (funcall old-hook condition me-or-my-wrapper))))

and 

(setf *old-debugger-hook*
      (shiftf *debugger-hook*
              (make-debugger-hook *debugger-hook*)))

so I could get back if my hook didn't work.

It looks as though Swank overwrites the *debugger-hook*
inside the Swank R-E-P-Loop rather than once, at startup. That
seems undesirable. 

> But With CMUCL there is another as yet unresolved issue[1] in using the
> TEST option, which involves an interpretation of the CL spec.

> [1] For details see the thread from cmucl-imp list in 2005, starting
> with Helmut Eller's post <··············@stud3.tuwien.ac.at>, archived
> at <URL:http://article.gmane.org/gmane.lisp.cmucl.devel/7677>

Thanks for the warning. I think I have bumped into this, but
only when playing about, to get a feel for the
functionality, so I just shrugged and moved on.

Alan Crowe
Edinburgh
Scotland
From: Rainer Joswig
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <joswig-AD64E3.19014702122007@news-europe.giganews.com>
In article <·············@nhplace.com>,
 Kent M Pitman <······@nhplace.com> wrote:

> Rainer Joswig <······@lisp.de> writes:
> 
> > He also mentions Mary Fontana presenting data from
> > the TI Explorer System where resumption was found to be used
> > for debugging only.
> 
> After some time, Symbolics used to do a pop-up menu that would just
> show you restarts and not put you in the debugger--you could press
> something to get to the debugger.
> ...snip...
> I do agree with them that understanding this distinction is important,
> and I'd say it was a useful area to think about how to annotate
> restarts further, allowing them to say what level of expertise or
> product abstraction they were related to.

Symbolics also had a facility called 'Firewall' which was meant
to keep ordinary users away from the development tools and
other dangerous things.

I expected that it also had some special code to modify
the user interface for conditions, but I couldn't find anything
like that in the package.

-- 
http://lispm.dyndns.org/
From: Daniel Weinreb
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <qdy5j.9033$T41.542@trndny01>
Rainer Joswig wrote:
> In article <·············@nhplace.com>,
>  Kent M Pitman <······@nhplace.com> wrote:
> 
>> Rainer Joswig <······@lisp.de> writes:
>
> 
> Symbolics also had a facility called 'Firewall' which should
> keep ordinary users away from the development tools and
> other dangerous things.

Could you please remind me what that was?  I asked a
couple of other ex-Symbolics people and they can't
remember, either.  "Firewall" might be one of those
marketing names that was invented by an outside
naming firm, the same guys who came up with "Genera".
From: Rainer Joswig
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <joswig-20C979.15333205122007@news-europe.giganews.com>
In article <··················@trndny01>,
 Daniel Weinreb <···@alum.mit.edu> wrote:

> Rainer Joswig wrote:
> > In article <·············@nhplace.com>,
> >  Kent M Pitman <······@nhplace.com> wrote:
> > 
> >> Rainer Joswig <······@lisp.de> writes:
> >
> > 
> > Symbolics also had a facility called 'Firewall' which should
> > keep ordinary users away from the development tools and
> > other dangerous things.
> 
> Could you please remind me what that was?  I asked a
> couple of other ex-Symbolics people and they can't
> remember, either.  "Firewall" might be one of those
> marketing names that was invented by an outside
> naming firm, the same guys who came up with "Genera".

;-)

I have it in ...>unsupported>firewall>...

It is at least from 1987. It is a collection
of utilities:


@i[Firewall] is a method by which application developers can shield their
application's end users from certain parts of the Genera software
environment @em[] the Debugger, the System menu, system notifications,
and other software that end users might not want or need.

The @i[Firewall Application Development Guide] shows you how to develop
an application in the context of Firewall.  It is @i[not] an all-purpose
guide for developing applications on Symbolics systems.

...

Firewall is not a separate software facility unto itself.  It is an
application development technique and strategy.  Firewall software is
essentially a collection of existing pieces of Genera plus some new
features specially developed for Firewall.  Most of the new features are
grouped together in a new software package, the @ls[firewall] package,
abbreviated @ls[fw].  Other new features are in the @ls[tv] package.

This book corresponds with the first release of Firewall software.  The
Firewall tape consists of all the Firewall features plus the
documentation files for online use.  Because Firewall is completely
task-oriented, the documentation is essential to understanding it.

Firewall can run on any Symbolics 3600 series development system.  You
must have Genera 7.0 or later software to use Firewall.

....

Firewall is a software shield that insulates your application's end
users from the Genera software environment.  It is a programming
technique that manipulates new and existing pieces of Genera in certain
ways to hide several parts of the system that are potentially confusing
to end users and are not needed in your application environment.  For
example, Firewall can hide the Debugger, the System menu,
and system notifications, so that end users never interact with them.
Firewall is a set of Symbolics recommendations for altering
Genera to make your application easier to use.

Firewall is @i[not] a security system.  It does not provide your
application environment with total, airtight protection from Genera, and
it does not restrict your own access to Genera.  It is not supposed to.
In fact, Firewall provides you with a way to reset or "undo" Firewall,
in case you need Genera back in a hurry, perhaps to debug the
application at the end-user site.  @Reference(Topic={Resetting Firewall
to Genera},Type={Chapter},View={CrossRef})

Because Firewall is a programming technique, the documentation is the
most important part of Firewall.  The documentation is completely
task-oriented and is essential to learning how to firewall.  That is,
Firewall is not a self-contained programming tool.  It is a set of
instructions that teaches you how to finesse various parts of Genera to
accomplish a particular task @em[] firewalling.

...

-- 
http://lispm.dyndns.org/
From: Daniel Weinreb
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <nTR5j.8391$3W.5046@trndny04>
Re: Firewall:  Thank you very much.  I have no memory
of this.  It may have been implemented after I left
Symbolics in 1988, or else I might just be getting
old and losing gray cells. :)
From: Andreas Davour
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <cs963zd9pl5.fsf@Psilocybe.Update.UU.SE>
Daniel Weinreb <···@alum.mit.edu> writes:

> Rainer Joswig wrote:
>> In article <·············@nhplace.com>,
>>  Kent M Pitman <······@nhplace.com> wrote:
>>
>>> Rainer Joswig <······@lisp.de> writes:
>>
>>
>> Symbolics also had a facility called 'Firewall' which should
>> keep ordinary users away from the development tools and
>> other dangerous things.
>
> Could you please remind me what that was?  I asked a
> couple of other ex-Symbolics people and they can't
> remember, either.  "Firewall" might be one of those
> marketing names that was invented by an outside
> naming firm, the same guys who came up with "Genera".

Curious minds want to know: what was it known as at Symbolics then? "The
system"?

/Andreas

-- 
A: Because it fouls the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
From: Daniel Weinreb
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <gUR5j.8392$3W.4433@trndny04>
Andreas Davour wrote:
> Daniel Weinreb <···@alum.mit.edu> writes:
> 
>> Rainer Joswig wrote:
>>> In article <·············@nhplace.com>,
>>>  Kent M Pitman <······@nhplace.com> wrote:
>>>
>>>> Rainer Joswig <······@lisp.de> writes:
>>>
>>> Symbolics also had a facility called 'Firewall' which should
>>> keep ordinary users away from the development tools and
>>> other dangerous things.
>> Could you please remind me what that was?  I asked a
>> couple of other ex-Symbolics people and they can't
>> remember, either.  "Firewall" might be one of those
>> marketing names that was invented by an outside
>> naming firm, the same guys who came up with "Genera".
> 
> Curious minds want to know: what was it known as at Symbolics then? "The
> system"?
> 
> /Andreas
> 

Yeah, more or less, although I think the name "Genera"
did finally catch on within Symbolics.
From: Andreas Davour
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <cs9mysn4q0d.fsf@Psilocybe.Update.UU.SE>
Daniel Weinreb <···@alum.mit.edu> writes:

> Andreas Davour wrote:
>> Daniel Weinreb <···@alum.mit.edu> writes:
>>
>>> Rainer Joswig wrote:
>>>> In article <·············@nhplace.com>,
>>>>  Kent M Pitman <······@nhplace.com> wrote:
>>>>
>>>>> Rainer Joswig <······@lisp.de> writes:
>>>>
>>>> Symbolics also had a facility called 'Firewall' which should
>>>> keep ordinary users away from the development tools and
>>>> other dangerous things.
>>> Could you please remind me what that was?  I asked a
>>> couple of other ex-Symbolics people and they can't
>>> remember, either.  "Firewall" might be one of those
>>> marketing names that was invented by an outside
>>> naming firm, the same guys who came up with "Genera".
>>
>> Curious minds want to know: what was it known as at Symbolics then? "The
>> system"?
>
> Yeah, more or less, although I think the name "Genera"
> did finally catch on within Symbolics.

Interesting. Thanks.

/Andreas

-- 
A: Because it fouls the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
From: Rainer Joswig
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <joswig-4962C4.20300906122007@news-europe.giganews.com>
In article <···············@Psilocybe.Update.UU.SE>,
 Andreas Davour <·······@updateLIKE.uu.HELLse> wrote:

> Daniel Weinreb <···@alum.mit.edu> writes:
> 
> > Andreas Davour wrote:
> >> Daniel Weinreb <···@alum.mit.edu> writes:
> >>
> >>> Rainer Joswig wrote:
> >>>> In article <·············@nhplace.com>,
> >>>>  Kent M Pitman <······@nhplace.com> wrote:
> >>>>
> >>>>> Rainer Joswig <······@lisp.de> writes:
> >>>>
> >>>> Symbolics also had a facility called 'Firewall' which was supposed
> >>>> to keep ordinary users away from the development tools and
> >>>> other dangerous things.
> >>> Could you please remind me what that was?  I asked a
> >>> couple of other ex-Symbolics people and they can't
> >>> remember, either.  "Firewall" might be one of those
> >>> marketing names that was invented by an outside
> >>> naming firm, the same guys who came up with "Genera".
> >>
> >> Curious minds want to know; what was it known as at Symbolics then? "The
> >> system"?
> >
> > Yeah, more or less, although I think the name "Genera"
> > did finally catch on within Symbolics.
> 
> Interesting. Thanks.
> 
> /Andreas

The system is technically simply called SYSTEM. ;-) So the computer
runs System 452 for example. As a release it
might be called Genera 8.5, but on the machine it
is technically 'System X.Y' with X.Y as a
system version number.

-- 
http://lispm.dyndns.org/
From: Daniel Weinreb
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <4752E5AC.4030008@alum.mit.edu>
I don't have a copy of "Design and Evolution of C++".
Could you please explain more precisely what he
meant by the "termination model"?

After all, even if you don't have resumption as
a feature, signaling of exceptions obviously
does not always terminate the program.  Now,
signaling when the exception is unhandled
might terminate the program, but resumption
is not primarily about unhandled exceptions.

You mentioned "the handler is called without
unwinding the stack".  That's like handler-case
in Common Lisp, but that's separate from restarts!

So I don't understand "resumption versus termination"
as a dichotomy.

Thank you.


Rainer Joswig wrote:
> In article <··················@trndny08>, ····@domain.invalid wrote:
> 
>> There seems to be a problem with my newsgroup software
>> (Thunderbird 2.0.0.9).  The previous posting was from
>> Dan Weinreb, ···@alum.mit.edu.
>>
>> ····@domain.invalid wrote:
>>> Rainer Joswig wrote:
>>>> In article <·············@nhplace.com>,
>>>> One of the best examples was that the feature that you can
>>>> continue from an error (and the handler is called
>>>> without unwinding the stack) didn't make it into the C++ standard.
>>>> Though Stroustrup (Design and Evolution of C++) reports
>>>> that they had people from Texas Instruments with Lisp
>>>> Machine experience explaining the Lisp Machine error handling.
>>>>
>>> That would be me.
>>>
>>> During one of the OOPSLA conferences, probably 1989, I sat
>>> down with Stroustrup to talk about exceptions in C++.
>>> At that time, exceptions had to be statically-allocated.
>>>
>>> I argued that exceptions should be objects that were
>>> created at the time that they were signalled, so that
>>> they could have values in their data members (i.e.
>>> instance variables, slots) that had particular
>>> information about the exception.  For example,
>>> a file not found exception could have a data
>>> member containing the name of the file that was
>>> not found.
>>>
>>> Stroustrup resisted the idea because
>>> he was worried about the cost of creating the object.
>>> We worked out that it would be easy enough to define
>>> C++ so that you could do it either way, the existing
>>> way, or the new way that I proposed.  He agreed to
>>> that.
>>>
>>> I was hardly about to get into advanced issues
>>> like restartable exceptions!  There's no way
>>> he would have been interested in that, and if
>>> I had started talking that way I could have
>>> easily lost my credibility and influence with him.
> 
> I have the book (The Design and Evolution of C++)
> now here. Some more quotes on the topic of Resumption
> vs. Termination:
> 
>  "My personal starting point was: 'Why not? That seems to be
>  a useful feature. I can see quite a few situations where
>  I would use resumption.' Over the next four years, I learned
>  otherwise, and thus the C++ exception handling mechanism embodies
>  the opposite view, often called the termination model."
> 
> He then describes that there was a lot of discussion in
> the ANSI C++ committee for quite some time (Dec 1989 - Nov 1990).
> 
> Then he summarizes the arguments for resumption:
> 
>  - More general (powerful, includes termination)
>  - Unifies similar concepts/implementations
>  - Essential for very complex, very dynamic systems (that is, OS/2)
>  - Not significantly more complex/expensive to implement
>  - If you don't have it, you must fake it
>  - Provides simple solutions for resource exhaustion problems.
> 
> Next the arguments for termination:
> 
>  - Simpler, cleaner, cheaper
>  - Leads to more manageable systems
>  - Powerful enough for everything
>  - Avoids horrendous coding tricks
>  - Significant negative experience with resumption
> 
> He mentions that the discussion was more technical than the
> list above shows.
> 
> He then describes the experience of Jim Mitchell (SUN, Xerox PARC)
> using resumption in Xerox's Cedar/MESA system.
> 
> He also mentions Mary Fontana presenting data from
> the TI Explorer System where resumption was found to be used
> for debugging only.
> 
> So, they looked into it and decided
> against resumption based on technical issues and
> prior experience. Which is fine, still I'd say it is
> the wrong decision - but, with limited experience,
> C++ is not a favorite language design anyway.
> I guess the more interactive nature of Lisp as a language and
> Lisp applications makes resumption much more
> attractive.
> 
> 
>>> -- Dan
>>>
> 
From: Damien Kick
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <13l5qq6busg6532@corp.supernews.com>
Daniel Weinreb wrote:
> I don't have a copy of "Design and Evolution of C++".
> Could you please explain more precisely what he
> meant by the "termination model"?

 From D&E, page 390,

<blockquote>
During the design of the exception handling mechanism, the most 
contentious issue turned out to be whether or not it should support 
termination semantics or resumption semantics; that is, whether it 
should be possible for an exception handler to require execution to 
resume from the point where the exception was thrown.  [...]  The [...] 
debate took place in the ANSI C++ committee [...]  This was followed by 
the acceptance of the exception handling proposal as presented in the 
ARM (that is, with termination semantics) [...]
</blockquote>

So it seems that Stroustrup is using "termination model" to mean exactly 
what has come to be the existing C++ exception semantics.  As you've 
mentioned in the thread, this term is confusing as throwing an exception 
in C++ does not always result in program termination.  I suppose one 
could think of "termination model" meaning the termination of the 
current flow of control, i.e. once you leave, there is no going back. 
Regardless of the wisdom of the choice of terminology, though, it seems 
that when he writes "termination model" he means the exception model 
adopted by C++, Java, Python, etc.
From: Daniel Weinreb
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <47555325.5080805@alum.mit.edu>
Damien Kick wrote:
> Daniel Weinreb wrote:

> 
> So it seems that Stroustrup is using "termination model" to mean exactly 
> what has come to be the existing C++ exception semantics.  As you've 
> mentioned in the thread, this term is confusing as throwing an exception 
> in C++ does not always result in program termination.  I suppose one 
> could think of "termination model" meaning the termination of the 
> current flow of control, i.e. once you leave, there is no going back.

Ah, that makes a lot of sense: termination of the current
flow of control, what Lisp people familiar with the post-MacLisp
dialects such as Common Lisp would call a "throw" and what
Java calls "abrupt completion".  Thank you.

> Regardless of the wisdom of the choice of terminology, though, it seems 
> that when he writes "termination model" he means the exception model 
> adopted by C++, Java, Python, etc.

In the terms that I would use, then, the termination model
means that handling of an exception always throws.  It's
like saying that you have handler-case but you don't
have handler-bind.

handler-bind is actually quite useful even if the last
thing it does is to throw, i.e. there is no actual
"resumption" going on.  We commonly use it to
send a copy of the stack trace to our log files,
to pick the simplest example.  Of course for every
one handler-bind we have 20 handler-case's. (I'm
just making up that number but it must be something
like that.)
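
The idiom described above can be sketched as follows; this is a minimal
illustration, not ITA's actual code, and LOG-BACKTRACE is a hypothetical
logging function:

```lisp
;; Minimal sketch of the handler-bind logging idiom (LOG-BACKTRACE is
;; hypothetical).  The handler runs BEFORE the stack unwinds, so a
;; backtrace of the signaling point is still available; by returning
;; normally it "declines", and the search for a handler continues
;; outward, e.g. to a surrounding handler-case that actually throws.
(defun parse-config (path)
  (handler-bind ((error (lambda (c)
                          (log-backtrace c))))
    (with-open-file (in path)
      (read in))))
```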

I am referring to a large (many 100KLOC's) corpus of Lisp
code in the business layer of the airline reservation
system that we're building at ITA, which is a server-type
system rather than an interactive application (from the
point of view of the business layer) and which must
be up 99.99% of the time.

Actually a lot of our resumption really means the use of
restarts, rather than actually continuing execution
after the point of signaling the exception, which,
as a matter of fact, I don't think we ever do
(e.g. the Common Lisp "signal" function).

As I think I've said earlier, we designed quite a lot
of generality into the exception system, and it was
quite carefully constructed to allow a clean and
well-defined modular interface between the signaller
and the handler.  But a lot of the fancy stuff rarely
turns out to be needed, and if I were designing a
language that, unlike Lisp, was not intended to be
a testbed for new language ideas, I would almost
certainly leave that stuff out.  But I would
certainly keep handler-bind and the restart family.
From: Kent M Pitman
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <ubq962gde.fsf@nhplace.com>
Daniel Weinreb <···@alum.mit.edu> writes:

> In the terms that I would use, then, the termination model
> means that handling of an exception always throws.  It's
> like saying that you have handler-case but you don't
> have handler-bind.

There's another axis here that may or may not be implicitly in play in
this discussion.  Passive vs. active.

In the LispM, restarting was done by passing proceed option values,
not by throwing.  I think there was a proceed-case that just picked
up the values as passive values from SIGNAL and had to do the right
thing with them.  You weren't supposed to bypass the protocol but you
easily could by accident.

A consequence of this was that it had a bit of the feel of the "float
trapping problem" (where if you are in passive mode and forget to
check, you can do the wrong thing).  C's I/O had this problem in the
need to check for particular eof characters, and it's more in the
style of CL to actively signal an error unless READ has been called
with an arg saying you want another value.
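
Standard CL READ illustrates that active-versus-passive choice directly:

```lisp
;; By default READ actively signals on end of file ...
(with-input-from-string (in "")
  (read in))            ; signals an END-OF-FILE error

;; ... unless the caller explicitly opts into a passive sentinel value.
(with-input-from-string (in "")
  (read in nil :eof))   ; => :EOF, no error signaled
```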

It is possible that if this is the model you discussed with Stroustrup
that he pattern-matched on the passiveness in his worrying about
fallthrough.

I was happy we had a chance to fix this detail when we moved to CL,
making things more active, since I think that's less error-prone.

Even if that's not in play here, it's important to understand that the
final CL condition system was, in fact, different than the LispM
condition system.

> handler-bind is actually quite useful even if the last
> thing it does is to throw, i.e. there is no actual
> "resumption" going on.  We commonly use it to
> send a copy of the stack trace to our log files,
> to pick the simplest example.  Of course for every
> one handler-bind we have 20 handler-case's. (I'm
> just making up that number but it must be something
> like that.)
> 
> I am referring to a large (many 100KLOC's) corpus of Lisp
> code in the business layer of the airline reservation
> system that we're building at ITA, which is a server-type
> system rather than an interactive application (from the
> point of view of the business layer) and which must
> be up 99.99% of the time.
> 
> Actually a lot of our resumption really means the use of
> restarts, rather than actually continuing execution
> after the point of signaling the exception, which,
> as a matter of fact, I don't think we ever do
> (e.g. the Common Lisp "signal" function).

Random other trivia: On the LispM, there were two ways to stop
execution: one was to mix error (or something similar--maybe it was
sys:debugger-condition) into the error class, and the other to call
signal.  In effect, signal became error when given certain conditions.
In CL, we got rid of that bit of "helpful" morphing.

> As I think I've said earlier, we designed quite a lot
> of generality into the exception system, and it was
> quite carefully constructed to allow a clean and
> well-defined modular interface between the signaller
> and the handler.  But a lot of the fancy stuff rarely
> turns out to be needed, and if I were designing a
> language that, unlike Lisp, was not intended to be
> a testbed for new language ideas, I would almost
> certainly leave that stuff out.  But I would
> certainly keep handler-bind and the restart family.

Ever look at Dylan's condition system? What do you think of that?
From: Daniel Weinreb
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <CLn5j.11438$Lg.4820@trndny09>
Kent M Pitman wrote:
> Daniel Weinreb <···@alum.mit.edu> writes:
>
> It is possible that if this is the model you discussed with Stroustrup
> that he pattern-matched on the passiveness in his worrying about
> fallthrough.

No, the only thing I discussed with Stroustrup was making it
possible to dynamically create exception objects with data
members (instance variables). I didn't talk to him about
anything else.

> 
> I was happy we had a chance to fix this detail when we moved to CL,
> making things more active, since I think that's less error-prone.
> 
> Even if that's not in play here, it's important to understand that the
> final CL condition system was, in fact, different than the LispM
> condition system.

Indeed, and I think you improved it.


> 
> Random other trivia: On the LispM, there were two ways to stop
> execution: one was to mix error (or something similar--maybe it was
> sys:debugger-condition) into the error class, and the other to call
> signal.  In effect, signal became error when given certain conditions.
> In CL, we got rid of that bit of "helpful" morphing.
> 

I guess I've forgotten the details at this point.

> Ever look at Dylan's condition system? What do you think of that?

No, I keep meaning to learn about Dylan but I'm always too
busy.  The people who designed it are top-notch and I'm sure
it must be worth learning.  One of these days...
From: Duncan  Mak
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <39ea1b71-5bb8-4eb5-8406-d4bcd8dceb43@a39g2000pre.googlegroups.com>
On Dec 4, 6:24 pm, Kent M Pitman <······@nhplace.com> wrote:
>
> Ever look at Dylan's condition system? What do you think of that?

How is Dylan's system different from Common Lisp's?

I've been trying to compile a list of differences between Dylan and
Common Lisp (aside from syntax, i.e. CLOS vs. Dylan's Object system).
I haven't gotten very far, though.

--
Duncan.
From: Kent M Pitman
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <ulk88rbmo.fsf@nhplace.com>
Duncan  Mak <·········@gmail.com> writes:

> On Dec 4, 6:24 pm, Kent M Pitman <······@nhplace.com> wrote:
> >
> > Ever look at Dylan's condition system? What do you think of that?
> 
> How is Dylan's system different from Common Lisp's?

I don't know that I could tell you off the top of my head.  But one
way is that it unified conditions and restarts.  Another is that it
got rid of the so-called "condition firewall".  My 2001 paper on exceptions
 http://www.nhplace.com/kent/Papers/Condition-Handling-2001.html
alludes to some of the issues.
From: Rainer Joswig
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <joswig-96BFB1.19140402122007@news-europe.giganews.com>
In article <················@alum.mit.edu>,
 Daniel Weinreb <···@alum.mit.edu> wrote:

> I don't have a copy of "Design and Evolution of C++".
> Could you please explain more precisely what he
> meant by the "termination model"?

p390.

'During the design of the exception handling mechanism, the most
contentious issue turned out to be whether it should support
termination semantics or resumption semantics; that is, whether
it should be possible for an exception handler to require execution
to resume from the point where the exception was thrown.'

He also mentions the following workaround, translated to Lisp
by me:


(defun grab-x ()
  "Acquire resource X"
  (loop
    do (when (can-acquire-an-x-p)
         ; ...
         (return-from grab-x some-x))
    ; oops, can't acquire an X, try to recover
    (grab-x-failed)))

(defun grab-x-failed ()
  (when (can-make-x-available)
      ; make x available
    (return-from grab-x-failed nil))
  (error 'cannot-get-x)) ; give up
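
For comparison, a hedged sketch of how the same recovery might be phrased
with CL restarts; the grab/make functions are the same hypothetical
placeholders as above:

```lisp
(define-condition cannot-get-x (error) ())

(defun grab-x ()
  "Acquire resource X, offering a RETRY-GRAB restart on failure."
  (restart-case (if (can-acquire-an-x-p)
                    (acquire-an-x)          ; hypothetical, returns an X
                    (error 'cannot-get-x))
    (retry-grab ()
      (grab-x))))

;; The caller, not GRAB-X, now decides the recovery policy:
(handler-bind ((cannot-get-x
                (lambda (c)
                  (declare (ignore c))
                  (when (can-make-x-available)
                    (make-x-available)      ; free up an X somehow
                    (invoke-restart 'retry-grab)))))
  (grab-x))
```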



> 
> After all, even if you don't have resumption as
> a feature, signaling of exceptions obviously
> does not always terminate the program.  Now,
> signaling when the exception is unhandled
> might terminate the program, but resumption
> is not primarily about unhandled exceptions.
> 
> You mentioned "the handler is called without
> unwinding the stack".  That's like handler-case
> in Common Lisp, but that's separate from restarts!
> 
> So I don't understand "resumption versus termination"
> as a dichotomy.
> 
> Thank you.
> 
> 
> Rainer Joswig wrote:
> > In article <··················@trndny08>, ····@domain.invalid wrote:
> > 
> >> There seems to be a problem with my newsgroup software
> >> (Thunderbird 2.0.0.9).  The previous posting was from
> >> Dan Weinreb, ···@alum.mit.edu.
> >>
> >> ····@domain.invalid wrote:
> >>> Rainer Joswig wrote:
> >>>> In article <·············@nhplace.com>,
> >>>> One of the best examples was that the feature that you can
> >>>> continue from an error (and the handler is called
> >>>> without unwinding the stack) didn't make it into the C++ standard.
> >>>> Though Stroustrup (Design and Evolution of C++) reports
> >>>> that they had people from Texas Instruments with Lisp
> >>>> Machine experience explaining the Lisp Machine error handling.
> >>>>
> >>> That would be me.
> >>>
> >>> During one of the OOPSLA conferences, probably 1989, I sat
> >>> down with Stroustrup to talk about exceptions in C++.
> >>> At that time, exceptions had to be statically-allocated.
> >>>
> >>> I argued that exceptions should be objects that were
> >>> created at the time that they were signalled, so that
> >>> they could have values in their data members (i.e.
> >>> instance variables, slots) that had particular
> >>> information about the exception.  For example,
> >>> a file not found exception could have a data
> >>> member containing the name of the file that was
> >>> not found.
> >>>
> >>> Stroustrup resisted the idea because
> >>> he was worried about the cost of creating the object.
> >>> We worked out that it would be easy enough to define
> >>> C++ so that you could do it either way, the existing
> >>> way, or the new way that I proposed.  He agreed to
> >>> that.
> >>>
> >>> I was hardly about to get into advanced issues
> >>> like restartable exceptions!  There's no way
> >>> he would have been interested in that, and if
> >>> I had started talking that way I could have
> >>> easily lost my credibility and influence with him.
> > 
> > I have the book (The Design and Evolution of C++)
> > now here. Some more quotes on the topic of Resumption
> > vs. Termination:
> > 
> >  "My personal starting point was: 'Why not? That seems to be
> >  a useful feature. I can see quite a few situations where
> >  I would use resumption.' Over the next four years, I learned
> >  otherwise, and thus the C++ exception handling mechanism embodies
> >  the opposite view, often called the termination model."
> > 
> > He then describes that there was a lot of discussion in
> > the ANSI C++ committee for quite some time (Dec 1989 - Nov 1990).
> > 
> > Then he summarizes the arguments for resumption:
> > 
> >  - More general (powerful, includes termination)
> >  - Unifies similar concepts/implementations
> >  - Essential for very complex, very dynamic systems (that is, OS/2)
> >  - Not significantly more complex/expensive to implement
> >  - If you don't have it, you must fake it
> >  - Provides simple solutions for resource exhaustion problems.
> > 
> > Next the arguments for termination:
> > 
> >  - Simpler, cleaner, cheaper
> >  - Leads to more manageable systems
> >  - Powerful enough for everything
> >  - Avoids horrendous coding tricks
> >  - Significant negative experience with resumption
> > 
> > He mentions that the discussion was more technical than the
> > list above shows.
> > 
> > He then describes the experience of Jim Mitchell (SUN, Xerox PARC)
> > using resumption in Xerox's Cedar/MESA system.
> > 
> > He also mentions Mary Fontana presenting data from
> > the TI Explorer System where resumption was found to be used
> > for debugging only.
> > 
> > So, they looked into it and decided
> > against resumption based on technical issues and
> > prior experience. Which is fine, still I'd say it is
> > the wrong decision - but, with limited experience,
> > C++ is not a favorite language design anyway.
> > I guess the more interactive nature of Lisp as a language and
> > Lisp applications makes resumption much more
> > attractive.
> > 
> > 
> >>> -- Dan
> >>>
> >

-- 
http://lispm.dyndns.org/
From: Maciej Katafiasz
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <fihl2a$pcc$1@news.net.uni-c.dk>
On Tue, 27 Nov 2007 08:34:01 -0500, Kent M Pitman wrote:

> Just look at the similar difficulty we've had getting pretty printing
> into the world in general.  Dick Waters wrote papers back in the early
> 80's showing that the techniques we use in CL are perfectly usable for
> pretty printing code in block structured languages like Ada and PL/1.
> Yet neither those languages nor their successors picked up on the option
> to add such a facility to their programmatic repertoire, perhaps because
> they were already too busy not picking up on the idea of having a
> homoiconic language...

Pretty printing is still on my list of things I feel I should read about, 
but haven't had the time to yet.

>> Casual googling
>> around doesn't suggest I missed any very prominent languages, either.
>
> You're looking for PL/1 for Honeywell Multics, and particularly the MIT
> variant, which may not have been precisely what Honeywell had, I'm not
> sure.  See http://www.multicians.org/multics.html which may say.
>
> Several of the guys who designed the Lisp Machine had previously been
> using that system, which amounted to a "PL/1 machine" (not in quite the
> sense that a LispM was a "Lisp Machine", but certainly in the sense that
> the operating system was written in a high level language, PL/1, and
> there was a unifying execution model on the stack which was best
> described by PL/1).
>
> The restart facility was not so linguistic as practical. I think it used
> some sort of 'on signal' construct (my PL/1 was never advanced, though I
> used it in school, and is quite rusty now, so at this point accept that
> this is all blurry and was second-hand to start with).

Interesting. My idea about PL/1 is vague at best, but from reading 
a little bit of multicians.org, it seems that it had considerable 
flexibility of expression for things that weren't accommodated in the 
base language. Implementing support for restarts, which requires the 
ability to traverse the stack without unwinding, _in_ the language 
doesn't sound trivial. Just look at all the doomed attempts at adding 
exception handling to plain C.

> And remember, too, that the world was in a kind of proto-state where the
> ideas were coming fast and furious ahead of platforms to implement them
> on. So they had the abstractions, but they needed a place to deploy
> those ideas in a new context. So they leapt to the Lisp Machine, and
> started to use that platform as a forum to evolve new ideas.
[...]
> Anyway, if you follow the people, not the languages, you get a better
> sense of the flow of things.  These people hopped from one
> implementation and system to another, carrying ideas with them.

Right.

> Likewise you can see that UNWIND-PROTECT is present in things like ITS
> TECO [as the ..N q-register, which was executed upon unwind], but again
> the unifying characteristic is the community at MIT in which this arose.
>  I'm not sure who created ..N, maybe Steele, but certainly Steele was
> present, and was present for UNWIND-PROTECT in CL, and later for
> "finally" in Java.

TECO scares me senseless.

> Actually, what was strange and frustrating was that Steele was so
> involved in Java and didn't put restarts into it, when he surely must
> have known of them... and I think they mesh so nicely with
> continuations, etc. that it's a shame.

Indeed. In general, I find the claim that Java was dragging people 
halfway to Lisp a stretch at the very least. It's true that it finally 
got the idea that GC and JIT can be fast into the mainstream, but 
otherwise it's been an exercise in how verbose, restricted, confused and 
obfuscated[1] you can make a language before you kill all benefits of 
having a GC.

> For myself, I used Lisp Machines, and liked them, but there were not
> many of them and many people couldn't use them. I was amused and
> saddened by how many of their ideas were regarded as "needing special
> hardware", and I made it a personal mission to show that many LispM
> ideas could be rehosted on stock hardware (i.e., off-the-shelf, or
> non-special-purpose hardware) just fine.  That was how my ZBABYL mail
> reader came about for TECO-based Emacs, and it's why I took an interest
> in getting the Zetalisp condition system into CL.  (Early CL didn't even
> have ERRSET or IGNORE-ERRORS, if memory serves me, so any program that
> ever signaled ERROR under CLTL was dead in the water as far as
> guarantees of portability.  But the idea of adding just that one
> operator as a way of "fixing" the problem was horrifying to me, so I
> wanted as much of the LispM New Error System  as I could get.)

I guess we should be thankful for your efforts, then; the thought of 
having ERRSET for your error handling is indeed terrifying.

>> There's a considerable mental gap between having a system that supports
>> non-local transfer control in the form of exceptions, and a system that
>> includes a formalised protocol for recovering and restarting
>> computation, as evidenced by the lack of languages to offer such a
>> facility.
>
> Yeah, weird, huh?  But I think the thing to understand is that people
> judge the goodness of what they have in contrast to what they had
> before.  So most users of languages don't see themselves as impoverished
> because they have previously used languages without try/catch and now
> they have try/catch, so they've moved up.  We've had our eyes opened, so
> it's hard to go back. But convincing someone they are missing out is
> hard.

That is true. Even sadder is the never-ending tale of people who have 
literally no idea about Lisp, yet somehow managed to acquire the idea 
that its defining property is that it's interpreted and slow.

>> I'm still amazed by how woefully incomplete an exception system without
>> structured means of recovery is, yet how I never felt that prior to
>> learning CL.
>
> "A mind once stretched by a new idea never regains its original
> dimension." -- Oliver Wendell Holmes
>
> And, incidentally, without a practice of defining the recovery points.
> While we got restarts into CL, we were unable to get many specific
> restarts in.  And that's an aggravation.  But again it was lack of
> experience with the system, not to mention considerable political
> disagreement over the nature of the inheritance model (I've spoken on
> this in other forums, but ask me if the relevance of this in this
> context is not clear and I'll expand), that interfered with getting some
> specific conditions and many specific restarts added.

This is the post I was referring to above, though I have no idea where 
exactly I read it. But I remember you mentioned that multiple inheritance 
in particular was a big contention point, with vendors such as Gold Hill 
fighting vehemently not to have it in the standard, which prevented a 
large class of standard Zetalisp conditions from making it into the spec, 
such as FILE-ERRORs that are also NETWORK-ERRORs.

>> Therefore I'm really interested in learning the origins of the concept.
>> I suspect the strong tradition of interactive development and built-in
>> debugger naturally leads to the development of a complete error
>> handling system,
>
> This is certainly relevant, though it's a feedback loop since what leads
> to liking the debugger is the good facility.
>
> I think the homoiconicity reference above is also relevant, since the
> usefulness of the debugger is also helped by the ability to use a
> familiar language to operate in it.  Visual Studio finally has ways of
> doing this in clumsy ways with Watch and Immediate panes, but for years
> this was really hard.  And even now it's marginal and unpleasant by
> comparison to Lisp in my opinion--though some might prefer it (at least
> it's finally to the point where there can be a debate on the matter).

The MSVS debugger was so laughably bad for years that I really wondered 
if they were making it deliberately unusable.

> I hope something in here is useful to your search.

Definitely. Thanks a lot for the most interesting material.

Cheers,
Maciej

[1] I'm in large part referring to the years-long strategy of Sun to do 
_everything_ to convince everyone that Java-the-platform, Java-the-
language and Java-the-implementation are one and the same thing, which 
earned it my eternal hate and disgust. In this regard, MSFT stepping in 
with .NET was a blessing for competition and progress.
From: Scott Burson
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <6becc0c3-313c-41f4-b55d-ec8d6e6efc67@d27g2000prf.googlegroups.com>
On Nov 27, 5:34 am, Kent M Pitman <······@nhplace.com> wrote:
> Bernie Greenberg was one of the most prominent Multicians, and
> implemented Emacs for Multics in Lisp (no, not GNU Lisp, nor GNU
> Emacs.  This was long before those days. I think he used Multics
> Maclisp.)

Yes, Multics Emacs was written in Multics Maclisp.

-- Scott
From: Daniel Weinreb
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <U7m4j.788$RB.402@trnddc03>
I wrote the first version of conditions for the Lisp machine.
At least, I think I did; it was a long time ago.

One influence was Multics PL/I.  PL/I
had a condition feature, although it was very simple.  If
I remember correctly, there was an "on condition_name:"
statement, and a "signal condition_name", which would
do a non-local exit to the point of the "on" statement
and continue execution from there.  It was a lot like
the catch/throw facility of Maclisp.
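
The analogy can be sketched in a few lines of Common Lisp; THROW
transfers control non-locally to the matching CATCH, and execution
continues from there:

```lisp
(defun demo ()
  (catch 'handler-point        ; plays the role of "on condition_name:"
    (process)
    :normal-completion))

(defun process ()
  (throw 'handler-point :bailed-out))  ; like "signal condition_name"

;; (demo) => :BAILED-OUT
```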

Multics PL/I added "on cleanup:", which provided cleanup
handlers.  This was the inspiration for the unwind-protect
facility of Lisp.  I think I was responsible for this too.
Of course, everything I did would have been discussed
with and approved by other people, particularly David A. Moon.
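
The unwind-protect shape that "on cleanup:" is said to have inspired
looks like this (the acquire/use/release functions are hypothetical):

```lisp
(defun with-x-demo ()
  (let ((r (acquire-resource)))     ; hypothetical
    (unwind-protect
        (use-resource r)            ; may return, throw, or signal
      ;; The cleanup form runs on normal return AND on any
      ;; non-local exit passing through this frame.
      (release-resource r))))
```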

The 1981 version of the Lisp machine condition system
predated object-oriented programming.  A condition
simply had a name (a symbol, by convention
a Lisp keyword), a format control string, and a list of parameters
to be passed to format; together these formed the error message in
case the signal was not handled.  There was a special form called
condition-bind which is like Common Lisp's handler-bind.
There was an attempt to define restartable and proceedable
conditions, but it didn't actually work.  (I had to
go back and read the third edition of the Lisp Machine
Manual to remember all this!)
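[For comparison, the working descendant of that idea in today's Common Lisp is handler-bind plus restarts: the handler runs in the dynamic context of the signal, so it can choose a restart instead of merely unwinding — the part the 1981 system couldn't do. A minimal sketch (the function and restart names are invented for illustration; parse-integer signals a parse-error on junk input per the standard):]

```lisp
;; HANDLER-BIND runs the handler before the stack unwinds, so the
;; handler can transfer control to a restart established below it.
(defun parse-or-zero (string)
  (handler-bind ((parse-error
                   (lambda (c)
                     (declare (ignore c))
                     (invoke-restart 'use-zero))))  ; pick the restart
    (restart-case (parse-integer string)
      (use-zero () 0))))  ; recovery point: return 0 instead

(parse-or-zero "42")    ; => 42
(parse-or-zero "oops")  ; => 0
```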

This was all replaced by the "New Error System" (NES),
some years later, based on Flavors.  If memory
serves, I was involved in the design of this, as
was David Andre, and perhaps also Bernie Greenberg.

-- Dan Weinreb

Maciej Katafiasz wrote:
> Hi,
> 
> there's something that's had me curious for some time now. Given who 
> wrote the relevant CLtL2 chapter, it seems like a question directed 
> particularly at Kent Pitman, but of course any feedback is welcome.
> 
> So, I'm interested in knowing the history and background behind the CL 
> conditions and restarts system. I know it drew directly from the dialects 
> it was standardising, in particular, I seem to remember reading another 
> post that mentioned Symbolics had a particularly comprehensive collection 
> of standard conditions. However, I'd like to know where it came from to 
> Symbolics. Was Lisp the first to have conditions? Was it influenced by 
> something else? Was it very much ahead of other languages by having a 
> conditions system?
> 
> Even more interesting are restarts. Whereas most serious languages today 
> offer exceptions, ones to include a built-in support for restarting 
> computation afterwards are exceptionally rare; in fact, I haven't 
> encountered any that wouldn't be Lisps (and Dylan is a Lisp for purpose 
> of this classification. From what I understand, Smalltalk also has 
> restarts, but it's arguably in the Lisp family as well). Casual googling 
> around doesn't suggest I missed any very prominent languages, either. 
> There's a considerable mental gap between having a system that supports 
> non-local transfer control in the form of exceptions, and a system that 
> includes a formalised protocol for recovering and restarting computation, 
> as evidenced by the lack of languages to offer such a facility.
> 
> I'm still amazed by how woefully incomplete an exception system without 
> structured means of recovery is, yet how I never felt that prior to 
> learning CL. Therefore I'm really interested in learning the origins of 
> the concept. I suspect the strong tradition of interactive development 
> and built-in debugger naturally leads to the development of a complete 
> error handling system, whereas in environments where successful and 
> failed execution alike leads to process termination, one might never 
> consciously realise such a need. But that's just my guess, which might, 
> or might not have historical evidence to support it. I hope the village 
> elders will enlighten me in this regard :)
> 
> Cheers,
> Maciej
From: Rainer Joswig
Subject: Re: The origins of CL conditions system
Date: 
Message-ID: <joswig-6B94E7.00501702122007@news-europe.giganews.com>
In article <················@trnddc03>,
 Daniel Weinreb <···@alum.mit.edu> wrote:

> I wrote the first version of conditions for the Lisp machine.
> At least, I think I did; it was a long time ago.
> 
> One influence was Multics PL/I.  PL/I
> had a condition feature, although it was very simple.  If
> I remember correctly, there was an "on condition_name:"
> statement, and a "signal condition_name", which would
> do a non-local exit to the point of the "on" statement
> and continue execution from there.  It was a lot like
> the catch/throw facility of Maclisp.
> 
> Multics PL/I added "on cleanup:", which provided cleanup
> handlers.  This was the inspiration for the unwind-protect
> facility of Lisp.  I think I was responsible for this too.
> Of course, everything I did would have been discussed
> with and approved by other people, particularly David A. Moon.
> 
> The 1981 version of the Lisp machine condition system
> predated object-oriented programming.  A condition
> simply had a name (a symbol, by convention a Lisp
> keyword), a format control string, and a list of parameters
> to be passed to format; these formed the error message in
> case the signal was not handled.  There was a special form called
> condition-bind which is like Common Lisp's handler-bind.
> There was an attempt to define restartable and proceedable
> conditions, but it didn't actually work.  (I had to
> go back and read the third edition of the Lisp Machine
> Manual to remember all this!)
> 
> This was all replaced by the "New Error System" (NES),
> some years later, based on Flavors.  If memory
> serves, I was involved in the design of this, as
> was David Andre, and perhaps also Bernie Greenberg.

6th Edition of the Lisp Machine Manual describes it:

  http://common-lisp.net/project/bknr/static/lmman/errors.xml

(needs Firefox/Camino/... for viewing)

> 
> -- Dan Weinreb

-- 
http://lispm.dyndns.org/